From larry at hastings.org Fri Mar 1 00:02:58 2013 From: larry at hastings.org (Larry Hastings) Date: Thu, 28 Feb 2013 15:02:58 -0800 Subject: [Python-Dev] Announcing PEP 436: The Argument Clinic DSL In-Reply-To: References: <512BFDAB.5050103@hastings.org> <512D0342.2060603@hastings.org> Message-ID: <512FE222.8000401@hastings.org> On 02/26/2013 06:30 PM, Terry Reedy wrote: > On 2/26/2013 1:47 PM, Larry Hastings wrote: >> I think positional-only functions should be discouraged, but we don't > > If I were writing something like Clinic, I would be tempted to not > have that option. But I was actually thinking about something in the > positional-only writeup that mentioned the possibility of adding > something to the positional-only option. Clinic needs to be usable for every builtin in order to be a credible solution. And the positional-only approach to parsing is sufficiently different from the positional-and-keywords approach that I couldn't sweep the conceptual difference under the rug--I had to explicitly support weird stuff like optional positional arguments on the /left/. So I really don't have a choice. All we can do is say "please don't use this for new code". > As I understand it, C module files are structured something like the > following, which is 'unusual' for a python file. > > def meth1_impl(... > > def meth2_impl(... > > class C: > meth1 = meth1_impl > meth2 = meth2_impl At the moment Clinic is agnostic about where in Python the callable lives. The "module name" in the DSL is really just used in documentation and to construct the name of the static functions. So if you specify a class name as your "module name" it'll work fine and look correct. However, maybe we need to think about this a little more when it comes time to add Signature metadata. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.hellmann at gmail.com Fri Mar 1 00:39:21 2013 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Thu, 28 Feb 2013 18:39:21 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: <31B54506-7B08-4E77-B966-A6171BBC3DA6@gmail.com> On Feb 27, 2013, at 11:51 AM, Michael Foord wrote: > Hello all, > > PyCon, and the Python Language Summit, is nearly upon us. We have a good number of people confirmed to attend. If you are intending to come to the language summit but haven't let me know please do so. > > The agenda of topics for discussion so far includes the following: > > * A report on pypy status - Maciej and Armin > * Jython and IronPython status reports - Dino / Frank > * Packaging (Doug Hellmann and Monty Taylor at least) Since the time I suggested we add packaging to the agenda, Nick has set up a separate summit meeting for Friday evening. I don't know if it makes sense to leave this on the agenda for Wednesday or not. Nick, what do you think? Doug > * Cleaning up interpreter initialisation (both in hopes of finding areas > to rationalise and hence speed things up, as well as making things > more embedding friendly). Nick Coghlan > * Adding new async capabilities to the standard library (Guido) > * cffi and the standard library - Maciej > * flufl.enum and the standard library - Barry Warsaw > * The argument clinic - Larry Hastings > > If you have other items you'd like to discuss please let me know and I can add them to the agenda. 
> > All the best, > > Michael Foord > > -- > http://www.voidspace.org.uk/ > > > May you do good and not evil > May you find forgiveness for yourself and forgive others > May you share freely, never taking more than you give. > -- the sqlite blessing > http://www.sqlite.org/different.html > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/doug.hellmann%40gmail.com From stefan_ml at behnel.de Fri Mar 1 10:41:00 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 01 Mar 2013 10:41:00 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: Michael Foord, 27.02.2013 17:51: > PyCon, and the Python Language Summit, is nearly upon us. We have a good number of people confirmed to attend. If you are intending to come to the language summit but haven't let me know please do so. > > The agenda of topics for discussion so far includes the following: > > * A report on pypy status - Maciej and Armin > * Jython and IronPython status reports - Dino / Frank > * Packaging (Doug Hellmann and Monty Taylor at least) > * Cleaning up interpreter initialisation (both in hopes of finding areas > to rationalise and hence speed things up, as well as making things > more embedding friendly). Nick Coghlan > * Adding new async capabilities to the standard library (Guido) > * cffi and the standard library - Maciej > * flufl.enum and the standard library - Barry Warsaw > * The argument clinic - Larry Hastings Hi, as in the years before, none of the Cython developers is attending the PyCon-US, so we won't appear that the language summit either. It's a bit sad that it always takes place at that venue, but I guess there'll just always be people that can't come to one meeting or the other, so PyCon-US would just catch the majority. I think it would still be interesting for many of the attendants to get a status report about Cython, as there seems to be some confusion and incomplete knowledge about what Cython actually is, what we have achieved and where we are heading. But maybe the confusion is large enough to require more than just a little status report to clear it up. It's also true that many of the topics above aren't really interesting for us, because we just inherit them with CPython, e.g. stdlib changes. Packaging is only relevant as far as it impacts the distribution of binary extensions, and the main changes appear to be outside of that area (which doesn't mean it's not truly wonderful that they are happening, Python packaging has seen a lot of great improvements during the last years and I'm very happy to see it getting even better). Interpreter initialisation would be interesting and Cython could potentially help in some spots here by making code easier to maintain and optimise, for example. We've had this discussion for the importlib bootstrapping and I'm sure there's more that could be done. It's sad to see so much C-level work go into areas that really don't need to be that low-level. I'm not so happy with the argument clinic, but that's certainly also because I'm biased. 
I've written the argument unpacking code for Cython some years ago, so it's not surprising that I'm quite happy with that and fail to see the need for a totally new DSL *and* a totally new implementation, especially with its mapping to the slowish ParseTuple*() C-API functions. I've also not seen a good argument why the existing Py3 function signatures can't do what the proposed DSL tries to achieve. They'd at least make it clear that the intention is to make things more Python-like, and would at the same time provide the documentation. The topics that would be interesting for us sound more like they'd benefit from a "CPython runtime summit". I really think that it would be beneficial for the CPython developers to learn how we solved problems that they have on their lists or might at least run into in a couple of years, and for us to see if we can't come up with cleaner solutions for problems that CPython currently makes hard to do outside of the core. For example, making C-implemented code Python compatible is actually not trivial and has cost us a lot of investment. Nowadays, CPython is actually further away from that in some areas than Cython, and I don't think it needs to stay that way. It could certainly help both Cython and CPython if CPython gained some of the capabilities for itself that we had to implement ourselves in clean or hacky ways, but always outside of the core. There isn't really a reason why C-implemented parts of CPython must behave all that different from Python implemented parts, why modules must have a different API than other objects, why builtins can't accept keyword arguments, ... These things just get in the way once their camouflage as features is blown up. Another topic is C-level calling between extensions that only see each other through the CPython core. Python call semantics are nice, but also extremely slow compared to C calls. Capsules are simple but slow and static. Implementing a dynamic C calling interface between extensions with a safe way to pass C signatures around and validate them (or adapt to them) on the other side would be much easier with CPython support built into its C function type than trying to do this outside of the core. And there's a huge area of applications for such a feature, especially with the increasing number of tools that do dynamic code generation on the CPython platform. Eventually, this might also become interesting for non-CPythons, as it might provide a way to interface efficiently with CPython extensions. So, have fun at the language summit everyone, I'm looking forward to skipping through the writeup. And I'd really like to see a CPython summit happen at some point. There's so much interesting stuff going on in that area that it's worth getting some people together to move these things forward. Stefan From solipsis at pitrou.net Fri Mar 1 11:33:24 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 1 Mar 2013 11:33:24 +0100 Subject: [Python-Dev] High volumes and off topic discussions References: <20130228150326.2e7b0944@pitrou.net> Message-ID: <20130301113324.67ce5107@pitrou.net> Le Thu, 28 Feb 2013 19:17:39 +0200, Maciej Fijalkowski a ?crit : > On Thu, Feb 28, 2013 at 4:03 PM, Antoine Pitrou > wrote: > > Le Thu, 28 Feb 2013 13:36:10 +0200, > > Maciej Fijalkowski a ?crit : > >> Hi > >> > >> I know this is a hard topic, but python-dev is already incredibly > >> high-volume and dragging discussion off-topic is making following > >> important stuff (while ignoring unimportant stuff) very hard. 
> >> > >> For example in a recent topic "cffi in stdlib" I find a mail that > >> says "we have to find a sufficiently silly species of snake". It's > >> even funny, but it definitely makes it very hard to follow for > >> those of us who don't read python-dev 24/7. Would it be reasonable > >> for python-dev to generally try to stay on topic (for example if > >> the thread is called "silly species of snakes", I absolutely don't > >> mind people posting there whatever they feel like as long as I'm > >> not expected to read every single message). > > > > I'm afraid you're trying to devise derogatory distinctions regarding > > drifting discussions. > > > > Seriously, yes, I approve of changing the subject line, although I > > forgot to do it this time. > > For the record, you can also read the list through a NNTP gateway > > using Gmane, it can make things easier. > > How does that help with knowing what mails to read what mails not to > read? It doesn't, but at least it won't flood your personal inbox ;-) Regards Antoine. From ezio.melotti at gmail.com Fri Mar 1 14:22:57 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Fri, 1 Mar 2013 15:22:57 +0200 Subject: [Python-Dev] [Python-checkins] cpython (3.3): Don't deadlock on a reentrant call. In-Reply-To: <3ZHTky0hvLzSyt@mail.python.org> References: <3ZHTky0hvLzSyt@mail.python.org> Message-ID: Hi, On Fri, Mar 1, 2013 at 2:02 PM, raymond.hettinger wrote: > http://hg.python.org/cpython/rev/1920422626a5 > changeset: 82437:1920422626a5 > branch: 3.3 > parent: 82435:43ac02b7e322 > user: Raymond Hettinger > date: Fri Mar 01 03:47:57 2013 -0800 > summary: > Don't deadlock on a reentrant call. this seems to have broken builds without threads. After this commit I get a compile error: $ ./configure --without-threads --with-pydebug && make -j2 [...] ./python -E -S -m sysconfig --generate-posix-vars Could not find platform dependent libraries Consider setting $PYTHONHOME to [:] Could not import runpy module Exception ignored in: 'garbage collection' Traceback (most recent call last): File "/home/wolf/dev/py/py3k/Lib/runpy.py", line 16, in import imp File "/home/wolf/dev/py/py3k/Lib/imp.py", line 23, in import tokenize File "/home/wolf/dev/py/py3k/Lib/tokenize.py", line 28, in import re File "/home/wolf/dev/py/py3k/Lib/re.py", line 124, in import functools File "/home/wolf/dev/py/py3k/Lib/functools.py", line 22, in from dummy_threading import RLock File "/home/wolf/dev/py/py3k/Lib/dummy_threading.py", line 45, in import threading File "/home/wolf/dev/py/py3k/Lib/threading.py", line 6, in from time import sleep as _sleep ImportError: No module named 'time' Fatal Python error: unexpected exception during garbage collection Current thread 0x00000000: make: *** [pybuilddir.txt] Aborted (core dumped) See also: http://buildbot.python.org/all/builders/AMD64%20Fedora%20without%20threads%203.x/builds/4006 http://buildbot.python.org/all/builders/AMD64%20Fedora%20without%20threads%203.3/builds/516 (Also having tests for this change would be nice.) 
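(For what it's worth, one possible direction -- purely a sketch, not a tested fix for this tree -- would be to avoid pulling the threading machinery into the bootstrap at all, e.g. something along these lines in functools.py:

    try:
        from _thread import RLock
    except ImportError:
        class RLock:
            """Dummy reentrant lock for builds without threads."""
            def __enter__(self):
                pass
            def __exit__(self, exctype, excinst, exctb):
                pass

That way a --without-threads build never has to touch dummy_threading, threading or time during startup.)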
Best Regards, Ezio Melotti From barry at python.org Fri Mar 1 15:32:23 2013 From: barry at python.org (Barry Warsaw) Date: Fri, 1 Mar 2013 09:32:23 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> Message-ID: <20130301093223.743a04c8@anarchist.wooz.org> On Feb 28, 2013, at 08:44 AM, fwierzbicki at gmail.com wrote: >Sorry I meant "is_jython" as a sort of shorthand for a case by case >check. It would be cool if we had a nice set of checks somewhere like >"is_refcounted", etc. Would the sys.implementation area be a good >place for such things? Yep. Unless it proves too unwieldy I suppose. >On the other hand in some ways Jython is sort of like Python on a >weird virtual OS that lets the real OS bleed through some. This may >still need to be checked in that way (there's are still checks of os.name == 'nt'> right?) Yeah, but that all ooooold code ;) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From brett at python.org Fri Mar 1 15:57:59 2013 From: brett at python.org (Brett Cannon) Date: Fri, 1 Mar 2013 09:57:59 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On Fri, Mar 1, 2013 at 4:41 AM, Stefan Behnel wrote: > Michael Foord, 27.02.2013 17:51: > > PyCon, and the Python Language Summit, is nearly upon us. We have a good > number of people confirmed to attend. If you are intending to come to the > language summit but haven't let me know please do so. > > > > The agenda of topics for discussion so far includes the following: > > > > * A report on pypy status - Maciej and Armin > > * Jython and IronPython status reports - Dino / Frank > > * Packaging (Doug Hellmann and Monty Taylor at least) > > * Cleaning up interpreter initialisation (both in hopes of finding areas > > to rationalise and hence speed things up, as well as making things > > more embedding friendly). Nick Coghlan > > * Adding new async capabilities to the standard library (Guido) > > * cffi and the standard library - Maciej > > * flufl.enum and the standard library - Barry Warsaw > > * The argument clinic - Larry Hastings > > Hi, > > as in the years before, none of the Cython developers is attending the > PyCon-US, so we won't appear that the language summit either. It's a bit > sad that it always takes place at that venue, but I guess there'll just > always be people that can't come to one meeting or the other, so PyCon-US > would just catch the majority. I think it would still be interesting for > many of the attendants to get a status report about Cython, as there seems > to be some confusion and incomplete knowledge about what Cython actually > is, what we have achieved and where we are heading. But maybe the confusion > is large enough to require more than just a little status report to clear > it up. > There are actually two language summits each year: PyCon US and EuroPython. But you are right that the US one is the biggest as it's the easiest to get the most core devs in a single room. Hopefully you can make PyCon US (it's a great conference). And if it's a US issue, it will be in Canada in 2014 and 2015. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan.bucur at gmail.com Fri Mar 1 16:24:42 2013 From: stefan.bucur at gmail.com (Stefan Bucur) Date: Fri, 1 Mar 2013 16:24:42 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults Message-ID: Hi, I'm working on an automated bug finding tool that I'm trying to apply on the Python interpreter code (version 2.7.3). Because of early prototype limitations, I needed to disable string interning in stringobject.c. More precisely, I modified the PyString_FromStringAndSize and PyString_FromString to no longer check for the null and single-char cases, and create instead a new string every time (I can send the patch if needed). However, after applying this modification, when running "make test" I get a segfault in the test___all__ test case. Before digging deeper into the issue, I wanted to ask here if there are any implicit assumptions about string identity and interning throughout the interpreter implementation. For instance, are two single-char strings having the same content supposed to be identical objects? I'm assuming that it's either this, or some refcount bug in the interpreter that manifests only when certain strings are no longer interned and thus have a higher chance to get low refcount values. Thanks a lot, Stefan Bucur -------------- next part -------------- An HTML attachment was scrubbed... URL: From demianbrecht at gmail.com Fri Mar 1 16:43:06 2013 From: demianbrecht at gmail.com (Demian Brecht) Date: Fri, 1 Mar 2013 07:43:06 -0800 Subject: [Python-Dev] FileCookieJars Message-ID: Cross-posting from python-ideas due to no response there. Perhaps it's due to a general lack of usage/caring for cookiejar, but figured /someone/'s got to have an opinion about my proposal ;) Note that I've moved my discussion from bug 16942 to 16901 (http://bugs.python.org/issue16901) as they're duplicates and 16901 is more succinct. I've also posted a patch in 16901 implementing my proposal. TL;DR: CookieJar > FileCookieJar > *CookieJar are architecturally broken and this is an attempt to rectify that (and fix a couple bugs along the way). --------------- Context: http://bugs.python.org/issue16942 (my patch, changing FileCookieJar to be an abc, defining the interfaces for *FileCookieJar). This pertains to Terry's question about whether or not it makes sense that an abstract base class extends a concrete class. After putting in a little thought, he's right. It doesn't make sense. After further thought, I'm relatively confident that the hierarchy as it stands should be changed. Currently what's implemented in the stdlib looks like this: CookieJar | FileCookieJar | | | MozillaCookieJar LWPCookieJar What I'm proposing is that the structure is broken to be the following: FileCookieJarProcessor CookieJar | | | MozillaCookieJarProcessor LWPCookieJarProcessor The intention here is to have processors that operate /on/ a cookiejar object via composition rather than inheritance. This aligns with how urllib.request.HTTPCookieProcessor works (which has the added bonus of cross-module consistency). The only attributes that concrete FileCookieJarProcessor classes touch (at least, in the stdlib) are _cookies and _cookies_lock. I have mixed feelings about whether these should continue to be noted as "non-public" with the _ prefix or not as keeping the _ would break convention of operating on non-public fields, but am unsure of the ramifications of changing them to public. 
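To make the shape of the proposal concrete, here is a rough sketch (names and signatures are illustrative only, not the actual patch attached to 16901):

    from abc import ABCMeta, abstractmethod
    from http.cookiejar import CookieJar

    class FileCookieJarProcessor(metaclass=ABCMeta):
        """Load/save cookies for a CookieJar it wraps (composition,
        mirroring urllib.request.HTTPCookieProcessor)."""

        def __init__(self, cookiejar=None, filename=None):
            self.cookiejar = cookiejar if cookiejar is not None else CookieJar()
            self.filename = filename

        @abstractmethod
        def save(self, filename=None, ignore_discard=False, ignore_expires=False):
            """Write the wrapped jar's cookies out to the file."""

        @abstractmethod
        def load(self, filename=None, ignore_discard=False, ignore_expires=False):
            """Populate the wrapped jar from the file."""

    class LWPCookieJarProcessor(FileCookieJarProcessor):
        def save(self, filename=None, ignore_discard=False, ignore_expires=False):
            ...  # write self.cookiejar out in Set-Cookie3 (LWP) format

        def load(self, filename=None, ignore_discard=False, ignore_expires=False):
            ...  # parse Set-Cookie3 lines into self.cookiejar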
Making this change then allows for FileCookieJar(Processor) to be an abstract base class without inheriting from CookieJar which doesn't make a whole lot of sense from an architecture standpoint. I have yet to see what impact these changes have to the cookiejar extensions at http://wwwsearch.sf.net but plan on doing so if this approach seems sound. This will obviously break backwards compatibility, so I'm not entirely sure what best practice is around that: leave well enough alone even though it might not make sense, keep the old implementations around and deprecate them to be eventually replaced by the processors, or other ideas? -- Demian Brecht http://demianbrecht.github.com From status at bugs.python.org Fri Mar 1 18:07:29 2013 From: status at bugs.python.org (Python tracker) Date: Fri, 1 Mar 2013 18:07:29 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20130301170729.38A1356A38@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2013-02-22 - 2013-03-01) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3877 (+11) closed 25227 (+41) total 29104 (+52) Open issues with patches: 1695 Issues opened (42) ================== #16121: shlex.shlex.error_leader() reports incorrect line number http://bugs.python.org/issue16121 reopened by r.david.murray #17223: Initializing array.array with unicode type code and buffer seg http://bugs.python.org/issue17223 reopened by sbt #17278: SIGSEGV in _heapqmodule.c http://bugs.python.org/issue17278 opened by maxxedev #17279: Document which named built-in classes can be subclassed http://bugs.python.org/issue17279 opened by terry.reedy #17281: Broken links at pypi http://bugs.python.org/issue17281 opened by lyda #17282: document the defaultTest parameter to unittest.main() http://bugs.python.org/issue17282 opened by chris.jerdonek #17284: create mercurial section in devguide's committing.rst http://bugs.python.org/issue17284 opened by chris.jerdonek #17285: subprocess.check_output incorrectly state that output is alway http://bugs.python.org/issue17285 opened by Baptiste.Lepilleur #17286: Make subprocess handling text output with universal_newlines m http://bugs.python.org/issue17286 opened by Baptiste.Lepilleur #17288: cannot jump from a return after setting f_lineno http://bugs.python.org/issue17288 opened by xdegaye #17289: readline.set_completer_delims() doesn't play well with others http://bugs.python.org/issue17289 opened by bfroehle #17293: uuid.getnode() MAC address on AIX http://bugs.python.org/issue17293 opened by aivarsk #17294: compile-flag for single-execution to return value instead of p http://bugs.python.org/issue17294 opened by Albert.Zeyer #17295: __slots__ on PyVarObject subclass http://bugs.python.org/issue17295 opened by ronaldoussoren #17298: Twisted test failure triggered by change in 2.7 branch http://bugs.python.org/issue17298 opened by glyph #17299: Test cPickle with real files http://bugs.python.org/issue17299 opened by serhiy.storchaka #17301: An in-place version of many bytearray methods is needed http://bugs.python.org/issue17301 opened by gregory.p.smith #17302: HTTP/2.0 - Implementations/Testing efforts http://bugs.python.org/issue17302 opened by karlcow #17305: IDNA2008 encoding missing http://bugs.python.org/issue17305 opened by marten #17306: Improve the way abstract base classes are shown in help() http://bugs.python.org/issue17306 opened by rhettinger #17307: HTTP PUT 
request Example http://bugs.python.org/issue17307 opened by orsenthil #17308: Dialog.py crashes when putty Window resized http://bugs.python.org/issue17308 opened by harshaap #17309: __bytes__ doesn't work in subclass of int http://bugs.python.org/issue17309 opened by pkoning #17310: Ctypes callbacks shows problem on Windows Python (64bit) http://bugs.python.org/issue17310 opened by Matt.Clarke #17311: use distutils terminology in "PyPI package display" section http://bugs.python.org/issue17311 opened by chris.jerdonek #17312: test_aifc doesn't clean up after itself http://bugs.python.org/issue17312 opened by chris.jerdonek #17314: Stop using imp.find_module() in multiprocessing http://bugs.python.org/issue17314 opened by brett.cannon #17315: test_posixpath doesn't clean up after itself http://bugs.python.org/issue17315 opened by chris.jerdonek #17316: Add Django 1.5 to benchmarks http://bugs.python.org/issue17316 opened by brett.cannon #17317: Benchmark driver should calculate actual benchmark count in -h http://bugs.python.org/issue17317 opened by brett.cannon #17318: xml.sax and xml.dom fetch DTDs by default http://bugs.python.org/issue17318 opened by rsandwick3 #17319: http.server.BaseHTTPRequestHandler send_response_only doesn't http://bugs.python.org/issue17319 opened by karlcow #17320: os.path.abspath in window7, return error http://bugs.python.org/issue17320 opened by xiaowei.py #17322: urllib.request add_header() currently allows trailing spaces ( http://bugs.python.org/issue17322 opened by karlcow #17323: Disable [X refs, Y blocks] ouput in debug builds http://bugs.python.org/issue17323 opened by ezio.melotti #17324: SimpleHTTPServer serves files even if the URL has a trailing s http://bugs.python.org/issue17324 opened by larry #17325: improve organization of the PyPI distutils docs http://bugs.python.org/issue17325 opened by chris.jerdonek #17326: Windows build docs still referring to VS 2008 in 3.3 http://bugs.python.org/issue17326 opened by cito #17327: Add PyDict_GetItemSetDefault() as C-API for dict.setdefault() http://bugs.python.org/issue17327 opened by scoder #17328: Fix reference leak in dict_setdefault() in case of resize fail http://bugs.python.org/issue17328 opened by scoder #17329: Document unittest.SkipTest http://bugs.python.org/issue17329 opened by brett.cannon #17296: Cannot unpickle classes derived from 'Exception' http://bugs.python.org/issue17296 opened by Andreas.Hausmann Most recent 15 issues with no replies (15) ========================================== #17329: Document unittest.SkipTest http://bugs.python.org/issue17329 #17328: Fix reference leak in dict_setdefault() in case of resize fail http://bugs.python.org/issue17328 #17327: Add PyDict_GetItemSetDefault() as C-API for dict.setdefault() http://bugs.python.org/issue17327 #17326: Windows build docs still referring to VS 2008 in 3.3 http://bugs.python.org/issue17326 #17325: improve organization of the PyPI distutils docs http://bugs.python.org/issue17325 #17320: os.path.abspath in window7, return error http://bugs.python.org/issue17320 #17319: http.server.BaseHTTPRequestHandler send_response_only doesn't http://bugs.python.org/issue17319 #17317: Benchmark driver should calculate actual benchmark count in -h http://bugs.python.org/issue17317 #17316: Add Django 1.5 to benchmarks http://bugs.python.org/issue17316 #17315: test_posixpath doesn't clean up after itself http://bugs.python.org/issue17315 #17312: test_aifc doesn't clean up after itself http://bugs.python.org/issue17312 #17310: Ctypes callbacks 
shows problem on Windows Python (64bit) http://bugs.python.org/issue17310 #17309: __bytes__ doesn't work in subclass of int http://bugs.python.org/issue17309 #17308: Dialog.py crashes when putty Window resized http://bugs.python.org/issue17308 #17306: Improve the way abstract base classes are shown in help() http://bugs.python.org/issue17306 Most recent 15 issues waiting for review (15) ============================================= #17328: Fix reference leak in dict_setdefault() in case of resize fail http://bugs.python.org/issue17328 #17327: Add PyDict_GetItemSetDefault() as C-API for dict.setdefault() http://bugs.python.org/issue17327 #17325: improve organization of the PyPI distutils docs http://bugs.python.org/issue17325 #17314: Stop using imp.find_module() in multiprocessing http://bugs.python.org/issue17314 #17307: HTTP PUT request Example http://bugs.python.org/issue17307 #17296: Cannot unpickle classes derived from 'Exception' http://bugs.python.org/issue17296 #17293: uuid.getnode() MAC address on AIX http://bugs.python.org/issue17293 #17288: cannot jump from a return after setting f_lineno http://bugs.python.org/issue17288 #17284: create mercurial section in devguide's committing.rst http://bugs.python.org/issue17284 #17278: SIGSEGV in _heapqmodule.c http://bugs.python.org/issue17278 #17277: incorrect line numbers in backtrace after removing a trace fun http://bugs.python.org/issue17277 #17272: request.full_url: unexpected results on assignment http://bugs.python.org/issue17272 #17269: getaddrinfo segfaults on OS X when provided with invalid argum http://bugs.python.org/issue17269 #17267: datetime.time support for '+' and 'now' http://bugs.python.org/issue17267 #17264: Update Building C and C++ Extensions with distutils documentat http://bugs.python.org/issue17264 Top 10 most discussed issues (10) ================================= #17263: crash when tp_dealloc allows other threads http://bugs.python.org/issue17263 25 msgs #14468: Update cloning guidelines in devguide http://bugs.python.org/issue14468 18 msgs #16930: mention limitations and/or alternatives to hg graft http://bugs.python.org/issue16930 13 msgs #17223: Initializing array.array with unicode type code and buffer seg http://bugs.python.org/issue17223 13 msgs #12641: Remove -mno-cygwin from distutils http://bugs.python.org/issue12641 11 msgs #10572: Move test sub-packages to Lib/test http://bugs.python.org/issue10572 10 msgs #16121: shlex.shlex.error_leader() reports incorrect line number http://bugs.python.org/issue16121 10 msgs #17267: datetime.time support for '+' and 'now' http://bugs.python.org/issue17267 10 msgs #15083: Rewrite ElementTree tests in a cleaner and safer way http://bugs.python.org/issue15083 9 msgs #15305: Test harness unnecessarily disambiguating twice http://bugs.python.org/issue15305 8 msgs Issues closed (37) ================== #5033: setup.py crashes if sqlite version contains 'beta' http://bugs.python.org/issue5033 closed by petri.lehtinen #8936: webbrowser regression on windows http://bugs.python.org/issue8936 closed by terry.reedy #10886: Unhelpful backtrace for multiprocessing.Queue http://bugs.python.org/issue10886 closed by neologix #12749: lib re cannot match non-BMP ranges (all versions, all builds) http://bugs.python.org/issue12749 closed by ezio.melotti #13684: httplib tunnel infinite loop http://bugs.python.org/issue13684 closed by pitrou #14720: sqlite3 microseconds http://bugs.python.org/issue14720 closed by petri.lehtinen #15132: Let unittest.TestProgram()'s defaultTest argument be a 
list http://bugs.python.org/issue15132 closed by petri.lehtinen #16403: update distutils docs to say that maintainer replaces author http://bugs.python.org/issue16403 closed by petri.lehtinen #16406: move the "Uploading Packages" section to distutils/packageinde http://bugs.python.org/issue16406 closed by chris.jerdonek #16695: Clarify fnmatch & glob docs about the handling of leading "."s http://bugs.python.org/issue16695 closed by petri.lehtinen #16935: unittest should understand SkipTest at import time during test http://bugs.python.org/issue16935 closed by ezio.melotti #17079: Fix test discovery for test_ctypes.py http://bugs.python.org/issue17079 closed by ezio.melotti #17082: Fix test discovery for test_dbm*.py http://bugs.python.org/issue17082 closed by ezio.melotti #17130: Add runcall() function to profile.py and cProfile.py http://bugs.python.org/issue17130 closed by eric.araujo #17137: Malfunctioning compiled code in Python 3.3 x64 http://bugs.python.org/issue17137 closed by haypo #17197: c/profile refactoring http://bugs.python.org/issue17197 closed by giampaolo.rodola #17217: Fix test discovery for test_format.py on Windows http://bugs.python.org/issue17217 closed by ezio.melotti #17220: Little enhancements of _bootstrap.py http://bugs.python.org/issue17220 closed by serhiy.storchaka #17249: reap threads in test_capi http://bugs.python.org/issue17249 closed by ezio.melotti #17275: io.BufferedWriter shows wrong type in argument error message http://bugs.python.org/issue17275 closed by r.david.murray #17280: path.basename and ntpath.basename functions returns an incorre http://bugs.python.org/issue17280 closed by eric.smith #17283: Lib/test/__main__.py should share code with regrtest.py http://bugs.python.org/issue17283 closed by chris.jerdonek #17287: slice inconsistency http://bugs.python.org/issue17287 closed by r.david.murray #17290: pythonw - loading cursor bug when launching scripts http://bugs.python.org/issue17290 closed by python-dev #17291: Login-data raising EOFError http://bugs.python.org/issue17291 closed by gregory.p.smith #17292: Autonumbering in string.Formatter doesn't work http://bugs.python.org/issue17292 closed by eric.smith #17297: Issue with return in recursive functions http://bugs.python.org/issue17297 closed by r.david.murray #17300: ??rash when deleting deeply recursive iterator wrappers http://bugs.python.org/issue17300 closed by serhiy.storchaka #17303: Fix test discovery for test_future* http://bugs.python.org/issue17303 closed by ezio.melotti #17304: Fix test discovery for test_hash.py http://bugs.python.org/issue17304 closed by ezio.melotti #17313: test_logging doesn't clean up after itself http://bugs.python.org/issue17313 closed by python-dev #17321: Better way to pass objects between imp.find_module() and imp.l http://bugs.python.org/issue17321 closed by brett.cannon #675976: mhlib does not obey MHCONTEXT env var http://bugs.python.org/issue675976 closed by r.david.murray #1709112: test_1686475 of test_os & pagefile.sys http://bugs.python.org/issue1709112 closed by ezio.melotti #694374: Recursive regular expressions http://bugs.python.org/issue694374 closed by r.david.murray #846817: control-c is being sent to child thread rather than main http://bugs.python.org/issue846817 closed by r.david.murray #783528: Inconsistent results with super and __getattribute__ http://bugs.python.org/issue783528 closed by r.david.murray From fwierzbicki at gmail.com Fri Mar 1 18:49:28 2013 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Fri, 1 Mar 
2013 09:49:28 -0800 Subject: [Python-Dev] Merging Jython code into standard Lib [was Re: Python Language Summit at PyCon: Agenda] In-Reply-To: References: <20130228210023.089665ef@pitrou.net> Message-ID: On Thu, Feb 28, 2013 at 12:35 PM, Brett Cannon wrote: > > On Thu, Feb 28, 2013 at 3:17 PM, fwierzbicki at gmail.com > wrote: >> >> It would be nice in this particular case if there was a zlib.py that >> imported _zlib -- then it would be easy to shim in Jython's version, >> whether it is written in a .py file or in Java. > > > That should be fine as that is what we already do for accelerator modules > anyway. If you want to work towards having an equivalent of CPython's > Modules/ directory so you can ditch your custom Lib/ modules by treating > your specific code as accelerators I think we can move towards that > solution. Sounds great! I'm betting that implementing PEP 420 on Jython will make mixed Python/Java code easier to deal with, so _zlib.py might just end up living next to our Java code. So deleting Jython's Lib/ may still be an option. -Frank From solipsis at pitrou.net Fri Mar 1 19:38:03 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 1 Mar 2013 19:38:03 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> Message-ID: <20130301193803.4156607c@pitrou.net> On Fri, 1 Mar 2013 09:32:23 -0500 Barry Warsaw wrote: > > >On the other hand in some ways Jython is sort of like Python on a > >weird virtual OS that lets the real OS bleed through some. This may > >still need to be checked in that way (there's are still checks of >os.name == 'nt'> right?) > > Yeah, but that all ooooold code ;) Hmm, what do you mean? `os.name == 'nt'` is still the proper way to test that we're running on a Windows system (more accurately, over the Windows API). Regards Antoine. From brett at python.org Sat Mar 2 03:31:02 2013 From: brett at python.org (Brett Cannon) Date: Fri, 1 Mar 2013 21:31:02 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders Message-ID: As of right now, importlib keeps a cache of what is in a directory for its file finder instances. It uses mtime on the directory to try and detect when it has changed to know when to refresh the cache. But thanks to mtime granularities of up to a second, it is only a heuristic that isn't totally reliable, especially across filesystems on different OSs. This is why importlib.invalidate_caches() came into being. If you look in our test suite you will see it peppered around where a module is created on the fly to make sure that mtime granularity isn't a problem. But it somewhat negates the point of the mtime heuristic when you have to make this function call regardless to avoid potential race conditions. http://bugs.python.org/issue17330 originally suggested trying to add another heuristic to determine when to invalidate the cache. But even with the suggestion it's still iffy and in no way foolproof. So the current idea is to just drop the invalidation heuristic and go full-blown reliance on calls to importlib.invalidate_caches() as necessary. This makes code more filesystem-agnostic and protects people from hard-to-detect errors when importlib only occasionally doesn't detect new modules (I know it drove me nuts for a while when the buildbots kept failing sporadically and only on certain OSs). 
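To be concrete about what full reliance looks like for the rare code that does generate modules at runtime, the dance is roughly this (hypothetical helper, not code lifted from the test suite):

    import importlib
    import os
    import sys

    def write_and_import(name, source, directory):
        # Write the module out, then import it from the same process.
        with open(os.path.join(directory, name + '.py'), 'w') as f:
            f.write(source)
        if directory not in sys.path:
            sys.path.insert(0, directory)
        # Today this call papers over the mtime granularity problem; with
        # the heuristic gone it simply becomes the documented requirement.
        importlib.invalidate_caches()
        return importlib.import_module(name)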
I would have just made the change but Antoine wanted it brought up here first to make sure that no one was heavily relying on the current setup. So if you have a good, legitimate reason to keep the reliance on mtime for cache invalidation please speak up. But since the common case will never care about any of this (how many people generate modules on the fly to being with?) and to be totally portable you need to call importlib.invalidate_caches() anyway, it's going to take a lot to convince me to keep it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Sat Mar 2 08:40:37 2013 From: greg at krypto.org (Gregory P. Smith) Date: Fri, 1 Mar 2013 23:40:37 -0800 Subject: [Python-Dev] cffi in stdlib In-Reply-To: References: Message-ID: On Wed, Feb 27, 2013 at 7:57 AM, Eli Bendersky wrote: > > > I read the cffi docs once again and went through some of the examples. I >> > want to divide this to two topics. >> > >> > One is what you call the "ABI" level. IMHO, it's hands down superior to >> > ctypes. Your readdir demo demonstrates this very nicely. I would >> definitely >> > want to see this in the stdlib as an alternative way to interface to C >> > shared objects & DLLs. >> > >> > Two is what you call the "API" level, which is where my opinion becomes >> > mixed. Some things just don't feel right to me: >> > >> > 1. Tying in a C compiler into the flow of a program. I'm not sure >> whether we >> > have precedents for it in the stdlib. Does this work on Windows where >> > libraries and DLLs are usually built with MSVC? >> > >> >> Yes. Precedent in the stdlib is really the C API. All the same rules >> apply (including build and ship a dll). >> > > So would you say that the main use of the API level is provide an > alternative for writing C API code to interface to C libraries. IOW, it's > in competition with Swig? > I'd hardly call it competition. The primary language I interface with is C++ and cffi appears not see that giant elephant in the room (it'd need to use clang for parsing if it were going to do that)... The goal is good, but being C only isn't super exciting to me. Would there be a goal of using cffi to replace C extension module code in the standard library with cffi based versions instead of hand written CPython C API code? If not, why not? (and what does that say about its limitations or practical it is?) -0.5 from me. My inclination is not to add this to the standard library. But even if it were to be added, it sounds like others are coming up with questions and reasons why it isn't yet ready (always the first important step to seriously considering inclusion). -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Sat Mar 2 09:01:25 2013 From: greg at krypto.org (Gregory P. Smith) Date: Sat, 2 Mar 2013 00:01:25 -0800 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> Message-ID: On Thu, Feb 28, 2013 at 1:15 AM, Nick Coghlan wrote: > On Thu, Feb 28, 2013 at 1:37 PM, Barry Warsaw wrote: > > On Feb 27, 2013, at 11:33 AM, fwierzbicki at gmail.com wrote: > >>The easy part for Jython is pushing some of our "if is_jython:" stuff > >>into the appropriate spots in CPython's Lib/. > > > > I wonder if there isn't a better way to do this than sprinkling > is_jython, > > is_pypy, is_ironpython, is_thenextbigthing all over the code base. 
I > have no > > bright ideas here, but it seems like a feature matrix would be a better > way to > > go than something that assumes a particular Python implementation has a > > particular feature set (which may change in the future). > > Yes, avoiding that kind of thing is a key motivation for > sys.implementation. Any proposal for "is_jython" blocks should instead > be reformulated as a proposal for new sys.implementation attributes. > I kind of wish there were an assert-like magic "if __debug__:" type of mechanism behind this so that blocks of code destined solely for a single interpreter won't be seen in the code objects or .pyc's of non-target interpreters. That idea obviously isn't fleshed out but i figure i'd better plant the seed... It'd mean smaller code objects and less bloat from constants (docstrings for one implementation vs another, etc) being in memory. Taken further, this could even be extended beyond implementations to platforms as we have some standard library code with alternate definitions within one file for windows vs posix, etc. Antoine's point about code like that being untestable by most CPython developers is valid. I'd want --with-pydebug builds to disable any parsing -> code object exclusions to at least make sure its syntax doesn't rot but that still doesn't _test_ it unless we get someone maintains reliable buildbots for every implementation using this common stdlib. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Mar 2 09:48:42 2013 From: arigo at tunes.org (Armin Rigo) Date: Sat, 2 Mar 2013 09:48:42 +0100 Subject: [Python-Dev] cffi in stdlib In-Reply-To: References: Message-ID: Hi Gregory, On Sat, Mar 2, 2013 at 8:40 AM, Gregory P. Smith wrote: > > On Wed, Feb 27, 2013 at 7:57 AM, Eli Bendersky wrote: >> So would you say that the main use of the API level is provide an >> alternative for writing C API code to interface to C libraries. IOW, it's in >> competition with Swig? > > I'd hardly call it competition. The primary language I interface with is C++ > and cffi appears not see that giant elephant in the room I don't think it's in competition with Swig, which does C++. There are certain workloads in which C++ is the elephant in the room; we don't address such workloads. If you want some more motivation, the initial goal was to access the large number of standard Linux/Posix libraries that are C (or have a C interface), but are too hard to access for ctypes (macros, partially-documented structure types, #define for constants, etc.). For this goal, it works great. > (it'd need to use clang for parsing if it were going to do that)... I fear parsing is merely the tip of the iceberg when we talk about interfacing with C++. A bient?t, Armin. From stefan_ml at behnel.de Sat Mar 2 10:10:24 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 02 Mar 2013 10:10:24 +0100 Subject: [Python-Dev] cffi in stdlib In-Reply-To: References: Message-ID: Hi, looks like no-one's taken over the role of the Advocatus Diaboli yet. =) Maciej Fijalkowski, 26.02.2013 16:13: > I would like to discuss on the language summit a potential inclusion > of cffi[1] into stdlib. This is a project Armin Rigo has been working > for a while, with some input from other developers. It seems that the > main reason why people would prefer ctypes over cffi these days is > "because it's included in stdlib", which is not generally the reason I > would like to hear. 
Our calls to not use C extensions and to use an > FFI instead has seen very limited success with ctypes and quite a lot > more since cffi got released. The API is fairly stable right now with > minor changes going in and it'll definitely stabilize until Python 3.4 > release. You say that "the API is fairly stable". What about the implementation? Will users want to install a new version next to the stdlib one in a couple of months, just because there was a bug in the parser in Python 3.4 that you still need to support because there's code that depends on it, or because there is this new feature that is required to make it work with library X, or ... ? What's the upgrade path in that case? How will you support this? What long-term guarantees do you give to users of the stdlib package? Or, in other words, will the normal fallback import for cffi look like this: try: import stdlib_cffi except ImportError: import external_cffi or will the majority of users end up preferring this order: try: import external_cffi except ImportError: import stdlib_cffi > * Work either at the level of the ABI (Application Binary Interface) > or the API (Application Programming Interface). Usually, C libraries > have a specified C API but often not an ABI (e.g. they may document a > "struct" as having at least these fields, but maybe more). (ctypes > works at the ABI level, whereas Cython and native C extensions work at > the API level.) Ok, so there are cases where you need a C compiler installed in order to support the API. Which means that it will be a very complicated thing for users to get working under Windows, for example, which then means that users are actually best off not using the API-support feature if they want portable code. Wouldn't it be simpler to target Windows with a binary than with dynamically compiled C code? Is there a way to translate an API description into a static ABI description for a known platform ahead of time, or do I have to implement this myself in a separate ABI code path by figuring out a suitable ABI description myself? In which cases would users choose to use the C API support? And, is this dependency explicit or can I accidentally run into the dependency on a C compiler for my code without noticing? > * We try to be complete. For now some C99 constructs are not > supported, but all C89 should be, including macros (and including > macro "abuses", which you can manually wrap in saner-looking C > functions). Ok, so the current status actually is that it's *not* complete, and that future versions will have to catch up in terms of C compatibility. So, why do you think it's a good time to get it into the stdlib *now*? > * We attempt to support both PyPy and CPython, with a reasonable path > for other Python implementations like IronPython and Jython. You mentioned that it's fast under PyPy and slow under CPython, though. What would be the reason to use it under CPython then? Some of the projects that are using it (you named a couple) also have equivalent (or maybe more or less so) native implementations for CPython already. Do you have any benchmarks available that compare those to their cffi versions under CPython? Is the slowdown within any reasonable bounds? Others have already mentioned the lack of C++ support. It's ok to say that you deliberately only want to support C, but it's also true that that's a substantial restriction.
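(For readers who haven't tried cffi: the ABI/API distinction I'm asking about looks roughly like this, going by the documentation -- an untested sketch:

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("int printf(const char *format, ...);")

    # ABI level: the library is opened at runtime, no C compiler involved.
    C = ffi.dlopen(None)   # the standard C library, POSIX only
    C.printf(b"hello from the ABI level\n")

    # API level: verify() generates and compiles a small extension module,
    # so this is the point where a C compiler must be available.
    lib = ffi.verify("#include <stdio.h>")

It's the second half that my Windows and packaging questions above are about.)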
Stefan From arigo at tunes.org Sat Mar 2 12:13:50 2013 From: arigo at tunes.org (Armin Rigo) Date: Sat, 2 Mar 2013 12:13:50 +0100 Subject: [Python-Dev] cffi in stdlib In-Reply-To: References: Message-ID: Hi Stefan, On Sat, Mar 2, 2013 at 10:10 AM, Stefan Behnel wrote: > You say that "the API is fairly stable". What about the implementation? > Will users want to install a new version next to the stdlib one in a couple > of months, I think that the implementation is fairly stable as well. The only place I can foresee some potential changes is in details like the location of temporary files, for example, which needs to be discussed (probably with people from python-dev too) as some point. > just because there was a bug in the parser in Python 3.4 that > you still need to support because there's code that depends on it, or > because there is this new feature that is required to make it work with > library X, or ... ? What's the upgrade path in that case? How will you > support this? What long-term guarantees do you give to users of the stdlib > package? I think these are general questions for any package that ends up in the stdlib. In the case of CFFI, it is now approaching a stability point. This is also because we are going to integrate it with the stdlib of PyPy soon. Bugs in the parser have not been found so far, but if there is any, we will treat it like we treat any other bug in the stdlib. For that matter, there is actually no obvious solution for the user either: he generally has to wait for the next micro release to have the bug fixed. > Or, in other words, will the normal fallback import for cffi look like this: > > try: import stdlib_cffi > except ImportError: import external_cffi > > or will the majority of users end up prefering this order: > > try: import external_cffi > except ImportError: import stdlib_cffi I would rather drop the external CFFI entirely, or keep it only to provide backports to older Python versions. I personally see no objection to call the stdlib one "cffi" too (but any other name is fine as well). > ... Wouldn't it be simpler to target Windows with a binary than > with dynamically compiled C code? Is there a way to translate an API > description into a static ABI description for a known platform ahead of > time, or do I have to implement this myself in a separate ABI code path by > figuring out a suitable ABI description myself? No, I believe that you missed this point: when you make "binary" distributions of a package with setup.py, it precompiles a library for CFFI too. So yes, you need a C compiler on machines where you develop the program, but not on machines where you install it. It's the same needs as when writing custom C extension modules by hand. > In which cases would users choose to use the C API support? And, is this > dependency explicit or can I accidentally run into the dependency on a C > compiler for my code without noticing? A C compiler is clearly required: this is if and only if you call the function verify(), and pass it arguments that are not the same ones as the previous time (i.e. it's not in the cache). >> * We try to be complete. For now some C99 constructs are not >> supported, but all C89 should be, including macros (and including >> macro ?abuses?, which you can manually wrap in saner-looking C >> functions). > > Ok, so the current status actually is that it's *not* complete, and that > future versions will have to catch up in terms of C compatibility. So, why > do you think it's a good time to get it into the stdlib *now*? 
To be honest I don't have a precise list of C99 constructs missing. I used to know of a few of them, but these were eventually supported. It is unlikely to get completed, or if it is, a fairly slow process should be fine --- just like a substantial portion of the stdlib, which gets occasional updates from one Python version to the next. > You mentioned that it's fast under PyPy and slow under CPython, though. > What would be the reason to use it under CPython then? The reason is just ease of use. I pretend that it takes less effort (and little C knowledge), and is less prone to bugs and leaks, to write a perfectly working prototype of a module to access a random C library. I do not pretend that you'll get the top-most performance. For a lot of cases performance doesn't matter; and when it does, on CPython, you can really write a C extension module by hand (as long as you make sure to keep around the CFFI version for use by PyPy). This is how I see it, anyway. The fact that we are busy rewriting existing native well-tested CPython extensions with CFFI --- this is really only of use for PyPy. > Others have already mentioned the lack of C++ support. It's ok to say that > you deliberately only want to support C, but it's also true that that's a > substantial restriction. I agree that it's a restriction, or rather a possible extension that is not done. I don't have plans to do it myself. Please also keep in mind that we pitch CFFI as a better ctypes, not as the ultimate tool to access any foreign language. A bientôt, Armin. From mkawalec at lavabit.com Sat Mar 2 12:48:05 2013 From: mkawalec at lavabit.com (Michal Kawalec) Date: Sat, 02 Mar 2013 12:48:05 +0100 Subject: [Python-Dev] Possible bug in socket.py: connection reset by peer Message-ID: <5131E6F5.4090709@lavabit.com> Hello, I am experiencing an odd infrequent bug in Python 2.7.3 with GIL enabled. For some files pushed over TCP socket I get 'connection reset by peer' and clients only receive a randomly long part of the file. This situation occurs only in ~0.1% of cases but if it happens for a given file it keeps on always occurring for that file. The server host is 2-cored Linux 3.8.1 on a VMware VM. The problem is mitigated by adding time.sleep(0.001) just before the portion of data is being pushed through the socket. It also seems to be known for a long time [1]. So my question is - is it something that I can expect to be fixed in the future Python releases? Michal [1] http://stackoverflow.com/questions/441374/why-am-i-seeing-connection-reset-by-peer-error From solipsis at pitrou.net Sat Mar 2 13:25:19 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 2 Mar 2013 13:25:19 +0100 Subject: [Python-Dev] Possible bug in socket.py: connection reset by peer References: <5131E6F5.4090709@lavabit.com> Message-ID: <20130302132519.6e142983@pitrou.net> On Sat, 02 Mar 2013 12:48:05 +0100 Michal Kawalec wrote: > I am experiencing an odd infrequent bug in Python 2.7.3 with GIL > enabled. For some files pushed over TCP socket I get 'connection reset > by peer' and clients only receive a randomly long part of the file. Why do you think it is a bug in Python? Start by doing a Wireshark capture of your traffic and find out what really happens. Regards Antoine.
From ncoghlan at gmail.com Sat Mar 2 16:08:40 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 01:08:40 +1000 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: On Sat, Mar 2, 2013 at 1:24 AM, Stefan Bucur wrote: > Hi, > > I'm working on an automated bug finding tool that I'm trying to apply on the > Python interpreter code (version 2.7.3). Because of early prototype > limitations, I needed to disable string interning in stringobject.c. More > precisely, I modified the PyString_FromStringAndSize and PyString_FromString > to no longer check for the null and single-char cases, and create instead a > new string every time (I can send the patch if needed). > > However, after applying this modification, when running "make test" I get a > segfault in the test___all__ test case. > > Before digging deeper into the issue, I wanted to ask here if there are any > implicit assumptions about string identity and interning throughout the > interpreter implementation. For instance, are two single-char strings having > the same content supposed to be identical objects? > > I'm assuming that it's either this, or some refcount bug in the interpreter > that manifests only when certain strings are no longer interned and thus > have a higher chance to get low refcount values. In theory, interning is supposed to be a pure optimisation, but it wouldn't surprise me if there are cases that assume the described strings are always interned (especially the null string case). Our test suite would never detect such bugs, as we never disable the interning. Whether or not we're interested in fixing such bugs would depend on the size of the patches needed to address them. From our point of view, such bugs are purely theoretical (as the assumption is always valid in an unpatched CPython build), so if the problem is too hard to diagnose or fix, we're more likely to declare that interning of at least those kinds of string values is required for correctness when creating modified versions of CPython. I'm not sure what kind of analyser you are writing, but if it relates to the CPython C API, you may be interested in https://gcc-python-plugin.readthedocs.org/en/latest/cpychecker.html Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 2 16:17:35 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 01:17:35 +1000 Subject: [Python-Dev] Merging Jython code into standard Lib [was Re: Python Language Summit at PyCon: Agenda] In-Reply-To: References: <20130228210023.089665ef@pitrou.net> Message-ID: On Fri, Mar 1, 2013 at 6:35 AM, Brett Cannon wrote: > > > > On Thu, Feb 28, 2013 at 3:17 PM, fwierzbicki at gmail.com > wrote: >> >> On Thu, Feb 28, 2013 at 12:00 PM, Antoine Pitrou >> wrote: >> > IMHO, we should remove the plat-* directories, they are completely >> > unmaintained, undocumented, and serve no useful purpose. >> Oh I didn't know that - so definitely adding to that is right out :) >> >> Really for cases like Jython's zlib.py (no useful code for CPython) I >> don't have any trouble keeping them entirely in Jython. It just would >> have been fun to delete our Lib/ :) >> >> It would be nice in this particular case if there was a zlib.py that >> imported _zlib -- then it would be easy to shim in Jython's version, >> whether it is written in a .py file or in Java. > > > That should be fine as that is what we already do for accelerator modules > anyway. 
If you want to work towards having an equivalent of CPython's > Modules/ directory so you can ditch your custom Lib/ modules by treating > your specific code as accelerators I think we can move towards that > solution. I'd go further and say we *should* move to that solution. Here's an interesting thought: for pure C modules without a Python implementation, we can migrate to this architecture even *without* creating pure Python equivalents. All we should have to do is change the test of the pure Python version to be that the module *can't be imported* without the accelerator, rather than the parallel tests that we normally implement when there's a pure Python alternative to the accelerated version. (There would likely still be some mucking about to ensure robust pickle compatibility, since that leaks implementation details about exact module names if you're not careful) PyPy, Jython, IronPython would then have RPython, Java, C# versions, while CPython has a C version, and the test suite should work regardless. (If PyPy have equivalents in Python, they can either push them upstream or overwrite the "import the accelerator" version). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Sat Mar 2 16:28:57 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 2 Mar 2013 16:28:57 +0100 Subject: [Python-Dev] Merging Jython code into standard Lib [was Re: Python Language Summit at PyCon: Agenda] In-Reply-To: References: <20130228210023.089665ef@pitrou.net> Message-ID: <20130302162857.30a847e3@pitrou.net> On Sun, 3 Mar 2013 01:17:35 +1000 Nick Coghlan wrote: > > I'd go further and say we *should* move to that solution. > > Here's an interesting thought: for pure C modules without a Python > implementation, we can migrate to this architecture even *without* > creating pure Python equivalents. All we should have to do is change > the test of the pure Python version to be that the module *can't be > imported* without the accelerator, rather than the parallel tests that > we normally implement when there's a pure Python alternative to the > accelerated version. (There would likely still be some mucking about > to ensure robust pickle compatibility, since that leaks implementation > details about exact module names if you're not careful) What benefit would this have? Current situation: each Python implementation has its own implementation of the zlib module (as a C module for CPython, etc.). New situation: all Python implementations share a single, mostly empty, zlib.py file. Each Python implementation has its own implementation of the _zlib module (as a C module for CPython, etc.) which is basically the same as the former zlib module. Regards Antoine. From solipsis at pitrou.net Sat Mar 2 16:31:36 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 2 Mar 2013 16:31:36 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults References: Message-ID: <20130302163136.18b78885@pitrou.net> On Fri, 1 Mar 2013 16:24:42 +0100 Stefan Bucur wrote: > > However, after applying this modification, when running "make test" I get a > segfault in the test___all__ test case. > > Before digging deeper into the issue, I wanted to ask here if there are any > implicit assumptions about string identity and interning throughout the > interpreter implementation. For instance, are two single-char strings > having the same content supposed to be identical objects?
From a language POV, no, but inside a specific interpreter such as CPython it may be a reasonable expectation. > I'm assuming that it's either this, or some refcount bug in the interpreter > that manifests only when certain strings are no longer interned and thus > have a higher chance to get low refcount values. Indeed, if it's a real bug it would be nice to get it fixed :-) Regards Antoine. From ncoghlan at gmail.com Sat Mar 2 16:36:04 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 01:36:04 +1000 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: Message-ID: On Sat, Mar 2, 2013 at 12:31 PM, Brett Cannon wrote: > As of right now, importlib keeps a cache of what is in a directory for its > file finder instances. It uses mtime on the directory to try and detect when > it has changed to know when to refresh the cache. But thanks to mtime > granularities of up to a second, it is only a heuristic that isn't totally > reliable, especially across filesystems on different OSs. > > This is why importlib.invalidate_caches() came into being. If you look in > our test suite you will see it peppered around where a module is created on > the fly to make sure that mtime granularity isn't a problem. But it somewhat > negates the point of the mtime heuristic when you have to make this function > call regardless to avoid potential race conditions. > > http://bugs.python.org/issue17330 originally suggested trying to add another > heuristic to determine when to invalidate the cache. But even with the > suggestion it's still iffy and in no way foolproof. > > So the current idea is to just drop the invalidation heuristic and go > full-blown reliance on calls to importlib.invalidate_caches() as necessary. > This makes code more filesystem-agnostic and protects people from > hard-to-detect errors when importlib only occasionally doesn't detect new > modules (I know it drove me nuts for a while when the buildbots kept failing > sporadically and only on certain OSs). > > I would have just made the change but Antoine wanted it brought up here > first to make sure that no one was heavily relying on the current setup. So > if you have a good, legitimate reason to keep the reliance on mtime for > cache invalidation please speak up. But since the common case will never > care about any of this (how many people generate modules on the fly to being > with?) and to be totally portable you need to call > importlib.invalidate_caches() anyway, it's going to take a lot to convince > me to keep it. I think you should keep it. A long running service that periodically scans the importers for plugins doesn't care if modules take a few extra seconds to show up, it just wants to see them eventually. Installers (or filesystem copy or move operations!) have no way to inform arbitrary processes that new files have been added. It's that case where the process that added the modules is separate from the process scanning for them, and the communication is one way, where the heuristic is important. Explicit invalidation only works when they're the *same* process, or when they're closely coupled so the adding process can tell the scanning process to invalidate the caches (our test suite is mostly the former although there are a couple of cases of the latter). 
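To make it concrete, the pattern I'm thinking of looks roughly like this (names invented, obviously):

    import importlib, time

    def scan_for_plugins(expected_names):
        found = {}
        for name in expected_names:
            try:
                # relies on the finder noticing files added since the last pass
                found[name] = importlib.import_module(name)
            except ImportError:
                pass  # not installed yet, try again on the next pass
        return found

    while True:
        plugins = scan_for_plugins(EXPECTED_PLUGINS)
        # ... hand any newly found plugins to the application ...
        time.sleep(60)

Today that loop eventually sees a plugin that an installer dropped into a directory on sys.path after the process started; with the heuristic gone it never will, unless importlib.invalidate_caches() is also called at the top of each scan.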
I have no problem with documenting invalidate_caches() as explicitly required for correctness when writing new modules which are to be read back by the same process, or when there is a feedback path between two processes that may be confusing if the cache invalidation is delayed. The implicit invalidation is only needed to pick up modules written by *another* process. In addition, it may be appropriate for importlib to offer a "write_module" method that accepts (module name, target path, contents). This would: 1. Allow in-process caches to be invalidated implicitly and selectively when new modules are created 2. Allow importers to abstract write access in addition to read access 3. Allow the import system to complain at time of writing if the desired module name and target path don't actually match given the current import system state. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 2 16:37:56 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 01:37:56 +1000 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: Message-ID: On Sun, Mar 3, 2013 at 1:36 AM, Nick Coghlan wrote: > It's that case where the process that added the modules is separate > from the process scanning for them, and the communication is one way, > where the heuristic is important. Explicit invalidation only works > when they're the *same* process, or when they're closely coupled so > the adding process can tell the scanning process to invalidate the > caches (our test suite is mostly the former although there are a > couple of cases of the latter). s/are/may be/ (I don't actually remember if there are or not off the top of my head) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Sat Mar 2 16:58:47 2013 From: brett at python.org (Brett Cannon) Date: Sat, 2 Mar 2013 10:58:47 -0500 Subject: [Python-Dev] Merging Jython code into standard Lib [was Re: Python Language Summit at PyCon: Agenda] In-Reply-To: <20130302162857.30a847e3@pitrou.net> References: <20130228210023.089665ef@pitrou.net> <20130302162857.30a847e3@pitrou.net> Message-ID: On Sat, Mar 2, 2013 at 10:28 AM, Antoine Pitrou wrote: > On Sun, 3 Mar 2013 01:17:35 +1000 > Nick Coghlan wrote: > > > > I'd go further and say we *should* move to that solution. > > > > Here's an interesting thought: for pure C modules without a Python > > implementation, we can migrate to this architecture even *without* > > creating pure Python equivalents. All we shou;d have to do is change > > the test of the pure Python version to be that the module *can't be > > imported* without the accelerator, rather than the parallel tests that > > we normally implement when there's a pure Python alternative to the > > accelerated version. (There would likely still be some mucking about > > to ensure robust pickle compatibility, since that leaks implementation > > details about exact module names if you're not careful) > > What benefit would this have? > > Current situation: each Python implementation has its own > implementation of the zlib module (as a C module for CPython, etc.). > > New situation: all Python implementations share a single, mostly empty, > zlib.py file. Each Python implementation has its own implementation of > the _zlib module (as a C module for CPython, etc.) which is basically > the same as the former zlib module. > Bare minimum? They all share the same module docstring. 
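Concretely, the shared file could be nearly empty -- something like this sketch (hypothetical, since nothing called _zlib actually exists today):

    # Lib/zlib.py, identical across implementations
    """Docstring shared by every implementation."""
    from _zlib import *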
But it could be extended to explicitly import only the public API into zlib.py, helping to prevent leaking interpreter-specific APIs by accident (obviously would still be available off of _zlib if people wanted them). -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Mar 2 17:16:28 2013 From: brett at python.org (Brett Cannon) Date: Sat, 2 Mar 2013 11:16:28 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: Message-ID: On Sat, Mar 2, 2013 at 10:36 AM, Nick Coghlan wrote: > On Sat, Mar 2, 2013 at 12:31 PM, Brett Cannon wrote: > > As of right now, importlib keeps a cache of what is in a directory for > its > > file finder instances. It uses mtime on the directory to try and detect > when > > it has changed to know when to refresh the cache. But thanks to mtime > > granularities of up to a second, it is only a heuristic that isn't > totally > > reliable, especially across filesystems on different OSs. > > > > This is why importlib.invalidate_caches() came into being. If you look in > > our test suite you will see it peppered around where a module is created > on > > the fly to make sure that mtime granularity isn't a problem. But it > somewhat > > negates the point of the mtime heuristic when you have to make this > function > > call regardless to avoid potential race conditions. > > > > http://bugs.python.org/issue17330 originally suggested trying to add > another > > heuristic to determine when to invalidate the cache. But even with the > > suggestion it's still iffy and in no way foolproof. > > > > So the current idea is to just drop the invalidation heuristic and go > > full-blown reliance on calls to importlib.invalidate_caches() as > necessary. > > This makes code more filesystem-agnostic and protects people from > > hard-to-detect errors when importlib only occasionally doesn't detect new > > modules (I know it drove me nuts for a while when the buildbots kept > failing > > sporadically and only on certain OSs). > > > > I would have just made the change but Antoine wanted it brought up here > > first to make sure that no one was heavily relying on the current setup. > So > > if you have a good, legitimate reason to keep the reliance on mtime for > > cache invalidation please speak up. But since the common case will never > > care about any of this (how many people generate modules on the fly to > being > > with?) and to be totally portable you need to call > > importlib.invalidate_caches() anyway, it's going to take a lot to > convince > > me to keep it. > > I think you should keep it. A long running service that periodically > scans the importers for plugins doesn't care if modules take a few > extra seconds to show up, it just wants to see them eventually. > Installers (or filesystem copy or move operations!) have no way to > inform arbitrary processes that new files have been added. > But if they are doing the scan they can also easily invalidate the caches before performing the scan. > > It's that case where the process that added the modules is separate > from the process scanning for them, and the communication is one way, > where the heuristic is important. Explicit invalidation only works > when they're the *same* process, or when they're closely coupled so > the adding process can tell the scanning process to invalidate the > caches (our test suite is mostly the former although there are a > couple of cases of the latter). 
> That's only true if the scanning process has no idea that another process is adding modules. If there is an expectation then it doesn't matter who added the file as you just assume cache invalidation is necessary. > > I have no problem with documenting invalidate_caches() as explicitly > required for correctness when writing new modules which are to be read > back by the same process, or when there is a feedback path between two > processes that may be confusing if the cache invalidation is delayed. > Already documented as such. > The implicit invalidation is only needed to pick up modules written by > *another* process. > > In addition, it may be appropriate for importlib to offer a > "write_module" method that accepts (module name, target path, > contents). This would: > > 1. Allow in-process caches to be invalidated implicitly and > selectively when new modules are created > I don't think that's necessary. If people don't want to blindly clear all caches for a file they can write the file, search the keys in sys.path_importer_cache for the longest prefix for the newly created file, and then call the invalidate_cache() method on that explicit finder. 2. Allow importers to abstract write access in addition to read access > That's heading down the virtual filesystem path which I don't want to go down any farther than I have to. The API is big enough as it is and the more entangled it gets the harder it is to change/fix, especially with the finders having a nice, small API compared to the loaders. > 3. Allow the import system to complain at time of writing if the > desired module name and target path don't actually match given the > current import system state. > I think that's more checking than necessary for a use case that isn't that common. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Mar 2 17:41:16 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 02:41:16 +1000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <31B54506-7B08-4E77-B966-A6171BBC3DA6@gmail.com> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <31B54506-7B08-4E77-B966-A6171BBC3DA6@gmail.com> Message-ID: On Fri, Mar 1, 2013 at 9:39 AM, Doug Hellmann wrote: > > On Feb 27, 2013, at 11:51 AM, Michael Foord wrote: > >> Hello all, >> >> PyCon, and the Python Language Summit, is nearly upon us. We have a good number of people confirmed to attend. If you are intending to come to the language summit but haven't let me know please do so. >> >> The agenda of topics for discussion so far includes the following: >> >> * A report on pypy status - Maciej and Armin >> * Jython and IronPython status reports - Dino / Frank >> * Packaging (Doug Hellmann and Monty Taylor at least) > > Since the time I suggested we add packaging to the agenda, Nick has set up a separate summit meeting for Friday evening. I don't know if it makes sense to leave this on the agenda for Wednesday or not. > > Nick, what do you think? I think it's definitely worth my taking some time to explain my goals for the Friday night session, and some broader things in terms of where I'd like to see packaging going, but a lot of the key packaging people aren't involved in Python language development *per se*, and hence won't be at the language summit. 
There's also one controversial point that *does* need to be raised at the summit: I would like to make distutils-sig the true authority for packaging standards, so we can stop cross-posting PEPs intended to apply to packaging in *current* versions of Python to python-dev. The split discussions suck, and most of the people that need to be convinced in order for packaging standards to be supported in current versions of Python aren't on python-dev, since it's a tooling issue rather than a language design issue. Standard lib support is necessary in the long run to provide a good "batteries included" experience, but it's *not* the way to create the batteries in the first place. Until these standards have been endorsed by the authors of *existing* packaging tools, proposing them for stdlib addition is premature, but has been perceived as necessary in the past due to the confused power structure. This means that those core developers that want a say in the future direction of packaging and distribution of Python software would need to be actively involved in the ongoing discussions on distutils-sig, rather than relying on being given an explicit invitation to weigh in at the last minute through a thread (or threads) on python-dev. The requirement that BDFL-delegates for packaging and distribution related PEPs also be experienced core developers will remain, however, as "suitable for future stdlib inclusion" is an important overarching requirement for packaging and distribution standards. Such delegates will just be expected to participate actively in distutils-sig *as well as* python-dev. Proposals for *actual* standard library updates (to bring it into line with updated packaging standards) would still be subject to python-dev discussion and authority (and would *not* have their Discussions-To header set). Such discussions aren't particularly relevant to most of the packaging tool developers, since the standard library version isn't updated frequently enough to be useful to them, and also isn't available on older Python releases, so python-dev is a more appropriate venue from both perspectives. At the moment, python-dev, catalog-sig and distutils-sig create an awkward trinity where decision making authority and the appropriate venues for discussion are grossly unclear. I consider this to be one of the key reasons that working on packaging issues has quite a high incidence of developer burnout - it's hard to figure out who needs to be convinced of what, so it's easy for the frustration levels to reach the "this just isn't worth the hassle" stage (especially when trying to bring python-dev members up to speed on discussions that may have taken months on distutils-sig, and when many of the details are awkward compromises forced by the need to support *existing* tools and development processes on older versions of Python). Under my proposal, the breakdown would be slightly clearer: distutils-sig: overall authority for packaging and distribution related standards, *including* the interfaces between index servers (such as PyPI) and automated tools. If a PEP has "Discussions-To" set to distutils-sig, announcements of new PEPs, new versions of those PEPs, *and* their acceptance or rejection should be announced there, and *not* on python-dev. The "Resolution" header will thus point to a distutils-sig post rather than a python-dev one. distutils-sig will focus on solutions that work for *current* versions of Python, while keeping in mind the need for future stdlib support. 
python-dev: authority over stdlib support for packaging and distribution standards, and the "batteries included" experience of interacting with those standards. Until a next generation distribution infrastructure is firmly established (which may involve years of running the legacy infrastructure and the next generation metadata 2.x based infrastructure in parallel), the stdlib will typically trail the upstream standards substantially, since many upstream enhancements will run afoul of the standard library's "no new features" rule, preventing their inclusion in maintenance releases of old Python versions. catalog-sig: authority over the PyPI index server (supported by the infrastructure SIG for actual operation of the service). Key people are expected to also participate in distutils-sig, as that is where the expected interface exposed to automated tools will be defined. How PyPI exposes the packaging and distribution standards to *users* through its web UI will be up to catalog-sig. The vehicle for *documenting* such a change in policy would be an update to PEP 1 to indicate that the "Discussions-To" header should always point to the list where any formal acceptance of rejection of the PEP would be announced. If preliminary discussions take place on a feeder list like python-ideas, or import-sig, or somewhere else, that will be declared as explicitly irrelevant from the PEP metadata point of view: the Discussions-To field would be documented as referring to the list that any Resolution field will eventually reference. Regards, NIck. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 2 17:49:13 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 02:49:13 +1000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> Message-ID: On Sat, Mar 2, 2013 at 6:01 PM, Gregory P. Smith wrote: > It'd mean smaller code objects and less bloat from constants (docstrings for > one implementation vs another, etc) being in memory. Taken further, this > could even be extended beyond implementations to platforms as we have some > standard library code with alternate definitions within one file for windows > vs posix, etc. To plant seeds in the opposite direction, as you're considering this, I suggest looking at: - environment markers in PEP 345 and 426 for conditional selection based on a constrained set of platform data - compatibility tags in PEP 425 (and consider how they could be used in relation to __pycache__ and bytecode-only distribution of platform specific files) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 2 17:58:18 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 02:58:18 +1000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On Fri, Mar 1, 2013 at 7:41 PM, Stefan Behnel wrote: > Michael Foord, 27.02.2013 17:51: > It's also true that many of the topics above aren't really interesting for > us, because we just inherit them with CPython, e.g. stdlib changes. 
> Packaging is only relevant as far as it impacts the distribution of binary > extensions, and the main changes appear to be outside of that area (which > doesn't mean it's not truly wonderful that they are happening, Python > packaging has seen a lot of great improvements during the last years and > I'm very happy to see it getting even better). I'm puzzled by this one. Did you leave out PEP 427 (the wheel format), because it's already approved, and hence not likely to be discussed much at the summit, or because you don't consider it to impact the distribution of binary extensions (which would be rather odd, given the nature of the PEP and the wheel format...) > > Interpreter initialisation would be interesting and Cython could > potentially help in some spots here by making code easier to maintain and > optimise, for example. We've had this discussion for the importlib > bootstrapping and I'm sure there's more that could be done. It's sad to see > so much C-level work go into areas that really don't need to be that low-level. Cython's notion of embedding is the exact opposite of CPython's, so I'm not at all clear on how Cython could help with PEP 432 at all. > I'm not so happy with the argument clinic, but that's certainly also > because I'm biased. I've written the argument unpacking code for Cython > some years ago, so it's not surprising that I'm quite happy with that and > fail to see the need for a totally new DSL *and* a totally new > implementation, especially with its mapping to the slowish ParseTuple*() > C-API functions. I've also not seen a good argument why the existing Py3 > function signatures can't do what the proposed DSL tries to achieve. They'd > at least make it clear that the intention is to make things more > Python-like, and would at the same time provide the documentation. That's why Stefan Krah is writing a competing PEP - a number of us already agree with you, and think the case needs to be made for choosing something completely different like Argument Clinic (especially given Guido's expressed tolerance for the idea of "/" as a possible marker to indicate that the preceding parameters only support positional arguments - that was in the context of Python discussion where it was eventually deemed "not necessary", but becomes interesting again in a C API signature discussion) > And I'd really like to see a CPython summit > happen at some point. There's so much interesting stuff going on in that > area that it's worth getting some people together to move these things forward. Yes, a CPython runtime summit some year would be interesting. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Sat Mar 2 18:18:46 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 2 Mar 2013 19:18:46 +0200 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: >> And I'd really like to see a CPython summit >> happen at some point. There's so much interesting stuff going on in that >> area that it's worth getting some people together to move these things forward. > > Yes, a CPython runtime summit some year would be interesting. > > Cheers, > Nick. I don't see why CPython-specific stuff can't be discussed on the language summit. After all, everyone can be not interested in a topic X or topic Y. I would be even more than happy to contribute my knowledge about building VMs w.r.t. CPython implementation as much as I could. 
Cheers, fijal From ncoghlan at gmail.com Sat Mar 2 18:24:01 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 3 Mar 2013 03:24:01 +1000 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: Message-ID: On Sun, Mar 3, 2013 at 2:16 AM, Brett Cannon wrote: > On Sat, Mar 2, 2013 at 10:36 AM, Nick Coghlan wrote: >> I think you should keep it. A long running service that periodically >> scans the importers for plugins doesn't care if modules take a few >> extra seconds to show up, it just wants to see them eventually. >> Installers (or filesystem copy or move operations!) have no way to >> inform arbitrary processes that new files have been added. > > > But if they are doing the scan they can also easily invalidate the caches > before performing the scan. "I just upgraded to Python 3.4, and now my server process isn't seeing new plugins" That's a major backwards compatibility breach, and hence clearly unacceptable in my view. Even the relatively *minor* compatibility breach of becoming dependent on the filesystem timestamp resolution for picking up added modules, creating a race condition between writing the file and reading it back through the import system, has caused people grief. When you're in a hole, the first thing to do is to *stop digging*. You can deprecate the heuristic if you want (and can figure out how), but a definite -1 on removing it without at least the usual deprecation period for backwards incompatible changes. It may also be worth tweaking the wording of the upgrade note in the What's New to mention the need to always invalidate the cache before scanning for new modules if you want to reliably pick up new modules created since the application started (at the moment the note really only mentions it as something to do after *creating* a new module). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan_ml at behnel.de Sat Mar 2 19:30:21 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 02 Mar 2013 19:30:21 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: Hi Nick, thanks for the feedback. Nick Coghlan, 02.03.2013 17:58: > On Fri, Mar 1, 2013 at 7:41 PM, Stefan Behnel wrote: >> Michael Foord, 27.02.2013 17:51: >> It's also true that many of the topics above aren't really interesting for >> us, because we just inherit them with CPython, e.g. stdlib changes. >> Packaging is only relevant as far as it impacts the distribution of binary >> extensions, and the main changes appear to be outside of that area (which >> doesn't mean it's not truly wonderful that they are happening, Python >> packaging has seen a lot of great improvements during the last years and >> I'm very happy to see it getting even better). > > I'm puzzled by this one. Did you leave out PEP 427 (the wheel format), > because it's already approved, and hence not likely to be discussed > much at the summit, or because you don't consider it to impact the > distribution of binary extensions (which would be rather odd, given > the nature of the PEP and the wheel format...)
I admit that the wheel format has been sailing mostly below my radar (I guess much of the discussion about it is buried somewhere in the distutils SIG archives?), but the last time it started blinking brightly enough to have me take a look at the PEP, I didn't really see anything that was relevant enough to Cython to pay much attention or even comment on it. As I understand it, it's almost exclusively about naming and metadata. Cython compiled extensions are in no way different from plain C extensions wrt packaging. What works for those will work for Cython just fine. Does it imply any changes in the build system that I should be aware of? Cython usually just runs as a preprocessor for distutils extensions, before even calling into setup(). The rest is just a plain old distutils extension build. >> Interpreter initialisation would be interesting and Cython could >> potentially help in some spots here by making code easier to maintain and >> optimise, for example. We've had this discussion for the importlib >> bootstrapping and I'm sure there's more that could be done. It's sad to see >> so much C-level work go into areas that really don't need to be that low-level. > > Cython's notion of embedding is the exact opposite of CPython's, so > I'm not at all clear on how Cython could help with PEP 432 at all. I wasn't thinking about embedding CPython in a Cython compiled program. That would appear like a rather strange setup here. In the context of importlib, I proposed compiling init time Python code into statically linked extension modules in order to speed it up and make it independent of the parser and interpreter, as an alternative to freezing it (which requires a working VM already and implies interpretation overhead). I agree that Cython can't help in most of the early low-level runtime bootstrap process, but once a minimum runtime is available, the more high-level parts of the initialisation could be done in compiled Python code, which other implementations might be able to reuse. >> I'm not so happy with the argument clinic, but that's certainly also >> because I'm biased. I've written the argument unpacking code for Cython >> some years ago, so it's not surprising that I'm quite happy with that and >> fail to see the need for a totally new DSL *and* a totally new >> implementation, especially with its mapping to the slowish ParseTuple*() >> C-API functions. I've also not seen a good argument why the existing Py3 >> function signatures can't do what the proposed DSL tries to achieve. They'd >> at least make it clear that the intention is to make things more >> Python-like, and would at the same time provide the documentation. > > That's why Stefan Krah is writing a competing PEP - a number of us > already agree with you, and think the case needs to be made for > choosing something completely different like Argument Clinic I'll happily provide my feedback to that approach. It might also have a positive impact on the usage of Py3 argument annotations, which I think merit some more visibility and "useful use cases". > (especially given Guido's expressed tolerance for the idea of "/" as a > possible marker to indicate that the preceding parameters only support > positional arguments - that was in the context of Python discussion > where it was eventually deemed "not necessary", but becomes > interesting again in a C API signature discussion) I've not really had that need myself yet, but I remember thinking of it at least once while writing Cython's argument unpacking code. 
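For illustration, the kind of signature this would allow (using the proposed "/" marker, which obviously isn't valid syntax anywhere today):

    def update(self, other, /, **kwargs):
        ...

i.e. "self" and "other" can only be passed positionally, so a keyword argument that happens to be called "other" ends up in **kwargs instead of clashing with the parameter.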
I think it would get rid of a currently existing asymmetry between positional arguments and keyword(-only) arguments, and would remove the risk of naming collisions with positional arguments, most notably when **kwargs is used. And yes, I agree that it would be most interesting for C signatures, just like kwonly arguments are really handy there. It might not be all too hard to write up a prototype in Cython. And I should be able to find a couple of places in lxml where I could use this as an actual feature, so I might actually give it a try when I find the time. Stefan From stefan at bytereef.org Sat Mar 2 21:01:15 2013 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 2 Mar 2013 21:01:15 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: <20130302200115.GA3305@sleipnir.bytereef.org> Stefan Behnel wrote: > >> I'm not so happy with the argument clinic, but that's certainly also > >> because I'm biased. I've written the argument unpacking code for Cython > >> some years ago, so it's not surprising that I'm quite happy with that and > >> fail to see the need for a totally new DSL *and* a totally new > >> implementation, especially with its mapping to the slowish ParseTuple*() > >> C-API functions. I've also not seen a good argument why the existing Py3 > >> function signatures can't do what the proposed DSL tries to achieve. They'd > >> at least make it clear that the intention is to make things more > >> Python-like, and would at the same time provide the documentation. > > > > That's why Stefan Krah is writing a competing PEP - a number of us > > already agree with you, and think the case needs to be made for > > choosing something completely different like Argument Clinic > > I'll happily provide my feedback to that approach. It might also have a > positive impact on the usage of Py3 argument annotations, which I think > merit some more visibility and "useful use cases". BTW, I think so far no one has stepped forward to implement the custom argument handlers. I've looked at Cython output and, as you say, most of it is there already. Is it possible to write a minimal version of the code generator that just produces the argument handling code? Approximately, how many lines of code would we be talking about? Stefan Krah From tjreedy at udel.edu Sat Mar 2 21:32:02 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 02 Mar 2013 15:32:02 -0500 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: On 3/2/2013 10:08 AM, Nick Coghlan wrote: > On Sat, Mar 2, 2013 at 1:24 AM, Stefan Bucur wrote: >> Hi, >> >> I'm working on an automated bug finding tool that I'm trying to apply on the >> Python interpreter code (version 2.7.3). Because of early prototype >> limitations, I needed to disable string interning in stringobject.c. More >> precisely, I modified the PyString_FromStringAndSize and PyString_FromString >> to no longer check for the null and single-char cases, and create instead a >> new string every time (I can send the patch if needed). >> >> However, after applying this modification, when running "make test" I get a >> segfault in the test___all__ test case. >> >> Before digging deeper into the issue, I wanted to ask here if there are any >> implicit assumptions about string identity and interning throughout the >> interpreter implementation. 
For instance, are two single-char strings having >> the same content supposed to be identical objects? >> >> I'm assuming that it's either this, or some refcount bug in the interpreter >> that manifests only when certain strings are no longer interned and thus >> have a higher chance to get low refcount values. > > In theory, interning is supposed to be a pure optimisation, but it > wouldn't surprise me if there are cases that assume the described > strings are always interned (especially the null string case). Our > test suite would never detect such bugs, as we never disable the > interning. Since it required patching functions rather than a configuration switch, it literally seems not to be a supported option. If so, I would not consider it a bug for CPython to use the assumption of interning to run faster and I don't think it should be slowed down if that would be necessary to remove the assumption. (This is all assuming that the problem is not just a ref count bug.) Stefan's question was about 2.7. I am just curious: does 3.3 still intern (some) unicode chars? Did the 256 interned bytes of 2.x carry over to 3.x? > Whether or not we're interested in fixing such bugs would depend on > the size of the patches needed to address them. From our point of > view, such bugs are purely theoretical (as the assumption is always > valid in an unpatched CPython build), so if the problem is too hard to > diagnose or fix, we're more likely to declare that interning of at > least those kinds of string values is required for correctness when > creating modified versions of CPython. -- Terry Jan Reedy From stefan.bucur at gmail.com Sat Mar 2 21:55:07 2013 From: stefan.bucur at gmail.com (Stefan Bucur) Date: Sat, 2 Mar 2013 21:55:07 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: On Sat, Mar 2, 2013 at 4:08 PM, Nick Coghlan wrote: > On Sat, Mar 2, 2013 at 1:24 AM, Stefan Bucur wrote: >> Hi, >> >> I'm working on an automated bug finding tool that I'm trying to apply on the >> Python interpreter code (version 2.7.3). Because of early prototype >> limitations, I needed to disable string interning in stringobject.c. More >> precisely, I modified the PyString_FromStringAndSize and PyString_FromString >> to no longer check for the null and single-char cases, and create instead a >> new string every time (I can send the patch if needed). >> >> However, after applying this modification, when running "make test" I get a >> segfault in the test___all__ test case. >> >> Before digging deeper into the issue, I wanted to ask here if there are any >> implicit assumptions about string identity and interning throughout the >> interpreter implementation. For instance, are two single-char strings having >> the same content supposed to be identical objects? >> >> I'm assuming that it's either this, or some refcount bug in the interpreter >> that manifests only when certain strings are no longer interned and thus >> have a higher chance to get low refcount values. > > In theory, interning is supposed to be a pure optimisation, but it > wouldn't surprise me if there are cases that assume the described > strings are always interned (especially the null string case). Our > test suite would never detect such bugs, as we never disable the > interning. I understand. In this case, I'll further investigate the issue, and see what exactly is the cause of the crash. 
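To make the effect of the patch concrete (assuming I'm reading my own change correctly): on an unpatched 2.7 both of the following are True thanks to the single-character cache, while with interning disabled they become False:

    >>> chr(120) is "x"
    >>> "xy"[0] is "x"

That is exactly the kind of identity assumption I'm worried some code inside the interpreter may be relying on.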
> > Whether or not we're interested in fixing such bugs would depend on > the size of the patches needed to address them. From our point of > view, such bugs are purely theoretical (as the assumption is always > valid in an unpatched CPython build), so if the problem is too hard to > diagnose or fix, we're more likely to declare that interning of at > least those kinds of string values is required for correctness when > creating modified versions of CPython. > > I'm not sure what kind of analyser you are writing, but if it relates > to the CPython C API, you may be interested in > https://gcc-python-plugin.readthedocs.org/en/latest/cpychecker.html That's quite a neat tool, I didn't know about it! I guess that would have saved me many hours of debugging obscure refcount bugs in my own Python extensions :) In any case, my analysis tool aims to find bugs in Python programs, not in the CPython implementation itself. It works by performing symbolic execution [1] on the Python interpreter, while it is executing the target Python program. This means that the Python interpreter memory space contains symbolic expressions (i.e., mathematical formulas over the program input) instead of "concrete" values. The interned strings are pesky for symbolic execution because the PyObject* pointer allocated when creating an interned string depends on the string contents, e.g., if the contents are already interned, the old pointer is returned, otherwise a new object is created. So the pointer itself becomes "symbolic", i.e., dependant on the input data, which makes the analysis much more complicated. Stefan [1] http://en.wikipedia.org/wiki/Symbolic_execution From stefan.bucur at gmail.com Sat Mar 2 22:13:56 2013 From: stefan.bucur at gmail.com (Stefan Bucur) Date: Sat, 2 Mar 2013 22:13:56 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: <20130302163136.18b78885@pitrou.net> References: <20130302163136.18b78885@pitrou.net> Message-ID: On Sat, Mar 2, 2013 at 4:31 PM, Antoine Pitrou wrote: > On Fri, 1 Mar 2013 16:24:42 +0100 > Stefan Bucur wrote: >> >> However, after applying this modification, when running "make test" I get a >> segfault in the test___all__ test case. >> >> Before digging deeper into the issue, I wanted to ask here if there are any >> implicit assumptions about string identity and interning throughout the >> interpreter implementation. For instance, are two single-char strings >> having the same content supposed to be identical objects? > > From a language POV, no, but inside a specific interpreter such as > CPython it may be a reasonable expectation. > >> I'm assuming that it's either this, or some refcount bug in the interpreter >> that manifests only when certain strings are no longer interned and thus >> have a higher chance to get low refcount values. > > Indeed, if it's a real bug it would be nice to get it fixed :-) By the way, in that case, what would be the best way to debug such type of ref count errors? I recently ran across this document [1], which kind of applies to debugging focused on newly introduced code. But when some changes potentially impact a good fraction of the interpreter, where should I look first? I'm asking since I re-ran the failing test with gdb, and the segfault seems to occur when invoking the kill() syscall, so the error seems to manifest at some later point than when the faulty code is executed. 
Stefan [1] http://www.python.org/doc/essays/refcnt/ From lukas.lueg at gmail.com Sat Mar 2 23:49:52 2013 From: lukas.lueg at gmail.com (Lukas Lueg) Date: Sat, 2 Mar 2013 23:49:52 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: <20130302163136.18b78885@pitrou.net> Message-ID: Debugging a refcount bug? Good. Out of the door, line on the left, one cross each. 2013/3/2 Stefan Bucur > On Sat, Mar 2, 2013 at 4:31 PM, Antoine Pitrou > wrote: > > On Fri, 1 Mar 2013 16:24:42 +0100 > > Stefan Bucur wrote: > >> > >> However, after applying this modification, when running "make test" I > get a > >> segfault in the test___all__ test case. > >> > >> Before digging deeper into the issue, I wanted to ask here if there are > any > >> implicit assumptions about string identity and interning throughout the > >> interpreter implementation. For instance, are two single-char strings > >> having the same content supposed to be identical objects? > > > > From a language POV, no, but inside a specific interpreter such as > > CPython it may be a reasonable expectation. > > > >> I'm assuming that it's either this, or some refcount bug in the > interpreter > >> that manifests only when certain strings are no longer interned and thus > >> have a higher chance to get low refcount values. > > > > Indeed, if it's a real bug it would be nice to get it fixed :-) > > By the way, in that case, what would be the best way to debug such > type of ref count errors? I recently ran across this document [1], > which kind of applies to debugging focused on newly introduced code. > But when some changes potentially impact a good fraction of the > interpreter, where should I look first? > > I'm asking since I re-ran the failing test with gdb, and the segfault > seems to occur when invoking the kill() syscall, so the error seems to > manifest at some later point than when the faulty code is executed. > > Stefan > > [1] http://www.python.org/doc/essays/refcnt/ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/lukas.lueg%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Mar 3 00:38:15 2013 From: brett at python.org (Brett Cannon) Date: Sat, 2 Mar 2013 18:38:15 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: Message-ID: On Sat, Mar 2, 2013 at 12:24 PM, Nick Coghlan wrote: > On Sun, Mar 3, 2013 at 2:16 AM, Brett Cannon wrote: > > On Sat, Mar 2, 2013 at 10:36 AM, Nick Coghlan > wrote: > >> I think you should keep it. A long running service that periodically > >> scans the importers for plugins doesn't care if modules take a few > >> extra seconds to show up, it just wants to see them eventually. > >> Installers (or filesystem copy or move operations!) have no way to > >> inform arbitrary processes that new files have been added. > > > > > > But if they are doing the scan they can also easily invalidate the caches > > before performing the scan. > > "I just upgraded to Python 3.4, and now my server process isn't see new > plugins" > > That's a major backwards compatibility breach, and hence clearly > unacceptable in my view. 
Even the relatively *minor* compatibility > breach of becoming dependent on the filesystem timestamp resolution > for picking up added modules, creating a race condition between > writing the file and reading it back through the import system, has > caused people grief. When you're in a hole, the first thing to do is > to *stop digging*. > > You can deprecate the heuristic if you want (and can figure out how), > but a definite -1 on removing it without at least the usual > deprecation period for backwards incompatible changes. > That part is easy: ImportWarning still exists so simply continuing to check the directory and noticing when a difference exists that affects subsequent imports and then raising the warning will handle that. > > It may also be worth tweaking the wording of the upgrade note in the > What's New to mention the need to always invalidate the cache before > scanning for new modules if you want to reliably pick up new modules > created since the application started (at the moment the note really > only mentions it as something to do after *creating* a new module). > > As of right now with the check that's all that is needed, but yes, if the deprecation does occur it would be worth changing it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Mar 3 00:36:36 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Mar 2013 00:36:36 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: <20130302163136.18b78885@pitrou.net> Message-ID: <20130303003636.505ed760@pitrou.net> On Sat, 2 Mar 2013 22:13:56 +0100 Stefan Bucur wrote: > On Sat, Mar 2, 2013 at 4:31 PM, Antoine Pitrou wrote: > > On Fri, 1 Mar 2013 16:24:42 +0100 > > Stefan Bucur wrote: > >> > >> However, after applying this modification, when running "make test" I get a > >> segfault in the test___all__ test case. > >> > >> Before digging deeper into the issue, I wanted to ask here if there are any > >> implicit assumptions about string identity and interning throughout the > >> interpreter implementation. For instance, are two single-char strings > >> having the same content supposed to be identical objects? > > > > From a language POV, no, but inside a specific interpreter such as > > CPython it may be a reasonable expectation. > > > >> I'm assuming that it's either this, or some refcount bug in the interpreter > >> that manifests only when certain strings are no longer interned and thus > >> have a higher chance to get low refcount values. > > > > Indeed, if it's a real bug it would be nice to get it fixed :-) > > By the way, in that case, what would be the best way to debug such > type of ref count errors? I recently ran across this document [1], > which kind of applies to debugging focused on newly introduced code. That documents looks a bit outdated (1998!). I would suggest you enable core dumps (`ulimit -c unlimited`), then let Python crash and inspect the stack trace with gdb. You will get better results if using a debug build and the modern gdb inspection helpers: http://docs.python.org/devguide/gdb.html Oh, by the way, it would be better to do your work on Python 3 rather than 2.7. Either the `default` branch or the `3.3` branch, I guess. See http://docs.python.org/devguide/setup.html#checkout Regards Antoine. 
From solipsis at pitrou.net Sun Mar 3 00:51:57 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Mar 2013 00:51:57 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <855C0532-804F-48BB-BE16-2604D2DF0AE4@voidspace.org.uk> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227195021.0101c919@pitrou.net> <855C0532-804F-48BB-BE16-2604D2DF0AE4@voidspace.org.uk> Message-ID: <20130303005157.5911a233@pitrou.net> On Thu, 28 Feb 2013 11:39:52 +0000 Michael Foord wrote: > > > > Perhaps someone wants to discuss > > http://www.python.org/dev/peps/pep-0428/, but I won't be there and the > > PEP isn't terribly up-to-date either :-) > > If you can find someone familiar with pathlib to champion the discussion it is more likely to happen and be productive... Getting the PEP up to date before the summit will also help. (I very much like the *idea* of pathlib and the bits I've seen / read through - but I haven't used it in anger yet so I don't feel qualified to champion it myself.) I've made the PEP up-to-date now. http://mail.python.org/pipermail/python-ideas/2013-March/019731.html Regards Antoine. From erik.m.bray at gmail.com Sun Mar 3 02:09:49 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Sat, 2 Mar 2013 20:09:49 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: Message-ID: On Sat, Mar 2, 2013 at 10:36 AM, Nick Coghlan wrote: > In addition, it may be appropriate for importlib to offer a > "write_module" method that accepts (module name, target path, > contents). This would: > > 1. Allow in-process caches to be invalidated implicitly and > selectively when new modules are created > 2. Allow importers to abstract write access in addition to read access > 3. Allow the import system to complain at time of writing if the > desired module name and target path don't actually match given the > current import system state. +1 to write_module(). This would be useful in general, I think. Though perhaps the best solution to the original problem is to more forcefully document: "If you're writing a module and expect to be able to import it immediately within the same process, it's necessary to manually invalidate the directory cache." I might go a little further and suggest adding a function to only invalidate the cache for the relevant directory (the proposed write_module() function could do this). This can already be done with something like: dirname = os.path.dirname(module_filename) sys.path_importer_cache[dirname].invalidate_caches() But that's a bit onerous considering that this wasn't even necessary before 3.3. There should be an easier way to do this, as there's no sense in invalidating all the directory caches if one is only writing new modules to a specific directory or directories. Erik From solipsis at pitrou.net Sun Mar 3 02:16:30 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Mar 2013 02:16:30 +0100 Subject: [Python-Dev] Planning on removing cache invalidation for file finders References: Message-ID: <20130303021630.3f5340d5@pitrou.net> On Sat, 2 Mar 2013 11:16:28 -0500 Brett Cannon wrote: > > In addition, it may be appropriate for importlib to offer a > > "write_module" method that accepts (module name, target path, > > contents). This would: > > > > 1. Allow in-process caches to be invalidated implicitly and > > selectively when new modules are created > > I don't think that's necessary. 
If people don't want to blindly clear all > caches for a file they can write the file, search the keys in > sys.path_importer_cache for the longest prefix for the newly created file, > and then call the invalidate_cache() method on that explicit finder. That's too complicated for non-import experts IMHO. Regards Antoine. From trent at snakebite.org Sun Mar 3 02:29:40 2013 From: trent at snakebite.org (Trent Nelson) Date: Sat, 2 Mar 2013 20:29:40 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: <20130303012939.GC62205@snakebite.org> On Wed, Feb 27, 2013 at 08:51:16AM -0800, Michael Foord wrote: > If you have other items you'd like to discuss please let me know and I > can add them to the agenda. Hmm, seems like this might be a good forum to introduce the parallel/async stuff I've been working on the past few months. TL;DR version is I've come up with an alternative approach for exploiting multiple cores that doesn't rely on GIL-removal or STM (and has a negligible performance overhead when executing single-threaded code). (For those that are curious, it lives in the px branch of the sandbox/trent repo on hg.p.o, albeit in a very experimental/prototype/proof-of-concept state (i.e. it's an unorganized, undocumented, uncommented hackfest); on the plus side, it works. Sort of.) Second suggestion: perhaps a little segment on Snakebite? What it is, what's available to committers, feedback/kvetching from those who have already used it, etc. (I forgot the format of these summits -- is there a projector?) Trent. From stefan_ml at behnel.de Sun Mar 3 11:14:41 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 03 Mar 2013 11:14:41 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130302200115.GA3305@sleipnir.bytereef.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130302200115.GA3305@sleipnir.bytereef.org> Message-ID: Stefan Krah, 02.03.2013 21:01: > Stefan Behnel wrote: >>>> I'm not so happy with the argument clinic, but that's certainly also >>>> because I'm biased. I've written the argument unpacking code for Cython >>>> some years ago, so it's not surprising that I'm quite happy with that and >>>> fail to see the need for a totally new DSL *and* a totally new >>>> implementation, especially with its mapping to the slowish ParseTuple*() >>>> C-API functions. I've also not seen a good argument why the existing Py3 >>>> function signatures can't do what the proposed DSL tries to achieve. They'd >>>> at least make it clear that the intention is to make things more >>>> Python-like, and would at the same time provide the documentation. >>> >>> That's why Stefan Krah is writing a competing PEP - a number of us >>> already agree with you, and think the case needs to be made for >>> choosing something completely different like Argument Clinic >> >> I'll happily provide my feedback to that approach. It might also have a >> positive impact on the usage of Py3 argument annotations, which I think >> merit some more visibility and "useful use cases". > > > BTW, I think so far no one has stepped forward to implement the custom > argument handlers. I've looked at Cython output and, as you say, most of > it is there already. > > Is it possible to write a minimal version of the code generator that just > produces the argument handling code? 
It should be possible, although it does have a lot of dependencies on Cython's type system, so a part of that would have to be extracted as well or at least emulated. Conversion functions are based on it, for example. However, I think it would actually be easiest to just let Cython generate the module interface completely. I.e. you'd remove all code that currently deals with Python function signatures from the C module, only leaving the bare C API, and then generate a Cython interface module like this: cdef extern from *: object original_c_xyzfunc(object x, int y, double z) def xyzfunc(x, int y=0, *, double z=1.0): "docstring goes here" return original_c_xyzfunc(x,y,z) Finally, #include the generated C file at the end of the original module. There'd be a bit of a hassle with the module init function, I guess. Maybe renaming it in the Cython C code (even just a #define) and calling it from the original module would work. Or do it the other way round and add a hook function somewhere that does the manually written parts of the module setup. Sounds simple enough in both cases, although I'm sure there's lots of little details. Extension types and their methods are certainly part of those details ... Stefan From solipsis at pitrou.net Sun Mar 3 12:29:48 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Mar 2013 12:29:48 +0100 Subject: [Python-Dev] Planning on removing cache invalidation for file finders References: Message-ID: <20130303122948.67bcc39b@pitrou.net> On Sat, 2 Mar 2013 18:38:15 -0500 Brett Cannon wrote: > > > > You can deprecate the heuristic if you want (and can figure out how), > > but a definite -1 on removing it without at least the usual > > deprecation period for backwards incompatible changes. > > > > That part is easy: ImportWarning still exists so simply continuing to check > the directory and noticing when a difference exists that affects subsequent > imports and then raising the warning will handle that. Won't that raise spurious ImportWarnings for people who don't actually care about that? Regards Antoine. From doug.hellmann at gmail.com Sun Mar 3 15:11:06 2013 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Sun, 3 Mar 2013 09:11:06 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <31B54506-7B08-4E77-B966-A6171BBC3DA6@gmail.com> Message-ID: <61AACA32-A8E5-4AF6-9246-C5CB99C397D4@gmail.com> On Mar 2, 2013, at 11:41 AM, Nick Coghlan wrote: > On Fri, Mar 1, 2013 at 9:39 AM, Doug Hellmann wrote: >> >> On Feb 27, 2013, at 11:51 AM, Michael Foord wrote: >> >>> Hello all, >>> >>> PyCon, and the Python Language Summit, is nearly upon us. We have a good number of people confirmed to attend. If you are intending to come to the language summit but haven't let me know please do so. >>> >>> The agenda of topics for discussion so far includes the following: >>> >>> * A report on pypy status - Maciej and Armin >>> * Jython and IronPython status reports - Dino / Frank >>> * Packaging (Doug Hellmann and Monty Taylor at least) >> >> Since the time I suggested we add packaging to the agenda, Nick has set up a separate summit meeting for Friday evening. I don't know if it makes sense to leave this on the agenda for Wednesday or not. >> >> Nick, what do you think? 
> > I think it's definitely worth my taking some time to explain my goals > for the Friday night session, and some broader things in terms of > where I'd like to see packaging going, but a lot of the key packaging > people aren't involved in Python language development *per se*, and > hence won't be at the language summit. OK, a summary seems like a good idea. > > There's also one controversial point that *does* need to be raised at > the summit: I would like to make distutils-sig the true authority for > packaging standards, so we can stop cross-posting PEPs intended to > apply to packaging in *current* versions of Python to python-dev. The > split discussions suck, and most of the people that need to be > convinced in order for packaging standards to be supported in current > versions of Python aren't on python-dev, since it's a tooling issue > rather than a language design issue. Standard lib support is necessary > in the long run to provide a good "batteries included" experience, but > it's *not* the way to create the batteries in the first place. Until > these standards have been endorsed by the authors of *existing* > packaging tools, proposing them for stdlib addition is premature, but > has been perceived as necessary in the past due to the confused power > structure. +1 -- anything to reduce the confusion about where to get involved :) > > This means that those core developers that want a say in the future > direction of packaging and distribution of Python software would need > to be actively involved in the ongoing discussions on distutils-sig, > rather than relying on being given an explicit invitation to weigh in > at the last minute through a thread (or threads) on python-dev. The > requirement that BDFL-delegates for packaging and distribution related > PEPs also be experienced core developers will remain, however, as > "suitable for future stdlib inclusion" is an important overarching > requirement for packaging and distribution standards. Such delegates > will just be expected to participate actively in distutils-sig *as > well as* python-dev. > > Proposals for *actual* standard library updates (to bring it into line > with updated packaging standards) would still be subject to python-dev > discussion and authority (and would *not* have their Discussions-To > header set). Such discussions aren't particularly relevant to most of > the packaging tool developers, since the standard library version > isn't updated frequently enough to be useful to them, and also isn't > available on older Python releases, so python-dev is a more > appropriate venue from both perspectives. > > At the moment, python-dev, catalog-sig and distutils-sig create an > awkward trinity where decision making authority and the appropriate > venues for discussion are grossly unclear. I consider this to be one > of the key reasons that working on packaging issues has quite a high > incidence of developer burnout - it's hard to figure out who needs to > be convinced of what, so it's easy for the frustration levels to reach > the "this just isn't worth the hassle" stage (especially when trying > to bring python-dev members up to speed on discussions that may have > taken months on distutils-sig, and when many of the details are > awkward compromises forced by the need to support *existing* tools and > development processes on older versions of Python). 
Under my proposal, > the breakdown would be slightly clearer: > > distutils-sig: overall authority for packaging and distribution > related standards, *including* the interfaces between index servers > (such as PyPI) and automated tools. If a PEP has "Discussions-To" set > to distutils-sig, announcements of new PEPs, new versions of those > PEPs, *and* their acceptance or rejection should be announced there, > and *not* on python-dev. The "Resolution" header will thus point to a > distutils-sig post rather than a python-dev one. distutils-sig will > focus on solutions that work for *current* versions of Python, while > keeping in mind the need for future stdlib support. > > python-dev: authority over stdlib support for packaging and > distribution standards, and the "batteries included" experience of > interacting with those standards. Until a next generation distribution > infrastructure is firmly established (which may involve years of > running the legacy infrastructure and the next generation metadata 2.x > based infrastructure in parallel), the stdlib will typically trail the > upstream standards substantially, since many upstream enhancements > will run afoul of the standard library's "no new features" rule, > preventing their inclusion in maintenance releases of old Python > versions. > > catalog-sig: authority over the PyPI index server (supported by the > infrastructure SIG for actual operation of the service). Key people > are expected to also participate in distutils-sig, as that is where > the expected interface exposed to automated tools will be defined. How > PyPI exposes the packaging and distribution standards to *users* > through its web UI will be up to catalog-sig. > > The vehicle for *documenting* such a change in policy would be an > update to PEP 1 to indicate that the "Discussions-To" header should > always point to the list where any formal acceptance of rejection of > the PEP would be announced. If preliminary discussions take place on a > feeder list like python-ideas, or import-sig, or somewhere else, that > will be declared as explicitly irrelevant from the PEP metadata point > of view: the Discussions-To field would be documented as referring to > the list that any Resolution field will eventually reference. > > Regards, > NIck. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Sun Mar 3 18:08:00 2013 From: brett at python.org (Brett Cannon) Date: Sun, 3 Mar 2013 12:08:00 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: <20130303021630.3f5340d5@pitrou.net> References: <20130303021630.3f5340d5@pitrou.net> Message-ID: On Sat, Mar 2, 2013 at 8:16 PM, Antoine Pitrou wrote: > On Sat, 2 Mar 2013 11:16:28 -0500 > Brett Cannon wrote: > > > In addition, it may be appropriate for importlib to offer a > > > "write_module" method that accepts (module name, target path, > > > contents). This would: > > > > > > 1. Allow in-process caches to be invalidated implicitly and > > > selectively when new modules are created > > > > I don't think that's necessary. If people don't want to blindly clear all > > caches for a file they can write the file, search the keys in > > sys.path_importer_cache for the longest prefix for the newly created > file, > > and then call the invalidate_cache() method on that explicit finder. > > That's too complicated for non-import experts IMHO. > Which is why they can just call importlib.import_module(). 
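For readers following the thread, here is a rough sketch of the two approaches being weighed, assuming the Python 3.3 import machinery. The blunt option is the real API importlib.invalidate_caches(); invalidate_for() below is a hypothetical helper, not something importlib provides, and its prefix matching is deliberately simplified:

    import importlib
    import os
    import sys

    def invalidate_for(target_path):
        # Hypothetical helper: invalidate only the finder cached for the
        # directory that now contains the freshly written module, rather
        # than every cached finder.
        dirname = os.path.abspath(os.path.dirname(target_path))
        best_dir, best_finder = '', None
        for cached_dir, finder in sys.path_importer_cache.items():
            if finder is None:
                continue
            prefix = os.path.abspath(cached_dir)
            # Simplified longest-prefix match; a real version would also
            # worry about path separators and case normalization.
            if dirname.startswith(prefix) and len(prefix) > len(best_dir):
                best_dir, best_finder = prefix, finder
        if best_finder is not None:
            best_finder.invalidate_caches()
        else:
            importlib.invalidate_caches()  # blunt, but always safe

Most callers would just use importlib.invalidate_caches(); the helper only illustrates the more surgical bookkeeping being argued about above.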
-------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Mar 3 18:12:27 2013 From: brett at python.org (Brett Cannon) Date: Sun, 3 Mar 2013 12:12:27 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: <20130303122948.67bcc39b@pitrou.net> References: <20130303122948.67bcc39b@pitrou.net> Message-ID: On Sun, Mar 3, 2013 at 6:29 AM, Antoine Pitrou wrote: > On Sat, 2 Mar 2013 18:38:15 -0500 > Brett Cannon wrote: > > > > > > You can deprecate the heuristic if you want (and can figure out how), > > > but a definite -1 on removing it without at least the usual > > > deprecation period for backwards incompatible changes. > > > > > > > That part is easy: ImportWarning still exists so simply continuing to > check > > the directory and noticing when a difference exists that affects > subsequent > > imports and then raising the warning will handle that. > > Won't that raise spurious ImportWarnings for people who don't actually > care about that? > It shouldn't. If the implementation I have in my head works (set of original files, another set of what mtime says is there to know what would not have been found w/o the cache invalidation), then it will only come up when someone would break in Python 3.5 if the cache invalidation is removed. Plus warnings are off by default as it is. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Mar 3 18:31:56 2013 From: brett at python.org (Brett Cannon) Date: Sun, 3 Mar 2013 12:31:56 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: <20130303021630.3f5340d5@pitrou.net> Message-ID: On Sun, Mar 3, 2013 at 12:08 PM, Brett Cannon wrote: > > > > On Sat, Mar 2, 2013 at 8:16 PM, Antoine Pitrou wrote: > >> On Sat, 2 Mar 2013 11:16:28 -0500 >> Brett Cannon wrote: >> > > In addition, it may be appropriate for importlib to offer a >> > > "write_module" method that accepts (module name, target path, >> > > contents). This would: >> > > >> > > 1. Allow in-process caches to be invalidated implicitly and >> > > selectively when new modules are created >> > >> > I don't think that's necessary. If people don't want to blindly clear >> all >> > caches for a file they can write the file, search the keys in >> > sys.path_importer_cache for the longest prefix for the newly created >> file, >> > and then call the invalidate_cache() method on that explicit finder. >> >> That's too complicated for non-import experts IMHO. >> > > Which is why they can just call importlib.import_module(). > That obviously should have said importlib.invalidate_caches(). =) But how about this as a compromise over introducing write_module(): invalidate_caches() can take a path for something to specifically invalidate. The path can then be passed to the invalidate_caches() on sys.meta_path. In the case of PathFinder it would take that path, try to find the directory in sys.path_importer_cache, and then invalidate the most specific finder for that path (if there is one that has any directory prefix match). Lots of little details to specify (e.g. 
absolute path forced anywhere in case a relative path is passed in by sys.path is all absolute paths? How do we know something is a file if it has not been written yet?), but this would prevent importlib from subsuming file writing specifically for source files and minimize performance overhead of invalidating all caches for a single file. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Mon Mar 4 06:24:21 2013 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 4 Mar 2013 18:24:21 +1300 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On 28 February 2013 05:51, Michael Foord wrote: > Hello all, > > PyCon, and the Python Language Summit, is nearly upon us. We have a good number of people confirmed to attend. If you are intending to come to the language summit but haven't let me know please do so. > > The agenda of topics for discussion so far includes the following: > > * A report on pypy status - Maciej and Armin > * Jython and IronPython status reports - Dino / Frank > * Packaging (Doug Hellmann and Monty Taylor at least) > * Cleaning up interpreter initialisation (both in hopes of finding areas > to rationalise and hence speed things up, as well as making things > more embedding friendly). Nick Coghlan > * Adding new async capabilities to the standard library (Guido) > * cffi and the standard library - Maciej > * flufl.enum and the standard library - Barry Warsaw > * The argument clinic - Larry Hastings > > If you have other items you'd like to discuss please let me know and I can add them to the agenda. I'd like to talk about overhauling - not tweaking, overhauling - the standard library testing facilities. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From guido at python.org Mon Mar 4 06:54:00 2013 From: guido at python.org (Guido van Rossum) Date: Sun, 3 Mar 2013 21:54:00 -0800 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On Sun, Mar 3, 2013 at 9:24 PM, Robert Collins wrote: > I'd like to talk about overhauling - not tweaking, overhauling - the > standard library testing facilities. That seems like too big a topic and too vague a description to discuss usefully. Perhaps you have a specific proposal? Or at least just a use case that's poorly covered? TBH, your choice of words is ambiguous -- are you interested in overhauling the facilities for testing *of* the standard library (i.e. the 'test' package), or the testing facilities *provided by* the standard library (i.e. the unittest module)? -- --Guido van Rossum (python.org/~guido) From robertc at robertcollins.net Mon Mar 4 07:26:12 2013 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 4 Mar 2013 19:26:12 +1300 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On 4 March 2013 18:54, Guido van Rossum wrote: > On Sun, Mar 3, 2013 at 9:24 PM, Robert Collins > wrote: >> I'd like to talk about overhauling - not tweaking, overhauling - the >> standard library testing facilities. > > That seems like too big a topic and too vague a description to discuss > usefully. Perhaps you have a specific proposal? Or at least just a use > case that's poorly covered? 
I have both - I have a draft implementation for a new test result API (and forwards and backwards compat code etc), and use cases that drive it. I started a thread here - http://lists.idyll.org/pipermail/testing-in-python/2013-February/005434.html , with blog posts https://rbtcollins.wordpress.com/2013/02/14/time-to-revise-the-subunit-protocol/ https://rbtcollins.wordpress.com/2013/02/15/more-subunit-needs/ https://rbtcollins.wordpress.com/2013/02/19/first-experience-implementing-streamresult/ https://rbtcollins.wordpress.com/2013/02/23/simpler-is-better/ They are focused on subunit, but much of subunit's friction has been due to issues encountered from the stdlibrary TestResult API - in particular three things: - the single-active-test model that the current API (or at least implementation) has. - the expectation that all test outcomes will originate from the same interpreter (or something with a live traceback object) - the inability to supply details about errors other than the exception All of which start to bite rather deep when working on massively parallel test environments. It is of course possible for subunit and related tools to run their own implementation, but it seems ideal to me to have a common API which regular unittest, nose, py.test and others can all agree on and use : better reuse for pretty printers, GUI displays and the like depend on some common API. > TBH, your choice of words is ambiguous -- are you interested in > overhauling the facilities for testing *of* the standard library (i.e. > the 'test' package), or the testing facilities *provided by* the > standard library (i.e. the unittest module)? Sorry! Testing facilities provided by the standard library. They should naturally facilitate testing of the standard library too. -Rob > -- > --Guido van Rossum (python.org/~guido) -- Robert Collins Distinguished Technologist HP Cloud Services From ncoghlan at gmail.com Mon Mar 4 07:40:48 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 4 Mar 2013 16:40:48 +1000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On Mon, Mar 4, 2013 at 4:26 PM, Robert Collins wrote: > On 4 March 2013 18:54, Guido van Rossum wrote: >> On Sun, Mar 3, 2013 at 9:24 PM, Robert Collins >> wrote: >>> I'd like to talk about overhauling - not tweaking, overhauling - the >>> standard library testing facilities. >> >> That seems like too big a topic and too vague a description to discuss >> usefully. Perhaps you have a specific proposal? Or at least just a use >> case that's poorly covered? > > I have both - I have a draft implementation for a new test result API > (and forwards and backwards compat code etc), and use cases that drive > it. I started a thread here - > http://lists.idyll.org/pipermail/testing-in-python/2013-February/005434.html > , with blog posts > https://rbtcollins.wordpress.com/2013/02/14/time-to-revise-the-subunit-protocol/ > https://rbtcollins.wordpress.com/2013/02/15/more-subunit-needs/ > https://rbtcollins.wordpress.com/2013/02/19/first-experience-implementing-streamresult/ > https://rbtcollins.wordpress.com/2013/02/23/simpler-is-better/ > > They are focused on subunit, but much of subunit's friction has been > due to issues encountered from the stdlibrary TestResult API - in > particular three things: > - the single-active-test model that the current API (or at least > implementation) has. 
> - the expectation that all test outcomes will originate from the same
> interpreter (or something with a live traceback object)
> - the inability to supply details about errors other than the exception
>
> All of which start to bite rather deep when working on massively
> parallel test environments.

Your feedback on http://bugs.python.org/issue16997 would be greatly appreciated.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From robertc at robertcollins.net Mon Mar 4 07:46:07 2013
From: robertc at robertcollins.net (Robert Collins)
Date: Mon, 4 Mar 2013 19:46:07 +1300
Subject: [Python-Dev] Python Language Summit at PyCon: Agenda
In-Reply-To:
References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk>
Message-ID:

On 4 March 2013 19:40, Nick Coghlan wrote:

> Your feedback on http://bugs.python.org/issue16997 would be greatly appreciated.

Done directly to Antoine on IRC the other day in a conversation with
him and Michael about the compatibility impact of subtests. Happy to
do a full code review if that would be useful.

-Rob

--
Robert Collins
Distinguished Technologist
HP Cloud Services

From ncoghlan at gmail.com Mon Mar 4 07:58:04 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 4 Mar 2013 16:58:04 +1000
Subject: [Python-Dev] Python Language Summit at PyCon: Agenda
In-Reply-To:
References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk>
Message-ID:

On Mon, Mar 4, 2013 at 4:46 PM, Robert Collins wrote:
> On 4 March 2013 19:40, Nick Coghlan wrote:
>
>> Your feedback on http://bugs.python.org/issue16997 would be greatly appreciated.
>
> Done directly to Antoine on IRC the other day in a conversation with
> him and Michael about the compatibility impact of subtests. Happy to
> do a full code review if that would be useful.

The extra set of eyes couldn't hurt (and if you can spot a better way
to tie @expectedfailure into the rest of the test running machinery,
that would be great. Making sure that decorator doesn't break is the
ugliest part of the whole patch.)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From solipsis at pitrou.net Mon Mar 4 10:29:13 2013
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 4 Mar 2013 10:29:13 +0100
Subject: [Python-Dev] Python Language Summit at PyCon: Agenda
References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk>
Message-ID: <20130304102913.030ef355@pitrou.net>

Le Mon, 4 Mar 2013 19:46:07 +1300,
Robert Collins a écrit :
> On 4 March 2013 19:40, Nick Coghlan wrote:
>
> > Your feedback on http://bugs.python.org/issue16997 would be greatly
> > appreciated.
>
> Done directly to Antoine on IRC the other day in a conversation with
> him and Michael about the compatibility impact of subtests. Happy to
> do a full code review if that would be useful.

Indeed and some of the changes in the latest patch stem from that
conversation.

Regards

Antoine.

From storchaka at gmail.com Mon Mar 4 16:32:21 2013
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Mon, 04 Mar 2013 17:32:21 +0200
Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults
In-Reply-To:
References:
Message-ID:

On 02.03.13 22:32, Terry Reedy wrote:
> I am just curious: does 3.3 still
> intern (some) unicode chars? Did the 256 interned bytes of 2.x carry
> over to 3.x?

Yes, Python 3 interns the empty string and the first 256 Unicode characters.
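By way of illustration (CPython-specific: object identity between equal strings is an implementation detail, and no Python code should rely on it):

    >>> 'abc'[0] is 'a'     # single Latin-1 characters come from a shared table
    True
    >>> 'abc'[3:] is ''     # zero-length results reuse the empty-string singleton
    True
    >>> 'abc'[:2] is 'ab'   # longer strings are not cached this way
    False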
From storchaka at gmail.com Mon Mar 4 16:40:54 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 04 Mar 2013 17:40:54 +0200 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: On 01.03.13 17:24, Stefan Bucur wrote: > Before digging deeper into the issue, I wanted to ask here if there are > any implicit assumptions about string identity and interning throughout > the interpreter implementation. For instance, are two single-char > strings having the same content supposed to be identical objects? I think this is not a bug if the code relies on the fact that an empty string is a singleton. This obviously is an immutable object and there is no public method to create different empty string. But a user can create different 1-character strings with same value (first create uninitialized a 1-character string and than fill a content). If some code fails when none of 1-character strings are interned, this obviously is a bug. From barry at python.org Mon Mar 4 17:29:37 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 11:29:37 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: <20130304112937.520651d4@anarchist.wooz.org> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote: >It is of course possible for subunit and related tools to run their >own implementation, but it seems ideal to me to have a common API >which regular unittest, nose, py.test and others can all agree on and >use : better reuse for pretty printers, GUI displays and the like >depend on some common API. And One True Way of invoking and/or discovering how to invoke, a package's test suite. -Barry From brian at python.org Mon Mar 4 17:30:12 2013 From: brian at python.org (Brian Curtin) Date: Mon, 4 Mar 2013 10:30:12 -0600 Subject: [Python-Dev] Introducing Electronic Contributor Agreements Message-ID: The full announcement is at http://blog.python.org/2013/03/introducing-electronic-contributor.html, but a summary follows. We've now moved to an electronic Contributor License Agreement form at http://www.python.org/psf/contrib/contrib-form/ which will hopefully ease the signing and sending of forms for our potential contributors. The form shows the required fields whether you're signing as an individual or a representative of an organization, and removes the need to print, scan, fax, etc. When a new contributor fills in the form, they are emailed a copy of the form and asked to confirm the email address that they used (and received that copy at). Upon confirming, the signed form is sent to the PSF Administrator and filed away. The signature can either be generated from your typed name, or you can draw or upload your actual written signature if you choose. 
From brett at python.org Mon Mar 4 17:34:34 2013 From: brett at python.org (Brett Cannon) Date: Mon, 4 Mar 2013 11:34:34 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304112937.520651d4@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw wrote: > On Mar 04, 2013, at 07:26 PM, Robert Collins wrote: > > >It is of course possible for subunit and related tools to run their > >own implementation, but it seems ideal to me to have a common API > >which regular unittest, nose, py.test and others can all agree on and > >use : better reuse for pretty printers, GUI displays and the like > >depend on some common API. > > And One True Way of invoking and/or discovering how to invoke, a package's > test suite. > How does unittest's test discovery not solve that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Mar 4 17:36:15 2013 From: brett at python.org (Brett Cannon) Date: Mon, 4 Mar 2013 11:36:15 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin wrote: > The full announcement is at > http://blog.python.org/2013/03/introducing-electronic-contributor.html, > but a summary follows. > > We've now moved to an electronic Contributor License Agreement form at > http://www.python.org/psf/contrib/contrib-form/ which will hopefully > ease the signing and sending of forms for our potential contributors. > The form shows the required fields whether you're signing as an > individual or a representative of an organization, and removes the > need to print, scan, fax, etc. > > When a new contributor fills in the form, they are emailed a copy of > the form and asked to confirm the email address that they used (and > received that copy at). Upon confirming, the signed form is sent to > the PSF Administrator and filed away. > > The signature can either be generated from your typed name, or you can > draw or upload your actual written signature if you choose. With this in place I would like to propose that all patches submitted to bugs.python.org must come from someone who has signed the CLA before we consider committing it (if you want to be truly paranoid we could say that we won't even look at the code w/o a CLA). -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Mar 4 17:44:16 2013 From: cournape at gmail.com (David Cournapeau) Date: Mon, 4 Mar 2013 16:44:16 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 4:34 PM, Brett Cannon wrote: > > > > On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw wrote: >> >> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote: >> >> >It is of course possible for subunit and related tools to run their >> >own implementation, but it seems ideal to me to have a common API >> >which regular unittest, nose, py.test and others can all agree on and >> >use : better reuse for pretty printers, GUI displays and the like >> >depend on some common API. >> >> And One True Way of invoking and/or discovering how to invoke, a package's >> test suite. > > > How does unittest's test discovery not solve that? 
It is not always obvious how to test a package when one is not familiar with it. Are the tests in pkgname/tests or tests or ... ? In the scientific community, we have used the convention of making the test suite available at runtime with pkgname.tests(). David From brett at python.org Mon Mar 4 17:47:34 2013 From: brett at python.org (Brett Cannon) Date: Mon, 4 Mar 2013 11:47:34 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 11:44 AM, David Cournapeau wrote: > On Mon, Mar 4, 2013 at 4:34 PM, Brett Cannon wrote: > > > > > > > > On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw wrote: > >> > >> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote: > >> > >> >It is of course possible for subunit and related tools to run their > >> >own implementation, but it seems ideal to me to have a common API > >> >which regular unittest, nose, py.test and others can all agree on and > >> >use : better reuse for pretty printers, GUI displays and the like > >> >depend on some common API. > >> > >> And One True Way of invoking and/or discovering how to invoke, a > package's > >> test suite. > > > > > > How does unittest's test discovery not solve that? > > It is not always obvious how to test a package when one is not > familiar with it. Are the tests in pkgname/tests or tests or ... ? > I would argue that's a packaging problem and not a testing infrastructure in the stdlib problem. If we want to standardize on always having the tests in a 'tests' sub-package that's fine, but I don't see unittest or subtest directly controlling that short of some registration hook that has to be called to declare where the tests are. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Mar 4 17:51:04 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 11:51:04 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> Message-ID: <20130304115104.224493ef@anarchist.wooz.org> On Mar 04, 2013, at 11:34 AM, Brett Cannon wrote: >> And One True Way of invoking and/or discovering how to invoke, a package's >> test suite. > >How does unittest's test discovery not solve that? I should have added "from the command line". E.g. is it: $ python -m unittest discover $ python setup.py test $ python setup.py nosetests $ python -m nose test $ nosetests-X.Y Besides having a multitude of choices, there's almost no way to automatically discover (e.g. by metadata inspection or some such) how to invoke the tests. You're often lucky if there's a README.test and it's still accurate. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From skip at pobox.com Mon Mar 4 18:02:04 2013 From: skip at pobox.com (Skip Montanaro) Date: Mon, 4 Mar 2013 11:02:04 -0600 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 10:30 AM, Brian Curtin wrote: > The full announcement is at > http://blog.python.org/2013/03/introducing-electronic-contributor.html, > but a summary follows. > ... 
Brian, Do you want old-timers like me who have a wet-signed fax gathering dust in a box at PSF World Headquarters to execute the electronic contributor agreement? While not strictly necessary, I suspect it might be nice for you to have all agreements in a common form. Skip -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Mon Mar 4 18:04:25 2013 From: brian at python.org (Brian Curtin) Date: Mon, 4 Mar 2013 11:04:25 -0600 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 11:02 AM, Skip Montanaro wrote: > > > On Mon, Mar 4, 2013 at 10:30 AM, Brian Curtin wrote: >> >> The full announcement is at >> http://blog.python.org/2013/03/introducing-electronic-contributor.html, >> but a summary follows. >> ... > > > Brian, > > Do you want old-timers like me who have a wet-signed fax gathering dust in a > box at PSF World Headquarters to execute the electronic contributor > agreement? While not strictly necessary, I suspect it might be nice for you > to have all agreements in a common form. I'll check on that, but I don't think it's necessary since the gathered data is no different. From solipsis at pitrou.net Mon Mar 4 19:41:48 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Mar 2013 19:41:48 +0100 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> Message-ID: <20130304194148.7e6b42f5@pitrou.net> On Mon, 4 Mar 2013 11:51:04 -0500 Barry Warsaw wrote: > On Mar 04, 2013, at 11:34 AM, Brett Cannon wrote: > > >> And One True Way of invoking and/or discovering how to invoke, a package's > >> test suite. > > > >How does unittest's test discovery not solve that? > > I should have added "from the command line". E.g. is it: > > $ python -m unittest discover > $ python setup.py test > $ python setup.py nosetests > $ python -m nose test > $ nosetests-X.Y > > Besides having a multitude of choices, there's almost no way to automatically > discover (e.g. by metadata inspection or some such) how to invoke the tests. > You're often lucky if there's a README.test and it's still accurate. I hope we can have a "pytest" utility that does the right thing in 3.4 :-) Typing "python -m unittest discover" is too cumbersome. Regards Antoine. From amauryfa at gmail.com Mon Mar 4 20:06:31 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 4 Mar 2013 20:06:31 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: 2013/3/4 Serhiy Storchaka > On 01.03.13 17:24, Stefan Bucur wrote: > >> Before digging deeper into the issue, I wanted to ask here if there are >> any implicit assumptions about string identity and interning throughout >> the interpreter implementation. For instance, are two single-char >> strings having the same content supposed to be identical objects? >> > > I think this is not a bug if the code relies on the fact that an empty > string is a singleton. This obviously is an immutable object and there is > no public method to create different empty string. Really? >>> x = u'\xe9'.encode('ascii', 'ignore') >>> x == '', x is '' (True, False) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Mon Mar 4 20:08:50 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 4 Mar 2013 21:08:50 +0200 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304194148.7e6b42f5@pitrou.net> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> Message-ID: On Mon, Mar 4, 2013 at 8:41 PM, Antoine Pitrou wrote: > On Mon, 4 Mar 2013 11:51:04 -0500 > Barry Warsaw wrote: >> On Mar 04, 2013, at 11:34 AM, Brett Cannon wrote: >> >> >> And One True Way of invoking and/or discovering how to invoke, a package's >> >> test suite. >> > >> >How does unittest's test discovery not solve that? >> >> I should have added "from the command line". E.g. is it: >> >> $ python -m unittest discover >> $ python setup.py test >> $ python setup.py nosetests >> $ python -m nose test >> $ nosetests-X.Y >> >> Besides having a multitude of choices, there's almost no way to automatically >> discover (e.g. by metadata inspection or some such) how to invoke the tests. >> You're often lucky if there's a README.test and it's still accurate. > > I hope we can have a "pytest" utility that does the right thing in > 3.4 :-) > Typing "python -m unittest discover" is too cumbersome. > > Regards > > Antoine. Please pick a different name though, pytest is already widely used. From guido at python.org Mon Mar 4 20:11:54 2013 From: guido at python.org (Guido van Rossum) Date: Mon, 4 Mar 2013 11:11:54 -0800 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 11:06 AM, Amaury Forgeot d'Arc wrote: > > > 2013/3/4 Serhiy Storchaka >> >> On 01.03.13 17:24, Stefan Bucur wrote: >>> >>> Before digging deeper into the issue, I wanted to ask here if there are >>> any implicit assumptions about string identity and interning throughout >>> the interpreter implementation. For instance, are two single-char >>> strings having the same content supposed to be identical objects? >> >> >> I think this is not a bug if the code relies on the fact that an empty >> string is a singleton. This obviously is an immutable object and there is no >> public method to create different empty string. > > > Really? > >>>> x = u'\xe9'.encode('ascii', 'ignore') >>>> x == '', x is '' > (True, False) Code that relies on this is incorrect (the language doesn't guarantee interning) but nevertheless given the intention of the implementation, that behavior of encode() is also a bug. -- --Guido van Rossum (python.org/~guido) From barry at python.org Mon Mar 4 20:14:57 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 14:14:57 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304194148.7e6b42f5@pitrou.net> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> Message-ID: <20130304141457.603b4965@anarchist.wooz.org> On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote: >> $ python -m unittest discover >> $ python setup.py test >> $ python setup.py nosetests >> $ python -m nose test >> $ nosetests-X.Y >> >> Besides having a multitude of choices, there's almost no way to >> automatically discover (e.g. by metadata inspection or some such) how to >> invoke the tests. 
You're often lucky if there's a README.test and it's >> still accurate. > >I hope we can have a "pytest" utility that does the right thing in 3.4 :-) >Typing "python -m unittest discover" is too cumbersome. Where is this work being done (e.g. is there a PEP)? One thing to keep in mind is how to invoke this on a system with multiple versions of Python available. For example, in Debian, a decision was recently made to drop all the nosetests-X.Y scripts from /usr/bin[1]. This makes sense when you think about having at least two major versions of Python (2.x and 3.x) and maybe up to four (2.6, 2.7, 3.2, 3.3), *plus* debug versions of each. Add to that, we don't actually know at package build time which versions of Python you might have installed on your system. A suggestion was made to provide a main entry point so that `pythonX.Y -m nose` would work, which makes sense to me and was adopted by the nose-devs[2]. So while a top level `pytest` command may make sense, it also might not ;). While PEP 426 has a way to declare test dependencies (a good thing), it seems to have no way to declare how to actually run the tests. Cheers, -Barry [1] Start of thread: http://comments.gmane.org/gmane.linux.debian.devel.python/8572 [2] https://github.com/nose-devs/nose/issues/634 From amauryfa at gmail.com Mon Mar 4 20:20:00 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 4 Mar 2013 20:20:00 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: 2013/3/4 Guido van Rossum > >>>> x = u'\xe9'.encode('ascii', 'ignore') > >>>> x == '', x is '' > > (True, False) > > Code that relies on this is incorrect (the language doesn't guarantee > interning) but nevertheless given the intention of the implementation, > that behavior of encode() is also a bug. > The example above is obviously from python2.7; there is a similar example with python3.2: >>> x = b'\xe9\xe9'.decode('ascii', 'ignore') >>> x == '', x is '' (True, False) ...but this bug has been fixed in 3.3: PyUnicode_Resize() always returns the unicode_empty singleton. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From berker.peksag at gmail.com Mon Mar 4 20:26:18 2013 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Mon, 4 Mar 2013 21:26:18 +0200 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304141457.603b4965@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 9:14 PM, Barry Warsaw wrote: > On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote: > >>> $ python -m unittest discover >>> $ python setup.py test >>> $ python setup.py nosetests >>> $ python -m nose test >>> $ nosetests-X.Y >>> >>> Besides having a multitude of choices, there's almost no way to >>> automatically discover (e.g. by metadata inspection or some such) how to >>> invoke the tests. You're often lucky if there's a README.test and it's >>> still accurate. >> >>I hope we can have a "pytest" utility that does the right thing in 3.4 :-) >>Typing "python -m unittest discover" is too cumbersome. > > Where is this work being done (e.g. is there a PEP)? 
There is an open issue on the tracker: http://bugs.python.org/issue14266 --Berker From benno at benno.id.au Mon Mar 4 20:31:33 2013 From: benno at benno.id.au (Ben Leslie) Date: Tue, 5 Mar 2013 06:31:33 +1100 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On Tue, Mar 5, 2013 at 3:30 AM, Brian Curtin wrote: > The full announcement is at > http://blog.python.org/2013/03/introducing-electronic-contributor.html, > but a summary follows. > > We've now moved to an electronic Contributor License Agreement form at > http://www.python.org/psf/contrib/contrib-form/ which will hopefully > ease the signing and sending of forms for our potential contributors. > The form shows the required fields whether you're signing as an > individual or a representative of an organization, and removes the > need to print, scan, fax, etc. > > When a new contributor fills in the form, they are emailed a copy of > the form and asked to confirm the email address that they used (and > received that copy at). Upon confirming, the signed form is sent to > the PSF Administrator and filed away. > > The signature can either be generated from your typed name, or you can > draw or upload your actual written signature if you choose. > I had been procrastinating on filling in the paper version, but having this means no excuse. The process was very simple and straight forward. (The only difficult part was actually working out my python bugs username). Thanks for taking the administrative effort to get this all in place. Cheers, Benno -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Mon Mar 4 21:00:38 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 09:00:38 +1300 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> Message-ID: On 5 March 2013 05:34, Brett Cannon wrote: > > > > On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw wrote: >> >> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote: >> >> >It is of course possible for subunit and related tools to run their >> >own implementation, but it seems ideal to me to have a common API >> >which regular unittest, nose, py.test and others can all agree on and >> >use : better reuse for pretty printers, GUI displays and the like >> >depend on some common API. >> >> And One True Way of invoking and/or discovering how to invoke, a package's >> test suite. > > > How does unittest's test discovery not solve that? Three reasons a) There are some bugs (all filed I think) - I intend to hack on these in the near future - that prevent discovery working at all for some use cases. b) discovery requires magic parameters that are project specific (e.g. is it 'discover .' or 'discover . lib' to run it). This is arguably a setup.py/packaging entrypoint issue. c) Test suites written for e.g. Twisted, or nose, or other non-stdunit-runner-compatible test runners will fail to execute even when discovered correctly. There are ways to solve this without addressing a/b/c - just defining a standard command to run that signals success/failure with it's exit code. Packages can export a particular flavour of that in their setup.py if they have exceptional needs, and do nothing in the common case. 
That doesn't solve 'how to introspect a package test suite' but for
distro packagers - and large scale CI integration - that doesn't
matter. For instance testrepository offers a setuptools extension to
let it be used trivially; I believe nose does something similar.

Having something that would let *any* test suite spit out folks'
favourite test protocol du jour would be brilliant of course :).
[junit-xml, subunit, TAP, ...]

-Rob

--
Robert Collins
Distinguished Technologist
HP Cloud Services

From dholth at gmail.com Mon Mar 4 21:02:35 2013
From: dholth at gmail.com (Daniel Holth)
Date: Mon, 4 Mar 2013 15:02:35 -0500
Subject: [Python-Dev] Python Language Summit at PyCon: Agenda
In-Reply-To:
References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk>
 <20130304112937.520651d4@anarchist.wooz.org>
 <20130304115104.224493ef@anarchist.wooz.org>
 <20130304194148.7e6b42f5@pitrou.net>
 <20130304141457.603b4965@anarchist.wooz.org>
Message-ID:

On Mon, Mar 4, 2013 at 2:26 PM, Berker Peksağ wrote:
> On Mon, Mar 4, 2013 at 9:14 PM, Barry Warsaw wrote:
>> On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote:
>>
>>>> $ python -m unittest discover
>>>> $ python setup.py test
>>>> $ python setup.py nosetests
>>>> $ python -m nose test
>>>> $ nosetests-X.Y
>>>>
>>>> Besides having a multitude of choices, there's almost no way to
>>>> automatically discover (e.g. by metadata inspection or some such) how to
>>>> invoke the tests. You're often lucky if there's a README.test and it's
>>>> still accurate.
>>>
>>>I hope we can have a "pytest" utility that does the right thing in 3.4 :-)
>>>Typing "python -m unittest discover" is too cumbersome.
>>
>> Where is this work being done (e.g. is there a PEP)?
>
> There is an open issue on the tracker: http://bugs.python.org/issue14266
>
> --Berker

setup.py's setup(test_suite="x")... not sure if this is a distutils or
setuptools feature. PEP 426 has an extension mechanism that could do
the job.

From robertc at robertcollins.net Mon Mar 4 21:07:15 2013
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 5 Mar 2013 09:07:15 +1300
Subject: [Python-Dev] Python Language Summit at PyCon: Agenda
In-Reply-To: <20130304115104.224493ef@anarchist.wooz.org>
References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk>
 <20130304112937.520651d4@anarchist.wooz.org>
 <20130304115104.224493ef@anarchist.wooz.org>
Message-ID:

On 5 March 2013 05:51, Barry Warsaw wrote:
> I should have added "from the command line". E.g. is it:
>
> $ python -m unittest discover
> $ python setup.py test
> $ python setup.py nosetests
> $ python -m nose test
> $ nosetests-X.Y

$ testr run

:)

> Besides having a multitude of choices, there's almost no way to automatically
> discover (e.g. by metadata inspection or some such) how to invoke the tests.
> You're often lucky if there's a README.test and it's still accurate.

If there is a .testr.conf you can run 'testr init; testr run'. That's
the defined entry point for testr, and .testr.conf can specify running
make, or setup.py build or whatever else is needed to run tests.

I would love to see a declarative interface so that you can tell that
is what you should run.
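To make the 'standard command' idea and Daniel's declarative hook concrete, here is a minimal sketch. The command name 'test', the project name, and the choice of unittest discovery as the underlying runner are illustrative assumptions rather than any agreed interface; the only contract is the exit code. The declarative spelling, test_suite, is a setuptools feature.

    # setup.py
    import subprocess
    import sys
    from distutils.core import Command, setup

    class TestCommand(Command):
        description = "run this project's test suite"
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            # A project could just as easily shell out to trial, nose,
            # testr or anything else here; only the exit code matters.
            errno = subprocess.call(
                [sys.executable, '-m', 'unittest', 'discover'])
            raise SystemExit(errno)

    setup(
        name='example-dist',               # illustrative project name
        version='0.1',
        cmdclass={'test': TestCommand},
        # Declarative alternative (setuptools):
        #     test_suite='example_dist.tests',
    )

Either spelling gives distro packagers and CI systems a single, discoverable entry point: python setup.py test.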
-Rob -- Robert Collins Distinguished Technologist HP Cloud Services From barry at python.org Mon Mar 4 21:14:39 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 15:14:39 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> Message-ID: <20130304151439.611dd9aa@anarchist.wooz.org> On Mar 04, 2013, at 03:02 PM, Daniel Holth wrote: >setup.py's setup(test_suite="x")... not sure if this is a distutils or >setuptools feature. PEP 426 has an extension mechanism that could do >the job. Shouldn't "testing" be a first order feature? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From dholth at gmail.com Mon Mar 4 21:40:43 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 15:40:43 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304151439.611dd9aa@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> <20130304151439.611dd9aa@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 3:14 PM, Barry Warsaw wrote: > On Mar 04, 2013, at 03:02 PM, Daniel Holth wrote: > >>setup.py's setup(test_suite="x")... not sure if this is a distutils or >>setuptools feature. PEP 426 has an extension mechanism that could do >>the job. > > Shouldn't "testing" be a first order feature? Unfortunately there are so many potential first-order features that we've had to leave some out in order to save time. "How to run the tests" is not something that you need to know when searching PyPI for a distribution and its dependencies. From barry at python.org Mon Mar 4 21:45:05 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 15:45:05 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> <20130304151439.611dd9aa@anarchist.wooz.org> Message-ID: <20130304154505.016092cf@anarchist.wooz.org> On Mar 04, 2013, at 03:40 PM, Daniel Holth wrote: >On Mon, Mar 4, 2013 at 3:14 PM, Barry Warsaw wrote: >> On Mar 04, 2013, at 03:02 PM, Daniel Holth wrote: >> >>>setup.py's setup(test_suite="x")... not sure if this is a distutils or >>>setuptools feature. PEP 426 has an extension mechanism that could do >>>the job. >> >> Shouldn't "testing" be a first order feature? > >Unfortunately there are so many potential first-order features that >we've had to leave some out in order to save time. "How to run the >tests" is not something that you need to know when searching PyPI for >a distribution and its dependencies. Although "has unittests that I can run" might be a deciding factor of which of the many implementations of a particular feature you might choose. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tjreedy at udel.edu Mon Mar 4 21:46:48 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 04 Mar 2013 15:46:48 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On 3/4/2013 11:36 AM, Brett Cannon wrote: > > > > On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin > wrote: > > The full announcement is at > http://blog.python.org/2013/03/introducing-electronic-contributor.html, > but a summary follows. > > We've now moved to an electronic Contributor License Agreement form at > http://www.python.org/psf/contrib/contrib-form/ which will hopefully > ease the signing and sending of forms for our potential contributors. > The form shows the required fields whether you're signing as an > individual or a representative of an organization, and removes the > need to print, scan, fax, etc. > > When a new contributor fills in the form, they are emailed a copy of > the form and asked to confirm the email address that they used (and > received that copy at). Upon confirming, the signed form is sent to > the PSF Administrator and filed away. > > The signature can either be generated from your typed name, or you can > draw or upload your actual written signature if you choose. > > > With this in place I would like to propose that all patches submitted to > bugs.python.org must come from someone who has > signed the CLA before we consider committing it (if you want to be truly > paranoid we could say that we won't even look at the code w/o a CLA). Either policy could be facilitated by tracker changes. In order to see the file upload box, one must login and the tracker knows who has a CLA on file (as indicated by a * suffix on the name). If a file is uploaded by someone without, a box could popup with the link to the e-form and a message that a CLA is required. -- Terry Jan Reedy From victor.stinner at gmail.com Mon Mar 4 21:46:49 2013 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 4 Mar 2013 21:46:49 +0100 Subject: [Python-Dev] Disabling string interning for null and single-char causes segfaults In-Reply-To: References: Message-ID: Hi, 2013/3/4 Amaury Forgeot d'Arc : > The example above is obviously from python2.7; there is a similar example > with python3.2: >>>> x = b'\xe9\xe9'.decode('ascii', 'ignore') >>>> x == '', x is '' > (True, False) > > ...but this bug has been fixed in 3.3: PyUnicode_Resize() always returns the > unicode_empty singleton. Yeah, I tried to reuse singletons (empty string and latin-1 single letters) as much as possible to reduce memory footprint, not to ensure that an empty string is always the '' singleton. I wouldn't call this a bug. Victor From solipsis at pitrou.net Mon Mar 4 21:46:24 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Mar 2013 21:46:24 +0100 Subject: [Python-Dev] Introducing Electronic Contributor Agreements References: Message-ID: <20130304214624.0f6dec8c@pitrou.net> On Mon, 04 Mar 2013 15:46:48 -0500 Terry Reedy wrote: > On 3/4/2013 11:36 AM, Brett Cannon wrote: > > > > > > > > On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin > > wrote: > > > > The full announcement is at > > http://blog.python.org/2013/03/introducing-electronic-contributor.html, > > but a summary follows. 
> > > > We've now moved to an electronic Contributor License Agreement form at > > http://www.python.org/psf/contrib/contrib-form/ which will hopefully > > ease the signing and sending of forms for our potential contributors. > > The form shows the required fields whether you're signing as an > > individual or a representative of an organization, and removes the > > need to print, scan, fax, etc. > > > > When a new contributor fills in the form, they are emailed a copy of > > the form and asked to confirm the email address that they used (and > > received that copy at). Upon confirming, the signed form is sent to > > the PSF Administrator and filed away. > > > > The signature can either be generated from your typed name, or you can > > draw or upload your actual written signature if you choose. > > > > > > With this in place I would like to propose that all patches submitted to > > bugs.python.org must come from someone who has > > signed the CLA before we consider committing it (if you want to be truly > > paranoid we could say that we won't even look at the code w/o a CLA). > > Either policy could be facilitated by tracker changes. In order to see > the file upload box, one must login and the tracker knows who has a CLA > on file (as indicated by a * suffix on the name). If a file is uploaded > by someone without, a box could popup with the link to the e-form and a > message that a CLA is required. And how about people who upload something else than a patch? Regards Antoine. From pje at telecommunity.com Mon Mar 4 21:52:14 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 4 Mar 2013 15:52:14 -0500 Subject: [Python-Dev] Planning on removing cache invalidation for file finders In-Reply-To: References: <20130303021630.3f5340d5@pitrou.net> Message-ID: On Sun, Mar 3, 2013 at 12:31 PM, Brett Cannon wrote: > But how about this as a compromise over introducing write_module(): > invalidate_caches() can take a path for something to specifically > invalidate. The path can then be passed to the invalidate_caches() on > sys.meta_path. In the case of PathFinder it would take that path, try to > find the directory in sys.path_importer_cache, and then invalidate the most > specific finder for that path (if there is one that has any directory prefix > match). > > Lots of little details to specify (e.g. absolute path forced anywhere in > case a relative path is passed in by sys.path is all absolute paths? How do > we know something is a file if it has not been written yet?), but this would > prevent importlib from subsuming file writing specifically for source files > and minimize performance overhead of invalidating all caches for a single > file. ISTR that when we were first discussing caching, I'd proposed a TTL-based workaround for the timestamp granularity problem, and it was mooted because somebody already proposed and implemented a similar idea. But my approach -- or at least the one I have in mind now -- would provide an "eventual consistency" guarantee, while still allowing fast startup times. However I think the experience with this heuristic so far shows that the real problem isn't that the heuristic doesn't work for the normal case; it works fine for that. Instead, what happens is that *it doesn't work when you generate modules*. And *that* problem can be fixed without even invalidating the caches: it can be fixed by doing some extra work when writing a module - e.g. by making sure the directory mtime changes again after the module is written. 
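In rough code, the kind of helper I have in mind (the name and details are hypothetical, error handling omitted):

import os
import tempfile

def write_module(name, directory, source):
    # Remember the directory mtime the import system may have cached.
    before = os.stat(directory).st_mtime
    # Write the source under a temporary name first.
    fd, tmp = tempfile.mkstemp(dir=directory, suffix='.tmp')
    with os.fdopen(fd, 'w') as f:
        f.write(source)
    # Keep renaming until the directory mtime is observably different, so
    # the timestamp heuristic cannot miss the change.  (This can spin for
    # up to the filesystem's timestamp granularity.)
    count = 0
    while os.stat(directory).st_mtime == before:
        count += 1
        newtmp = os.path.join(directory, '%s.%d.tmp' % (name, count))
        os.rename(tmp, newtmp)
        tmp = newtmp
    final = os.path.join(directory, name + '.py')
    os.rename(tmp, final)
    return final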
For example, create the module under a temporary name, verify that the directory mtime is different than it was before, then keep renaming it to different temporary names until the mtime changes again, then rename it to the final name. (This would be very fast on some platforms, much slower on others, but the OS itself would tell you when it had worked.) A utility routine to "write_module()" or "write_package()" would be easier to find than advice that says to invalidate the cache under thus-and-such conditions, as it would show up in searches for writing modules dynamically or creating modules dynamically, where you could only search for info about the cache if you knew the cache existed. From eliben at gmail.com Mon Mar 4 22:26:57 2013 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 4 Mar 2013 13:26:57 -0800 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) Message-ID: [Splitting into a separate thread] Do we really need to overthink something that requires a trivial alias to set up for one's own convenience? Picking a Python version (as Barry mentions) is just one of the problems. What's wrong with: alias rupytests='python3 -m unittest discover' alias runpytests2='python2 -m unittest discover' ? Don't get me wrong, I love the "discover" option and agree that it should be the recommended way to go - but isn't this largely a documentation issue? Eli On Mon, Mar 4, 2013 at 11:14 AM, Barry Warsaw wrote: > On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote: > > >> $ python -m unittest discover > >> $ python setup.py test > >> $ python setup.py nosetests > >> $ python -m nose test > >> $ nosetests-X.Y > >> > >> Besides having a multitude of choices, there's almost no way to > >> automatically discover (e.g. by metadata inspection or some such) how to > >> invoke the tests. You're often lucky if there's a README.test and it's > >> still accurate. > > > >I hope we can have a "pytest" utility that does the right thing in 3.4 :-) > >Typing "python -m unittest discover" is too cumbersome. > > Where is this work being done (e.g. is there a PEP)? > > One thing to keep in mind is how to invoke this on a system with multiple > versions of Python available. For example, in Debian, a decision was > recently > made to drop all the nosetests-X.Y scripts from /usr/bin[1]. > > This makes sense when you think about having at least two major versions of > Python (2.x and 3.x) and maybe up to four (2.6, 2.7, 3.2, 3.3), *plus* > debug > versions of each. Add to that, we don't actually know at package build > time > which versions of Python you might have installed on your system. > > A suggestion was made to provide a main entry point so that `pythonX.Y -m > nose` would work, which makes sense to me and was adopted by the > nose-devs[2]. > > So while a top level `pytest` command may make sense, it also might not ;). > While PEP 426 has a way to declare test dependencies (a good thing), it > seems > to have no way to declare how to actually run the tests. > > Cheers, > -Barry > > [1] Start of thread: > http://comments.gmane.org/gmane.linux.debian.devel.python/8572 > > [2] https://github.com/nose-devs/nose/issues/634 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/eliben%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From breamoreboy at yahoo.co.uk Mon Mar 4 22:33:55 2013 From: breamoreboy at yahoo.co.uk (Mark Lawrence) Date: Mon, 04 Mar 2013 21:33:55 +0000 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On 04/03/2013 20:46, Terry Reedy wrote: > On 3/4/2013 11:36 AM, Brett Cannon wrote: >> >> >> >> On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin > > wrote: >> >> The full announcement is at >> >> http://blog.python.org/2013/03/introducing-electronic-contributor.html, >> but a summary follows. >> >> We've now moved to an electronic Contributor License Agreement >> form at >> http://www.python.org/psf/contrib/contrib-form/ which will hopefully >> ease the signing and sending of forms for our potential contributors. >> The form shows the required fields whether you're signing as an >> individual or a representative of an organization, and removes the >> need to print, scan, fax, etc. >> >> When a new contributor fills in the form, they are emailed a copy of >> the form and asked to confirm the email address that they used (and >> received that copy at). Upon confirming, the signed form is sent to >> the PSF Administrator and filed away. >> >> The signature can either be generated from your typed name, or you >> can >> draw or upload your actual written signature if you choose. >> >> >> With this in place I would like to propose that all patches submitted to >> bugs.python.org must come from someone who has >> signed the CLA before we consider committing it (if you want to be truly >> paranoid we could say that we won't even look at the code w/o a CLA). > > Either policy could be facilitated by tracker changes. In order to see > the file upload box, one must login and the tracker knows who has a CLA > on file (as indicated by a * suffix on the name). If a file is uploaded > by someone without, a box could popup with the link to the e-form and a > message that a CLA is required. > People already use the bug tracker as an excuse not to contribute, wouldn't this requirement make the situation worse? -- Cheers. Mark Lawrence From solipsis at pitrou.net Mon Mar 4 22:28:48 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Mar 2013 22:28:48 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) References: Message-ID: <20130304222848.5006bf3b@pitrou.net> On Mon, 4 Mar 2013 13:26:57 -0800 Eli Bendersky wrote: > [Splitting into a separate thread] > > Do we really need to overthink something that requires a trivial alias to > set up for one's own convenience? > > Picking a Python version (as Barry mentions) is just one of the problems. > What's wrong with: > > alias rupytests='python3 -m unittest discover" > alias runpytests2='python2 -m unittest discover" > > ? > > Don't get me wrong, I love the "discover" option and agree that it should > be the recommended way to go - but isn't this largely a documentation issue? I would personally call it a typing issue :-) "python -m unittest discover" is just too long. Regards Antoine. From robertc at robertcollins.net Mon Mar 4 23:01:38 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 11:01:38 +1300 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: Message-ID: On 5 March 2013 10:26, Eli Bendersky wrote: > [Splitting into a separate thread] > > Do we really need to overthink something that requires a trivial alias to > set up for one's own convenience? 
The big thing is automated tools, not developers. When distributors want to redistribute packages they want to be sure they work. Running the tests is a pretty good signal for that, but having every package slightly different adds to the work they need to do. Being able to do 'setup.py test' consistently, everywhere - that would be great. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From brett at python.org Mon Mar 4 23:04:13 2013 From: brett at python.org (Brett Cannon) Date: Mon, 4 Mar 2013 17:04:13 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304154505.016092cf@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> <20130304151439.611dd9aa@anarchist.wooz.org> <20130304154505.016092cf@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 3:45 PM, Barry Warsaw wrote: > On Mar 04, 2013, at 03:40 PM, Daniel Holth wrote: > > >On Mon, Mar 4, 2013 at 3:14 PM, Barry Warsaw wrote: > >> On Mar 04, 2013, at 03:02 PM, Daniel Holth wrote: > >> > >>>setup.py's setup(test_suite="x")... not sure if this is a distutils or > >>>setuptools feature. PEP 426 has an extension mechanism that could do > >>>the job. > >> > >> Shouldn't "testing" be a first order feature? > > > >Unfortunately there are so many potential first-order features that > >we've had to leave some out in order to save time. "How to run the > >tests" is not something that you need to know when searching PyPI for > >a distribution and its dependencies. > > Although "has unittests that I can run" might be a deciding factor of > which of > the many implementations of a particular feature you might choose. > Sure, but that has nothing to do with programmatic package discovery. That's something you will have to do as a person in making a qualitative decision along the same lines as API design. Flipping a bit in a config file saying "I have tests" doesn't say much beyond you flipped a bit, e.g. no idea on coverage, quality, etc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Mar 4 23:08:11 2013 From: brett at python.org (Brett Cannon) Date: Mon, 4 Mar 2013 17:08:11 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On Mon, Mar 4, 2013 at 4:33 PM, Mark Lawrence wrote: > On 04/03/2013 20:46, Terry Reedy wrote: > >> On 3/4/2013 11:36 AM, Brett Cannon wrote: >> >>> >>> >>> >>> On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin >> > wrote: >>> >>> The full announcement is at >>> >>> http://blog.python.org/2013/**03/introducing-electronic-** >>> contributor.html >>> , >>> but a summary follows. >>> >>> We've now moved to an electronic Contributor License Agreement >>> form at >>> http://www.python.org/psf/**contrib/contrib-form/which will hopefully >>> ease the signing and sending of forms for our potential contributors. >>> The form shows the required fields whether you're signing as an >>> individual or a representative of an organization, and removes the >>> need to print, scan, fax, etc. >>> >>> When a new contributor fills in the form, they are emailed a copy of >>> the form and asked to confirm the email address that they used (and >>> received that copy at). Upon confirming, the signed form is sent to >>> the PSF Administrator and filed away. 
>>> >>> The signature can either be generated from your typed name, or you >>> can >>> draw or upload your actual written signature if you choose. >>> >>> >>> With this in place I would like to propose that all patches submitted to >>> bugs.python.org must come from someone who has >>> signed the CLA before we consider committing it (if you want to be truly >>> paranoid we could say that we won't even look at the code w/o a CLA). >>> >> >> Either policy could be facilitated by tracker changes. In order to see >> the file upload box, one must login and the tracker knows who has a CLA >> on file (as indicated by a * suffix on the name). If a file is uploaded >> by someone without, a box could popup with the link to the e-form and a >> message that a CLA is required. >> >> > People already use the bug tracker as an excuse not to contribute, > wouldn't this requirement make the situation worse? Depends on your paranoia. If you're worried about accidentally lifting IP merely by reading someone's source code, then you wouldn't want to touch code without the CLA signed. Now I'm not that paranoid, but I'm still not about to commit someone's code now without the CLA signed to make sure we are legally covered for the patch. If someone chooses not to contribute because of the CLA that's fine, but since we have already told at least Anatoly that we won't accept patches from him until he signs the CLA I'm not going to start acting differently towards others. I view legally covering our ass by having someone fill in a form is worth the potential loss of some contribution in the grand scheme of things. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Mar 4 23:14:39 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 17:14:39 -0500 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: Message-ID: <20130304171439.7c37c840@anarchist.wooz.org> On Mar 05, 2013, at 11:01 AM, Robert Collins wrote: >The big thing is automated tools, not developers. Exactly. -Barry From barry at python.org Mon Mar 4 23:24:55 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 17:24:55 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> <20130304151439.611dd9aa@anarchist.wooz.org> <20130304154505.016092cf@anarchist.wooz.org> Message-ID: <20130304172455.5ef0aa5e@anarchist.wooz.org> On Mar 04, 2013, at 05:04 PM, Brett Cannon wrote: >Sure, but that has nothing to do with programmatic package discovery. >That's something you will have to do as a person in making a qualitative >decision along the same lines as API design. Flipping a bit in a config >file saying "I have tests" doesn't say much beyond you flipped a bit, e.g. >no idea on coverage, quality, etc. What I'm looking for is something that automated tools can use to easily discover how to run a package's tests. I want it to be dead simple for developers of a package to declare how their tests are to be run, and what extra dependencies they might need. It seems like PEP 426 only addresses the latter. Maybe that's fine and a different PEP is needed to describe automated test discover, but I still think it's an important use case. 
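Even a tiny declaration in the metadata would be enough for tools to act on -- purely a strawman here, PEP 426 defines no such keys today:

    "test": {
        "requires": ["nose"],
        "command": "python -m nose"
    }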
Imagine: * Every time you upload a package to PyPI, snakebite runs your test suite on a variety of Python versions and platforms. You get a nice link to the Jenkins results so you and your users get a good sense of overall package quality. * You have an automated gatekeeper that will prevent commits or uploads if your coverage or test results get worse instead of better. * Distro packagers can build tools that auto-discover the tests so that they are run automatically when the package is built, ensuring high quality packages specifically targeted to those distros. As a community, we know how important tests are, so I think our tools should reflect that and make it easy for those tests to be expressed. As a selfish side-effect, I want to reduce the amount of guesswork I need to perform in order to know how to run a package's test when I `$vcs clone` their repository. ;) Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tjreedy at udel.edu Tue Mar 5 00:23:46 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 04 Mar 2013 18:23:46 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: <20130304214624.0f6dec8c@pitrou.net> References: <20130304214624.0f6dec8c@pitrou.net> Message-ID: On 3/4/2013 3:46 PM, Antoine Pitrou wrote: > On Mon, 04 Mar 2013 15:46:48 -0500 > Terry Reedy wrote: >> On 3/4/2013 11:36 AM, Brett Cannon wrote: >>> With this in place I would like to propose that all patches submitted to >>> bugs.python.org must come from someone who has >>> signed the CLA before we consider committing it (if you want to be truly >>> paranoid we could say that we won't even look at the code w/o a CLA). While I regard CLAs as partly being a form of legal theater, I regard our participation as necessary, both to make explicit to contributors what should be implicit in the act of submission *and* to show to copyright holders a good-faith effort to not improperly incorporate their code. Note: no one expected the Linux copyright challenge, nor our European trademark challenge, but they happened. I expect there will be more challenges to open source projects, perhaps some legitimate as the number of contributors increases. >> Either policy could be facilitated by tracker changes. In order to see >> the file upload box, one must login and the tracker knows who has a CLA >> on file (as indicated by a * suffix on the name). If a file is uploaded >> by someone without, a box could popup with the link to the e-form and a >> message that a CLA is required. > > And how about people who upload something else than a patch? Limit the popup to files with .diff or .patch extension. Reviewers can check for '*' for the occasionally patch lacking that. -- Terry Jan Reedy From eliben at gmail.com Tue Mar 5 00:47:37 2013 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 4 Mar 2013 15:47:37 -0800 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <20130304222848.5006bf3b@pitrou.net> References: <20130304222848.5006bf3b@pitrou.net> Message-ID: On Mon, Mar 4, 2013 at 1:28 PM, Antoine Pitrou wrote: > On Mon, 4 Mar 2013 13:26:57 -0800 > Eli Bendersky wrote: > > [Splitting into a separate thread] > > > > Do we really need to overthink something that requires a trivial alias to > > set up for one's own convenience? 
> > > > Picking a Python version (as Barry mentions) is just one of the problems. > > What's wrong with: > > > > alias rupytests='python3 -m unittest discover" > > alias runpytests2='python2 -m unittest discover" > > > > ? > > > > Don't get me wrong, I love the "discover" option and agree that it should > > be the recommended way to go - but isn't this largely a documentation > issue? > > I would personally call it a typing issue :-) "python -m unittest > discover" is just too long. > Command-line options for advanced capabilities can get long, yes. It's not a reason to add an extra layer, which is extra complexity, IMHO. The user is free to create his own shortcuts if this is too much typing. Moreover, many projects already have a way to run "all tests" from their root directory. As a case in point, we also have the useful: $ python -m SimpleHTTPServer So why not create a new "pyserve" command to reduce the amount of typing? Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Tue Mar 5 00:49:50 2013 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 4 Mar 2013 15:49:50 -0800 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <20130304171439.7c37c840@anarchist.wooz.org> References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 2:14 PM, Barry Warsaw wrote: > On Mar 05, 2013, at 11:01 AM, Robert Collins wrote: > > >The big thing is automated tools, not developers. > > Exactly. > I don't understand. Is "python -m unittest discover" too much typing for automatic tools? If anything, it's much more portable across Python versions since any new coommand/script won't be added before 3.4, while the longer version works in 2.7 and 3.2+ Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Mar 5 01:09:39 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 13:09:39 +1300 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On 5 March 2013 12:49, Eli Bendersky wrote: > > On Mon, Mar 4, 2013 at 2:14 PM, Barry Warsaw wrote: >> >> On Mar 05, 2013, at 11:01 AM, Robert Collins wrote: >> >> >The big thing is automated tools, not developers. >> >> Exactly. > > I don't understand. Is "python -m unittest discover" too much typing for > automatic tools? If anything, it's much more portable across Python versions > since any new coommand/script won't be added before 3.4, while the longer > version works in 2.7 and 3.2+ It isn't about length. It is about knowing that *that* is what to type (and btw that exact command cannot run twisted's tests, among many other projects tests). Perhaps we are talking about different things. A top level script to run tests is interesting, but orthogonal to the thing Barry was asking for. 
-Rob -- Robert Collins Distinguished Technologist HP Cloud Services From michael at voidspace.org.uk Tue Mar 5 01:13:19 2013 From: michael at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:13:19 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130303012939.GC62205@snakebite.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130303012939.GC62205@snakebite.org> Message-ID: <1BC5ABCC-7683-4D74-85AB-622A57E9F38D@voidspace.org.uk> On 3 Mar 2013, at 01:29, Trent Nelson wrote: > On Wed, Feb 27, 2013 at 08:51:16AM -0800, Michael Foord wrote: >> If you have other items you'd like to discuss please let me know and I >> can add them to the agenda. > > Hmm, seems like this might be a good forum to introduce the > parallel/async stuff I've been working on the past few months. > TL;DR version is I've come up with an alternative approach for > exploiting multiple cores that doesn't rely on GIL-removal or > STM (and has a negligible performance overhead when executing > single-threaded code). (For those that are curious, it lives > in the px branch of the sandbox/trent repo on hg.p.o, albeit > in a very experimental/prototype/proof-of-concept state (i.e. > it's an unorganized, undocumented, uncommented hackfest); on > the plus side, it works. Sort of.) > > Second suggestion: perhaps a little segment on Snakebite? What > it is, what's available to committers, feedback/kvetching from > those who have already used it, etc. > I've added both to the agenda. > (I forgot the format of these summits -- is there a projector?) > I've asked for a projector, yes. Michael > Trent. -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From tjreedy at udel.edu Tue Mar 5 01:16:27 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 04 Mar 2013 19:16:27 -0500 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304172455.5ef0aa5e@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> <20130304151439.611dd9aa@anarchist.wooz.org> <20130304154505.016092cf@anarchist.wooz.org> <20130304172455.5ef0aa5e@anarchist.wooz.org> Message-ID: On 3/4/2013 5:24 PM, Barry Warsaw wrote: > What I'm looking for is something that automated tools can use to easily > discover how to run a package's tests. I want it to be dead simple for > developers of a package to declare how their tests are to be run, and what I am writing a package that has tests for each module (which I so far run individually for each module) using a custom test framework. I am planning to add a function to the package to run all of them. Should I call it 'testall', 'test_all', 'runtests', or something else? I really do not care. It would be used like this. import xxx; xxx.testall() Of course, this would not work with the stdlib since /lib is not a package that can be imported. I could put the same code in the top level of a module, to be run when imported (but that would not work with re-imports), or put the function in my test module. I am willing to adjust to a standard when there is one. 
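If the framework were plain unittest, the entry point could be as small as this (a sketch only; the name and layout are assumptions, and my own framework works differently):

import os
import unittest

def testall():
    # Discover and run every test module shipped inside this package;
    # True means everything passed.
    here = os.path.dirname(__file__)
    suite = unittest.defaultTestLoader.discover(here)
    result = unittest.TextTestRunner().run(suite)
    return result.wasSuccessful()

TextTestRunner reports to stderr by default, which fits the side-effect behaviour described next.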
What I do suggest is that package developers should only have to provide one standard entry point that hides all package-specific details. I presume the side-effect spec would be error messages to sdterr. Any return requirements should be a simple as possible, as in all pass True/False, or (number run, number fail) by whatever counting method the package/test framework uses. (Note: my framework does not count tests, as I only care about failure messages, but testall could count modules tested and those with a failure.) > extra dependencies they might need. It seems like PEP 426 only addresses the > latter. Maybe that's fine and a different PEP is needed to describe automated > test discover, but I still think it's an important use case. New PEP. -- Terry Jan Reedy From ncoghlan at gmail.com Tue Mar 5 01:20:00 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 10:20:00 +1000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304141457.603b4965@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> Message-ID: On 5 Mar 2013 05:21, "Barry Warsaw" wrote: > > On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote: > > >> $ python -m unittest discover > >> $ python setup.py test > >> $ python setup.py nosetests > >> $ python -m nose test > >> $ nosetests-X.Y > >> > >> Besides having a multitude of choices, there's almost no way to > >> automatically discover (e.g. by metadata inspection or some such) how to > >> invoke the tests. You're often lucky if there's a README.test and it's > >> still accurate. > > > >I hope we can have a "pytest" utility that does the right thing in 3.4 :-) > >Typing "python -m unittest discover" is too cumbersome. > > Where is this work being done (e.g. is there a PEP)? > > One thing to keep in mind is how to invoke this on a system with multiple > versions of Python available. For example, in Debian, a decision was recently > made to drop all the nosetests-X.Y scripts from /usr/bin[1]. > > This makes sense when you think about having at least two major versions of > Python (2.x and 3.x) and maybe up to four (2.6, 2.7, 3.2, 3.3), *plus* debug > versions of each. Add to that, we don't actually know at package build time > which versions of Python you might have installed on your system. > > A suggestion was made to provide a main entry point so that `pythonX.Y -m > nose` would work, which makes sense to me and was adopted by the > nose-devs[2]. > > So while a top level `pytest` command may make sense, it also might not ;). > While PEP 426 has a way to declare test dependencies (a good thing), it seems > to have no way to declare how to actually run the tests. Metadata 2.0 won't cover that, 2.1 probably will. Please give us time to solve problems incrementally rather than trying to fix everything at once. Regards, Nick. > > Cheers, > -Barry > > [1] Start of thread: > http://comments.gmane.org/gmane.linux.debian.devel.python/8572 > > [2] https://github.com/nose-devs/nose/issues/634 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuzzyman at voidspace.org.uk Tue Mar 5 01:21:49 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:21:49 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On 4 Mar 2013, at 06:26, Robert Collins wrote: > On 4 March 2013 18:54, Guido van Rossum wrote: >> On Sun, Mar 3, 2013 at 9:24 PM, Robert Collins >> wrote: >>> I'd like to talk about overhauling - not tweaking, overhauling - the >>> standard library testing facilities. >> >> That seems like too big a topic and too vague a description to discuss >> usefully. Perhaps you have a specific proposal? Or at least just a use >> case that's poorly covered? > > I have both - I have a draft implementation for a new test result API > (and forwards and backwards compat code etc), and use cases that drive > it. I started a thread here - > http://lists.idyll.org/pipermail/testing-in-python/2013-February/005434.html > , with blog posts > https://rbtcollins.wordpress.com/2013/02/14/time-to-revise-the-subunit-protocol/ > https://rbtcollins.wordpress.com/2013/02/15/more-subunit-needs/ > https://rbtcollins.wordpress.com/2013/02/19/first-experience-implementing-streamresult/ > https://rbtcollins.wordpress.com/2013/02/23/simpler-is-better/ > > They are focused on subunit, but much of subunit's friction has been > due to issues encountered from the stdlibrary TestResult API - in > particular three things: > - the single-active-test model that the current API (or at least > implementation) has. > - the expectation that all test outcomes will originate from the same > interpreter (or something with a live traceback object) > - the inability to supply details about errors other than the exception > > All of which start to bite rather deep when working on massively > parallel test environments. > > It is of course possible for subunit and related tools to run their > own implementation, but it seems ideal to me to have a common API > which regular unittest, nose, py.test and others can all agree on and > use : better reuse for pretty printers, GUI displays and the like > depend on some common API. > >> TBH, your choice of words is ambiguous -- are you interested in >> overhauling the facilities for testing *of* the standard library (i.e. >> the 'test' package), or the testing facilities *provided by* the >> standard library (i.e. the unittest module)? > > Sorry! Testing facilities provided by the standard library. They > should naturally facilitate testing of the standard library too. We can certainly talk about it - although as Guido says, something specific may be easier to have a useful discussion about. Reading through your blog articles it seemed like a whole lot of subunit context was required to understand the specific proposal you're making for the TestResult. It also *seems* like you're redesigning the TestResult for a single use case (distributed testing) with an api that looks quite "odd" for anything that isn't that use case. I'd rather see how we can make the TestResult play *better* with those requirements. That discussion probably belongs in another thread - or at the summit. 
Michael > > -Rob > >> -- >> --Guido van Rossum (python.org/~guido) > > > > -- > Robert Collins > Distinguished Technologist > HP Cloud Services > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From fuzzyman at voidspace.org.uk Tue Mar 5 01:21:58 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:21:58 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> Message-ID: On 4 Mar 2013, at 19:26, Berker Peksa? wrote: > On Mon, Mar 4, 2013 at 9:14 PM, Barry Warsaw wrote: >> On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote: >> >>>> $ python -m unittest discover >>>> $ python setup.py test >>>> $ python setup.py nosetests >>>> $ python -m nose test >>>> $ nosetests-X.Y >>>> >>>> Besides having a multitude of choices, there's almost no way to >>>> automatically discover (e.g. by metadata inspection or some such) how to >>>> invoke the tests. You're often lucky if there's a README.test and it's >>>> still accurate. >>> >>> I hope we can have a "pytest" utility that does the right thing in 3.4 :-) >>> Typing "python -m unittest discover" is too cumbersome. >> >> Where is this work being done (e.g. is there a PEP)? > > There is an open issue on the tracker: http://bugs.python.org/issue14266 > Indeed, and unittest2 (the backport) which has to work with Python 2.6 (where "python -m package_name" doesn't work) has "unit2" as a shortcut. So it has an advantage over the standard library version here. I'd like to see pyunit as a short-cut for "python -m unittest discover", with a "pyunit-3.x" variant too. Barry objects that Linux distributions won't want to support all of these, which is frankly their problem. Michael > --Berker > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From robertc at robertcollins.net Tue Mar 5 01:23:25 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 13:23:25 +1300 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On 5 March 2013 13:21, Michael Foord wrote: > > We can certainly talk about it - although as Guido says, something specific may be easier to have a useful discussion about. > > Reading through your blog articles it seemed like a whole lot of subunit context was required to understand the specific > proposal you're making for the TestResult. 
It also *seems* like you're redesigning the TestResult for a single use case > (distributed testing) with an api that looks quite "odd" for anything that isn't that use case. I'd rather see how we can > make the TestResult play *better* with those requirements. That discussion probably belongs in another thread - or at > the summit. Right - all I wanted was to flag that you and I and any other interested parties should discuss this at the summit :). -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From fuzzyman at voidspace.org.uk Tue Mar 5 01:33:20 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:33:20 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: On 28 Feb 2013, at 13:49, Brett Cannon wrote: > > > > On Thu, Feb 28, 2013 at 6:34 AM, Michael Foord wrote: > > On 28 Feb 2013, at 07:36, Georg Brandl wrote: > > > Am 27.02.2013 17:51, schrieb Michael Foord: > >> Hello all, > >> > >> PyCon, and the Python Language Summit, is nearly upon us. We have a good number of people confirmed to attend. If you are intending to come to the language summit but haven't let me know please do so. > >> > >> The agenda of topics for discussion so far includes the following: > >> > >> * A report on pypy status - Maciej and Armin > >> * Jython and IronPython status reports - Dino / Frank > >> * Packaging (Doug Hellmann and Monty Taylor at least) > >> * Cleaning up interpreter initialisation (both in hopes of finding areas > >> to rationalise and hence speed things up, as well as making things > >> more embedding friendly). Nick Coghlan > >> * Adding new async capabilities to the standard library (Guido) > >> * cffi and the standard library - Maciej > >> * flufl.enum and the standard library - Barry Warsaw > >> * The argument clinic - Larry Hastings > >> > >> If you have other items you'd like to discuss please let me know and I can add them to the agenda. > > > > May I in absentia propose at least a short discussion of the XML fixes > > and accompanying security releases? FWIW, for 3.2 and 3.3 I have no > > objections to secure-by-default. > > > > Sure. It would be good if someone who *will* be there can champion the discussion. > > While Christian is in the best position to discuss this, I did review his various monkeypatch fixes + expat patches so I can attempt to answer any questions people may have. I've put you next to the topic in the agenda Brett :-) Michael -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From fuzzyman at voidspace.org.uk Tue Mar 5 01:35:51 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:35:51 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> Message-ID: On 4 Mar 2013, at 20:02, Daniel Holth wrote: > On Mon, Mar 4, 2013 at 2:26 PM, Berker Peksa? 
wrote: >> On Mon, Mar 4, 2013 at 9:14 PM, Barry Warsaw wrote: >>> On Mar 04, 2013, at 07:41 PM, Antoine Pitrou wrote: >>> >>>>> $ python -m unittest discover >>>>> $ python setup.py test >>>>> $ python setup.py nosetests >>>>> $ python -m nose test >>>>> $ nosetests-X.Y >>>>> >>>>> Besides having a multitude of choices, there's almost no way to >>>>> automatically discover (e.g. by metadata inspection or some such) how to >>>>> invoke the tests. You're often lucky if there's a README.test and it's >>>>> still accurate. >>>> >>>> I hope we can have a "pytest" utility that does the right thing in 3.4 :-) >>>> Typing "python -m unittest discover" is too cumbersome. >>> >>> Where is this work being done (e.g. is there a PEP)? >> >> There is an open issue on the tracker: http://bugs.python.org/issue14266 >> >> --Berker > > setup.py's setup(test_suite="x")... not sure if this is a distutils or > setuptools feature. PEP 426 has an extension mechanism that could do > the job. This is a setuptools extension. There was some discussion for packaging/distutils2 of having test support but I have no idea if that has been picked up for the new bunch of packaging related work. Michael > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From eliben at gmail.com Tue Mar 5 01:35:41 2013 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 4 Mar 2013 16:35:41 -0800 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Mon, Mar 4, 2013 at 4:09 PM, Robert Collins wrote: > On 5 March 2013 12:49, Eli Bendersky wrote: > > > > On Mon, Mar 4, 2013 at 2:14 PM, Barry Warsaw wrote: > >> > >> On Mar 05, 2013, at 11:01 AM, Robert Collins wrote: > >> > >> >The big thing is automated tools, not developers. > >> > >> Exactly. > > > > I don't understand. Is "python -m unittest discover" too much typing for > > automatic tools? If anything, it's much more portable across Python > versions > > since any new coommand/script won't be added before 3.4, while the longer > > version works in 2.7 and 3.2+ > > It isn't about length. It is about knowing that *that* is what to type > (and btw that exact command cannot run twisted's tests, among many > other projects tests). > > Perhaps we are talking about different things. A top level script to > run tests is interesting, but orthogonal to the thing Barry was asking > for. > Perhaps :-) I'm specifically referring to a new top-level script that will run all unittests in discovery mode from the current directory, as a shortcut to "python -m unittest discover". ISTM this is at leas in part what was discussed, and my email was in this context. Eli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuzzyman at voidspace.org.uk Tue Mar 5 01:39:40 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:39:40 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130301193803.4156607c@pitrou.net> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> <20130301193803.4156607c@pitrou.net> Message-ID: On 1 Mar 2013, at 18:38, Antoine Pitrou wrote: > On Fri, 1 Mar 2013 09:32:23 -0500 > Barry Warsaw wrote: >> >>> On the other hand in some ways Jython is sort of like Python on a >>> weird virtual OS that lets the real OS bleed through some. This may >>> still need to be checked in that way (there's are still checks of >> os.name == 'nt'> right?) >> >> Yeah, but that all ooooold code ;) > > Hmm, what do you mean? `os.name == 'nt'` is still the proper way to > test that we're running on a Windows system (more accurately, over the > Windows API). > It has been used incorrectly in a few places in the Python standard library - Windows support code that would work correctly on IronPython is skipped because os.name is *not* 'nt' on IronPython. That was the case in the past anyway. It's quite some time since I've used IronPython now. Michael > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From robertc at robertcollins.net Tue Mar 5 01:41:01 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 13:41:01 +1300 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On 5 March 2013 13:35, Eli Bendersky wrote: > Perhaps :-) > I'm specifically referring to a new top-level script that will run all > unittests in discovery mode from the current directory, as a shortcut to > "python -m unittest discover". ISTM this is at leas in part what was > discussed, and my email was in this context. So that is interesting, but its not sufficient to meet the automation need Barry is calling out, unless all test suites can be run by 'python -m unittest discover' with no additional parameters [and a pretty large subset cannot]. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From breamoreboy at yahoo.co.uk Tue Mar 5 01:51:24 2013 From: breamoreboy at yahoo.co.uk (Mark Lawrence) Date: Tue, 05 Mar 2013 00:51:24 +0000 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On 04/03/2013 22:08, Brett Cannon wrote: > > > > On Mon, Mar 4, 2013 at 4:33 PM, Mark Lawrence > wrote: > > On 04/03/2013 20:46, Terry Reedy wrote: > > On 3/4/2013 11:36 AM, Brett Cannon wrote: > > > > > On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin > > >> wrote: > > The full announcement is at > > http://blog.python.org/2013/__03/introducing-electronic-__contributor.html > , > but a summary follows. 
> > We've now moved to an electronic Contributor License > Agreement > form at > http://www.python.org/psf/__contrib/contrib-form/ > which will > hopefully > ease the signing and sending of forms for our potential > contributors. > The form shows the required fields whether you're > signing as an > individual or a representative of an organization, and > removes the > need to print, scan, fax, etc. > > When a new contributor fills in the form, they are > emailed a copy of > the form and asked to confirm the email address that > they used (and > received that copy at). Upon confirming, the signed > form is sent to > the PSF Administrator and filed away. > > The signature can either be generated from your typed > name, or you > can > draw or upload your actual written signature if you choose. > > > With this in place I would like to propose that all patches > submitted to > bugs.python.org > must come from someone who has > signed the CLA before we consider committing it (if you want > to be truly > paranoid we could say that we won't even look at the code > w/o a CLA). > > > Either policy could be facilitated by tracker changes. In order > to see > the file upload box, one must login and the tracker knows who > has a CLA > on file (as indicated by a * suffix on the name). If a file is > uploaded > by someone without, a box could popup with the link to the > e-form and a > message that a CLA is required. > > > People already use the bug tracker as an excuse not to contribute, > wouldn't this requirement make the situation worse? > > > Depends on your paranoia. If you're worried about accidentally lifting > IP merely by reading someone's source code, then you wouldn't want to > touch code without the CLA signed. > > Now I'm not that paranoid, but I'm still not about to commit someone's > code now without the CLA signed to make sure we are legally covered for > the patch. If someone chooses not to contribute because of the CLA > that's fine, but since we have already told at least Anatoly that we > won't accept patches from him until he signs the CLA I'm not going to > start acting differently towards others. I view legally covering our ass > by having someone fill in a form is worth the potential loss of some > contribution in the grand scheme of things. > > Who's talking source code, you're previously mentioned *ALL* patches needing a CLA. Does this mean you have to sign a CLA for a one line documentation patch? What is the definition of a patch, an actual patch file or a proposal for a change that is given within a bug tracker message? -- Cheers. Mark Lawrence From fuzzyman at voidspace.org.uk Tue Mar 5 01:50:20 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:50:20 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> Message-ID: <17397A0A-200F-41F5-ACF1-2E3C2338865E@voidspace.org.uk> On 5 Mar 2013, at 00:23, Robert Collins wrote: > On 5 March 2013 13:21, Michael Foord wrote: >> > >> We can certainly talk about it - although as Guido says, something specific may be easier to have a useful discussion about. >> >> Reading through your blog articles it seemed like a whole lot of subunit context was required to understand the specific >> proposal you're making for the TestResult. It also *seems* like you're redesigning the TestResult for a single use case >> (distributed testing) with an api that looks quite "odd" for anything that isn't that use case. 
I'd rather see how we can >> make the TestResult play *better* with those requirements. That discussion probably belongs in another thread - or at >> the summit. > > Right - all I wanted was to flag that you and I and any other > interested parties should discuss this at the summit :). I've added a testing topic to the agenda. At the very least you could outline your streaming test result proposal, or kick off a meta discussion. We'll probably time limit the discussion so some specific focus will make it more productive - or maybe you can get a feel for how open to major changes in this area other python devs are. Michael > > -Rob > > > > > > > -- > Robert Collins > Distinguished Technologist > HP Cloud Services -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From robertc at robertcollins.net Tue Mar 5 01:52:51 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 13:52:51 +1300 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <17397A0A-200F-41F5-ACF1-2E3C2338865E@voidspace.org.uk> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <17397A0A-200F-41F5-ACF1-2E3C2338865E@voidspace.org.uk> Message-ID: On 5 March 2013 13:50, Michael Foord wrote: >> Right - all I wanted was to flag that you and I and any other >> interested parties should discuss this at the summit :). > > I've added a testing topic to the agenda. At the very least you could outline your streaming test result proposal, or kick off a meta discussion. We'll probably time limit the discussion so some specific focus will make it more productive - or maybe you can get a feel for how open to major changes in this area other python devs are. Cool. I can step through the core use cases and differences to what TestResult is in pretty short order. We can spider out from there as folk desire. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From fuzzyman at voidspace.org.uk Tue Mar 5 01:54:33 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:54:33 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: <20130304172455.5ef0aa5e@anarchist.wooz.org> References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> <20130304115104.224493ef@anarchist.wooz.org> <20130304194148.7e6b42f5@pitrou.net> <20130304141457.603b4965@anarchist.wooz.org> <20130304151439.611dd9aa@anarchist.wooz.org> <20130304154505.016092cf@anarchist.wooz.org> <20130304172455.5ef0aa5e@anarchist.wooz.org> Message-ID: On 4 Mar 2013, at 22:24, Barry Warsaw wrote: > On Mar 04, 2013, at 05:04 PM, Brett Cannon wrote: > >> Sure, but that has nothing to do with programmatic package discovery. >> That's something you will have to do as a person in making a qualitative >> decision along the same lines as API design. Flipping a bit in a config >> file saying "I have tests" doesn't say much beyond you flipped a bit, e.g. >> no idea on coverage, quality, etc. > > What I'm looking for is something that automated tools can use to easily > discover how to run a package's tests. I want it to be dead simple for > developers of a package to declare how their tests are to be run, and what > extra dependencies they might need. It seems like PEP 426 only addresses the > latter. 
Maybe that's fine and a different PEP is needed to describe automated > test discover, but I still think it's an important use case. > > Imagine: > > * Every time you upload a package to PyPI, snakebite runs your test suite on > a variety of Python versions and platforms. You get a nice link to the > Jenkins results so you and your users get a good sense of overall package > quality. > > * You have an automated gatekeeper that will prevent commits or uploads if > your coverage or test results get worse instead of better. > > * Distro packagers can build tools that auto-discover the tests so that they > are run automatically when the package is built, ensuring high quality > packages specifically targeted to those distros. > > As a community, we know how important tests are, so I think our tools should > reflect that and make it easy for those tests to be expressed. As a selfish > side-effect, I want to reduce the amount of guesswork I need to perform in > order to know how to run a package's test when I `$vcs clone` their > repository. ;) > Distutils2 had a way of specifying this in the metadata. It looks like this hasn't made it into the reboot: http://alexis.notmyidea.org/distutils2/distutils/newcommands.html Michael > Cheers, > -Barry > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From fuzzyman at voidspace.org.uk Tue Mar 5 01:56:42 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 00:56:42 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130304112937.520651d4@anarchist.wooz.org> Message-ID: On 4 Mar 2013, at 20:00, Robert Collins wrote: > On 5 March 2013 05:34, Brett Cannon wrote: >> >> >> >> On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw wrote: >>> >>> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote: >>> >>>> It is of course possible for subunit and related tools to run their >>>> own implementation, but it seems ideal to me to have a common API >>>> which regular unittest, nose, py.test and others can all agree on and >>>> use : better reuse for pretty printers, GUI displays and the like >>>> depend on some common API. >>> >>> And One True Way of invoking and/or discovering how to invoke, a package's >>> test suite. >> >> >> How does unittest's test discovery not solve that? > > Three reasons > a) There are some bugs (all filed I think) - I intend to hack on > these in the near future - that prevent discovery working at all for > some use cases. The only discovery related issues I'm aware of are: * Issue 16079 (filed by you) - trivial to fix just needs a test * Issue 15010 obscure and unlikely to be an issue for standard discovery I'm not aware of any other discovery related issues. Please let me know (or add me as nosy) to them. > b) discovery requires magic parameters that are project specific > (e.g. is it 'discover .' or 'discover . lib' to run it). This is > arguably a setup.py/packaging entrypoint issue. This was addressed by Barry - and yes discovery has to be done with the right parameters. 
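For instance, a project that keeps its tests outside the package usually needs the start and top-level directories spelled out explicitly (directory names here are just an illustration):

$ python -m unittest discover -s tests -t .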
If you layout your project in a particular way then "python -m unittest discover" in the project root will just work. This is project specific metadata though and not a particular problem of any testing library. > c) Test suites written for e.g. Twisted, or nose, or other > non-stdunit-runner-compatible test runners will fail to execute even > when discovered correctly. > > There are ways to solve this without addressing a/b/c - just defining > a standard command to run that signals success/failure with it's exit > code. Packages can export a particular flavour of that in their > setup.py if they have exceptional needs, and do nothing in the common > case. That doesn't solve 'how to introspect a package test suite' but > for distro packagers - and large scale CI integration - that doesn't > matter. > > For instance testrepository offers a setuptools extension to let it be > used trivially, I believe nose does something similar. > unittest2 also has setuptools compatible test command. > Having something that would let *any* test suite spit out folk's > favourite test protocol de jour would be brilliant of course :). > [junit-xml, subunit, TAP, ...] > Yes. Michael > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Cloud Services > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From dholth at gmail.com Tue Mar 5 03:24:57 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 4 Mar 2013 21:24:57 -0500 Subject: [Python-Dev] running tests; mebs Message-ID: >> As a community, we know how important tests are, so I think our tools should >> reflect that and make it easy for those tests to be expressed. As a selfish >> side-effect, I want to reduce the amount of guesswork I need to perform in >> order to know how to run a package's test when I `$vcs clone` their >> repository. ;) >> > > > Distutils2 had a way of specifying this in the metadata. It looks like this hasn't made it into the reboot: > > http://alexis.notmyidea.org/distutils2/distutils/newcommands.html > > Michael > >> Cheers, >> -Barry I'm not aware of a reboot of the setup.py replacement / improvement effort. The work that has been done has proceeded backwards from the installer end of things. I had a proposal called "mebs, not an actual project". A completely plugin-based system would recognize any sdist format and provide a minimal, consistent interface. Add tests to the below text from October. ... A very simple meta-build system "mebs" is used to recognize sdists and build binary packages. Build systems provide plugins having three methods, .recognize() .metadata() .build() An installer downloads an sdist. For each installed build plugin, .recognize(dir) is called. The first plugin to return True is used. 
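For concreteness, here is a rough Python sketch of the plugin interface Daniel describes; the names BuildPlugin, SetupPyPlugin and select_plugin are invented purely for illustration, and the test() method is the addition he proposes:

    import os

    class BuildPlugin:
        # One plugin per build system; only the methods named above are sketched.
        def recognize(self, sdist_dir):
            """Return True if this plugin can handle the unpacked sdist."""
            raise NotImplementedError

        def metadata(self, sdist_dir):
            """Return the distribution metadata for the sdist."""
            raise NotImplementedError

        def build(self, sdist_dir):
            """Build a binary package from the sdist."""
            raise NotImplementedError

        def test(self, sdist_dir):
            """Run the package's tests (the proposed addition)."""
            raise NotImplementedError

    class SetupPyPlugin(BuildPlugin):
        # Hypothetical example: recognize classic distutils/setuptools sdists.
        def recognize(self, sdist_dir):
            return os.path.exists(os.path.join(sdist_dir, 'setup.py'))

    def select_plugin(plugins, sdist_dir):
        # The first installed plugin whose recognize() returns True is used.
        for plugin in plugins:
            if plugin.recognize(sdist_dir):
                return plugin
        raise LookupError('no build plugin recognizes %s' % sdist_dir)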
From barry at python.org Tue Mar 5 03:43:15 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Mar 2013 21:43:15 -0500 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: <20130304214315.3037ebe3@limelight.wooz.org> On Mar 05, 2013, at 01:09 PM, Robert Collins wrote: >It isn't about length. It is about knowing that *that* is what to type >(and btw that exact command cannot run twisted's tests, among many >other projects tests). >Perhaps we are talking about different things. A top level script to >run tests is interesting, but orthogonal to the thing Barry was asking >for. Right, two different things. -Barry From stephen at xemacs.org Tue Mar 5 04:13:44 2013 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 05 Mar 2013 12:13:44 +0900 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: <87k3pmk07r.fsf@uwakimon.sk.tsukuba.ac.jp> Mark Lawrence writes: > People already use the bug tracker as an excuse not to contribute, > wouldn't this requirement make the situation worse? A failure to sign the CLA is already a decision not to contribute to the distribution, no matter how noisy they are on the tracker and list. I think that pretty much any upload is potential content for inclusion in Python. For example, uploading a log of an interactive session reproducing a bug could easily evolve into contribution of a doctest. Since the proposed page only triggers on uploads, I think we're in "yes, we really do want this person's CLA" territory. The procedure is actually rather cool. As Eli says, the tough part is finding your user name, but OpenID or browser memory makes that reasonably close to trivial for many people. It's true that people upload "one-line documentation patches," and these don't require a CLA under even the most paranoid interpretation of US law. The FSF's guideline is 16 lines, I believe. However, the FSF's guideline also says those 16 lines are lifetime cumulative (per copyrighted work, but we're only talking about one, Python). In my experience (with a different project, so FWIW) somebody who goes to the trouble of uploading a doc typo patch is likely to be a repeat offender, whereas "drive-by" contributors who just need that one feature so their web2.0 app works as desired are often going to be in 16-line territory anyway. This argument doesn't catch 100% of those who might be deterred by the popup, but it's definitely enough to make the popup worthwhile. IANAL-but-I-like-a-good-license-flamewar-as-much-as-the-next-guy-ly y'rs, From jdhardy at gmail.com Tue Mar 5 06:39:12 2013 From: jdhardy at gmail.com (Jeff Hardy) Date: Mon, 4 Mar 2013 21:39:12 -0800 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> <20130301193803.4156607c@pitrou.net> Message-ID: On Mon, Mar 4, 2013 at 4:39 PM, Michael Foord wrote: > > On 1 Mar 2013, at 18:38, Antoine Pitrou wrote: > >> On Fri, 1 Mar 2013 09:32:23 -0500 >> Barry Warsaw wrote: >>> >>>> On the other hand in some ways Jython is sort of like Python on a >>>> weird virtual OS that lets the real OS bleed through some. This may >>>> still need to be checked in that way (there's are still checks of >>> os.name == 'nt'> right?) 
>>> >>> Yeah, but that all ooooold code ;) >> >> Hmm, what do you mean? `os.name == 'nt'` is still the proper way to >> test that we're running on a Windows system (more accurately, over the >> Windows API). >> > > It has been used incorrectly in a few places in the Python standard library - Windows support code that would work correctly on IronPython is skipped because os.name is *not* 'nt' on IronPython. That was the case in the past anyway. It's quite some time since I've used IronPython now. I think you misremembered - there's lots of code that uses `sys.platform == 'win32'` to detect Windows, but sys.platform is 'cli' for IronPython. I'm pretty sure `os.name has always been 'nt' (when running on Windows), and if not, it definitely is now. Jython sets os.name to 'java' (IIRC), so there isn't a uniform way to detect Windows across all implementations. - Jeff From regebro at gmail.com Tue Mar 5 08:02:09 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 5 Mar 2013 08:02:09 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Tue, Mar 5, 2013 at 1:41 AM, Robert Collins wrote: > So that is interesting, but its not sufficient to meet the automation > need Barry is calling out, unless all test suites can be run by > 'python -m unittest discover' with no additional parameters [and a > pretty large subset cannot]. But can they be changed so they are? That's gotta be the important bit. What's needed here is not a tool that can run all unittests in existence, but an official way for automated tools to run tests, with the ability for any test and test framework to hook into that, so that you can run any test suite automatically from an automated tool. The, once that mechanism has been identified/implemented, we need to tell everybody to do this. I don't care much what that mechanism is, but I think the easiest way to get there is to tell people to extend distutils with a test command (or use Distribute) and perhaps add such a command in 3.4 that will do the unittest discover thingy. I remember looking into zope.testrunner hooking into that mechanism as well, but I don't remember what the outcome was. //Lennart From donald.stufft at gmail.com Tue Mar 5 08:11:44 2013 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 5 Mar 2013 02:11:44 -0500 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Tuesday, March 5, 2013 at 2:02 AM, Lennart Regebro wrote: > On Tue, Mar 5, 2013 at 1:41 AM, Robert Collins > wrote: > > So that is interesting, but its not sufficient to meet the automation > > need Barry is calling out, unless all test suites can be run by > > 'python -m unittest discover' with no additional parameters [and a > > pretty large subset cannot]. > > > > > But can they be changed so they are? That's gotta be the important bit. > > What's needed here is not a tool that can run all unittests in > existence, but an official way for automated tools to run tests, with > the ability for any test and test framework to hook into that, so that > you can run any test suite automatically from an automated tool. The, > once that mechanism has been identified/implemented, we need to tell > everybody to do this. 
> > I don't care much what that mechanism is, but I think the easiest way > to get there is to tell people to extend distutils with a test command > (or use Distribute) and perhaps add such a command in 3.4 that will do > the unittest discover thingy. I remember looking into zope.testrunner > hooking into that mechanism as well, but I don't remember what the > outcome was. > > Doesn't setuptools/distribute already have a setup.py test command? That seems like the easiest way forward? > > //Lennart > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org (mailto:Python-Dev at python.org) > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/donald.stufft%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Mar 5 08:13:02 2013 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 5 Mar 2013 20:13:02 +1300 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On 5 March 2013 20:02, Lennart Regebro wrote: > On Tue, Mar 5, 2013 at 1:41 AM, Robert Collins > wrote: >> So that is interesting, but its not sufficient to meet the automation >> need Barry is calling out, unless all test suites can be run by >> 'python -m unittest discover' with no additional parameters [and a >> pretty large subset cannot]. > > But can they be changed so they are? That's gotta be the important bit. In principle maybe. Need to talk with the trial developers, nose developers, py.test developers etc - to get consensus on a number of internal API friction points. > What's needed here is not a tool that can run all unittests in > existence, but an official way for automated tools to run tests, with > the ability for any test and test framework to hook into that, so that > you can run any test suite automatically from an automated tool. The, > once that mechanism has been identified/implemented, we need to tell > everybody to do this. I think the command line is the right place to do that - declare as metadata the command line to run a packages tests. > I don't care much what that mechanism is, but I think the easiest way > to get there is to tell people to extend distutils with a test command > (or use Distribute) and perhaps add such a command in 3.4 that will do > the unittest discover thingy. I remember looking into zope.testrunner > hooking into that mechanism as well, but I don't remember what the > outcome was. Agreed. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From solipsis at pitrou.net Tue Mar 5 08:19:16 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 5 Mar 2013 08:19:16 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) References: <20130304222848.5006bf3b@pitrou.net> Message-ID: <20130305081916.27f38906@pitrou.net> On Mon, 4 Mar 2013 15:47:37 -0800 Eli Bendersky wrote: > On Mon, Mar 4, 2013 at 1:28 PM, Antoine Pitrou wrote: > > > On Mon, 4 Mar 2013 13:26:57 -0800 > > Eli Bendersky wrote: > > > [Splitting into a separate thread] > > > > > > Do we really need to overthink something that requires a trivial alias to > > > set up for one's own convenience? > > > > > > Picking a Python version (as Barry mentions) is just one of the problems. 
> > > What's wrong with: > > > > > > alias rupytests='python3 -m unittest discover" > > > alias runpytests2='python2 -m unittest discover" > > > > > > ? > > > > > > Don't get me wrong, I love the "discover" option and agree that it should > > > be the recommended way to go - but isn't this largely a documentation > > issue? > > > > I would personally call it a typing issue :-) "python -m unittest > > discover" is just too long. > > > > Command-line options for advanced capabilities can get long, yes. The whole point is that discovery is not "advanced capability", it's pretty basic by today's standards. So it should actually be the default behaviour (like it is with nose). Regards Antoine. From tjreedy at udel.edu Tue Mar 5 08:36:41 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 05 Mar 2013 02:36:41 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: On 3/4/2013 7:51 PM, Mark Lawrence wrote: > Who's talking source code, you're previously mentioned *ALL* patches > needing a CLA. Does this mean you have to sign a CLA for a one line > documentation patch? It it is a one char typo, I would not bother downloading the patch, or adding a person to ACKS. If the patch is big enough to download and apply, then I want a CLA. If a person is sophisticated enough to submit a respository file diff, they are likely to submit more, and I want them to feel encouraged to do so by already having done the CLA. If we do not get it with the first submission, then when? Who keeps track of cumulative lines? > What is the definition of a patch, an actual patch file Usually. > or a proposal for a change that is given within a bug tracker message? I view a proposal for a change as just an idea. Such usually get re-written by whoever creates an actual patch. I would like the link to the e-form to be accessible somewhere on the tracker so I can refer people to it easily. -- Terry Jan Reedy From ncoghlan at gmail.com Tue Mar 5 09:25:38 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 5 Mar 2013 18:25:38 +1000 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Tue, Mar 5, 2013 at 5:02 PM, Lennart Regebro wrote: > I don't care much what that mechanism is, but I think the easiest way > to get there is to tell people to extend distutils with a test command > (or use Distribute) and perhaps add such a command in 3.4 that will do > the unittest discover thingy. I remember looking into zope.testrunner > hooking into that mechanism as well, but I don't remember what the > outcome was. There is no easy way forward at this point in time. There just isn't. If people want to dispute that claim, please feel free to solve all the other problems distutils-sig is trying to tackle, so we can pay attention to this one. We'll get to this eventually - there are just several other more important things ahead of it in the queue for packaging and distribution infrastructure enhancements (and python-dev is not the group that will solve them). Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stephen at xemacs.org Tue Mar 5 09:31:03 2013 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Tue, 05 Mar 2013 17:31:03 +0900 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: <87a9qijliw.fsf@uwakimon.sk.tsukuba.ac.jp> Terry Reedy writes: > > or a proposal for a change that is given within a bug tracker message? > > I view a proposal for a change as just an idea. Such usually get > re-written by whoever creates an actual patch. Precisely how U.S. law would view it, implying no copyright issue. If this really need further discussion, it should move to python-legal. From glyph at twistedmatrix.com Tue Mar 5 10:02:46 2013 From: glyph at twistedmatrix.com (Glyph) Date: Tue, 5 Mar 2013 01:02:46 -0800 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: <183E8AF2-3D1D-4780-9015-686E94635A9F@twistedmatrix.com> On Mar 4, 2013, at 11:13 PM, Robert Collins wrote: > In principle maybe. Need to talk with the trial developers, nose > developers, py.test developers etc - to get consensus on a number of > internal API friction points. Some of trial's lessons might be also useful for the stdlib going forward, given the hope of doing some event-loop stuff in the core. But, I feel like this might be too much to cover at the language summit; there could be a test frameworks summit of its own, of about equivalent time and scope, and we'd still have a lot to discuss. Is there a unit testing SIG someone from Twisted ought to be a member of, to represent Trial, and to get consensus on these points going forward? -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger.krekel at gmail.com Tue Mar 5 11:15:28 2013 From: holger.krekel at gmail.com (Holger Krekel) Date: Tue, 5 Mar 2013 11:15:28 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <183E8AF2-3D1D-4780-9015-686E94635A9F@twistedmatrix.com> References: <20130304171439.7c37c840@anarchist.wooz.org> <183E8AF2-3D1D-4780-9015-686E94635A9F@twistedmatrix.com> Message-ID: On Tue, Mar 5, 2013 at 10:02 AM, Glyph wrote: > On Mar 4, 2013, at 11:13 PM, Robert Collins > wrote: > > In principle maybe. Need to talk with the trial developers, nose > developers, py.test developers etc - to get consensus on a number of > internal API friction points. > > > Some of trial's lessons might be also useful for the stdlib going forward, > given the hope of doing some event-loop stuff in the core. > > But, I feel like this might be too much to cover at the language summit; > there could be a test frameworks summit of its own, of about equivalent > time and scope, and we'd still have a lot to discuss. > > Is there a unit testing SIG someone from Twisted ought to be a member of, > to represent Trial, and to get consensus on these points going forward? > > The testing-in-python list is pretty much where most test tool authors hang out, see http://lists.idyll.org/listinfo/testing-in-python Also, maybe related, i am heading the "tox" effort which many people use to have a frontend for their testing process, see http://tox.testrun.org -- it has a somewhat different focus in that it sets up virtualenv and install test specific dependencies. However, positional arguments (often used to select tests) can be configured to be passed on to the test runner of choice. 
I was considering extending tox to directly support nose, pytest, unittest and trial "drivers" and offer a unified (minimal) command line API. Am open to collaboration on that. cheers, holger > -glyph > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/holger.krekel%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at voidspace.org.uk Tue Mar 5 11:52:20 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 10:52:20 +0000 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <20130305081916.27f38906@pitrou.net> References: <20130304222848.5006bf3b@pitrou.net> <20130305081916.27f38906@pitrou.net> Message-ID: On 5 Mar 2013, at 07:19, Antoine Pitrou wrote: > On Mon, 4 Mar 2013 15:47:37 -0800 > Eli Bendersky wrote: >> On Mon, Mar 4, 2013 at 1:28 PM, Antoine Pitrou wrote: >> >>> On Mon, 4 Mar 2013 13:26:57 -0800 >>> Eli Bendersky wrote: >>>> [Splitting into a separate thread] >>>> >>>> Do we really need to overthink something that requires a trivial alias to >>>> set up for one's own convenience? >>>> >>>> Picking a Python version (as Barry mentions) is just one of the problems. >>>> What's wrong with: >>>> >>>> alias rupytests='python3 -m unittest discover" >>>> alias runpytests2='python2 -m unittest discover" >>>> >>>> ? >>>> >>>> Don't get me wrong, I love the "discover" option and agree that it should >>>> be the recommended way to go - but isn't this largely a documentation >>> issue? >>> >>> I would personally call it a typing issue :-) "python -m unittest >>> discover" is just too long. >>> >> >> Command-line options for advanced capabilities can get long, yes. > > The whole point is that discovery is not "advanced capability", it's > pretty basic by today's standards. So it should actually be the default > behaviour (like it is with nose). > For Python 3.3 onwards "python -m unittest" does run test discovery by default. However if you want to provide parameters you still need the "discover" subcommand to disambiguate from the other command line options. So I agree - a shorthand command would be an improvement. Michael > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From fuzzyman at voidspace.org.uk Tue Mar 5 11:55:34 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 10:55:34 +0000 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <183E8AF2-3D1D-4780-9015-686E94635A9F@twistedmatrix.com> References: <20130304171439.7c37c840@anarchist.wooz.org> <183E8AF2-3D1D-4780-9015-686E94635A9F@twistedmatrix.com> Message-ID: <5F687C3C-7145-4473-A08D-E4786D38E945@voidspace.org.uk> On 5 Mar 2013, at 09:02, Glyph wrote: > On Mar 4, 2013, at 11:13 PM, Robert Collins wrote: > >> In principle maybe. 
Need to talk with the trial developers, nose >> developers, py.test developers etc - to get consensus on a number of >> internal API friction points. > > Some of trial's lessons might be also useful for the stdlib going forward, given the hope of doing some event-loop stuff in the core. > > But, I feel like this might be too much to cover at the language summit; there could be a test frameworks summit of its own, of about equivalent time and scope, and we'd still have a lot to discuss. > > Is there a unit testing SIG someone from Twisted ought to be a member of, to represent Trial, and to get consensus on these points going forward? The testing-on-python mailing list is probably the best place (and if doesn't have that status already I'd be keen to elevate it to "official sig for Python testing issues" status). http://lists.idyll.org/listinfo/testing-in-python Like the "massively distributed testing" use case, I'd be very happy for the standard library testing capabilities to better support this use case - but I wouldn't like to design their apis around that as the sole use case. :-) Michael > > -glyph > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From fuzzyman at voidspace.org.uk Tue Mar 5 12:07:52 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 5 Mar 2013 11:07:52 +0000 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> <20130301193803.4156607c@pitrou.net> Message-ID: <3BF8EF8F-98CF-47C7-A498-D043E27BC200@voidspace.org.uk> On 5 Mar 2013, at 05:39, Jeff Hardy wrote: > On Mon, Mar 4, 2013 at 4:39 PM, Michael Foord wrote: >> >> On 1 Mar 2013, at 18:38, Antoine Pitrou wrote: >> >>> On Fri, 1 Mar 2013 09:32:23 -0500 >>> Barry Warsaw wrote: >>>> >>>>> On the other hand in some ways Jython is sort of like Python on a >>>>> weird virtual OS that lets the real OS bleed through some. This may >>>>> still need to be checked in that way (there's are still checks of >>>> os.name == 'nt'> right?) >>>> >>>> Yeah, but that all ooooold code ;) >>> >>> Hmm, what do you mean? `os.name == 'nt'` is still the proper way to >>> test that we're running on a Windows system (more accurately, over the >>> Windows API). >>> >> >> It has been used incorrectly in a few places in the Python standard library - Windows support code that would work correctly on IronPython is skipped because os.name is *not* 'nt' on IronPython. That was the case in the past anyway. It's quite some time since I've used IronPython now. > > I think you misremembered - there's lots of code that uses > `sys.platform == 'win32'` to detect Windows, but sys.platform is 'cli' > for IronPython. I'm pretty sure `os.name has always been 'nt' (when > running on Windows), and if not, it definitely is now. > > Jython sets os.name to 'java' (IIRC), so there isn't a uniform way to > detect Windows across all implementations. > Ah, I'm sure you're correct. Thanks. 
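A small snippet illustrating the values under discussion; the comment table only restates what Jeff reports above, and the helper at the end is a pragmatic workaround rather than an officially blessed check:

    import os
    import sys

    # As reported in this thread:
    #   CPython on Windows:    os.name == 'nt',   sys.platform == 'win32'
    #   IronPython on Windows: os.name == 'nt',   sys.platform == 'cli'
    #   Jython on Windows:     os.name == 'java'  (per Jeff, IIRC)
    print("os.name=%r sys.platform=%r" % (os.name, sys.platform))

    def looks_like_windows():
        # Neither attribute alone covers all three implementations; this
        # combined check still misses Jython, which is exactly the problem
        # being described.
        return os.name == 'nt' or sys.platform.startswith('win')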
> - Jeff -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From ezio.melotti at gmail.com Tue Mar 5 13:04:46 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Tue, 5 Mar 2013 14:04:46 +0200 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: Hi, On Mon, Mar 4, 2013 at 10:46 PM, Terry Reedy wrote: > On 3/4/2013 11:36 AM, Brett Cannon wrote: >> On Mon, Mar 4, 2013 at 11:30 AM, Brian Curtin > > wrote: >> >> With this in place I would like to propose that all patches submitted to >> bugs.python.org must come from someone who has >> >> signed the CLA before we consider committing it (if you want to be truly >> paranoid we could say that we won't even look at the code w/o a CLA). > > > Either policy could be facilitated by tracker changes. In order to see the > file upload box, one must login and the tracker knows who has a CLA on file > (as indicated by a * suffix on the name). If a file is uploaded by someone > without, a box could popup with the link to the e-form and a message that a > CLA is required. > http://psf.upfronthosting.co.za/roundup/meta/issue461 Best Regards, Ezio Melotti > -- > Terry Jan Reedy > From devel at baptiste-carvello.net Tue Mar 5 17:48:07 2013 From: devel at baptiste-carvello.net (Baptiste Carvello) Date: Tue, 05 Mar 2013 17:48:07 +0100 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: <87k3pmk07r.fsf@uwakimon.sk.tsukuba.ac.jp> References: <87k3pmk07r.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: Le 05/03/2013 04:13, Stephen J. Turnbull a ?crit : > Mark Lawrence writes: > > > People already use the bug tracker as an excuse not to contribute, > > wouldn't this requirement make the situation worse? > > A failure to sign the CLA is already a decision not to contribute to > the distribution my 2 cents as an occasional contributor of minor patches: I understand that the scarce resource is reviewer time, so I would definitely accept to sign the CLA with my next contribution before a reviewer invests his time in it. However, please don't make the popup too pushy. I abhor websites which push people into entering legally binding agreements "with one click" without the opportunity to study them carefully (personnally, this would not be a problem as I already know what the CLA is about, but other contributors might not). Also, please keep the possibility to use the old paper-based signing procedure. I for one don't consider so-called "electronic signatures" based on email address verification (as opposed to real crypto) to be as good as a handwritten signature, and I don't want to legitimize them by using them. Cheers, Baptiste From fwierzbicki at gmail.com Tue Mar 5 17:55:06 2013 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Tue, 5 Mar 2013 08:55:06 -0800 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> <20130301193803.4156607c@pitrou.net> Message-ID: On Mon, Mar 4, 2013 at 9:39 PM, Jeff Hardy wrote: > I think you misremembered - there's lots of code that uses > `sys.platform == 'win32'` to detect Windows, but sys.platform is 'cli' > for IronPython. 
I'm pretty sure `os.name has always been 'nt' (when > running on Windows), and if not, it definitely is now. > > Jython sets os.name to 'java' (IIRC), so there isn't a uniform way to > detect Windows across all implementations. I've been thinking that this is a bit of a historical mistake on our part. I'm strongly considering setting os.name properly in Jython3. -Frank From fwierzbicki at gmail.com Tue Mar 5 18:00:14 2013 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Tue, 5 Mar 2013 09:00:14 -0800 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> <20130301193803.4156607c@pitrou.net> Message-ID: On Tue, Mar 5, 2013 at 8:55 AM, fwierzbicki at gmail.com wrote: > I've been thinking that this is a bit of a historical mistake on our > part. I'm strongly considering setting os.name properly in Jython3. In fairness to Jython implementers past - it wasn't a mistake but a deliberate design choice at the time since in the old days we where going for the now defunct "100% Java" thing (does anyone remember that?) so this is more of "the design has evolved such that Jython's os.name now feels incorrect". I'll stop having conversations with myself in public now, sorry :) -Frank From barry at python.org Tue Mar 5 20:49:00 2013 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Mar 2013 14:49:00 -0500 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: <20130305144900.5ecf90ba@resist.wooz.org> On Mar 05, 2013, at 02:11 AM, Donald Stufft wrote: >Doesn't setuptools/distribute already have a setup.py test command? That >seems like the easiest way forward? Yes, and in theory it can make `python setup.py test` work well. But there are lots of little details (such as API differences for ensuring that doctests run, "additional tests" discovery, etc.) that make this often not work so well in practice. Some of that is social and some of it is technical. I still claim that including test suite information in a package's metadata would be a win, but maybe that's just too much to hope for right now. -Barry From dholth at gmail.com Tue Mar 5 21:20:24 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 5 Mar 2013 15:20:24 -0500 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <20130305144900.5ecf90ba@resist.wooz.org> References: <20130304171439.7c37c840@anarchist.wooz.org> <20130305144900.5ecf90ba@resist.wooz.org> Message-ID: On Tue, Mar 5, 2013 at 2:49 PM, Barry Warsaw wrote: > On Mar 05, 2013, at 02:11 AM, Donald Stufft wrote: > >>Doesn't setuptools/distribute already have a setup.py test command? That >>seems like the easiest way forward? > > Yes, and in theory it can make `python setup.py test` work well. But there > are lots of little details (such as API differences for ensuring that doctests > run, "additional tests" discovery, etc.) that make this often not work so well > in practice. Some of that is social and some of it is technical. I still > claim that including test suite information in a package's metadata would be a > win, but maybe that's just too much to hope for right now. 
It would be a win, but "parsing the metadata" is just not what happens right now, let alone writing anything about which and where the modules are defined in the sdist. We can barely install packages by using the dependency metadata from PKG-INFO; pip always re-generates it from "setup.py egg_info". Your testing metadata prototype would only have to write two lines to the metadata instead of one a-la: Extension: flufl; flufl/test_suite: nose.collector; document the extension; write some tool to actually parse the metadata and invoke the tests; it may become a core feature in the next version, or having a monolithic specification may become less important. Thanks, Daniel Holth From brett at python.org Tue Mar 5 21:22:07 2013 From: brett at python.org (Brett Cannon) Date: Tue, 5 Mar 2013 15:22:07 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: <87k3pmk07r.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Tue, Mar 5, 2013 at 11:48 AM, Baptiste Carvello < devel at baptiste-carvello.net> wrote: > Le 05/03/2013 04:13, Stephen J. Turnbull a ?crit : > > Mark Lawrence writes: > > > > > People already use the bug tracker as an excuse not to contribute, > > > wouldn't this requirement make the situation worse? > > > > A failure to sign the CLA is already a decision not to contribute to > > the distribution > > my 2 cents as an occasional contributor of minor patches: I understand > that the scarce resource is reviewer time, so I would definitely accept > to sign the CLA with my next contribution before a reviewer invests his > time in it. > > However, please don't make the popup too pushy. I abhor websites which > push people into entering legally binding agreements "with one click" > without the opportunity to study them carefully (personnally, this would > not be a problem as I already know what the CLA is about, but other > contributors might not). > > Also, please keep the possibility to use the old paper-based signing > procedure. I for one don't consider so-called "electronic signatures" > based on email address verification (as opposed to real crypto) to be as > good as a handwritten signature, and I don't want to legitimize them by > using them. > At the bottom of the CLA page there are instructions on how to still use the paper form. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Mar 5 21:33:14 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 05 Mar 2013 15:33:14 -0500 Subject: [Python-Dev] built-in Python test runner In-Reply-To: <20130305144900.5ecf90ba@resist.wooz.org> References: <20130304171439.7c37c840@anarchist.wooz.org> <20130305144900.5ecf90ba@resist.wooz.org> Message-ID: <5136568A.3090406@stufft.io> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Barry Warsaw wrote: > On Mar 05, 2013, at 02:11 AM, Donald Stufft wrote: > >> Doesn't setuptools/distribute already have a setup.py test command? >> That seems like the easiest way forward? > > Yes, and in theory it can make `python setup.py test` work well. But > there are lots of little details (such as API differences for > ensuring that doctests run, "additional tests" discovery, etc.) that > make this often not work so well in practice. Some of that is social > and some of it is technical. I still claim that including test suite > information in a package's metadata would be a win, but maybe that's > just too much to hope for right now. For the "right now" solution you can easily override the test command in setup.py. 
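For concreteness, a bare-bones version of that kind of override could look like the sketch below; the tests/ directory name and the use of plain distutils (rather than setuptools) are assumptions made only for illustration:

    import sys
    import unittest
    from distutils.core import setup, Command

    class test(Command):
        """Run the test suite via unittest discovery."""
        description = "run the test suite"
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            suite = unittest.defaultTestLoader.discover('tests')
            result = unittest.TextTestRunner(verbosity=2).run(suite)
            sys.exit(0 if result.wasSuccessful() else 1)

    setup(name='example', version='0.1', cmdclass={'test': test})

With this in place, "python setup.py test" runs discovery over the tests/ directory and exits non-zero on failure.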
Long term this would be something that could be added to the METADATA but I think it is (and rightly should be) lower priority to actually getting it working for it's core purpose. > > -Barry _______________________________________________ Python-Dev > mailing list Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: > http://mail.python.org/mailman/options/python-dev/donald%40stufft.io -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.18 (Darwin) iQIcBAEBAgAGBQJRNlaJAAoJEG48vOkzctz6SvsP/2+MYkn1NgcHdmaQg09A3MDH a++7+hGGsXgQwwJ3q3u6T6Hzd1pokJI7hqGHAXBnkODrZZ8f9Z0+OP9I8HPUmo0D mJDsDxn2VZImBkNZJlBGNFKIz4EjS5llFapzdl58ZkIVZ7Rz3PTaPVSiXOc+ppp7 xYkWXGnx+2DTFaHywi9qGsHrbAXqgcwIhXO29NYl98xA9X98/XSRiXmHVfNURA3U 1GV1G9H1Qfvu8YjfYfBCCUn6db6eLGVPO7VcxRh6Cyzfk5SFSuziCVI8v3t3msjw KSba+8Pe3RQ7RS17VEJqCMQjkhhGnAndgIL3Jho41qb3g+Rdk2OP+weWbYV92Q8F HL6QtPgm5/QS5tKyl6nK97+9q+NdhGOEzKOL0pBiF4HKdT0mKyBxqttVIgUDAVMQ XcjhBu1wnpQnhkeZ8F3yGNubmE2tRdVVfhTfVDaA3ICl7uVBlbtUUMTJRK7DQ4vW gzDk5aKJB8OHimC1ijeTQm3M2lXkS5z0e5IuaION2WrG5A2BEvH12d/I7ekc1Ixa lVhScABFwY0UcrMt4td65er/w4Z0S+BL87SZMH1mffoS6XmL3fxUuOtAB+MUWDk7 Rd4xXnUFvr3SMmMRjSogpO/HO5IBpuzUwu0wqSz8qPcex+l+lHyOEHBDvacEcv9b zvGYjfQuk+2hWlAqw5o4 =6ELp -----END PGP SIGNATURE----- From andrew.svetlov at gmail.com Tue Mar 5 21:40:38 2013 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 5 Mar 2013 22:40:38 +0200 Subject: [Python-Dev] tp_dictoffset and tp_weaklistoffset slots for Stable API Message-ID: Looking on PEP http://www.python.org/dev/peps/pep-0384/ and docs I don't figure out how to specify this values. Maybe I've missed something? If not I like to solve that problem at us pycon sprints. Hope, Martin von Loewis will visit the conference. -- Thanks, Andrew Svetlov From rdmurray at bitdance.com Tue Mar 5 22:06:17 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 05 Mar 2013 16:06:17 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: <87k3pmk07r.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20130305210618.40C50250BCF@webabinitio.net> On Tue, 05 Mar 2013 15:22:07 -0500, Brett Cannon wrote: > On Tue, Mar 5, 2013 at 11:48 AM, Baptiste Carvello < > devel at baptiste-carvello.net> wrote: > > > Le 05/03/2013 04:13, Stephen J. Turnbull a ??crit : > > > Mark Lawrence writes: > > > > > > > People already use the bug tracker as an excuse not to contribute, > > > > wouldn't this requirement make the situation worse? > > > > > > A failure to sign the CLA is already a decision not to contribute to > > > the distribution > > > > my 2 cents as an occasional contributor of minor patches: I understand > > that the scarce resource is reviewer time, so I would definitely accept > > to sign the CLA with my next contribution before a reviewer invests his > > time in it. > > > > However, please don't make the popup too pushy. I abhor websites which > > push people into entering legally binding agreements "with one click" > > without the opportunity to study them carefully (personnally, this would > > not be a problem as I already know what the CLA is about, but other > > contributors might not). > > > > Also, please keep the possibility to use the old paper-based signing > > procedure. I for one don't consider so-called "electronic signatures" > > based on email address verification (as opposed to real crypto) to be as > > good as a handwritten signature, and I don't want to legitimize them by > > using them. 
> > > > At the bottom of the CLA page there are instructions on how to still use > the paper form. Then there also needs to be a way to ACK the popup so that it never shows up again. Which we should have anyway. Ideally that would be tied to the account and not to, say, a browser cookie. --David From ncoghlan at gmail.com Tue Mar 5 23:44:34 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Mar 2013 08:44:34 +1000 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: <20130305144900.5ecf90ba@resist.wooz.org> References: <20130304171439.7c37c840@anarchist.wooz.org> <20130305144900.5ecf90ba@resist.wooz.org> Message-ID: On 6 Mar 2013 05:51, "Barry Warsaw" wrote: > > On Mar 05, 2013, at 02:11 AM, Donald Stufft wrote: > > >Doesn't setuptools/distribute already have a setup.py test command? That > >seems like the easiest way forward? > > Yes, and in theory it can make `python setup.py test` work well. But there > are lots of little details (such as API differences for ensuring that doctests > run, "additional tests" discovery, etc.) that make this often not work so well > in practice. Some of that is social and some of it is technical. I still > claim that including test suite information in a package's metadata would be a > win, but maybe that's just too much to hope for right now. I think it's the right answer, too, but PEP 426 is already huge, so metadata 2.1 is likely the earliest we will try to tackle the problem. Cheers, Nick. > > -Barry > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From regebro at gmail.com Wed Mar 6 07:39:53 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 6 Mar 2013 07:39:53 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Tue, Mar 5, 2013 at 8:11 AM, Donald Stufft wrote: > I don't care much what that mechanism is, but I think the easiest way > to get there is to tell people to extend distutils with a test command > (or use Distribute) and perhaps add such a command in 3.4 that will do > the unittest discover thingy. I remember looking into zope.testrunner > hooking into that mechanism as well, but I don't remember what the > outcome was. > > Doesn't setuptools/distribute already have a setup.py test command? Yes, but distutils do not. > That seems like the easiest way forward? Yup. Although I can understand people if they want something that is independent of packaging/distribution. //Lennart From regebro at gmail.com Wed Mar 6 07:41:41 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 6 Mar 2013 07:41:41 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Tue, Mar 5, 2013 at 8:13 AM, Robert Collins wrote: > On 5 March 2013 20:02, Lennart Regebro wrote: >> What's needed here is not a tool that can run all unittests in >> existence, but an official way for automated tools to run tests, with >> the ability for any test and test framework to hook into that, so that >> you can run any test suite automatically from an automated tool. 
The, >> once that mechanism has been identified/implemented, we need to tell >> everybody to do this. > > I think the command line is the right place to do that - declare as > metadata the command line to run a packages tests. Yeah, that's good and simple solution. //Lennart From regebro at gmail.com Wed Mar 6 07:45:58 2013 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 6 Mar 2013 07:45:58 +0100 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Tue, Mar 5, 2013 at 9:25 AM, Nick Coghlan wrote: > On Tue, Mar 5, 2013 at 5:02 PM, Lennart Regebro wrote: >> I don't care much what that mechanism is, but I think the easiest way >> to get there is to tell people to extend distutils with a test command >> (or use Distribute) and perhaps add such a command in 3.4 that will do >> the unittest discover thingy. I remember looking into zope.testrunner >> hooking into that mechanism as well, but I don't remember what the >> outcome was. > > There is no easy way forward at this point in time. There just isn't. > If people want to dispute that claim, please feel free to solve all > the other problems distutils-sig is trying to tackle, so we can pay > attention to this one. I have to admit that of all the packaging problems out there, this is one of the easiest ones. ;-) That said, it's not easy. > We'll get to this eventually - there are just several other more > important things ahead of it in the queue for packaging and > distribution infrastructure enhancements (and python-dev is not the > group that will solve them). To be honest I'm not sure distutils-sig is the right place for this. It's really only a packaging problem because Setuptools has a "test" command. :-) Perhaps we can solve this outside distutils-sig so that distutils-sig can concentrate on the harder problems? //Lennart From tjreedy at udel.edu Wed Mar 6 11:20:31 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 06 Mar 2013 05:20:31 -0500 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? Message-ID: Clicking this link http://www.microsoft.com/en-us/download/details.aspx?id=14597 on this Developer Guide page http://docs.python.org/devguide/setup.html#windows now returns a "We are sorry, the page you requested cannot be found." page with search results. The first search result http://social.msdn.microsoft.com/Forums/nl/Vsexpressinstall/thread/2dc7ae6a-a0e7-436b-a1b3-3597ffac6a97 suggests that one must first go to http://profile.microsoft.com which forwards to the live.com login page. Logging in with my un-expired non-developer account did not make the original link work. The mdsn page http://msdn.microsoft.com/en-US/ has Visual Studio / Download trial, which leads to https://www.microsoft.com/visualstudio/eng/downloads which lists 2012 and 2010 but not 2008. I suspect that an msdn account is required for most people to get 2008. A later link leads to https://www.dreamspark.com/Product/Product.aspx?productid=34# which suggests that vc++2008 express is also available to verified degree students. I don't qualify so I will not try. So it would appear that section "1.1.3.3. Windows" of "1. Getting Started" (setup.rst) needs further revision. Or perhaps we could persuade Microsoft to let us distribute it ourselves so Windows versions of 2.7 do not become increasingly unusable. 
-- Terry Jan Reedy From mcepl at redhat.com Wed Mar 6 14:09:54 2013 From: mcepl at redhat.com (=?UTF-8?Q?Mat=C4=9Bj?= Cepl) Date: Wed, 06 Mar 2013 14:09:54 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: References: Message-ID: <1362575394.23949.2.camel@wycliff.ceplovi.cz> On 2013-02-26, 16:25 GMT, Terry Reedy wrote: > On 2/21/2013 4:22 PM, Matej Cepl wrote: >> as my method to commemorate Aaron Swartz, I have decided to port his >> html2text to work fully with the latest python 3.3. After some time >> dealing with various bugs, I have now in my repo >> https://github.com/mcepl/html2text (branch python3) working solution >> which works all the way to python 3.2 (inclusive; >> https://travis-ci.org/mcepl/html2text). However, the last problem >> remains. This >> >>
  • Run this command: >>
    ls -l *.html
  • >>
  • …
  • >> >> should lead to >> >> * Run this command: >> >> ls -l *.html >> >> * ? >> >> but it doesn?t. It leads to this (with python 3.3 only) >> >> * Run this command: >> ls -l *.html >> >> * ? >> >> Does anybody know about something which changed in modules re or >> http://docs.python.org/3.3/whatsnew/changelog.html between 3.2 and >> 3.3, which could influence this script? > > Search the changelob or 3.3 misc/News for items affecting those two > modules. There are at least 4. > http://docs.python.org/3.3/whatsnew/changelog.html > > It is faintly possible that the switch from narrow/wide builds to > unified builds somehow affected that. Have you tested with 2.7/3.2 on > both narrow and wide unicode builds? So, in the end, I have went the long way and bisected cpython to find the commit which broke my tests, and it seems that the culprit is http://hg.python.org/cpython/rev/123f2dc08b3e so it is clearly something Unicode related. Unfortunately, it really doesn't tell me what exactly is broken (is it a known regression) and if there is known workaround. Could anybody suggest a way how to find bugs on http://bugs.python.org related to some particular commit (plain search for 123f2dc0 didn?t find anything). Any thoughts? Mat?j P.S.: Crossposting to python-devel in hope there would be somebody understanding more about that particular commit. For that I have also intentionally not trim the original messages to preserve context. -- http://www.ceplovi.cz/matej/, Jabber: mceplceplovi.cz GPG Finger: 89EF 4BC6 288A BF43 1BAB 25C3 E09F EF25 D964 84AC When you're happy that cut and paste actually works I think it's a sign you've been using X-Windows for too long. -- from /. discussion on poor integration between KDE and GNOME -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: This is a digitally signed message part URL: From ncoghlan at gmail.com Wed Mar 6 14:50:34 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Mar 2013 23:50:34 +1000 Subject: [Python-Dev] built-in Python test runner (was: Python Language Summit at PyCon: Agenda) In-Reply-To: References: <20130304171439.7c37c840@anarchist.wooz.org> Message-ID: On Wed, Mar 6, 2013 at 4:45 PM, Lennart Regebro wrote: > Perhaps we can solve this outside distutils-sig so that distutils-sig > can concentrate on the harder problems? It's a distutils-sig problem because you need a way to publish any new testing related metadata, and because we're planning to evolve a hooks system to cover the different steps in the build process in a decoupled manner. "Run the tests" will be just another hook, but we're not up to dealing with that yet (the only hook that will be in metadata 2.0 is the post-install hook that will bring the wheel format up to the point of being a near-total replacement for "./setup.py install", and even that isn't written up formally yet - it's just a post in a thread on distutils-sig). You could, as Daniel suggested, work on defining a PEP 426 extension as a prototype concept, but it won't help you much until PEP 426 support is widespread, and by then we'll probably be looking at the meta-build system more broadly and figuring out the full set of desired hooks (including test invocation). Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rdmurray at bitdance.com Wed Mar 6 14:51:20 2013 From: rdmurray at bitdance.com (R. 
David Murray) Date: Wed, 06 Mar 2013 08:51:20 -0500 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: <1362575394.23949.2.camel@wycliff.ceplovi.cz> References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> Message-ID: <20130306135121.65795250BD8@webabinitio.net> On Wed, 06 Mar 2013 14:09:54 +0100, =?UTF-8?Q?Mat=C4=9Bj?= Cepl wrote: > So, in the end, I have went the long way and bisected cpython to > find the commit which broke my tests, and it seems that the > culprit is http://hg.python.org/cpython/rev/123f2dc08b3e so it is > clearly something Unicode related. > > Unfortunately, it really doesn't tell me what exactly is broken > (is it a known regression) and if there is known workaround. > Could anybody suggest a way how to find bugs on > http://bugs.python.org related to some particular commit (plain > search for 123f2dc0 didn???t find anything). If no issue number is mentioned in the commit message, then chances are there's no specific issue in the tracker related to that particular commit. Normally there will be an issue, but sometimes things are done without one (a practice we should maybe think about changing). Most likely the commit's author, Victor Stinner, will see your message or this one and respond. That particular change recently came up (by implication) in another context (unicode singletons...) --David From amauryfa at gmail.com Wed Mar 6 15:18:30 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 6 Mar 2013 15:18:30 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: <1362575394.23949.2.camel@wycliff.ceplovi.cz> References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> Message-ID: Hi, 2013/3/6 Mat?j Cepl > > On 2013-02-26, 16:25 GMT, Terry Reedy wrote: > > On 2/21/2013 4:22 PM, Matej Cepl wrote: > >> as my method to commemorate Aaron Swartz, I have decided to port his > >> html2text to work fully with the latest python 3.3. After some time > >> dealing with various bugs, I have now in my repo > >> https://github.com/mcepl/html2text (branch python3) working solution > >> which works all the way to python 3.2 (inclusive; > >> https://travis-ci.org/mcepl/html2text). However, the last problem > >> remains. This > >> > >>
  • Run this command: > >>
    ls -l *.html
  • > >>
  • …
  • > >> > >> should lead to > >> > >> * Run this command: > >> > >> ls -l *.html > >> > >> * ? > >> > >> but it doesn?t. It leads to this (with python 3.3 only) > >> > >> * Run this command: > >> ls -l *.html > >> > >> * ? > >> > >> Does anybody know about something which changed in modules re or > >> http://docs.python.org/3.3/whatsnew/changelog.html between 3.2 and > >> 3.3, which could influence this script? > > > > Search the changelob or 3.3 misc/News for items affecting those two > > modules. There are at least 4. > > http://docs.python.org/3.3/whatsnew/changelog.html > > > > It is faintly possible that the switch from narrow/wide builds to > > unified builds somehow affected that. Have you tested with 2.7/3.2 on > > both narrow and wide unicode builds? > > So, in the end, I have went the long way and bisected cpython to > find the commit which broke my tests, and it seems that the > culprit is http://hg.python.org/cpython/rev/123f2dc08b3e so it is > clearly something Unicode related. > > Unfortunately, it really doesn't tell me what exactly is broken > (is it a known regression) and if there is known workaround. > Could anybody suggest a way how to find bugs on > http://bugs.python.org related to some particular commit (plain > search for 123f2dc0 didn?t find anything). > I strongly suspect an incorrect usage of the "is" operator: https://github.com/mcepl/html2text/blob/master/html2text.py#L95 Identity of strings is not guaranteed... Does it change something if you use "==" instead? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezio.melotti at gmail.com Wed Mar 6 15:40:42 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Wed, 6 Mar 2013 16:40:42 +0200 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: Hi, On Wed, Mar 6, 2013 at 12:20 PM, Terry Reedy wrote: > Clicking this link > http://www.microsoft.com/en-us/download/details.aspx?id=14597 > on this Developer Guide page > http://docs.python.org/devguide/setup.html#windows > now returns a > "We are sorry, the page you requested cannot be found." > page with search results. > > The first search result > http://social.msdn.microsoft.com/Forums/nl/Vsexpressinstall/thread/2dc7ae6a-a0e7-436b-a1b3-3597ffac6a97 > suggests that one must first go to http://profile.microsoft.com > which forwards to the live.com login page. Logging in with my un-expired > non-developer account did not make the original link work. > > The mdsn page http://msdn.microsoft.com/en-US/ > has Visual Studio / Download trial, which leads to > https://www.microsoft.com/visualstudio/eng/downloads > which lists 2012 and 2010 but not 2008. > > I suspect that an msdn account is required for most people to get 2008. > > A later link leads to > https://www.dreamspark.com/Product/Product.aspx?productid=34# > which suggests that vc++2008 express is also available to verified degree > students. I don't qualify so I will not try. > I did try a few weeks ago, when I had to download a copy of Windows for a project. Long story short, after 30+ minutes and a number of confirmation emails I reached a point where I had a couple of new accounts on MSDN/Dreamspark, a "purchased" free copy of Windows in my e-cart, and some .exe I had to download in order to download and verify the purchased copy. That's where I gave up. Best Regards, Ezio Melotti > So it would appear that section "1.1.3.3. Windows" of "1. Getting Started" > (setup.rst) needs further revision. 
> > Or perhaps we could persuade Microsoft to let us distribute it ourselves so > Windows versions of 2.7 do not become increasingly unusable. > > -- > Terry Jan Reedy > From python at mrabarnett.plus.com Wed Mar 6 17:22:15 2013 From: python at mrabarnett.plus.com (MRAB) Date: Wed, 06 Mar 2013 16:22:15 +0000 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> Message-ID: <51376D37.4040209@mrabarnett.plus.com> On 2013-03-06 14:18, Amaury Forgeot d'Arc wrote: > Hi, > > 2013/3/6 Mat?j Cepl > > > > On 2013-02-26, 16:25 GMT, Terry Reedy wrote: > > On 2/21/2013 4:22 PM, Matej Cepl wrote: > >> as my method to commemorate Aaron Swartz, I have decided to port his > >> html2text to work fully with the latest python 3.3. After some time > >> dealing with various bugs, I have now in my repo > >> https://github.com/mcepl/html2text (branch python3) working solution > >> which works all the way to python 3.2 (inclusive; > >> https://travis-ci.org/mcepl/html2text). However, the last problem > >> remains. This > >> > >>
> >> <ul>
> >>   <li>Run this command:
> >>     <pre>ls -l *.html</pre>
> >>   </li>
> >>   <li>…</li>
> >> </ul>
  • > >> > >> should lead to > >> > >> * Run this command: > >> > >> ls -l *.html > >> > >> * ? > >> > >> but it doesn?t. It leads to this (with python 3.3 only) > >> > >> * Run this command: > >> ls -l *.html > >> > >> * ? > >> > >> Does anybody know about something which changed in modules re or > >> http://docs.python.org/3.3/whatsnew/changelog.html between 3.2 and > >> 3.3, which could influence this script? > > > > Search the changelob or 3.3 misc/News for items affecting those two > > modules. There are at least 4. > > http://docs.python.org/3.3/whatsnew/changelog.html > > > > It is faintly possible that the switch from narrow/wide builds to > > unified builds somehow affected that. Have you tested with 2.7/3.2 on > > both narrow and wide unicode builds? > > So, in the end, I have went the long way and bisected cpython to > find the commit which broke my tests, and it seems that the > culprit is http://hg.python.org/cpython/rev/123f2dc08b3e so it is > clearly something Unicode related. > > Unfortunately, it really doesn't tell me what exactly is broken > (is it a known regression) and if there is known workaround. > Could anybody suggest a way how to find bugs on > http://bugs.python.org related to some particular commit (plain > search for 123f2dc0 didn?t find anything). > > > I strongly suspect an incorrect usage of the "is" operator: > https://github.com/mcepl/html2text/blob/master/html2text.py#L95 > Identity of strings is not guaranteed... > > Does it change something if you use "==" instead? > That function looks a little odd to me. Maybe I just don't understand what it's doing! :-) From rosuav at gmail.com Wed Mar 6 17:30:14 2013 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 7 Mar 2013 03:30:14 +1100 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 1:40 AM, Ezio Melotti wrote: > I did try a few weeks ago, when I had to download a copy of Windows > for a project. Long story short, after 30+ minutes and a number of > confirmation emails I reached a point where I had a couple of new > accounts on MSDN/Dreamspark, a "purchased" free copy of Windows in my > e-cart, and some .exe I had to download in order to download and > verify the purchased copy. That's where I gave up. That's the point where I'd start looking at peer-to-peer downloads. These sorts of things are often available on torrent sites; once the original publisher starts making life harder, third-party sources become more attractive. ChrisA From stefan_ml at behnel.de Wed Mar 6 17:46:46 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 06 Mar 2013 17:46:46 +0100 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: Chris Angelico, 06.03.2013 17:30: > On Thu, Mar 7, 2013 at 1:40 AM, Ezio Melotti wrote: >> I did try a few weeks ago, when I had to download a copy of Windows >> for a project. Long story short, after 30+ minutes and a number of >> confirmation emails I reached a point where I had a couple of new >> accounts on MSDN/Dreamspark, a "purchased" free copy of Windows in my >> e-cart, and some .exe I had to download in order to download and >> verify the purchased copy. That's where I gave up. > > That's the point where I'd start looking at peer-to-peer downloads. > These sorts of things are often available on torrent sites; once the > original publisher starts making life harder, third-party sources > become more attractive. 
May I express my doubts that the license allows a redistribution of the software in this form? Stefan From rosuav at gmail.com Wed Mar 6 17:55:40 2013 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 7 Mar 2013 03:55:40 +1100 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 3:46 AM, Stefan Behnel wrote: > Chris Angelico, 06.03.2013 17:30: >> On Thu, Mar 7, 2013 at 1:40 AM, Ezio Melotti wrote: >>> I did try a few weeks ago, when I had to download a copy of Windows >>> for a project. Long story short, after 30+ minutes and a number of >>> confirmation emails I reached a point where I had a couple of new >>> accounts on MSDN/Dreamspark, a "purchased" free copy of Windows in my >>> e-cart, and some .exe I had to download in order to download and >>> verify the purchased copy. That's where I gave up. >> >> That's the point where I'd start looking at peer-to-peer downloads. >> These sorts of things are often available on torrent sites; once the >> original publisher starts making life harder, third-party sources >> become more attractive. > > May I express my doubts that the license allows a redistribution of the > software in this form? Someone would have to check, but in most cases, software licenses govern the use, more than the distribution. If you're allowed to download it free of charge from microsoft.com, you should be able to get hold of it in some other way and it be exactly the same. But yeah, if you want to be legal you'd have to actually read the EULA. Is there any plan for future Python versions to use a free compiler on Windows? That would eliminate this issue, but presumably would create others. ChrisA From brian at python.org Wed Mar 6 18:03:11 2013 From: brian at python.org (Brian Curtin) Date: Wed, 6 Mar 2013 11:03:11 -0600 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On Wed, Mar 6, 2013 at 10:55 AM, Chris Angelico wrote: > Is there any plan for future Python versions to use a free compiler on > Windows? That would eliminate this issue, but presumably would create > others. No plan, although there are at times patches/issues floating around to add some level of support for MinGW (or something like it) in addition to Microsoft's compiler. From casevh at gmail.com Wed Mar 6 18:00:04 2013 From: casevh at gmail.com (Case Van Horsen) Date: Wed, 6 Mar 2013 09:00:04 -0800 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On Wed, Mar 6, 2013 at 2:20 AM, Terry Reedy wrote: > Clicking this link > http://www.microsoft.com/en-us/download/details.aspx?id=14597 > on this Developer Guide page > http://docs.python.org/devguide/setup.html#windows > now returns a > "We are sorry, the page you requested cannot be found." > page with search results. > > The first search result > http://social.msdn.microsoft.com/Forums/nl/Vsexpressinstall/thread/2dc7ae6a-a0e7-436b-a1b3-3597ffac6a97 > suggests that one must first go to http://profile.microsoft.com > which forwards to the live.com login page. Logging in with my un-expired > non-developer account did not make the original link work. > > The mdsn page http://msdn.microsoft.com/en-US/ > has Visual Studio / Download trial, which leads to > https://www.microsoft.com/visualstudio/eng/downloads > which lists 2012 and 2010 but not 2008. > > I suspect that an msdn account is required for most people to get 2008. 
> > A later link leads to > https://www.dreamspark.com/Product/Product.aspx?productid=34# > which suggests that vc++2008 express is also available to verified degree > students. I don't qualify so I will not try. > > So it would appear that section "1.1.3.3. Windows" of "1. Getting Started" > (setup.rst) needs further revision. > > Or perhaps we could persuade Microsoft to let us distribute it ourselves so > Windows versions of 2.7 do not become increasingly unusable. The "Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1" is still available for download. It includes the command line compilers that are used with VS 2008. I have used to create extensions for Python 2.6 to 3.2. There is a later version of the SDK (for .NET 4.x) that includes the compilers from VS 2010. To use the SDK compiler, you need to do a few manual steps first. After starting a command window, you need to run a batch file to configure your environment. Choose the appropriate option from C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat or C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars32.bat Then set two environment variables: set MSSdk=1 set DISTUTILS_USE_SDK=1 After these steps, the standard python setup.py install should work. casevh > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/casevh%40gmail.com From robert.kern at gmail.com Wed Mar 6 18:16:28 2013 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Mar 2013 17:16:28 +0000 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On 2013-03-06 16:55, Chris Angelico wrote: > On Thu, Mar 7, 2013 at 3:46 AM, Stefan Behnel wrote: >> Chris Angelico, 06.03.2013 17:30: >>> On Thu, Mar 7, 2013 at 1:40 AM, Ezio Melotti wrote: >>>> I did try a few weeks ago, when I had to download a copy of Windows >>>> for a project. Long story short, after 30+ minutes and a number of >>>> confirmation emails I reached a point where I had a couple of new >>>> accounts on MSDN/Dreamspark, a "purchased" free copy of Windows in my >>>> e-cart, and some .exe I had to download in order to download and >>>> verify the purchased copy. That's where I gave up. >>> >>> That's the point where I'd start looking at peer-to-peer downloads. >>> These sorts of things are often available on torrent sites; once the >>> original publisher starts making life harder, third-party sources >>> become more attractive. >> >> May I express my doubts that the license allows a redistribution of the >> software in this form? > > Someone would have to check, but in most cases, software licenses > govern the use, more than the distribution. If you're allowed to > download it free of charge from microsoft.com, you should be able to > get hold of it in some other way and it be exactly the same. Sorry, but that's not how copyright works. The owner of the copyright on a work has to give you permission to allow you to distribute their work (modulo certain statutorily-defined exceptions that don't apply here). Just because you got the work from them free of charge doesn't mean that they have given you permission to redistribute it. If the agreements that you have with the copyright owner do not mention redistribution, you do not have permission to redistribute it. IANAL, TINLA. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Steve.Dower at microsoft.com Wed Mar 6 18:13:49 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 6 Mar 2013 17:13:49 +0000 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: <47cc440a8ac74ef8b82dec8b1b961ed1@BLUPR03MB035.namprd03.prod.outlook.com> From: Terry Reedy > Clicking this link > http://www.microsoft.com/en-us/download/details.aspx?id=14597 > on this Developer Guide page > http://docs.python.org/devguide/setup.html#windows > now returns a > "We are sorry, the page you requested cannot be found." > page with search results. > > The first search result > http://social.msdn.microsoft.com/Forums/nl/Vsexpressinstall/thread/2dc7a > e6a-a0e7-436b-a1b3-3597ffac6a97 > suggests that one must first go to http://profile.microsoft.com which > forwards to the live.com login page. Logging in with my un-expired non- > developer account did not make the original link work. > > The mdsn page http://msdn.microsoft.com/en-US/ has Visual Studio / > Download trial, which leads to > https://www.microsoft.com/visualstudio/eng/downloads > which lists 2012 and 2010 but not 2008. > > I suspect that an msdn account is required for most people to get 2008. Worse than that, it looks like you need a subscription and then a download "helper", which will get you the web installer that then goes off and downloads it for you. > A later link leads to > https://www.dreamspark.com/Product/Product.aspx?productid=34# > which suggests that vc++2008 express is also available to verified degree > students. I don't qualify so I will not try. > > So it would appear that section "1.1.3.3. Windows" of "1. Getting Started" > (setup.rst) needs further revision. > > Or perhaps we could persuade Microsoft to let us distribute it ourselves so > Windows versions of 2.7 do not become increasingly unusable. I'll ask around and see what we can do. We clearly still have the download available, so it may just be a case of making the web installer publicly available again. Chances are if you have the installer then it will still work. We may also make just the compilers available in some other way. It looks like the Windows Development Kits (previously Platform SDK) don't have it, but IIRC the driver kits occasionally ship with compilers. I'll get back to the list when I get something. Cheers, Steve From eliben at gmail.com Wed Mar 6 18:19:23 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 6 Mar 2013 09:19:23 -0800 Subject: [Python-Dev] [docs] undocumented argtypes magic in ctypes? In-Reply-To: References: Message-ID: On Wed, Mar 6, 2013 at 8:33 AM, Andrew Svetlov wrote: > Looks like bug for me. > ctypes seems to auto-convert arguments when argtypes is specified. This fact is documented. However, I'm not sure whether this auto-conversion is advanced enough to apply byref. Because otherwise, DIRENT is certainly not convertible to DIRENT_p Eli > > On Tue, Mar 5, 2013 at 4:26 PM, Eli Bendersky wrote: > > Hello, > > > > While playing with ctypes a bit, I noticed a feature that doesn't appear > to > > be documented. 
Suppose I import the readdir_r function (assuming DIRENT > is a > > correctly declared ctypes.Structure): > > > > DIR_p = c_void_p > > DIRENT_p = POINTER(DIRENT) > > DIRENT_pp = POINTER(DIRENT_p) > > > > readdir_r = lib.readdir_r > > readdir_r.argtypes = [DIR_p, DIRENT_p, DIRENT_pp] > > readdir_r.restype = c_int > > > > It seems that I can then call it as follows: > > > > dirent = DIRENT() > > result = DIRENT_p() > > > > readdir_r(dir_fd, dirent, result) > > > > Note that while readdir_r takes DIRENT_p and DIRENT_pp as its second and > > third args, I pass in just DIRENT and DIRENT_p, accordingly. What I > should > > have done is use byref() on both, but ctypes seems to have some magic > > applied when argtypes declares pointer types. If I use byref, it still > > works. However, if I keep the same call and comment out the argtypes > > declaration, I get a segfault. > > > > So, is it a feature that should be documented, explicitly discouraged or > is > > it a bug? > > > > Eli > > > > > > _______________________________________________ > > docs mailing list > > docs at python.org > > http://mail.python.org/mailman/listinfo/docs > > > > > > -- > Thanks, > Andrew Svetlov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Wed Mar 6 18:29:11 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 6 Mar 2013 17:29:11 +0000 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: From: Case Van Horsen > On Wed, Mar 6, 2013 at 2:20 AM, Terry Reedy wrote: > > Clicking this link > > http://www.microsoft.com/en-us/download/details.aspx?id=14597 > > on this Developer Guide page > > http://docs.python.org/devguide/setup.html#windows > > now returns a > > "We are sorry, the page you requested cannot be found." > > page with search results. > > > > The first search result > > http://social.msdn.microsoft.com/Forums/nl/Vsexpressinstall/thread/2dc > > 7ae6a-a0e7-436b-a1b3-3597ffac6a97 suggests that one must first go to > > http://profile.microsoft.com which forwards to the live.com login > > page. Logging in with my un-expired non-developer account did not make > > the original link work. > > > > The mdsn page http://msdn.microsoft.com/en-US/ has Visual Studio / > > Download trial, which leads to > > https://www.microsoft.com/visualstudio/eng/downloads > > which lists 2012 and 2010 but not 2008. > > > > I suspect that an msdn account is required for most people to get 2008. > > > > A later link leads to > > https://www.dreamspark.com/Product/Product.aspx?productid=34# > > which suggests that vc++2008 express is also available to verified > > degree students. I don't qualify so I will not try. > > > > So it would appear that section "1.1.3.3. Windows" of "1. Getting Started" > > (setup.rst) needs further revision. > > > > Or perhaps we could persuade Microsoft to let us distribute it > > ourselves so Windows versions of 2.7 do not become increasingly > unusable. > > The "Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1" > is still available for download. It includes the command line compilers that are > used with VS 2008. I have used to create extensions for Python 2.6 to 3.2. > There is a later version of the SDK (for .NET > 4.x) that includes the compilers from VS 2010. This is the same response that I got internally. The download link is http://www.microsoft.com/en-us/download/details.aspx?id=3138 and you can choose to only download and install the compilers. 
Cheers, Steve > To use the SDK compiler, you need to do a few manual steps first. > > After starting a command window, you need to run a batch file to configure > your environment. Choose the appropriate option from > > C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat > > or > > C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars32.bat > > Then set two environment variables: > > set MSSdk=1 > set DISTUTILS_USE_SDK=1 > > > After these steps, the standard python setup.py install should work. > > casevh > > > > -- > > Terry Jan Reedy From victor.stinner at gmail.com Wed Mar 6 19:34:10 2013 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 6 Mar 2013 19:34:10 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> Message-ID: Hi, In short, Unicode was rewritten in Python 3.3 for the PEP 393. It's not surprising that minor details like singleton differ. You should not use "is" to compare strings in Python, or your program will fail on other Python implementations (like PyPy, IronPython, or Jython) or even on a different CPython version. Anyway, you spotted a missed optimization: it's now "fixed" in Python 3.3 and 3.4 by the following commits. Copy/paste of the CIA IRC bot: 19:30 < irker555> cpython: Victor Stinner 3.3 * 82517:3dd2fa78fb89 / Objects/unicodeobject.c: _PyUnicode_Writer() now also reuses Unicode singletons: empty string and latin1 single character http://hg.python.org/cpython/rev/3dd2fa78fb89 19:30 < irker032> cpython: Victor Stinner default * 82518:fa59a85b373f / Objects/unicodeobject.c: (Merge 3.3) _PyUnicode_Writer() now also reuses Unicode singletons: empty string and latin1 single character http://hg.python.org/cpython/rev/fa59a85b373f Victor 2013/3/6 Amaury Forgeot d'Arc : >> So, in the end, I have went the long way and bisected cpython to >> find the commit which broke my tests, and it seems that the >> culprit is http://hg.python.org/cpython/rev/123f2dc08b3e so it is >> clearly something Unicode related. >> >> Unfortunately, it really doesn't tell me what exactly is broken >> (is it a known regression) and if there is known workaround. >> Could anybody suggest a way how to find bugs on >> http://bugs.python.org related to some particular commit (plain >> search for 123f2dc0 didn?t find anything). > > > I strongly suspect an incorrect usage of the "is" operator: > https://github.com/mcepl/html2text/blob/master/html2text.py#L95 > Identity of strings is not guaranteed... > > Does it change something if you use "==" instead? > > -- > Amaury Forgeot d'Arc From steve at pearwood.info Wed Mar 6 21:06:15 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 07 Mar 2013 07:06:15 +1100 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: References: Message-ID: <5137A1B7.6020002@pearwood.info> On 05/03/13 09:08, Brett Cannon wrote: > Depends on your paranoia. If you're worried about accidentally lifting IP > merely by reading someone's source code, then you wouldn't want to touch > code without the CLA signed. > > Now I'm not that paranoid, but I'm still not about to commit someone's code > now without the CLA signed to make sure we are legally covered for the > patch. 
If someone chooses not to contribute because of the CLA that's fine, > but since we have already told at least Anatoly that we won't accept > patches from him until he signs the CLA I'm not going to start acting > differently towards others. I view legally covering our ass by having > someone fill in a form is worth the potential loss of some contribution in > the grand scheme of things. Pardon my ignorance, but how does a CLA protect us in the event of an IP violation? -- Steven From brett at python.org Wed Mar 6 21:28:29 2013 From: brett at python.org (Brett Cannon) Date: Wed, 6 Mar 2013 15:28:29 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: <5137A1B7.6020002@pearwood.info> References: <5137A1B7.6020002@pearwood.info> Message-ID: On Wed, Mar 6, 2013 at 3:06 PM, Steven D'Aprano wrote: > On 05/03/13 09:08, Brett Cannon wrote: > > Depends on your paranoia. If you're worried about accidentally lifting IP >> merely by reading someone's source code, then you wouldn't want to touch >> code without the CLA signed. >> >> Now I'm not that paranoid, but I'm still not about to commit someone's >> code >> now without the CLA signed to make sure we are legally covered for the >> patch. If someone chooses not to contribute because of the CLA that's >> fine, >> but since we have already told at least Anatoly that we won't accept >> patches from him until he signs the CLA I'm not going to start acting >> differently towards others. I view legally covering our ass by having >> someone fill in a form is worth the potential loss of some contribution in >> the grand scheme of things. >> > > Pardon my ignorance, but how does a CLA protect us in the event of an IP > violation? Maybe it doesn't. IANAL and I was just trying to think in as paranoid of a fashion as I could. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Wed Mar 6 23:06:10 2013 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 07 Mar 2013 07:06:10 +0900 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: <5137A1B7.6020002@pearwood.info> References: <5137A1B7.6020002@pearwood.info> Message-ID: <87r4jsi3ot.fsf@uwakimon.sk.tsukuba.ac.jp> Steven D'Aprano writes: > Pardon my ignorance, but how does a CLA protect us in the event of an IP > violation? By licensing the content to the PSF, the contributor implicitly claims that he has the right to do so (I think the AFL even has an explicit provenance clause). This protects the PSF against criminal infringement and statutory damages for copyright violation (which require wilful infringement). I don't know it if helps for patent infringement. From tjreedy at udel.edu Wed Mar 6 23:43:15 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 06 Mar 2013 17:43:15 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: <5137A1B7.6020002@pearwood.info> References: <5137A1B7.6020002@pearwood.info> Message-ID: On 3/6/2013 3:06 PM, Steven D'Aprano wrote: > On 05/03/13 09:08, Brett Cannon wrote: > >> Depends on your paranoia. If you're worried about accidentally lifting IP >> merely by reading someone's source code, then you wouldn't want to touch >> code without the CLA signed. >> >> Now I'm not that paranoid, but I'm still not about to commit someone's >> code >> now without the CLA signed to make sure we are legally covered for the >> patch. 
If someone chooses not to contribute because of the CLA that's >> fine, >> but since we have already told at least Anatoly that we won't accept >> patches from him until he signs the CLA I'm not going to start acting >> differently towards others. I view legally covering our ass by having >> someone fill in a form is worth the potential loss of some >> contribution in >> the grand scheme of things. > > Pardon my ignorance, but how does a CLA protect us in the event of an IP > violation? The penalty for willful copyright violation (possible punitive damages) is higher than for inadvertent violation (typically, remove the offending code). In the CLA, contributors affirm that they will only contribute code they have a legal right to contribute. This makes it clear that PSF only wants legal code. We do not grab 3rd party code without author participation even if the license would seem to make it legal to do so. Good repository software, including svn and hg, can trace every line to a specific commit. Commit messages typically have an issue number and credit (blame) any patch author other than the one making the commit. So any line should be traceable to a specific person and we should have a CLA for that person. -- Terry Jan Reedy From tjreedy at udel.edu Wed Mar 6 23:52:27 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 06 Mar 2013 17:52:27 -0500 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On 3/6/2013 11:55 AM, Chris Angelico wrote: > Someone would have to check, but in most cases, software licenses > govern the use, more than the distribution. If you're allowed to > download it free of charge from microsoft.com, you should be able to > get hold of it in some other way and it be exactly the same. But yeah, > if you want to be legal you'd have to actually read the EULA. As I remember, the 2008 vcexpress license specifically prohibits redistribtion even though MS gave it away for free. So we can not document other means of obtaining it. We went through the same issue with vc2005 when that was pulled from the MS site. I had the file but could not legally send it to anyone. As it is, my copy of 2008 file, which I meant to keep, seems gone (I believe the directory I had it in got corrupted). -- Terry Jan Reedy From tjreedy at udel.edu Thu Mar 7 00:32:30 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 06 Mar 2013 18:32:30 -0500 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On 3/6/2013 12:29 PM, Steve Dower wrote: > From: Case Van Horsen >> The "Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1" >> is still available for download. It includes the command line compilers that are >> used with VS 2008. I have used to create extensions for Python 2.6 to 3.2. >> There is a later version of the SDK (for .NET >> 4.x) that includes the compilers from VS 2010. > > This is the same response that I got internally. > > The download link is > http://www.microsoft.com/en-us/download/details.aspx?id=3138 > and you can choose to only download and install the compilers. The C++ compiler appears to the the full compiler that will build both 32 and 64 bits apps. Will downloading just the compiler(s) allow one to build Python with the project files in PCBuild or does something else need to be checked also? >> To use the SDK compiler, you need to do a few manual steps first. >> >> After starting a command window, you need to run a batch file to configure >> your environment. 
Choose the appropriate option from >> >> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat >> >> or >> >> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars32.bat >> >> Then set two environment variables: >> >> set MSSdk=1 >> set DISTUTILS_USE_SDK=1 >> >> After these steps, the standard python setup.py install should work. This may be fine for building extensions, but it appears that more instructions are needed for a novice to build python itself. Following the instruction in the developer's guide, http://docs.python.org/devguide/setup.html#windows I was able to download and install vc express, double click on /PCBuild/pcbuild.sln to bring up the VS GUI, and use the menu to build a debug version of that branch. The new python is put in the same directory and can be run with another menu selection. Any alternate path should be that easy too. -- Terry Jan Reedy From rosuav at gmail.com Thu Mar 7 01:59:59 2013 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 7 Mar 2013 11:59:59 +1100 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 9:52 AM, Terry Reedy wrote: > On 3/6/2013 11:55 AM, Chris Angelico wrote: > >> Someone would have to check, but in most cases, software licenses >> govern the use, more than the distribution. If you're allowed to >> download it free of charge from microsoft.com, you should be able to >> get hold of it in some other way and it be exactly the same. But yeah, >> if you want to be legal you'd have to actually read the EULA. > > > As I remember, the 2008 vcexpress license specifically prohibits > redistribtion even though MS gave it away for free. So we can not document > other means of obtaining it. We went through the same issue with vc2005 when > that was pulled from the MS site. I had the file but could not legally send > it to anyone. As it is, my copy of 2008 file, which I meant to keep, seems > gone (I believe the directory I had it in got corrupted). Blah. Okay, that settles that, then. Of course, everything I said above is still possible, just not something the PSF will officially condone. ChrisA From tjreedy at udel.edu Thu Mar 7 02:08:03 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 06 Mar 2013 20:08:03 -0500 Subject: [Python-Dev] Introducing Electronic Contributor Agreements In-Reply-To: <20130304214624.0f6dec8c@pitrou.net> References: <20130304214624.0f6dec8c@pitrou.net> Message-ID: On 3/4/2013 3:46 PM, Antoine Pitrou wrote: > On Mon, 04 Mar 2013 15:46:48 -0500 > Terry Reedy wrote: >> Either policy could be facilitated by tracker changes. In order to see >> the file upload box, one must login and the tracker knows who has a CLA >> on file (as indicated by a * suffix on the name). If a file is uploaded >> by someone without, a box could popup with the link to the e-form and a >> message that a CLA is required. > > And how about people who upload something else than a patch? Restrict the popup to filenames ending in .diff or .patch. -- Terry Jan Reedy From casevh at gmail.com Thu Mar 7 06:01:50 2013 From: casevh at gmail.com (Case Van Horsen) Date: Wed, 6 Mar 2013 21:01:50 -0800 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: On Wed, Mar 6, 2013 at 3:32 PM, Terry Reedy wrote: > On 3/6/2013 12:29 PM, Steve Dower wrote: >> >> From: Case Van Horsen > > >>> The "Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1" >>> is still available for download. 
It includes the command line compilers >>> that are >>> used with VS 2008. I have used to create extensions for Python 2.6 to >>> 3.2. >>> There is a later version of the SDK (for .NET >>> 4.x) that includes the compilers from VS 2010. >> >> >> This is the same response that I got internally. >> >> The download link is > >> http://www.microsoft.com/en-us/download/details.aspx?id=3138 >> and you can choose to only download and install the compilers. > > The C++ compiler appears to the the full compiler that will build both 32 > and 64 bits apps. Will downloading just the compiler(s) allow one to build > Python with the project files in PCBuild or does something else need to be > checked also? > > >>> To use the SDK compiler, you need to do a few manual steps first. >>> >>> After starting a command window, you need to run a batch file to >>> configure >>> your environment. Choose the appropriate option from >>> >>> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat >>> >>> or >>> >>> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars32.bat >>> >>> Then set two environment variables: >>> >>> set MSSdk=1 >>> set DISTUTILS_USE_SDK=1 >>> >>> After these steps, the standard python setup.py install should work. > > > This may be fine for building extensions, but it appears that more > instructions are needed for a novice to build python itself. There is a build.bat file in the PCbuild directory that will rebuild Python from a command prompt. After entering the commands listed above at a command prompt, I was able to build a debug version of Python 2.7.3 by moving to \PCbuild and entering "build -d" (the -d indicates a debug build). > > Following the instruction in the developer's guide, > http://docs.python.org/devguide/setup.html#windows > I was able to download and install vc express, double click on > /PCBuild/pcbuild.sln to bring up the VS GUI, and use the menu to > build a debug version of that branch. The new python is put in the same > directory and can be run with another menu selection. Any alternate path > should be that easy too. casevh > > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/casevh%40gmail.com From stefan_ml at behnel.de Thu Mar 7 08:32:59 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 07 Mar 2013 08:32:59 +0100 Subject: [Python-Dev] Add PyDict_GetItemSetDefault() as C-API for dict.setdefault() Message-ID: Hi, I've written a patch that adds a new C-API call for dict.setdefault(). The reason is that there is currently no way to test for a key and insert a fallback value for it without either evaluating the hash function twice or calling through the Python function. Both may involve considerable overhead and the double hash may have side-effects. http://bugs.python.org/issue17327 It does not include an explicit test because it does not add any code over the normal dict.setdefault() implementation, which is already tested at the Python level. If you prefer having a dummy test that checks that the function is there, I don't mind adding one. Please comment and/or apply the patch. 
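
A minimal pure-Python sketch of the double hashing described above; the
NoisyKey class is invented purely for the demonstration and is not part
of the patch:

    class NoisyKey(object):
        """Key object whose __hash__ counts how often it is called."""
        calls = 0

        def __init__(self, value):
            self.value = value

        def __hash__(self):
            NoisyKey.calls += 1
            return hash(self.value)

        def __eq__(self, other):
            return isinstance(other, NoisyKey) and self.value == other.value

    d = {}
    key = NoisyKey('spam')

    # "Test, then insert a fallback" evaluates the hash twice ...
    if key not in d:        # first hash
        d[key] = []         # second hash
    print(NoisyKey.calls)   # -> 2

    # ... while dict.setdefault() evaluates it only once.
    NoisyKey.calls = 0
    d = {}
    d.setdefault(key, [])   # single hash
    print(NoisyKey.calls)   # -> 1

The proposed C function would give extension code the single-lookup
behaviour of dict.setdefault() instead of the PyDict_GetItem() plus
PyDict_SetItem() pair, which evaluates the hash twice.
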
Thanks, Stefan From mcepl at redhat.com Thu Mar 7 11:08:40 2013 From: mcepl at redhat.com (Matej Cepl) Date: Thu, 7 Mar 2013 11:08:40 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> Message-ID: <20130307100840.GA24941@wycliff.ceplovi.cz> On 2013-03-06, 18:34 GMT, Victor Stinner wrote: > In short, Unicode was rewritten in Python 3.3 for the PEP 393. It's > not surprising that minor details like singleton differ. You should > not use "is" to compare strings in Python, or your program will fail > on other Python implementations (like PyPy, IronPython, or Jython) or > even on a different CPython version. I am sorry, I don't understand what you are saying. Even though this has been changed to https://github.com/mcepl/html2text/blob/fix_tests/html2text.py#L90 the tests still fail. But, Amaury is right: the function doesn't make much sense. However, ... when I have ?fixed it? from https://github.com/mcepl/html2text/blob/master/html2text.py#L95 def onlywhite(line): """Return true if the line does only consist of whitespace characters.""" for c in line: if c is not ' ' and c is not ' ': return c is ' ' return line to https://github.com/mcepl/html2text/blob/fix_tests/html2text.py#L90 def onlywhite(line): """Return true if the line does only consist of whitespace characters.""" for c in line: if c != ' ' and c != ' ': return c == ' ' return line tests on ALL versions of Python are suddenly failing ... https://travis-ci.org/mcepl/html2text/builds/5288190 Curiouser and curiouser! At least, I seem to have the point, where things are breaking, but I have to admit that condition really doesn?t make any sense to me. > Anyway, you spotted a missed optimization: it's now "fixed" in > Python 3.3 and 3.4 by the following commits. Well, whatever is the problem, it is not fixed in python 3.3.0 (as you can see in https://travis-ci.org/mcepl/html2text/builds/4969045) as I can see on my computer. Actually, good news is that it seems to be fixed in the master branch of cpython (or the tip, as they say in the Mercurial world). Any thoughts? Mat?j From catch-all at masklinn.net Thu Mar 7 11:31:03 2013 From: catch-all at masklinn.net (Xavier Morel) Date: Thu, 7 Mar 2013 11:31:03 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: <20130307100840.GA24941@wycliff.ceplovi.cz> References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> <20130307100840.GA24941@wycliff.ceplovi.cz> Message-ID: <3EAD780A-61B6-4460-87C6-C2C7BF6F9CA1@masklinn.net> On 2013-03-07, at 11:08 , Matej Cepl wrote: > On 2013-03-06, 18:34 GMT, Victor Stinner wrote: >> In short, Unicode was rewritten in Python 3.3 for the PEP 393. It's >> not surprising that minor details like singleton differ. You should >> not use "is" to compare strings in Python, or your program will fail >> on other Python implementations (like PyPy, IronPython, or Jython) or >> even on a different CPython version. > > I am sorry, I don't understand what you are saying. Even though > this has been changed to > https://github.com/mcepl/html2text/blob/fix_tests/html2text.py#L90 > the tests still fail. > > But, Amaury is right: the function doesn't make much sense. > However, ... > > when I have ?fixed it? 
from > https://github.com/mcepl/html2text/blob/master/html2text.py#L95 > > def onlywhite(line): > """Return true if the line does only consist of whitespace characters.""" > for c in line: > if c is not ' ' and c is not ' ': > return c is ' ' > return line > > to > https://github.com/mcepl/html2text/blob/fix_tests/html2text.py#L90 > > def onlywhite(line): > """Return true if the line does only consist of whitespace > characters.""" > for c in line: > if c != ' ' and c != ' ': > return c == ' ' > return line The second test looks like some kind of corruption, it's supposedly iterating on the characters of a line yet testing for two spaces? Is it possible that the original was a literal tab embedded in the source code (instead of '\t') and that got broken at some point? According to its name + docstring, the implementation of this method should really be replaced by `return line and line.isspace()` (the first part being to handle the case of an empty line: in the current implementation the line will be returned directly if no whitespace is found, which will be "negative" for an empty line, and ''.isspace() -> false). Does that fix the failing tests? From theller at ctypes.org Thu Mar 7 12:37:14 2013 From: theller at ctypes.org (Thomas Heller) Date: Thu, 07 Mar 2013 12:37:14 +0100 Subject: [Python-Dev] [docs] undocumented argtypes magic in ctypes? In-Reply-To: References: Message-ID: Am 06.03.2013 18:19, schrieb Eli Bendersky: > > > > On Wed, Mar 6, 2013 at 8:33 AM, Andrew Svetlov > wrote: > > Looks like bug for me. > > > ctypes seems to auto-convert arguments when argtypes is specified. This > fact is documented. However, I'm not sure whether this auto-conversion > is advanced enough to apply byref. Because otherwise, DIRENT is > certainly not convertible to DIRENT_p If argtypes specify a 'POINTER(X)' type as an argument, then ctypes automatically applies byref() if an 'X' instance is passed to the actual call. This is by design, but I'm not sure if it is documented or not. However, if argtypes is not given, this does (and of course cannot) work. Thomas From eliben at gmail.com Thu Mar 7 14:25:18 2013 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 7 Mar 2013 05:25:18 -0800 Subject: [Python-Dev] [docs] undocumented argtypes magic in ctypes? In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 3:37 AM, Thomas Heller wrote: > Am 06.03.2013 18:19, schrieb Eli Bendersky: > >> >> >> >> On Wed, Mar 6, 2013 at 8:33 AM, Andrew Svetlov > >> wrote: >> >> Looks like bug for me. >> >> >> ctypes seems to auto-convert arguments when argtypes is specified. This >> fact is documented. However, I'm not sure whether this auto-conversion >> is advanced enough to apply byref. Because otherwise, DIRENT is >> certainly not convertible to DIRENT_p >> > > If argtypes specify a 'POINTER(X)' type as an argument, then ctypes > automatically applies byref() if an 'X' instance is passed to the > actual call. This is by design, but I'm not sure if it is documented > or not. > > However, if argtypes is not given, this does (and of course cannot) work. > Great, thanks for confirming this, Thomas. I had the feeling it's a documentation issue (hence I sent it to the docs@ list first), because the behavior seems very deliberate and looking at the code of ctypes I did see conversions going on. Have I missed that this is documented somewhere, or should I open a docs issue? Eli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From victor.stinner at gmail.com Thu Mar 7 14:34:25 2013 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 7 Mar 2013 14:34:25 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: <20130307100840.GA24941@wycliff.ceplovi.cz> References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> <20130307100840.GA24941@wycliff.ceplovi.cz> Message-ID: You should try to write a simple test not using your library (just copy/paste code) reproducing the issue. If you can do that, please fill an issue on bugs.python.org. Victor 2013/3/7 Matej Cepl : > On 2013-03-06, 18:34 GMT, Victor Stinner wrote: >> In short, Unicode was rewritten in Python 3.3 for the PEP 393. It's >> not surprising that minor details like singleton differ. You should >> not use "is" to compare strings in Python, or your program will fail >> on other Python implementations (like PyPy, IronPython, or Jython) or >> even on a different CPython version. > > I am sorry, I don't understand what you are saying. Even though > this has been changed to > https://github.com/mcepl/html2text/blob/fix_tests/html2text.py#L90 > the tests still fail. > > But, Amaury is right: the function doesn't make much sense. > However, ... > > when I have ?fixed it? from > https://github.com/mcepl/html2text/blob/master/html2text.py#L95 > > def onlywhite(line): > """Return true if the line does only consist of whitespace characters.""" > for c in line: > if c is not ' ' and c is not ' ': > return c is ' ' > return line > > to > https://github.com/mcepl/html2text/blob/fix_tests/html2text.py#L90 > > def onlywhite(line): > """Return true if the line does only consist of whitespace > characters.""" > for c in line: > if c != ' ' and c != ' ': > return c == ' ' > return line > > tests on ALL versions of Python are suddenly failing ... > https://travis-ci.org/mcepl/html2text/builds/5288190 > > Curiouser and curiouser! At least, I seem to have the point, > where things are breaking, but I have to admit that condition > really doesn?t make any sense to me. > >> Anyway, you spotted a missed optimization: it's now "fixed" in >> Python 3.3 and 3.4 by the following commits. > > Well, whatever is the problem, it is not fixed in python 3.3.0 > (as you can see in > https://travis-ci.org/mcepl/html2text/builds/4969045) as I can > see on my computer. Actually, good news is that it seems to be > fixed in the master branch of cpython (or the tip, as they say in > the Mercurial world). > > Any thoughts? > > Mat?j > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com From theller at ctypes.org Thu Mar 7 14:53:14 2013 From: theller at ctypes.org (Thomas Heller) Date: Thu, 07 Mar 2013 14:53:14 +0100 Subject: [Python-Dev] [docs] undocumented argtypes magic in ctypes? In-Reply-To: References: Message-ID: > ctypes seems to auto-convert arguments when argtypes is > specified. This > fact is documented. However, I'm not sure whether this > auto-conversion > is advanced enough to apply byref. Because otherwise, DIRENT is > certainly not convertible to DIRENT_p > > If argtypes specify a 'POINTER(X)' type as an argument, then ctypes > automatically applies byref() if an 'X' instance is passed to the > actual call. This is by design, but I'm not sure if it is documented > or not. 
> > However, if argtypes is not given, this does (and of course cannot) > work. > > Great, thanks for confirming this, Thomas. I had the feeling it's a > documentation issue (hence I sent it to the docs@ list first), because > the behavior seems very deliberate and looking at the code of ctypes I > did see conversions going on. > > Have I missed that this is documented somewhere, or should I open a docs > issue? I didn't find anything in the docs (in the two minutes I spent for that), so please open a docs issue, or, better, fix it. Thomas From eliben at gmail.com Thu Mar 7 15:09:58 2013 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 7 Mar 2013 06:09:58 -0800 Subject: [Python-Dev] [docs] undocumented argtypes magic in ctypes? In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 5:53 AM, Thomas Heller wrote: > ctypes seems to auto-convert arguments when argtypes is >> specified. This >> fact is documented. However, I'm not sure whether this >> auto-conversion >> is advanced enough to apply byref. Because otherwise, DIRENT is >> certainly not convertible to DIRENT_p >> >> If argtypes specify a 'POINTER(X)' type as an argument, then ctypes >> automatically applies byref() if an 'X' instance is passed to the >> actual call. This is by design, but I'm not sure if it is documented >> or not. >> >> However, if argtypes is not given, this does (and of course cannot) >> work. >> >> Great, thanks for confirming this, Thomas. I had the feeling it's a >> documentation issue (hence I sent it to the docs@ list first), because >> the behavior seems very deliberate and looking at the code of ctypes I >> did see conversions going on. >> >> Have I missed that this is documented somewhere, or should I open a docs >> issue? >> > > I didn't find anything in the docs (in the two minutes I spent for that), > so please open a docs issue, or, better, fix it. http://bugs.python.org/issue17378, has a patch already. Please take a look. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Thu Mar 7 18:53:56 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Thu, 7 Mar 2013 17:53:56 +0000 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: > From: Terry Reedy > On 3/6/2013 12:29 PM, Steve Dower wrote: > > From: Case Van Horsen > > >> The "Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1" > >> is still available for download. It includes the command line > >> compilers that are used with VS 2008. I have used to create extensions for > Python 2.6 to 3.2. > >> There is a later version of the SDK (for .NET > >> 4.x) that includes the compilers from VS 2010. > > > > This is the same response that I got internally. > > > > The download link is > > http://www.microsoft.com/en-us/download/details.aspx?id=3138 > > and you can choose to only download and install the compilers. > > The C++ compiler appears to the the full compiler that will build both > 32 and 64 bits apps. Will downloading just the compiler(s) allow one to build > Python with the project files in PCBuild or does something else need to be > checked also? 
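
A self-contained sketch of the conversion discussed in this thread,
assuming a Linux system where libc is available as "libc.so.6" and
time_t is a C long (both are assumptions of the example, not part of
the ctypes API being documented):

    import ctypes

    libc = ctypes.CDLL("libc.so.6")   # Linux; adjust the name elsewhere

    # time_t time(time_t *tloc);
    libc.time.argtypes = [ctypes.POINTER(ctypes.c_long)]
    libc.time.restype = ctypes.c_long

    t = ctypes.c_long()

    # Because argtypes declares POINTER(c_long), passing a plain c_long
    # instance works: ctypes applies the byref() conversion automatically.
    libc.time(t)
    print(t.value)                    # seconds since the epoch

    # The explicit form is equivalent:
    libc.time(ctypes.byref(t))

    # Without the argtypes declaration there is no such conversion, and
    # the same call would pass the object incorrectly (the segfault
    # described earlier in the thread).
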
Just testing this now, but Any version of Visual Studio (Professional or higher), OR Visual Studio 2012 Express for Desktop (http://www.microsoft.com/visualstudio/eng/products/visual-studio-express-for-windows-desktop) OR Visual C++ 2010 Express (http://www.microsoft.com/visualstudio/en-us/express-cpp/overview) (maybe - haven't confirmed this yet) For Python 3.3: the compilers and headers from the "Windows SDK for Windows 7 and .NET Framework 4" (http://www.microsoft.com/en-us/download/details.aspx?id=8279) For earlier versions: the compilers and headers from the "Windows SDK for Windows 7 and .NET Framework 3.5" (http://www.microsoft.com/en-us/download/details.aspx?id=3138) (You can install both compilers on the same machine.) Once these compilers have been installed, VS will let you choose which one your project will use. In Project Properties there is a "Platform Toolset" list that will include all of the installed compilers. For Python 3.3, you'll want VC100, and earlier versions will want VC90. If you open an existing project (including PCBuild.sln), VS will offer to update it. If you don't update it, and you have the earlier compilers installed, it will use them. Right now, I've only tested this with 3.3, which used a different project format to earlier versions (.vcxproj, rather than .vcproj). I assume we know how to upgrade the project files without changing the platform target, but I haven't confirmed that yet. > >> To use the SDK compiler, you need to do a few manual steps first. > >> > >> After starting a command window, you need to run a batch file to > >> configure your environment. Choose the appropriate option from > >> > >> C:\Program Files (x86)\Microsoft Visual Studio > >> 9.0\VC\bin\vcvars64.bat > >> > >> or > >> > >> C:\Program Files (x86)\Microsoft Visual Studio > >> 9.0\VC\bin\vcvars32.bat > >> > >> Then set two environment variables: > >> > >> set MSSdk=1 > >> set DISTUTILS_USE_SDK=1 > >> > >> After these steps, the standard python setup.py install should work. > > This may be fine for building extensions, but it appears that more instructions > are needed for a novice to build python itself. I'm not even sure that these variables are necessary - certainly without the compilers installed setup.py looks in the right place for them. I'll try this as well. > Following the instruction in the developer's guide, > http://docs.python.org/devguide/setup.html#windows > I was able to download and install vc express, double click on > /PCBuild/pcbuild.sln to bring up the VS GUI, and use the menu > to build a debug version of that branch. The new python is put in the same > directory and can be run with another menu selection. Any alternate path > should be that easy too. I'll admit I'm not a huge fan of the current Windows build setup, but since so few people seem to use it I understand why it hasn't changed. As for the documentation, I'd be happy to provide an update for this section once I've checked out that everything works. 
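
A minimal setup.py for exercising the SDK compiler path described
above; the module and file names (sdkcheck, sdkcheck.c) are invented
for the example, and the environment steps in the comments are the
ones quoted earlier in the thread:

    # Run from an SDK command prompt after:
    #   "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat"
    #   set MSSdk=1
    #   set DISTUTILS_USE_SDK=1
    # then:  python setup.py build
    from distutils.core import setup, Extension

    setup(
        name="sdkcheck",
        version="0.1",
        description="Trivial extension used to check the SDK compilers",
        ext_modules=[Extension("sdkcheck", sources=["sdkcheck.c"])],
    )

With DISTUTILS_USE_SDK and MSSdk set, distutils skips its normal Visual
Studio lookup and uses the compiler already configured in the
environment by the vcvars batch file.
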
Cheers, Steve > -- > Terry Jan Reedy From g.brandl at gmx.net Thu Mar 7 21:20:57 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 07 Mar 2013 21:20:57 +0100 Subject: [Python-Dev] Difference in RE between 3.2 and 3.3 (or Aaron Swartz memorial) In-Reply-To: <20130307100840.GA24941@wycliff.ceplovi.cz> References: <1362575394.23949.2.camel@wycliff.ceplovi.cz> <20130307100840.GA24941@wycliff.ceplovi.cz> Message-ID: Am 07.03.2013 11:08, schrieb Matej Cepl: >> Anyway, you spotted a missed optimization: it's now "fixed" in >> Python 3.3 and 3.4 by the following commits. > > Well, whatever is the problem, it is not fixed in python 3.3.0 > (as you can see in > https://travis-ci.org/mcepl/html2text/builds/4969045) as I can > see on my computer. Actually, good news is that it seems to be > fixed in the master branch of cpython (or the tip, as they say in > the Mercurial world). It's not a "fix", it's an optimization. Please understand that using the "is" operator on strings is entirely wrong. Georg From tjreedy at udel.edu Thu Mar 7 23:21:35 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 07 Mar 2013 17:21:35 -0500 Subject: [Python-Dev] PEP 434: IDLE Enhancement Exception Message-ID: This re-write of Todd's draft focuses better on the specific proposal and motivation. It tries to take into account comments posted both here and on python-ideas -------------------------------------------------------------------- PEP: 434 Title: IDLE Enhancement Exception for All Branches Version: $Revision$ Last-Modified: $Date$ Author: Todd Rovito , Terry Reedy BDFL-Delegate: Nick Coghlan Status: Draft Type: Informational Content-Type: text/x-rst Created: 16-Feb-2013 Post-History: 16-Feb-2013 Abstract ======== Most CPython tracker issues are classified as behavior or enhancement. Most behavior patches are backported to branches for existing versions. Enhancement patches are restricted to the default branch that becomes the next Python version. This PEP proposes that the restriction on applying enhancements be relaxed for IDLE code, residing in .../Lib/idlelib/. In practice, this would mean that IDLE developers would not have to classify or agree on the classification of a patch but could instead focus on what is best for IDLE users and future IDLE developement. It would also mean that IDLE patches would not necessarily have to be split into 'bugfix' changes and enhancement changes. The PEP would apply to changes in existing features and addition of small features, such as would require a new menu entry, but not necessarily to possible major re-writes such as switching to themed widgets or tabbed windows. Motivation ========== This PEP was prompted by controversy on both the tracker and pydev list over adding Cut, Copy, and Paste to right-click context menus (Issue 1207589, opened in 2005 [1]_; pydev thread [2]_). The features were available as keyboard shortcuts but not on the context menu. It is standard, at least on Windows, that they should be when applicable (a read-only window would only have Copy), so users do not have to shift to the keyboard after selecting text for cutting or copying or a slice point for pasting. The context menu was not documented until 10 days before the new options were added (Issue 10405 [3]_). Normally, behavior is called a bug if it conflicts with documentation judged to be correct. But if there is no doc, what is the standard? If the code is its own documentation, most IDLE issues on the tracker are enhancement issues. 
If we substitute reasonable user expectation, (which can, of course, be its own subject of disagreement), many more issues are behavior issues. For context menus, people disagreed on the status of the additions -- bugfix or enhancement. Even people who called it an enhancement disagreed as to whether the patch should be backported. This PEP proposes to make the status disagreement irrelevant by explicitly allowing more liberal backporting than for other stdlib modules. Rationale ========= People primarily use IDLE by running the gui application, rather than by directly importing the effectively private (undocumented) implementation modules in idlelib. Whether they use the shell, the editor, or both, we believe they will benefit more from consistency across the latest releases of current Python versions than from consistency within the bugfix releases for one Python version. This is especially true when existing behavior is clearly unsatisfactory. When people use the standard interpreter, the OS-provided frame works pretty much the same for all Python versions. If, for instance, Microsoft were to upgrade the Command Prompt gui, the improvements would be present regardless of which Python were running within it. Similarly, if one edits Python code with editor X, behaviors such as the right-click context menu and the search-replace box do not depend on the version of Python being edited or even the language being edited. The benefit for IDLE developers is mixed. On the one hand, testing more versions and possibly having to adjust a patch, especially for 2.7, is more work. (There is, of course, the option on not backporting everything. For issue 12510, some changes to calltips for classes were not included in the 2.7 patch because of issues with old-style classes [4]_.) On the other hand, bike-shedding can be an energy drain. If the obvious fix for a bug looks like an enhancement, writing a separate bugfix-only patch is more work. And making the code diverge between versions makes future multi-version patches more difficult. These issue are illustrated by the search-and-replace dialog box. It used to raise an exception for certain user entries [5]_. The uncaught exception caused IDLE to exit. At least on Windows, the exit was silent (no visible traceback) and looked like a crash if IDLE was started normally, from an icon. Was this a bug? IDLE Help (on the current Help submenu) just says "Replace... Open a search-and-replace dialog box", and a box *was* opened. It is not, in general, a bug for a library method to raise an exception. And it is not, in general, a bug for a library method to ignore an exception raised by functions it calls. So if we were to adopt the 'code = doc' philosopy in the absence of detailed docs, one might say 'No'. However, IDLE exiting when it does not need to is definitely obnoxious. So four of us agreed that it should be prevented. But there was still the question of what to do instead? Catch the exception? Just not raise the exception? Beep? Display an error message box? Or try to do something useful with the user's entry? Would replacing a 'crash' with useful behavior be an enhancement, limited to future Python releases? Should IDLE developers have to ask that? Backwards Compatibility ======================= For IDLE, there are three types of users who might be concerned about back compatibility. First are people who run IDLE as an application. We have already discussed them above. Second are people who import one of the idlelib modules. 
As far as we know, this is only done to start the IDLE application, and we do not propose breaking such use. Otherwise, the modules are undocumented and effectively private implementations. If an IDLE module were defined as public, documented, and perhaps moved to the tkinter package, it would then follow the normal rules. (Documenting the private interfaces for the benefit of people working on the IDLE code is a separate issue.)

Third are people who write IDLE extensions. The guaranteed extension interface is given in idlelib/extension.txt. This should be respected at least in existing versions, and not frivolously changed in future versions. But there is a warning that "The extension cannot assume much about this [EditorWindow] argument." This guarantee should rarely be an issue with patches, and the issue is not specific to 'enhancement' versus 'bugfix' patches.

As it happens, after the context menu patch was applied, it came up that extensions that added items to the context menu (rare) would be broken because the patch a) added a new item to standard rmenu_specs and b) expected every rmenu_spec to be lengthened. It is not clear whether this violates the guarantee, but there is a second patch that fixes assumption b). It should be applied when it is clear that the first patch will not have to be reverted.

References
==========

.. [1] IDLE: Right Click Context Menu, Foord, Michael
   (http://bugs.python.org/issue1207589)

.. [2] Cut/Copy/Paste items in IDLE right click context menu
   (http://mail.python.org/pipermail/python-dev/2012-November/122514.html)

.. [3] IDLE breakpoint facility undocumented, Deily, Ned
   (http://bugs.python.org/issue10405)

.. [4] IDLE: calltips mishandle raw strings and other examples, Reedy, Terry
   (http://bugs.python.org/issue12510)

.. [5] IDLE: replace ending with '\' causes crash, Reedy, Terry
   (http://bugs.python.org/issue13052)

Copyright
=========

This document has been placed in the public domain.

--
Terry Jan Reedy

From victor.stinner at gmail.com Fri Mar 8 00:39:39 2013
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 8 Mar 2013 00:39:39 +0100
Subject: [Python-Dev] pytracemalloc 0.7: new tool to track memory leaks in Python
Message-ID:

Hi,

See below for a copy of my email posted to python-list and python-announce mailing lists.

The pytracemalloc tool requires a patch to Python to hook memory allocation functions. I posted the patch there: http://bugs.python.org/issue3329

Thanks to this patch, it would also be possible to enable or disable debug memory allocators (ex: _PyMem_DebugMalloc vs PyMem_Malloc) at runtime, instead of having to decide at compile time. Ezio proposed a similar idea for the "[X refs, Y blocks]" message display at Python (compiled in debug mode) exit. He proposed to disable the message by default, and add a (command line) option to show the message: http://bugs.python.org/issue17323

--

Wyplay is proud to announce the release of a new tool to track Python memory allocations: "pytracemalloc".

https://pypi.python.org/pypi/pytracemalloc
https://github.com/wyplay/pytracemalloc

pytracemalloc provides the following information:

- Allocated size and number of allocations per file, or optionally per file and line number
- Compute the average size of memory allocations
- Compute delta between two "snapshots"
- Get the source of a memory allocation: filename and line number

It helps to track memory leaks: it shows directly in which Python files the memory increases.
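As a rough illustration of what a "delta between two snapshots" means here, the following plain-Python sketch (an illustration only, not pytracemalloc's actual API; the file names and byte counts are made up) compares two per-file totals and reports which files grew the most:

    # Illustration only: compare two {filename: allocated_bytes} snapshots
    # and list the per-file growth, largest first.
    def snapshot_delta(old, new):
        files = set(old) | set(new)
        delta = dict((name, new.get(name, 0) - old.get(name, 0)) for name in files)
        return sorted(delta.items(), key=lambda item: item[1], reverse=True)

    before = {'Lib/linecache.py': 102400, 'Lib/doctest.py': 51200}
    after = {'Lib/linecache.py': 417792, 'Lib/doctest.py': 51200,
             'Lib/unittest/case.py': 166912}

    for filename, growth in snapshot_delta(before, after):
        print("%s: %+d bytes" % (filename, growth))

pytracemalloc gathers this kind of per-file (and optionally per-line) data itself, which is what the reports below show.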
Example of pytracemalloc output (compact): 2013-02-28 23:40:18: Top 5 allocations per file #1: .../Lib/test/regrtest.py: 3998 KB #2: .../Lib/unittest/case.py: 2343 KB #3: .../ctypes/test/__init__.py: 513 KB #4: .../Lib/encodings/__init__.py: 525 KB #5: .../Lib/compiler/transformer.py: 438 KB other: 32119 KB Total allocated size: 39939 KB Example of pytracemalloc output (full): 2013-03-04 01:01:55: Top 10 allocations per file and line #1: .../2.7/Lib/linecache.py:128: size=408 KiB (+408 KiB), count=5379 (+5379), average=77 B #2: .../unittest/test/__init__.py:14: size=401 KiB (+401 KiB), count=6668 (+6668), average=61 B #3: .../2.7/Lib/doctest.py:506: size=319 KiB (+319 KiB), count=197 (+197), average=1 KiB #4: .../Lib/test/regrtest.py:918: size=429 KiB (+301 KiB), count=5806 (+3633), average=75 B #5: .../Lib/unittest/case.py:332: size=162 KiB (+136 KiB), count=452 (+380), average=367 B #6: .../Lib/test/test_doctest.py:8: size=105 KiB (+105 KiB), count=1125 (+1125), average=96 B #7: .../Lib/unittest/main.py:163: size=77 KiB (+77 KiB), count=1149 (+1149), average=69 B #8: .../Lib/test/test_types.py:7: size=75 KiB (+75 KiB), count=1644 (+1644), average=46 B #9: .../2.7/Lib/doctest.py:99: size=64 KiB (+64 KiB), count=1000 (+1000), average=66 B #10: .../Lib/test/test_exceptions.py:6: size=56 KiB (+56 KiB), count=932 (+932), average=61 B 3023 more: size=1580 KiB (+1138 KiB), count=12635 (+7801), average=128 B Total: size=3682 KiB (+3086 KiB), count=36987 (+29908), average=101 B To install pytracemalloc, you need to patch and recompile your own version of Python to be able to hook all Python memory allocations. -- Wyplay was created in March 2006 in the south of France. Independent, Europe-based, and internationally recognized, Wyplay?s TV-centric software solutions power the world?s most popular operator and consumer electronic brand names. Targeted products includes: Connected-HDTVs, Media Center CE devices, HD IPTV boxes, DVB-S/C/T HD STBs, and in-home media-HDD products. http://www.wyplay.com/ Victor From status at bugs.python.org Fri Mar 8 18:07:03 2013 From: status at bugs.python.org (Python tracker) Date: Fri, 8 Mar 2013 18:07:03 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20130308170703.5F554568ED@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2013-03-01 - 2013-03-08) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3895 (+18) closed 25265 (+38) total 29160 (+56) Open issues with patches: 1703 Issues opened (42) ================== #10886: Unhelpful backtrace for multiprocessing.Queue http://bugs.python.org/issue10886 reopened by neologix #14489: repr() function link on the built-in function documentation is http://bugs.python.org/issue14489 reopened by r.david.murray #17330: Stop checking for directory cache invalidation in importlib http://bugs.python.org/issue17330 opened by erik.bray #17332: typo in json docs - "convered" should be "converted" http://bugs.python.org/issue17332 opened by ernest #17335: FieldStorageClass is messed up http://bugs.python.org/issue17335 opened by Neal.Norwitz #17337: input() and raw_input() do not work correctly with colored pro http://bugs.python.org/issue17337 opened by mic_e #17338: Add length_hint parameter to list, dict, set constructors to a http://bugs.python.org/issue17338 opened by alex #17339: bytes() TypeError message is misleadingly narrow http://bugs.python.org/issue17339 opened by terry.reedy #17340: Handle malformed cookie http://bugs.python.org/issue17340 opened by keakon #17341: Poor error message when compiling invalid regex http://bugs.python.org/issue17341 opened by roysmith #17342: datetime.strptime does not implement %z http://bugs.python.org/issue17342 opened by zwn #17343: Add a version of str.split which returns an iterator http://bugs.python.org/issue17343 opened by alex #17344: checking size of size_t... configure: error: http://bugs.python.org/issue17344 opened by shilpi #17345: Portable and extended type specifiers for array module http://bugs.python.org/issue17345 opened by nnemkin #17348: Unicode - encoding seems to be lost for inputs of unicode char http://bugs.python.org/issue17348 opened by Pradyun.Gedam #17349: wsgiref.simple_server.demo_app is not PEP-3333 compatible http://bugs.python.org/issue17349 opened by kedder #17350: Use STAF call python script will case 1124861 issue in 2.7.2 v http://bugs.python.org/issue17350 opened by gwtking #17351: Fixed python3 descriptor documentation example + removal of ex http://bugs.python.org/issue17351 opened by pelson #17352: Be clear that __prepare__ must be declared as a class method http://bugs.python.org/issue17352 opened by ncoghlan #17353: Plistlib outputs empty data tags when deeply nested http://bugs.python.org/issue17353 opened by jfortier #17354: TypeError when running setup.py upload --show-response http://bugs.python.org/issue17354 opened by mitya57 #17357: Add missing verbosity message to importlib http://bugs.python.org/issue17357 opened by brett.cannon #17358: imp.load_module() leads to the improper caching of the 'file' http://bugs.python.org/issue17358 opened by brett.cannon #17359: python modules.zip is not documented http://bugs.python.org/issue17359 opened by lemburg #17360: Regular expressions on mmap'd files can overflow http://bugs.python.org/issue17360 opened by fatlotus #17362: enable-new-dtags only for GNU ELF linker http://bugs.python.org/issue17362 opened by rpetrov #17365: Remove Python 2 code from test_print http://bugs.python.org/issue17365 opened by berker.peksag #17368: Python version of JSON decoder does not work with object_pairs http://bugs.python.org/issue17368 opened by Kuukunen #17369: Message.get_filename produces exception if the RFC2231 encodin http://bugs.python.org/issue17369 opened by r.david.murray #17370: PEP should note if it has been superseded http://bugs.python.org/issue17370 opened by brandon-rhodes #17371: 
Mismatch between Python 3.3 build environment and distutils co http://bugs.python.org/issue17371 opened by mayaa #17372: provide pretty printer for xml.etree.ElementTree http://bugs.python.org/issue17372 opened by eric.snow #17373: Add inspect.Signature.from_callable() http://bugs.python.org/issue17373 opened by eric.snow #17374: Remove restriction against Semaphore having a negative value http://bugs.python.org/issue17374 opened by rhettinger #17375: Add docstrings to methods in the threading module http://bugs.python.org/issue17375 opened by rhettinger #17376: TimedRotatingFileHandler documentation regarding 'Week day' la http://bugs.python.org/issue17376 opened by tshepang #17380: initproc return value is unclear http://bugs.python.org/issue17380 opened by zbysz #17381: IGNORECASE breaks unicode literal range matching http://bugs.python.org/issue17381 opened by acdha #17382: debugging with idle: current line not highlighted http://bugs.python.org/issue17382 opened by dzabel #17383: Possibly ambiguous phrasing in tutorial/modules#more-on-module http://bugs.python.org/issue17383 opened by Piotr.Kuchta #17384: test_logging failures on Windows http://bugs.python.org/issue17384 opened by ezio.melotti #17385: Use deque instead of list the threading.Condition waiter queue http://bugs.python.org/issue17385 opened by rhettinger Most recent 15 issues with no replies (15) ========================================== #17385: Use deque instead of list the threading.Condition waiter queue http://bugs.python.org/issue17385 #17373: Add inspect.Signature.from_callable() http://bugs.python.org/issue17373 #17372: provide pretty printer for xml.etree.ElementTree http://bugs.python.org/issue17372 #17365: Remove Python 2 code from test_print http://bugs.python.org/issue17365 #17362: enable-new-dtags only for GNU ELF linker http://bugs.python.org/issue17362 #17354: TypeError when running setup.py upload --show-response http://bugs.python.org/issue17354 #17351: Fixed python3 descriptor documentation example + removal of ex http://bugs.python.org/issue17351 #17350: Use STAF call python script will case 1124861 issue in 2.7.2 v http://bugs.python.org/issue17350 #17349: wsgiref.simple_server.demo_app is not PEP-3333 compatible http://bugs.python.org/issue17349 #17348: Unicode - encoding seems to be lost for inputs of unicode char http://bugs.python.org/issue17348 #17345: Portable and extended type specifiers for array module http://bugs.python.org/issue17345 #17342: datetime.strptime does not implement %z http://bugs.python.org/issue17342 #17340: Handle malformed cookie http://bugs.python.org/issue17340 #17335: FieldStorageClass is messed up http://bugs.python.org/issue17335 #17332: typo in json docs - "convered" should be "converted" http://bugs.python.org/issue17332 Most recent 15 issues waiting for review (15) ============================================= #17385: Use deque instead of list the threading.Condition waiter queue http://bugs.python.org/issue17385 #17376: TimedRotatingFileHandler documentation regarding 'Week day' la http://bugs.python.org/issue17376 #17375: Add docstrings to methods in the threading module http://bugs.python.org/issue17375 #17373: Add inspect.Signature.from_callable() http://bugs.python.org/issue17373 #17369: Message.get_filename produces exception if the RFC2231 encodin http://bugs.python.org/issue17369 #17365: Remove Python 2 code from test_print http://bugs.python.org/issue17365 #17362: enable-new-dtags only for GNU ELF linker http://bugs.python.org/issue17362 #17354: TypeError 
when running setup.py upload --show-response http://bugs.python.org/issue17354 #17351: Fixed python3 descriptor documentation example + removal of ex http://bugs.python.org/issue17351 #17338: Add length_hint parameter to list, dict, set constructors to a http://bugs.python.org/issue17338 #17330: Stop checking for directory cache invalidation in importlib http://bugs.python.org/issue17330 #17329: Document unittest.SkipTest http://bugs.python.org/issue17329 #17325: improve organization of the PyPI distutils docs http://bugs.python.org/issue17325 #17324: SimpleHTTPServer serves files even if the URL has a trailing s http://bugs.python.org/issue17324 #17323: Disable [X refs, Y blocks] ouput in debug builds http://bugs.python.org/issue17323 Top 10 most discussed issues (10) ================================= #17322: urllib.request add_header() currently allows trailing spaces ( http://bugs.python.org/issue17322 17 msgs #13564: ftplib and sendfile() http://bugs.python.org/issue13564 16 msgs #17330: Stop checking for directory cache invalidation in importlib http://bugs.python.org/issue17330 15 msgs #17338: Add length_hint parameter to list, dict, set constructors to a http://bugs.python.org/issue17338 15 msgs #13477: tarfile module should have a command line http://bugs.python.org/issue13477 14 msgs #12921: http.server.BaseHTTPRequestHandler.send_error and trailing new http://bugs.python.org/issue12921 13 msgs #10967: move regrtest over to using more unittest infrastructure http://bugs.python.org/issue10967 10 msgs #12768: docstrings for the threading module http://bugs.python.org/issue12768 8 msgs #17383: Possibly ambiguous phrasing in tutorial/modules#more-on-module http://bugs.python.org/issue17383 8 msgs #16997: subtests http://bugs.python.org/issue16997 7 msgs Issues closed (38) ================== #11732: Skip decorator for tests requiring manual intervention on Wind http://bugs.python.org/issue11732 closed by ezio.melotti #11787: File handle leak in TarFile lib http://bugs.python.org/issue11787 closed by ezio.melotti #13747: ssl_version documentation error http://bugs.python.org/issue13747 closed by pitrou #13898: Ignored exception in test_ssl http://bugs.python.org/issue13898 closed by nadeem.vawda #14123: Indicate that there are no current plans to deprecate printf-s http://bugs.python.org/issue14123 closed by ezio.melotti #14645: Generator does not translate linesep characters in certain cir http://bugs.python.org/issue14645 closed by r.david.murray #15448: utimes() functions fail with ENOSYS even when detected by conf http://bugs.python.org/issue15448 closed by neologix #15465: Improved documentation for C API version info http://bugs.python.org/issue15465 closed by python-dev #16669: Docstrings for namedtuple http://bugs.python.org/issue16669 closed by rhettinger #16848: Mac OS X: python-config --ldflags and location of Python.frame http://bugs.python.org/issue16848 closed by ned.deily #16860: Use O_CLOEXEC in the tempfile module http://bugs.python.org/issue16860 closed by neologix #16962: _posixsubprocess module uses outdated getdents system call http://bugs.python.org/issue16962 closed by gregory.p.smith #17032: Misleading error message: global name 'X' is not defined http://bugs.python.org/issue17032 closed by ezio.melotti #17146: Improve test.support.import_fresh_module() http://bugs.python.org/issue17146 closed by eric.snow #17278: SIGSEGV in _heapqmodule.c http://bugs.python.org/issue17278 closed by pitrou #17298: Twisted test failure triggered by change in 2.7 branch 
http://bugs.python.org/issue17298 closed by ezio.melotti #17302: HTTP/2.0 - Implementations/Testing efforts http://bugs.python.org/issue17302 closed by terry.reedy #17309: __bytes__ doesn't work in subclass of int and str http://bugs.python.org/issue17309 closed by benjamin.peterson #17312: test_aifc doesn't clean up after itself http://bugs.python.org/issue17312 closed by ezio.melotti #17315: test_posixpath doesn't clean up after itself http://bugs.python.org/issue17315 closed by ezio.melotti #17327: Add PyDict_GetItemSetDefault() as C-API for dict.setdefault() http://bugs.python.org/issue17327 closed by python-dev #17328: Fix reference leak in dict_setdefault() in case of resize fail http://bugs.python.org/issue17328 closed by python-dev #17331: namedtuple raises a SyntaxError instead of ValueError on inval http://bugs.python.org/issue17331 closed by rhettinger #17333: Fix test discovery for test_imaplib.py http://bugs.python.org/issue17333 closed by ezio.melotti #17334: Fix test discovery for test_index.py http://bugs.python.org/issue17334 closed by ezio.melotti #17336: Complex number representation round-trip doesn't work with sig http://bugs.python.org/issue17336 closed by mark.dickinson #17346: Pickle tests do not test protocols 0, 1, and 2 for bytes http://bugs.python.org/issue17346 closed by ezio.melotti #17347: bsddb._openDBEnv() should not touch current directory http://bugs.python.org/issue17347 closed by jcea #17355: http tests testing more than the error code are fragile http://bugs.python.org/issue17355 closed by r.david.murray #17356: Invalid link to repr() built-in function description http://bugs.python.org/issue17356 closed by r.david.murray #17361: use CC to test compiler flags in setup.py http://bugs.python.org/issue17361 closed by skrah #17363: Argument Mixup in PyState_AddModule http://bugs.python.org/issue17363 closed by ezio.melotti #17364: Multiprocessing documentation mentions function that doesn't e http://bugs.python.org/issue17364 closed by ezio.melotti #17366: os.chdir win32 http://bugs.python.org/issue17366 closed by amaury.forgeotdarc #17367: subprocess deadlock when read() is interrupted http://bugs.python.org/issue17367 closed by sbt #17377: JSON module in standard library behaves incorrectly on input l http://bugs.python.org/issue17377 closed by r.david.murray #17378: Document that ctypes automatically applies byref() when argtyp http://bugs.python.org/issue17378 closed by python-dev #17379: Zen amendment http://bugs.python.org/issue17379 closed by brett.cannon From steve at pearwood.info Sat Mar 9 02:13:54 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 09 Mar 2013 12:13:54 +1100 Subject: [Python-Dev] FileCookieJars In-Reply-To: References: Message-ID: <513A8CD2.8090105@pearwood.info> On 02/03/13 02:43, Demian Brecht wrote: > Cross-posting from python-ideas due to no response there. Perhaps it's > due to a general lack of usage/caring for cookiejar, but figured > /someone/'s got to have an opinion about my proposal ;) Apparently not :-( > TL;DR: CookieJar > FileCookieJar > *CookieJar are architecturally > broken and this is an attempt to rectify that (and fix a couple bugs > along the way). [...] > This will obviously break backwards compatibility, so I'm not entirely > sure what best practice is around that: leave well enough alone even > though it might not make sense, keep the old implementations around > and deprecate them to be eventually replaced by the processors, or > other ideas? 
I don't have an opinion on cookiejars per se, but I think that the first thing to do is get an idea of just how major a backward-compatibility breakage this would be. If you change the cookiejar architecture, then run the Python test suite, what happens? The number of failures will give you an idea of how bad it will be. If there are no failures, you could consider just making the change. You probably should make an attempt to find out what third party apps use the cookiejars and see what they do. If there are failures, then you need to add a second cookiejar implementation, and deprecate the old one. Oh, and please don't call the new cookier jar anything like "NewCookieJar". Because in a few years, it won't be. Actually, I lied, I do have an opinion on cookiejars. I agree with Terry that it is a bit weird to have an ABC inherit from a concrete class. Not just weird, but a violation of the Liskov Substitution Principle that an instance of a subclass should be usable anywhere an instance of the parent class is. If you can't even instantiate the subclass, that's a pretty major violation for no apparent benefit :-) -- Steven From rdmurray at bitdance.com Sun Mar 10 21:59:40 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Sun, 10 Mar 2013 16:59:40 -0400 Subject: [Python-Dev] FileCookieJars In-Reply-To: <513A8CD2.8090105@pearwood.info> References: <513A8CD2.8090105@pearwood.info> Message-ID: <20130310205940.D93E8250BC3@webabinitio.net> On Sat, 09 Mar 2013 12:13:54 +1100, Steven D'Aprano wrote: > On 02/03/13 02:43, Demian Brecht wrote: > > Cross-posting from python-ideas due to no response there. Perhaps it's > > due to a general lack of usage/caring for cookiejar, but figured > > /someone/'s got to have an opinion about my proposal ;) > > Apparently not :-( > > > > TL;DR: CookieJar > FileCookieJar > *CookieJar are architecturally > > broken and this is an attempt to rectify that (and fix a couple bugs > > along the way). > [...] > > This will obviously break backwards compatibility, so I'm not entirely > > sure what best practice is around that: leave well enough alone even > > though it might not make sense, keep the old implementations around > > and deprecate them to be eventually replaced by the processors, or > > other ideas? > > I don't have an opinion on cookiejars per se, but I think that the first > thing to do is get an idea of just how major a backward-compatibility > breakage this would be. If you change the cookiejar architecture, then > run the Python test suite, what happens? The number of failures will give > you an idea of how bad it will be. > > If there are no failures, you could consider just making the change. You > probably should make an attempt to find out what third party apps use the > cookiejars and see what they do. To be clear, just passing the stdlib tests is *not* sufficient to think that backward compatibility is not likely to be broken. Deciding about the likelihood of breakage is a hard problem, to which we generally employ gut-level heuristics :) (And code search, as Steven suggests). Since you say that it will "obviously" break backward compatibility, I'd say that if we are going to do anything we'd have to think about how best to introduce a more sane implementation and deprecate the old...and if we are going to do that, we probably ought to spend a bit of time seeing if there are any other open cookiejar issues we can tackle at the same time. 
If, that is, you are interested enough to continue to be the point person for this, which probably won't be a short process :)

The problem here is getting people interested, apparently :(

Since I start my Pycon diversion-from-work next week, maybe I can find some time to take at least a preliminary look.

--David

From tjreedy at udel.edu Sun Mar 10 22:36:59 2013
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 10 Mar 2013 17:36:59 -0400
Subject: [Python-Dev] FileCookieJars
In-Reply-To: <20130310205940.D93E8250BC3@webabinitio.net>
References: <513A8CD2.8090105@pearwood.info> <20130310205940.D93E8250BC3@webabinitio.net>
Message-ID:

On 3/10/2013 4:59 PM, R. David Murray wrote:
> To be clear, just passing the stdlib tests is *not* sufficient to think
> that backward compatibility is not likely to be broken. Deciding about
> the likelihood of breakage is a hard problem, to which we generally
> employ gut-level heuristics :) (And code search, as Steven suggests).
>
> Since you say that it will "obviously" break backward compatibility, I'd
> say that if we are going to do anything we'd have to think about how best
> to introduce a more sane implementation and deprecate the old...and if we
> are going to do that, we probably ought to spend a bit of time seeing if
> there are any other open cookiejar issues we can tackle at the same time.

A) For similar reasons, I consider the proposal a first draft, and probably not the exact right thing to do.

B) I have had similar thoughts about taking a broader look. Searching open issues for cookie gets 24 hits and I think at least half are about cookie.py or cookiejar.py.

> If, that is, you are interested enough to continue to be the point person
> for this, which probably won't be a short process :)
>
> The problem here is getting people interested, apparently :(

The number of relatively recent problem reports indicates that people are using the two modules, so fixing them is worthwhile in that sense. On the other hand, no, it does not seem that any *current* developers are working with cookies.

Messages on http://bugs.python.org/issue17340 suggest that cookie.py should be based on http://tools.ietf.org/html/rfc6265. I added you as nosy to get your opinion.

> Since I start my Pycon diversion-from-work next week, maybe I can find
> some time to take at least a preliminary look.

I am willing to learn and help, but my only experience with them is as a browser user defending against the onslaught of cookies. (I once sped up IExplorer by deleting a massive cookie cache.)
-- Terry Jan Reedy From ezio.melotti at gmail.com Mon Mar 11 02:14:37 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Mon, 11 Mar 2013 03:14:37 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #17385: Fix quadratic behavior in threading.Condition In-Reply-To: <3ZPLXL0jVYzS7C@mail.python.org> References: <3ZPLXL0jVYzS7C@mail.python.org> Message-ID: Hi, On Mon, Mar 11, 2013 at 2:58 AM, raymond.hettinger wrote: > http://hg.python.org/cpython/rev/0f86b51f8f8b > changeset: 82592:0f86b51f8f8b > user: Raymond Hettinger > date: Sun Mar 10 17:57:28 2013 -0700 > summary: > Issue #17385: Fix quadratic behavior in threading.Condition > > files: > Lib/threading.py | 10 ++++++++-- > Misc/NEWS | 3 +++ > 2 files changed, 11 insertions(+), 2 deletions(-) > > > diff --git a/Lib/threading.py b/Lib/threading.py > --- a/Lib/threading.py > +++ b/Lib/threading.py > @@ -10,6 +10,12 @@ > from time import time as _time > from traceback import format_exc as _format_exc > from _weakrefset import WeakSet > +try: > + from _itertools import islice as _slice > + from _collections import deque as _deque > +except ImportError: > + from itertools import islice as _islice > + from collections import deque as _deque > Shouldn't the one in the 'try' be _islice too? Best Regards, Ezio Melotti From demianbrecht at gmail.com Mon Mar 11 06:46:26 2013 From: demianbrecht at gmail.com (Demian Brecht) Date: Sun, 10 Mar 2013 22:46:26 -0700 Subject: [Python-Dev] FileCookieJars In-Reply-To: <20130310205940.D93E8250BC3@webabinitio.net> References: <513A8CD2.8090105@pearwood.info> <20130310205940.D93E8250BC3@webabinitio.net> Message-ID: <513D6FB2.4020105@gmail.com> On 2013-03-10 1:59 PM, R. David Murray wrote: > To be clear, just passing the stdlib tests is*not* sufficient to think > that backward compatibility is not likely to be broken. Deciding about > the likelihood of breakage is a hard problem, to which we generally > employ gut-level heuristics:) (And code search, as Steven suggests). I figured that this would be a hard problem, which is also why I didn't delve into a patch further than a proposed first stab at a more sane implementation, coupled with changes to the unit tests. > Since you say that it will "obviously" break backward compatibility, I'd > say that if we are going to do anything we'd have to think about how best > to introduce a more sane implementation and deprecate the old...and if we > are going to do that, we probably ought to spend a bit of time seeing if > there are any other open cookiejar issues we can tackle at the same time. I was hoping that there would be a little more interest (and potentially some further historical context on why the module was implemented as it was) from those in the group. > If, that is, you are interested enough to continue to be the point person > for this, which probably won't be a short process:) I'm not sure who this was directed to (me or Steven), but I was looking for an area in the stdlib that I could really sink my teeth into and get my hands dirty with and this definitely seems to be just that. I figured that it wouldn't be a short process and the more that I read up on RFC 6265 (and 2965) and compare them to the implementation in cookie and cookiejar, the more I'm thinking that this will be a relatively complex and lengthy process. (Definitely interested in that btw :)). 
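To make that comparison concrete, the parsing surface on the http.cookies side is tiny; a minimal Python 3 sketch (the cookie string is invented) looks like this:

    from http.cookies import SimpleCookie

    # Parse a Set-Cookie style header value (invented example data).
    jar = SimpleCookie()
    jar.load('sessionid=abc123; Path=/; Max-Age=3600')

    morsel = jar['sessionid']
    print(morsel.value)       # abc123
    print(morsel['path'])     # /
    print(morsel['max-age'])  # 3600

The open question is how far behaviour like this, and the cookiejar policies built on top of it, can move toward RFC 6265 without breaking existing callers.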
> > The problem here is getting people interested, apparently :(
> >
> > Since I start my Pycon diversion-from-work next week, maybe I can find
> > some time to take at least a preliminary look.

In case you haven't already seen it, I had posted a second patch (that doesn't break the Liskov substitution principle as Terry pointed out after reviewing my overzealous initial patch) here: http://bugs.python.org/issue16901. I think the design is much more sane than what's currently there and aligns with how HTTP cookies are processed in urllib.request.

Now having said all that, the more I think about it and the more I read, the more I wonder why there are even specialized implementations (LWP and Mozilla) in the stdlib to begin with. I would assume that the only thing that the stdlib /should/ be covering is the RFC (6265, but still allowing 2965). If there are deviations (and some are alluded to throughout the code), then I would think that those should be handled by packages external to the stdlib. It seems that the Mozilla implementation covers 2965, but LWP is based on the Perl library (which isn't known to be supported by any browser environment). Why is this even there to begin with? To paraphrase the comments that I read in the code: "This isn't supported by any browser, but they're easy to parse". In my mind, this shouldn't be reason enough for inclusion in the stdlib.

I'd also go as far as to say that if cookies are implemented as consistently as, say, OAuth 2.0 providers (meaning very little to no consistency), then there really shouldn't be a cookie implementation in the stdlib anyway.

So to sum it up, yes I'm very much interested in doing what I can to help the development of the stdlib (more so interested in parts that don't currently have experts listed, such as http and imaplib), but will definitely need to be shown the ropes a bit as my professional life has revolved around closed source games.

From demianbrecht at gmail.com Mon Mar 11 06:48:33 2013
From: demianbrecht at gmail.com (Demian Brecht)
Date: Sun, 10 Mar 2013 22:48:33 -0700
Subject: [Python-Dev] FileCookieJars
In-Reply-To: References: <513A8CD2.8090105@pearwood.info> <20130310205940.D93E8250BC3@webabinitio.net>
Message-ID: <513D7031.1000603@gmail.com>

On 2013-03-10 2:36 PM, Terry Reedy wrote:
> A) For similar reasons, I consider the proposal a first draft, and
> probably not the exact right thing to do.

That is correct. The more I think about it, the more I'm convincing myself that even though the proposal is more sane than what's there right now, it's definitely not the exact correct thing to do.

From rdmurray at bitdance.com Mon Mar 11 13:44:01 2013
From: rdmurray at bitdance.com (R. David Murray)
Date: Mon, 11 Mar 2013 08:44:01 -0400
Subject: [Python-Dev] FileCookieJars
In-Reply-To: <513D6FB2.4020105@gmail.com>
References: <513A8CD2.8090105@pearwood.info> <20130310205940.D93E8250BC3@webabinitio.net> <513D6FB2.4020105@gmail.com>
Message-ID: <20130311124402.F0AD3250BCD@webabinitio.net>

On Sun, 10 Mar 2013 22:46:26 -0700, Demian Brecht wrote:
> On 2013-03-10 1:59 PM, R. David Murray wrote:
> I was hoping that there would be a little more interest (and potentially
> some further historical context on why the module was implemented as it
> was) from those in the group.

It isn't clear who wrote the original code. It looks like Martin von Löwis checked it in, so he may be the author (or not). It is pretty old code, checked in May 31 18:22:40 2004, according to the repo history.
> > If, that is, you are interested enough to continue to be the point person > > for this, which probably won't be a short process:) > > I'm not sure who this was directed to (me or Steven), but I was looking > for an area in the stdlib that I could really sink my teeth into and get > my hands dirty with and this definitely seems to be just that. I figured > that it wouldn't be a short process and the more that I read up on RFC > 6265 (and 2965) and compare them to the implementation in cookie and > cookiejar, the more I'm thinking that this will be a relatively complex > and lengthy process. (Definitely interested in that btw :)). It was directed to you. We love having people pick up maintenance of modules that don't currently have someone specifically interested in them, so it is great that you are interested. If you produce code and proposals and keep asking, people *will* respond, though some patience and persistence may be required. > > The problem here is getting people interested, apparently:( > > > > Since I start my Pycon diversion-from-work next week, maybe I can find > > some time to take at least a preliminary look. > > In case you haven't already seen it, I had posted a second patch (that > doesn't break the Liskov substitution principle as Terry pointed out > after reviewing my overzealous initial patch) here: > http://bugs.python.org/issue16901. I think the design is much more sane > than what's currently there and aligns with how HTTP cookies are > processed in urllib.request. I haven't looked it over yet, but I put it on my todo list. > Now having said all that, the more I think about it and the more I read, > the more I wonder why there are even specialized implementations (LWP > and Mozilla) in the stdlib to begin with. I would assume that the only > thing that the stdlib /should/ be covering is the RFC (6265, but still > allowing 2965). Because reality. Take a look at http://bugs.python.org/issue2193 (for example), and see if you still want to tackle this topic :) (I hope you do). > If there are deviations (and some are eluded to throughout the code), > then I would think that those should be handled by packages external to > the stdlib. It seems that the Mozilla implementation covers 2965, but > LWP is based on the Perl library (which isn't known to be supported by > any browser environment). Why is this even there to begin with? To > paraphrase the comments that I read in the code: "This isn't supported > by any browser, but they're easy to parse". In my mind, this shouldn't > be reason enough for inclusion in the stdlib. Well, at the time it probably was. And given that it is there, *someone* is probably depending on it. But, we can probably pay less attention to that variant, and perhaps not carry it forward if we do decide to go through a deprecation of some sort (*). The other reality is that our cookie support won't be very useful if it adheres strictly to the RFCs, since the servers and browsers don't. What we need is something practical...which may differ to a greater or lesser degree from what we currently have. > I'd also go as far to say that if cookies are implemented as > consistently as, say, OAuth 2.0 providers (meaning very little to no > consistency), then there really shouldn't be a cookie implementation in > the stdlib anyway. But there is, and in fact it *is* useful and used by many people, so IMO it is worth maintaining. 
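For reference, the pattern most of that existing code relies on is only a few lines; here is a minimal Python 3 sketch (the URL and filename are placeholders) of the FileCookieJar-based flow under discussion:

    import urllib.request
    from http.cookiejar import LWPCookieJar

    # A file-backed jar: cookies the server sets are captured by the opener
    # and can be saved for reuse in a later session.
    jar = LWPCookieJar('cookies.lwp')
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    opener.open('http://www.example.com/')   # placeholder URL

    jar.save(ignore_discard=True)            # write cookies.lwp
    for cookie in jar:
        print(cookie.name, cookie.value)

Whatever reorganization happens underneath, this is the surface that needs to keep working.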
> So to sum it up, yes I'm very much interested in doing what I can to > help the development of the stdlib (more so interested in parts that > don't currently have experts listed, such as http and imaplib), but will > definitely need to be shown the ropes a bit as my professional life has > revolved around closed source games. Excellent. If you aren't already on the core-mentorship mailing list, you might want to sign up. Your approach (adopting modules without current maintainers) is a good one. --David (*) Our deprecation for stuff like this tends to be that we pretty much stop maintaining it, document it as deprecated, but don't delete it. From brett at python.org Mon Mar 11 14:22:38 2013 From: brett at python.org (Brett Cannon) Date: Mon, 11 Mar 2013 09:22:38 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.7): #16004: Add `make touch`. In-Reply-To: <3ZPVtS3mkkzSHQ@mail.python.org> References: <3ZPVtS3mkkzSHQ@mail.python.org> Message-ID: Should this also touch Python/importlib.h? On Mon, Mar 11, 2013 at 3:14 AM, ezio.melotti wrote: > http://hg.python.org/cpython/rev/da3f4774b939 > changeset: 82600:da3f4774b939 > branch: 2.7 > parent: 82593:3e14aafeca04 > user: Ezio Melotti > date: Mon Mar 11 09:14:09 2013 +0200 > summary: > #16004: Add `make touch`. > > files: > Makefile.pre.in | 6 +++++- > Misc/NEWS | 2 ++ > 2 files changed, 7 insertions(+), 1 deletions(-) > > > diff --git a/Makefile.pre.in b/Makefile.pre.in > --- a/Makefile.pre.in > +++ b/Makefile.pre.in > @@ -1250,6 +1250,10 @@ > etags Include/*.h; \ > for i in $(SRCDIRS); do etags -a $$i/*.[ch]; done > > +# Touch generated files > +touch: > + touch Include/Python-ast.h Python/Python-ast.c > + > # Sanitation targets -- clean leaves libraries, executables and tags > # files, which clobber removes as well > pycremoval: > @@ -1339,7 +1343,7 @@ > .PHONY: frameworkinstall frameworkinstallframework > frameworkinstallstructure > .PHONY: frameworkinstallmaclib frameworkinstallapps > frameworkinstallunixtools > .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber > distclean > -.PHONY: smelly funny patchcheck altmaninstall > +.PHONY: smelly funny patchcheck touch altmaninstall > .PHONY: gdbhooks > > # IF YOU PUT ANYTHING HERE IT WILL GO AWAY > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -874,6 +874,8 @@ > Build > ----- > > +- Issue #16004: Add `make touch`. > + > - Issue #5033: Fix building of the sqlite3 extension module when the > SQLite library version has "beta" in it. Patch by Andreas Pelme. > > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Mar 11 14:23:07 2013 From: brett at python.org (Brett Cannon) Date: Mon, 11 Mar 2013 09:23:07 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.7): #16004: Add `make touch`. In-Reply-To: References: <3ZPVtS3mkkzSHQ@mail.python.org> Message-ID: On Mon, Mar 11, 2013 at 9:22 AM, Brett Cannon wrote: > Should this also touch Python/importlib.h? > > nm, noticed this was added on 2.7 and not default. 
> > On Mon, Mar 11, 2013 at 3:14 AM, ezio.melotti wrote: > >> http://hg.python.org/cpython/rev/da3f4774b939 >> changeset: 82600:da3f4774b939 >> branch: 2.7 >> parent: 82593:3e14aafeca04 >> user: Ezio Melotti >> date: Mon Mar 11 09:14:09 2013 +0200 >> summary: >> #16004: Add `make touch`. >> >> files: >> Makefile.pre.in | 6 +++++- >> Misc/NEWS | 2 ++ >> 2 files changed, 7 insertions(+), 1 deletions(-) >> >> >> diff --git a/Makefile.pre.in b/Makefile.pre.in >> --- a/Makefile.pre.in >> +++ b/Makefile.pre.in >> @@ -1250,6 +1250,10 @@ >> etags Include/*.h; \ >> for i in $(SRCDIRS); do etags -a $$i/*.[ch]; done >> >> +# Touch generated files >> +touch: >> + touch Include/Python-ast.h Python/Python-ast.c >> + >> # Sanitation targets -- clean leaves libraries, executables and tags >> # files, which clobber removes as well >> pycremoval: >> @@ -1339,7 +1343,7 @@ >> .PHONY: frameworkinstall frameworkinstallframework >> frameworkinstallstructure >> .PHONY: frameworkinstallmaclib frameworkinstallapps >> frameworkinstallunixtools >> .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber >> distclean >> -.PHONY: smelly funny patchcheck altmaninstall >> +.PHONY: smelly funny patchcheck touch altmaninstall >> .PHONY: gdbhooks >> >> # IF YOU PUT ANYTHING HERE IT WILL GO AWAY >> diff --git a/Misc/NEWS b/Misc/NEWS >> --- a/Misc/NEWS >> +++ b/Misc/NEWS >> @@ -874,6 +874,8 @@ >> Build >> ----- >> >> +- Issue #16004: Add `make touch`. >> + >> - Issue #5033: Fix building of the sqlite3 extension module when the >> SQLite library version has "beta" in it. Patch by Andreas Pelme. >> >> >> -- >> Repository URL: http://hg.python.org/cpython >> >> _______________________________________________ >> Python-checkins mailing list >> Python-checkins at python.org >> http://mail.python.org/mailman/listinfo/python-checkins >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Mon Mar 11 14:28:40 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 11 Mar 2013 14:28:40 +0100 Subject: [Python-Dev] cpython (2.7): #16004: Add `make touch`. References: <3ZPVtS3mkkzSHQ@mail.python.org> Message-ID: <20130311142840.09935555@pitrou.net> On Mon, 11 Mar 2013 08:14:24 +0100 (CET) ezio.melotti wrote: > http://hg.python.org/cpython/rev/da3f4774b939 > changeset: 82600:da3f4774b939 > branch: 2.7 > parent: 82593:3e14aafeca04 > user: Ezio Melotti > date: Mon Mar 11 09:14:09 2013 +0200 > summary: > #16004: Add `make touch`. Shouldn't that be mentioned / explained / documented somewhere? It doesn't look obvious in which circumstances it could be useful. Regards Antoine. From demianbrecht at gmail.com Mon Mar 11 16:35:17 2013 From: demianbrecht at gmail.com (Demian Brecht) Date: Mon, 11 Mar 2013 08:35:17 -0700 Subject: [Python-Dev] FileCookieJars In-Reply-To: <20130311124402.F0AD3250BCD@webabinitio.net> References: <513A8CD2.8090105@pearwood.info> <20130310205940.D93E8250BC3@webabinitio.net> <513D6FB2.4020105@gmail.com> <20130311124402.F0AD3250BCD@webabinitio.net> Message-ID: <513DF9B5.6040503@gmail.com> On 2013-03-11 5:44 AM, R. David Murray wrote: > though some patience > and persistence may be required. I have a wife and kids. This, I've become quite good at ;) > Take a look at http://bugs.python.org/issue2193 (for example), and see > if you still want to tackle this topic :) (I hope you do). Egad. 
I knew that cookies were quite the can of worms prior to digging into this as much as I have, but I didn't realize that the RFC had been written /after/ cookie implementations had already surfaced in the wild (I guess I shouldn't have actually been surprised either). Just makes this more challenging and therefore interesting to work on imo :) > The other reality is that our cookie support won't be very useful if > it adheres strictly to the RFCs, since the servers and browsers don't. > What we need is something practical...which may differ to a greater or > lesser degree from what we currently have. Yes, I wasn't sure of the general standpoint of Python stdlibs in terms of practicality versus strict adherence. While adhering to Postel's law in cases such as cookies can definitely make an implementation much more tricky, it increases its practical usage (I didn't realize just how deviant servers and browsers were for this particular topic until after reading through issue 2193). > But there is, and in fact it *is* useful and used by many people, so > IMO it is worth maintaining. I see your point here and agree. It's much different when changes can be dictated in closed source packages (what I'm most accustomed to) than dealing with an open source project at the scale of Python and the stdlib. > Excellent. If you aren't already on the core-mentorship mailing list, you > might want to sign up. Your approach (adopting modules without current > maintainers) is a good one. Thanks, I wasn't aware of the core-mentorship list. I'll be signing up shortly. Good to know my approach is sane :) From ezio.melotti at gmail.com Mon Mar 11 21:11:28 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Mon, 11 Mar 2013 22:11:28 +0200 Subject: [Python-Dev] cpython (2.7): #16004: Add `make touch`. In-Reply-To: <20130311142840.09935555@pitrou.net> References: <3ZPVtS3mkkzSHQ@mail.python.org> <20130311142840.09935555@pitrou.net> Message-ID: Hi, On Mon, Mar 11, 2013 at 3:28 PM, Antoine Pitrou wrote: > On Mon, 11 Mar 2013 08:14:24 +0100 (CET) > ezio.melotti wrote: >> http://hg.python.org/cpython/rev/da3f4774b939 >> changeset: 82600:da3f4774b939 >> branch: 2.7 >> parent: 82593:3e14aafeca04 >> user: Ezio Melotti >> date: Mon Mar 11 09:14:09 2013 +0200 >> summary: >> #16004: Add `make touch`. > > Shouldn't that be mentioned / explained / documented somewhere? > It doesn't look obvious in which circumstances it could be useful. > It will be documented in http://bugs.python.org/issue15964 (SyntaxError in asdl when building 2.7 with system Python 3). Best Regards, Ezio Melotti > Regards > > Antoine. 
> From ezio.melotti at gmail.com Tue Mar 12 07:52:18 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Tue, 12 Mar 2013 08:52:18 +0200 Subject: [Python-Dev] [Python-checkins] CANNOT Patch 3.x NEWS [was cpython (2.7): Issue #14707: add news entry\ In-Reply-To: <513EC7E2.90400@udel.edu> References: <3ZQ4z44QVLzRXr@mail.python.org> <513EC7E2.90400@udel.edu> Message-ID: Hi, On Tue, Mar 12, 2013 at 8:14 AM, Terry Reedy wrote: > On 3/12/2013 1:50 AM, terry.reedy wrote: >> >> http://hg.python.org/cpython/rev/c162e2ff15bd >> changeset: 82624:c162e2ff15bd >> branch: 2.7 >> parent: 82617:cd0191a9b5c9 >> user: Terry Jan Reedy >> date: Tue Mar 12 01:26:28 2013 -0400 >> summary: >> Issue #14707: add news entry >> >> files: >> Misc/NEWS | 3 +++ >> 1 files changed, 3 insertions(+), 0 deletions(-) >> >> >> diff --git a/Misc/NEWS b/Misc/NEWS >> --- a/Misc/NEWS >> +++ b/Misc/NEWS >> @@ -944,6 +944,9 @@ >> Documentation >> ------------- >> >> +- Issue #14707: remove doubled words in docs and docstrings >> + reported by Serhiy Storchaka and Matthew Barnett. >> + >> - Issue #16406: combine the pages for uploading and registering to PyPI. >> >> - Issue #16403: Document how distutils uses the maintainer field in > > > The above was easy. When I tried to transplant this patch to 3.2, export and > import, or directly edit 3.2 NEWS with Notepad++ or IDLE, hg makes a 319kb > patch that deletes and add the entire file in chunks. I did not think I > should commit and push that. > What are the exact commands you used? Are your clones up to date (i.e. did you do "hg pull" and "hg up" before "hg graft")? If not, you should pull/update. Does "hg heads ." show you more than one head? If so you should do "hg merge". Is your clone "clean" (i.e. does "hg status" show anything as 'M')? If not, you should do "hg revert -ar 3.2" or "hg up -C 3.2". Once your clone is clean you can just edit Misc/NEWS manually since it's easier than trying to graft the 2 changesets you made on 2.7 to add and edit the Misc/NEWS entry. You can also check with "hg in" and "hg out" if there's something you haven't pulled/pushed yet, but that shouldn't be a problem. > The failure of transplant and import are perhaps understandable because 3.2 > has a gratuitous case difference with /combine/Combine/. > > - Issue #16406: Combine the pages for uploading and registering to PyPI. > > But the inability to make a proper diff from direct edit is something else. > If I add just a single blank line, even that generates a mega patch. Same > with 3.3 NEWS. I also tried deleting the file to make hg regenerate from the > repository database. > > Anyone have any idea what the problem is? Has anything changed with hg, > windows, line endings and this text file in the last few months? I just > pushed patches for about 20 scattered files in Docs, Lib, Modules, and Tools > earlier today, so the problem seems to be specific to NEWS. > Not sure about this, but in the meanwhile you could try what I suggested above -- if that doesn't work we can find some other solution. (If you prefer you can come on #python-dev too.) Best Regards, Ezio Melotti > tjr > From ncoghlan at gmail.com Tue Mar 12 16:26:07 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Mar 2013 01:26:07 +1000 Subject: [Python-Dev] [Python-checkins] cpython (2.7): #16004: Add `make touch`. 
In-Reply-To: References: <3ZPVtS3mkkzSHQ@mail.python.org> Message-ID: On 11 Mar 2013 06:23, "Brett Cannon" wrote: > > > > > On Mon, Mar 11, 2013 at 9:22 AM, Brett Cannon wrote: >> >> Should this also touch Python/importlib.h? >> > > nm, noticed this was added on 2.7 and not default. Default already had it, this was a back port so that "make touch" could be given as a consistent fix for certain build problems in the devguide. (Specifically, make trying to rebuild those files when you don't yet have the necessary pieces available to do so) Cheers, Nick. > >> >> >> On Mon, Mar 11, 2013 at 3:14 AM, ezio.melotti wrote: >>> >>> http://hg.python.org/cpython/rev/da3f4774b939 >>> changeset: 82600:da3f4774b939 >>> branch: 2.7 >>> parent: 82593:3e14aafeca04 >>> user: Ezio Melotti >>> date: Mon Mar 11 09:14:09 2013 +0200 >>> summary: >>> #16004: Add `make touch`. >>> >>> files: >>> Makefile.pre.in | 6 +++++- >>> Misc/NEWS | 2 ++ >>> 2 files changed, 7 insertions(+), 1 deletions(-) >>> >>> >>> diff --git a/Makefile.pre.in b/Makefile.pre.in >>> --- a/Makefile.pre.in >>> +++ b/Makefile.pre.in >>> @@ -1250,6 +1250,10 @@ >>> etags Include/*.h; \ >>> for i in $(SRCDIRS); do etags -a $$i/*.[ch]; done >>> >>> +# Touch generated files >>> +touch: >>> + touch Include/Python-ast.h Python/Python-ast.c >>> + >>> # Sanitation targets -- clean leaves libraries, executables and tags >>> # files, which clobber removes as well >>> pycremoval: >>> @@ -1339,7 +1343,7 @@ >>> .PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure >>> .PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools >>> .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean >>> -.PHONY: smelly funny patchcheck altmaninstall >>> +.PHONY: smelly funny patchcheck touch altmaninstall >>> .PHONY: gdbhooks >>> >>> # IF YOU PUT ANYTHING HERE IT WILL GO AWAY >>> diff --git a/Misc/NEWS b/Misc/NEWS >>> --- a/Misc/NEWS >>> +++ b/Misc/NEWS >>> @@ -874,6 +874,8 @@ >>> Build >>> ----- >>> >>> +- Issue #16004: Add `make touch`. >>> + >>> - Issue #5033: Fix building of the sqlite3 extension module when the >>> SQLite library version has "beta" in it. Patch by Andreas Pelme. >>> >>> >>> -- >>> Repository URL: http://hg.python.org/cpython >>> >>> _______________________________________________ >>> Python-checkins mailing list >>> Python-checkins at python.org >>> http://mail.python.org/mailman/listinfo/python-checkins >>> >> > > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at acm.org Tue Mar 12 20:23:30 2013 From: nad at acm.org (Ned Deily) Date: Tue, 12 Mar 2013 12:23:30 -0700 Subject: [Python-Dev] CANNOT Patch 3.x NEWS [was cpython (2.7): Issue #14707: add news entry\ References: <3ZQ4z44QVLzRXr@mail.python.org> <513EC7E2.90400@udel.edu> <513F63BB.8060805@udel.edu> Message-ID: In article <513F63BB.8060805 at udel.edu>, Terry Reedy wrote: > I have tried deleting the NEWS file and reverting the deletion. > hg update does not restore the file as it apparently thinks I actually > want the uncommitted deletion. I'm not sure exactly the sequence of events here but chances are you ran into the "normal" problem of trying to merge a change that modifies Misc/NEWS. Unless you are very lucky or careful, merging Misc/NEWS changes from, say, 3.2 to 3.3 or 3.3 to default seldom works cleanly. 
After merging but before committing, you can just revert Misc/NEWS and then manually re-insert the changes. The auto-merge of Misc/NEWS is more often than not useless. -- Ned Deily, nad at acm.org From anuj at codesyrup.com Tue Mar 12 21:41:54 2013 From: anuj at codesyrup.com (Anuj Gupta) Date: Wed, 13 Mar 2013 02:11:54 +0530 Subject: [Python-Dev] Namaste, Python-Dev Message-ID: <513F9312.6070909@codesyrup.com> Hello, I've just joined Python-Dev and I intend to spend an increasing amount of time contributing to Python. A couple of days back I contributed my first (very) few lines, to the benchmark suite. Incidentally, it was my first contribution ever towards Free Software, and the feeling is incredible! I am truly honoured to be a part of this amazing community, and I hope to serve it well. Regards, Anuj Gupta From solipsis at pitrou.net Tue Mar 12 21:48:02 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 12 Mar 2013 21:48:02 +0100 Subject: [Python-Dev] Namaste, Python-Dev References: <513F9312.6070909@codesyrup.com> Message-ID: <20130312214802.3bff927b@pitrou.net> Hello Anuj, On Wed, 13 Mar 2013 02:11:54 +0530 Anuj Gupta wrote: > > I've just joined Python-Dev and I intend to spend an increasing amount > of time contributing to Python. A couple of days back I contributed my > first (very) few lines, to the benchmark suite. Incidentally, it was my > first contribution ever towards Free Software, and the feeling is > incredible! Thanks for joining us, and congratulations :-) If you haven't already done so, we have a developers' guide which will help you make further contributions: http://docs.python.org/devguide/ Regards Antoine. From ezio.melotti at gmail.com Wed Mar 13 00:34:58 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Wed, 13 Mar 2013 01:34:58 +0200 Subject: [Python-Dev] [Python-checkins] CANNOT Patch 3.x NEWS [was cpython (2.7): Issue #14707: add news entry\ In-Reply-To: <513F63BB.8060805@udel.edu> References: <3ZQ4z44QVLzRXr@mail.python.org> <513EC7E2.90400@udel.edu> <513F63BB.8060805@udel.edu> Message-ID: Hi, On Tue, Mar 12, 2013 at 7:19 PM, Terry Reedy wrote: > On 3/12/2013 2:52 AM, Ezio Melotti wrote: >> What are the exact commands you used? > > Clicks on TortoiseHg HgWorkbench GUI ;-). > I wonder if TortoiseHg is doing something wrong here. Maybe you could try from cmd too. >> Are your clones up to date (i.e. did you do "hg pull" and "hg up" >> before "hg graft")? > > There were no other pushes between my last de-double patch and this, and I > am sure I ran my pull + 3*update .bat first. I have run it multiple times > since. > Around the time you pushed on 2.7 I also pushed something, so that might have created some conflict. How does your .bat look like? One gotcha of the share extension is that if you use "hg pull -u" and there's nothing to pull because you already pulled in one of the shared clones, the update won't be executed (this is actually normal behaviour of "hg pull", but the consequences are especially noticeable while using shared clones). >> Does "hg heads ." show you more than one head? > > The DAG window shows the normal one head per branch as appropriate for the > particular branch display. At the moment, hg heads shows the four commits > from Eli, 82628 to 82631 as heads plus old 2.6 and 3.1 heads. > >> Is your clone "clean" (i.e. does "hg status" show anything as 'M')? > > The status window is empty until I edit NEWS and click Refresh, at which > point M Misc/News shows up with the megadiff. 
> Right click/ Revert/yes and the file is reverted. > >> Once your clone is clean you can just edit Misc/NEWS manually > > Since the graft and import failed (producing no diff), I have been editing > manually and that is when I get the megadiff. I added a couple of blank > lines to ACKS and got a normal diff. Now, adding a blank line to 2.7 NEWS > also gives a blank line. > > Could the failed graft have messed up the master copy in my cpython > repository. > That's possible. From "hg help graft": If a graft merge results in conflicts, the graft process is interrupted so that the current merge can be manually resolved. Once all conflicts are addressed, the graft process can be continued with the -c/--continue option. This doesn't mean that you copy is messed up though. "hg up -C 3.2" should restore it. When I graft/merge and there are conflicts I use kdiff3, and it takes just a few seconds to solve the conflicts usually (for Misc/NEWS is ctrl+2, ctrl+3, ctrl+s, alt+f4, that roughly translates too "include both the conflicting news, save and quit). > I have tried deleting the NEWS file and reverting the deletion. > hg update does not restore the file as it apparently thinks I actually want > the uncommitted deletion. > How did you delete it? I assume that if you do it from the TortoiseHG GUI, it will mark it as "deleted" ('D' in "hg status"). If you do it from cmd/file manager hg should see it as missing ('!' in "hg status") and you can use "hg revert Misc/NEWS" to restore it. >> it's easier than trying to graft the 2 changesets you made on 2.7 to >> add and edit the Misc/NEWS entry. > > There was only one 2.7 changeset with only the NEWS patch. > I was referring to the one that added the news + the one that fixed the issue id. >> You can also check with "hg in" and "hg out" if there's something you >> haven't pulled/pushed yet, but that shouldn't be a problem. > > I tried both and got 'no changes'. > >> (If you prefer you can come on #python-dev too.) > > I may try that, but I suspect that my registration/nick has expired again > and last time is was obnoxiously hard to get re-established. > There's no need to register your nick for #python-dev (there is for #python though). You can just fire up your favourite IRC client (or even http://webchat.freenode.net/) and join. (Registering the nick shouldn't be difficult though.) > Terry > From fwierzbicki at gmail.com Wed Mar 13 04:09:25 2013 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Tue, 12 Mar 2013 20:09:25 -0700 Subject: [Python-Dev] Python Language Summit at PyCon: Agenda In-Reply-To: References: <13FB490E-3548-4959-9C2B-2880B8ACA6F5@voidspace.org.uk> <20130227223749.2f06a328@anarchist.wooz.org> <20130301093223.743a04c8@anarchist.wooz.org> <20130301193803.4156607c@pitrou.net> Message-ID: Hi all, I won't be able to make it to the summit and probably not the conference. I have a raging 104F fever (40C for many of you =) The doctor says I have influenza and am highly contagious, so I shouldn't be going anywhere near conference full of people for five days - looks like I'm missing this one :( I may make it down for a couple of sprint days. 
-Frank From ncoghlan at gmail.com Wed Mar 13 08:01:23 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 13 Mar 2013 00:01:23 -0700 Subject: [Python-Dev] [Python-checkins] CANNOT Patch 3.x NEWS [was cpython (2.7): Issue #14707: add news entry\ In-Reply-To: <51402D99.3020507@udel.edu> References: <3ZQ4z44QVLzRXr@mail.python.org> <513EC7E2.90400@udel.edu> <513F63BB.8060805@udel.edu> <51402D99.3020507@udel.edu> Message-ID: On Wed, Mar 13, 2013 at 12:41 AM, Terry Reedy wrote: > Bottom line: I decided to restart from scratch. I am still not sure if the > glitch was hg, disk 1, disk 2, or Windows, or some combination. > > After making and posting a patch to the tracker today, I tried to annotate a > file and got an error something like 'cannot find revision -1'. I then > noticed that there was no dag in the workbench dag window, as if there were > no revisions. When I looked in .hg/store, the big file seemed to be missing. > So I wiped, defragmented and compacted, and reloaded TortoiseHg. Tomorrow I > will re-clone and share the repository. Since this is the second time I have > re-cloned from python.org, I will follow the advice I read somewhere to make > a _backup clone that I leave alone until I need it, so I only have to pull > from now until then when I do. I still keep a pristine clone around so "nuke it from orbit" remains an option. "hg histedit" lets me deal with most of my screw-ups these days, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ezio.melotti at gmail.com Wed Mar 13 09:06:55 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Wed, 13 Mar 2013 10:06:55 +0200 Subject: [Python-Dev] [Python-checkins] CANNOT Patch 3.x NEWS [was cpython (2.7): Issue #14707: add news entry\ In-Reply-To: <51402D99.3020507@udel.edu> References: <3ZQ4z44QVLzRXr@mail.python.org> <513EC7E2.90400@udel.edu> <513F63BB.8060805@udel.edu> <51402D99.3020507@udel.edu> Message-ID: On Wed, Mar 13, 2013 at 9:41 AM, Terry Reedy wrote: > Bottom line: I decided to restart from scratch. I am still not sure if the > glitch was hg, disk 1, disk 2, or Windows, or some combination. > > After making and posting a patch to the tracker today, I tried to annotate a > file and got an error something like 'cannot find revision -1'. I then > noticed that there was no dag in the workbench dag window, as if there were > no revisions. When I looked in .hg/store, the big file seemed to be missing. Note that with the share extension, the "big file" (which I assume is the store/ directory) only exists in the "main" clone. In the shared clones you'll find an .hg/sharedpath file that contains the path to the original .hg/ that contains the store/ dir with all the changesets. > So I wiped, defragmented and compacted, and reloaded TortoiseHg. Tomorrow I > will re-clone and share the repository. Since this is the second time I have > re-cloned from python.org, I will follow the advice I read somewhere to make > a _backup clone that I leave alone until I need it, so I only have to pull > from now until then when I do. > Good idea :) >> On 3/12/2013 7:34 PM, Ezio Melotti wrote: >> I wonder if TortoiseHg is doing something wrong here. Maybe you could >> try from cmd too. > > Workbench has a 'command' window for typing hg commands which it should pass > as is to Windows much as Command Prompt does. I tried some of the things you > suggested there. > >> Around the time you pushed on 2.7 I also pushed something, so that >> might have created some conflict. > > I do not remember seeing that. 
> I pushed on 3.3/default about half an hour after you pushed on 2.7, so that might have caused a push race, if during that time you were doing the merges and eventually tried to push after me without having pulled/updated in the meanwhile. The problem you described doesn't seem to be related to push races though. >> How does your .bat look like? > > pull -u to cpython + update of each of the three shares, much like written > in the devguide. > It's better to avoid using "hg pull -u", because if there's nothing to pull the "update" won't be executed. Here it shouldn't be a big problem, but you could break it if you manually pull something in one of the shared clones, and then run the .bat. Unless you also have an explicit "hg up" in the clone where you do "hg pull -u", that clone won't be updated by the script. >> That's possible. From "hg help graft": >> If a graft merge results in conflicts, the graft process is interrupted so >> that the current merge can be manually resolved. Once all conflicts are >> addressed, the graft process can be continued with the -c/--continue >> option. > > When merge produces a conflict, a window appears offering options including > using kdiff3 to resolve. When I tried the graft, the message in the command > window was just 'aborted', and I do not remember getting the resolve window. > What version of HG are you using? >> When I graft/merge and there are conflicts I use kdiff3, and it takes >> just a few seconds to solve the conflicts usually (for Misc/NEWS is >> ctrl+2, ctrl+3, ctrl+s, alt+f4, that roughly translates too "include >> both the conflicting news, save and quit). > > Since I have perhaps never gotten that sequence right, that info will be > helpful. > Glad to help, however I got it the other way around. The 1st pane is the parent and you can just ignore it; the 2nd pane is the local copy and the 3rd pane is the one from the previous branch that you are merging. The bottom pane will be the resulting file. For Misc/NEWS (the file that usually conflicts), you want the newest NEWS entry first, so you do ctrl+3 to get the one you just added, and ctrl+2 to get the one that was there already. Note that for other files you usually want to get only one of the versions, usually the one you have in the 3rd pane, so that sequence only applies to Misc/NEWS. Another tip is to use ctrl+q instead of alt+f4. >> If you do it from cmd/file manager hg should see it as missing ('!' in >> "hg status") and you can use "hg revert Misc/NEWS" to restore it. > > This. > > Thanks for trying to help. I will let you know if there are any more > problems after the re-clone. > Sure, and if you find part of the devguide that are not clear let me know (I also just uploaded a new patch to http://bugs.python.org/issue14468 to add a few new Mercurial FAQs to the devguide). Best Regards, Ezio Melotti > I still need to comment on the tcl/tk.dll and tkinter situation, but will > just mention now that I ran the four test_txxxx files on 3.3a0 (on Windows) > and they seemed to finish and be ok other than altering the environment. 
>
> Terry

From trent at snakebite.org Thu Mar 14 03:05:41 2013
From: trent at snakebite.org (Trent Nelson)
Date: Wed, 13 Mar 2013 19:05:41 -0700
Subject: [Python-Dev] Slides from today's parallel/async Python talk
Message-ID: <20130314020540.GB22505@snakebite.org>

Just posted the slides for those that didn't have the benefit of
attending the language summit today:

    https://speakerdeck.com/trent/parallelizing-the-python-interpreter-an-alternate-approach-to-async

Trent.

From christian at python.org Thu Mar 14 13:21:09 2013
From: christian at python.org (Christian Heimes)
Date: Thu, 14 Mar 2013 13:21:09 +0100
Subject: [Python-Dev] Slides from today's parallel/async Python talk
In-Reply-To: <20130314020540.GB22505@snakebite.org>
References: <20130314020540.GB22505@snakebite.org>
Message-ID: <5141C0B5.6060904@python.org>

Am 14.03.2013 03:05, schrieb Trent Nelson:
> Just posted the slides for those that didn't have the benefit of
> attending the language summit today:
>
> https://speakerdeck.com/trent/parallelizing-the-python-interpreter-an-alternate-approach-to-async

Wow, neat! Your idea with Py_PXCTC is ingenious. As far as I remember
the FS and GS segment registers are used by most modern operating
systems on x86 and x86_64 platforms nowadays to distinguish threads.
TLS is implemented with FS and GS registers. I guess the
__read[gf]sdword() intrinsics do exactly the same. Reading registers
is super fast and should have a negligible effect on code.

ARM CPUs don't have segment registers because they have a simpler
addressing model. The register CP15 came up after a couple of Google
searches.

IMHO you should target x86, x86_64, ARMv6 and ARMv7. ARMv7 is going to
be more important than x86 in the future. We are going to see more
ARM based servers.

Christian

From solipsis at pitrou.net Thu Mar 14 13:38:47 2013
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 14 Mar 2013 13:38:47 +0100
Subject: [Python-Dev] Slides from today's parallel/async Python talk
References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org>
Message-ID: <20130314133847.5c1ae965@pitrou.net>

Le Thu, 14 Mar 2013 13:21:09 +0100, Christian Heimes a écrit :
>
> IMHO you should target x86, x86_64, ARMv6 and ARMv7. ARMv7 is going to
> be more important than x86 in the future. We are going to see more
> ARM based servers.

Well we can't really see less of them, since there are hardly any ;-)

Related reading:
http://www.anandtech.com/show/6757/calxedas-arm-server-tested

Regards

Antoine.

From a.cavallo at cavallinux.eu Thu Mar 14 13:45:59 2013
From: a.cavallo at cavallinux.eu (a.cavallo at cavallinux.eu)
Date: Thu, 14 Mar 2013 13:45:59 +0100
Subject: [Python-Dev] Slides from today's parallel/async Python talk
In-Reply-To: <20130314133847.5c1ae965@pitrou.net>
References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314133847.5c1ae965@pitrou.net>
Message-ID: <3dfe6492f42e332d837aa32b1b34ef90@cavallinux.eu>

By the way, on ARM (and any platform that can do cross-compiling) I've
created a Makefile-based build of Python 2.7.x:

    https://bitbucket.org/cavallo71/android

Please don't be fooled by the Android name: it really can take any
cross-compiler (provided it follows the gcc syntax). It was born out of
the frustration with trying to adapt ./configure to do cross compiling.
It is a slightly different take on the problem from the one tried by
the kiwy project, for example.
I hope this helps, Antonio On 2013-03-14 13:38, Antoine Pitrou wrote: > Le Thu, 14 Mar 2013 13:21:09 +0100, > Christian Heimes a ?crit : >> >> IMHO you should target x86, x86_64, ARMv6 and ARMv7. ARMv7 is going >> to >> be more important than x86 in the future. We are going to see more >> ARM based servers. > > Well we can't really see less of them, since there are hardly any ;-) > > Related reading: > http://www.anandtech.com/show/6757/calxedas-arm-server-tested > From martin at v.loewis.de Thu Mar 14 18:46:22 2013 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 14 Mar 2013 10:46:22 -0700 Subject: [Python-Dev] VC++ 2008 Express Edition now locked away? In-Reply-To: References: Message-ID: <51420CEE.9000905@v.loewis.de> Am 07.03.13 09:53, schrieb Steve Dower: >>>> To use the SDK compiler, you need to do a few manual steps >>>> first. >>>> >>>> After starting a command window, you need to run a batch file >>>> to configure your environment. Choose the appropriate option >>>> from >>>> >>>> C:\Program Files (x86)\Microsoft Visual Studio >>>> 9.0\VC\bin\vcvars64.bat >>>> >>>> or >>>> >>>> C:\Program Files (x86)\Microsoft Visual Studio >>>> 9.0\VC\bin\vcvars32.bat >>>> >>>> Then set two environment variables: >>>> >>>> set MSSdk=1 set DISTUTILS_USE_SDK=1 >>>> >>>> After these steps, the standard python setup.py install should >>>> work. >> >> This may be fine for building extensions, but it appears that more >> instructions are needed for a novice to build python itself. > > I'm not even sure that these variables are necessary - certainly > without the compilers installed setup.py looks in the right place for > them. I'll try this as well. Setting MSSdk shouldn't be necessary, as vcvars should already have set it (unless that changed in recent SDKs). Setting DISTUTILS_USE_SDK is necessary as a protection to avoid unintionally picking up the wrong build tools. As for distutils finding them automatically: this only works for finding VS installations. It is (AFAICT) not possible to automatically locate SDK installations (other than by exhaustive search of the disk). > As for the documentation, I'd be happy to provide an update for this > section once I've checked out that everything works. I think it should explain to to invoke msbuild, in addition to explaining how to plug old compilers into new IDEs. Regards, Martin From trent at snakebite.org Thu Mar 14 19:23:53 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 11:23:53 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <5141C0B5.6060904@python.org> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> Message-ID: <20130314182352.GC24307@snakebite.org> On Thu, Mar 14, 2013 at 05:21:09AM -0700, Christian Heimes wrote: > Am 14.03.2013 03:05, schrieb Trent Nelson: > > Just posted the slides for those that didn't have the benefit of > > attending the language summit today: > > > > https://speakerdeck.com/trent/parallelizing-the-python-interpreter-an-alternate-approach-to-async > > Wow, neat! Your idea with Py_PXCTC is ingenious. Yeah, it's funny how the viability and performance of the whole approach comes down to a quirky little trick for quickly detecting if we're in a parallel thread ;-) I was very chuffed when it all fell into place. (And I hope the quirkiness of it doesn't detract from the overall approach.) 
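(For illustration, a minimal sketch of how such a check can be implemented on Windows: the TEB offsets below are standard, but the names and exact shape are assumptions for this sketch rather than the actual PyParallel source.)

    #include <windows.h>
    #include <intrin.h>

    /* Illustrative only: GetCurrentThreadId() ultimately reads the
     * thread id out of the TEB (gs:[0x48] on x64, fs:[0x24] on x86),
     * so "am I running in a parallel thread?" can be answered with one
     * register-relative read and a compare -- no TLS lookup, no
     * function call, no syscall. */
    static DWORD _Py_MainThreadId;   /* primed once, from the main thread */

    static __inline DWORD
    _Py_CurrentThreadId(void)
    {
    #ifdef _M_X64
        return __readgsdword(0x48);  /* TEB->ClientId.UniqueThread */
    #else
        return __readfsdword(0x24);
    #endif
    }

    #define Py_PXCTX (_Py_CurrentThreadId() != _Py_MainThreadId)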
> As far as I remember the FS and GS segment registers are used by most > modern operating systems on x86 and x86_64 platforms nowadays to > distinguish threads. TLS is implemented with FS and GS registers. I > guess the __read[gf]sdword() intrinsics do exactly the same. Yup, in fact, if I hadn't come up with the __read[gf]sword() trick, my only other option would have been TLS (or the GetCurrentThreadId /pthread_self() approach in the presentation). TLS is fantastic, and it's definitely an intrinsic part of the solution (the "Y" part of "if we're a parallel thread, do Y"), but it definitely more costly than a simple FS/GS register read. > Reading > registers is super fast and should have a negligible effect on code. Yeah the actual instruction is practically free; the main thing you pay for is the extra branch. However, most of the code looks like this: if (Py_PXCTX) something_small_and_inlineable(); else Py_INCREF(op); /* also small and inlineable */ In the majority of the cases, all the code for both branches is going to be in the same cache line, so a mispredicted branch is only going to result in a pipeline stall, which is better than a cache miss. > ARM CPUs don't have segment registers because they have a simpler > addressing model. The register CP15 came up after a couple of Google > searches. Noted, thanks! > IMHO you should target x86, x86_64, ARMv6 and ARMv7. ARMv7 is going to > be more important than x86 in the future. We are going to see more ARM > based servers. Yeah that's my general sentiment too. I'm definitely curious to see if other ISAs offer similar facilities (Sparc, IA64, POWER etc), but the hierarchy will be x86/x64 > ARM > * for the foreseeable future. Porting the Py_PXCTX part is trivial compared to the work that is going to be required to get this stuff working on POSIX where none of the sublime Windows concurrency, synchronisation and async IO primitives exist. > Christian Trent. From trent at snakebite.org Thu Mar 14 19:45:20 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 11:45:20 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314020540.GB22505@snakebite.org> References: <20130314020540.GB22505@snakebite.org> Message-ID: <20130314184519.GD24307@snakebite.org> On Wed, Mar 13, 2013 at 07:05:41PM -0700, Trent Nelson wrote: > Just posted the slides for those that didn't have the benefit of > attending the language summit today: > > https://speakerdeck.com/trent/parallelizing-the-python-interpreter-an-alternate-approach-to-async Someone on /r/python asked if I could elaborate on the "do Y" part of "if we're in a parallel thread, do Y, if not, do X", which I (inadvertently) ended up replying to in detail. I've included the response below. (I'll work on converting this into a TL;DR set of slides soon.) > Can you go into a bit of depth about "X" here? That's a huge topic that I'm hoping to tackle ASAP. The basic premise is that parallel 'Context' objects (well, structs) are allocated for each parallel thread callback. The context persists for the lifetime of the "parallel work". The "lifetime of the parallel work" depends on what you're doing. For a simple ``async.submit_work(foo)``, the context is considered complete once ``foo()`` has been called (presuming no exceptions were raised). For an async client/server, the context will persist for the entirety of the connection. The context is responsible for encapsulating all resources related to the parallel thread. 
So, it has its own heap, and all memory allocations are taken from that heap. For any given parallel thread, only one context can be executing at a time, and this can be accessed via the ``__declspec(thread) Context *ctx`` global (which is primed by some glue code as soon as the parallel thread starts executing a callback). No reference counting or garbage collection is done during parallel thread execution. Instead, once the context is finished, it is scheduled to be released, which means it'll be "processed" by the main thread as part of its housekeeping work (during ``async.run()`` (technically, ``async.run_once()``). The main thread simply destroys the entire heap in one fell swoop, releasing all memory that was associated with that context. There are a few side effects to this. First, the heap allocator (basically, the thing that answers ``malloc()`` calls) is incredibly simple. It allocates LARGE_PAGE_SIZE chunks of memory at a time (2MB on x64), and simply returns pointers to that chunk for each memory request (adjusting h->next and allocation stats as it goes along, obviously). Once the 2MB has been exhausted, another 2MB is allocated. That approach is fine for the ``submit_(work|timer|wait)`` callbacks, which basically provide a way to run a presumably-finite-length function in a parallel thread (and invoking callbacks/errbacks as required). However, it breaks down when dealing with client/server stuff. Each invocation of a callback (say, ``data_received(...)``) may only consume, say, 500 bytes, but it might be called a million times before the connection is terminated. You can't have cumulative memory usage with possibly-infinite-length client/server-callbacks like you can with the once-off ``submit_(work|wait|timer)`` stuff. So, enter heap snapshots. The logic that handles all client/server connections is instrumented such that it takes a snapshot of the heap (and all associated stats) prior to invoking a Python method (via ``PyObject_Call()``, for example, i.e. the invocation of ``data_received``). When the method completes, we can simply roll back the snapshot. The heap's stats and next pointers et al all get reset back to what they were before the callback was invoked. That's how the chargen server is able to pump out endless streams of data for every client whilst keeping memory usage static. (Well, every new client currently consumes at least a minimum of 2MB (but down the track that can be tweaked back down to SMALL_PAGE_SIZE, 4096, for servers that need to handle hundreds of thousands of clients simultaneously). The only issue with this approach is detecting when the callback has done the unthinkable (from a shared-nothing perspective) and persisted some random object it created outside of the parallel context it was created in. That's actually a huge separate technical issue to tackle -- and it applies just as much to the normal ``submit_(wait|work|timer)`` callbacks as well. I've got a somewhat-temporary solution in place for that currently: d = async.dict() def foo(): # async.rdtsc() is a helper method # that basically wraps the result of # the assembly RDTSC (read time- # stamp counter) instruction into a # PyLong object. So, it's handy when # I need to test the very functionality # being demonstrated here (creating # an object within a parallel context # and persisting it elsewhere). d['foo'] = async.rdtsc() def bar(): d['bar'] = async.rdtsc() async.submit_work(foo) async.submit_work(bar) That'll result in two contexts being created, one for each callback invocation. 
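(A quick aside before getting to what ``async.dict()`` actually is: the bump-allocator-plus-snapshot mechanics described above boil down to something like the following rough sketch. The names and fields are illustrative, not the real PyParallel structs.)

    #include <stddef.h>

    /* Illustrative bump allocator: every allocation just advances an
     * offset inside a big chunk; "freeing" is destroying the whole
     * heap, and a snapshot/rollback is nothing more than saving and
     * restoring the bookkeeping. */
    typedef struct HeapSketch {
        char   *base;        /* start of the current chunk (e.g. 2MB)      */
        size_t  next;        /* offset of the next free byte               */
        size_t  size;        /* total size of the chunk                    */
        size_t  allocations; /* simple stats                               */
    } HeapSketch;

    static void *
    heap_malloc(HeapSketch *h, size_t n)
    {
        void *p;
        n = (n + 15) & ~(size_t)15;   /* keep allocations aligned          */
        if (h->next + n > h->size)
            return NULL;              /* real code links in another chunk  */
        p = h->base + h->next;
        h->next += n;
        h->allocations++;
        return p;
    }

    /* Snapshot/rollback: no free() calls, just restore the bookkeeping. */
    static HeapSketch heap_snapshot(const HeapSketch *h) { return *h; }
    static void heap_rollback(HeapSketch *h, const HeapSketch *s) { *h = *s; }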
``async.dict()`` is a "parallel safe" wrapper around a normal PyDict. This is referred to as "protection". In fact, the code above could have been written as follows: d = async.protect(dict()) What ``protect()`` does is instrument the object such that we intercept ``__getitem__``, ``__setitem__``, ``__getattr__`` and ``__setattr__``. We replace these methods with counterparts that serve two purposes: 1. The read-only methods are wrapped in a read-lock, the write methods are wrapped in a write lock (using underlying system slim read/write locks, which are uber fast). (Basically, you can have unlimited readers holding the read lock, but only one writer can hold the write lock (excluding all the readers and other writers).) 2. Detecting when parallel objects (objects created from within a parallel thread, and thus, backed by the parallel context's heap) have been assigned outside the context (in this case, to a "protected" dict object that was created from the main thread). The first point is important as it ensures concurrent access doesn't corrupt the data structure. The second point is important because it allows us to prevent the persisted object's context from automatically transitioning into the complete->release->heapdestroy lifecycle when the callback completes. This is known as "persistence", as in, a context has been persisted. All sorts of things happen to the object when we detect that it's been persisted. The biggest thing is that reference counting is enabled again for the object (from the perspective of the main thread; ref counting is still a no-op within the parallel thread) -- however, once the refcount hits zero, instead of free()ing the memory like we'd normally do in the main thread (or garbage collecting it), we decref the reference count of the owning context. Once the owning context's refcount goes to zero, we know that no more references exist to objects created from that parallel thread's execution, and we're free to release the context (and thus, destroy the heap -> free the memory). That's currently implemented and works very well. There are a few drawbacks: one, the user must only assign to an "async protected" object. Use a normal dict and you're going to segfault or corrupt things (or worse) pretty quickly. Second, we're persisting the entire context potentially for a single object. The context may be huge; think of some data processing callback that ran for ages, racked up a 100MB footprint, but only generated a PyLong with the value 42 at the end, which consumes, like, 50 bytes (or whatever the size of a PyLong is these days). It's crazy keeping a 100MB context around indefinitely until that PyLong object goes away, so, we need another option. The idea I have for that is "promotion". Rather than persist the context, the object is "promoted"; basically, the parallel thread palms it off to the main thread, which proceeds to deep-copy the object, and take over ownership. This removes the need for the context to be persisted. Now, I probably shouldn't have said "deep-copy" there. Promotion is a terrible option for anything other than simple objects (scalars). If you've got a huge list that consumes 98% of your 100MB heap footprint, well, persistence is perfect. If it's a 50 byte scalar, promotion is perfect. (Also, deep-copy implies collection interrogation, which has all sorts of complexities, so, err, I'll probably end up supporting promotion if the object is a scalar that can be shallow-copied. 
Any form of collection or non-scalar type will get persisted by default.) I haven't implemented promotion yet (persistence works well enough for now). And none of this is integrated into the heap snapshot/rollback logic -- i.e. we don't detect if a client/server callback assigned an object created in the parallel context to a main-thread object -- we just roll back blindly as soon as the callback completes. Before this ever has a chance of being eligible for adoption into CPython, those problems will need to be addressed. As much as I'd like to ignore those corner cases that violate the shared-nothing approach -- it's inevitable someone, somewhere, will be assigning parallel objects outside of the context, maybe for good reason, maybe by accident, maybe because they don't know any better. Whatever the reason, the result shouldn't be corruption. So, the remaining challenge is preventing the use case alluded to earlier where someone tries to modify an object that hasn't been "async protected". That's a bit harder. The idea I've got in mind is to instrument the main CPython ceval loop, such that we do these checks as part of opcode processing. That allows us to keep all the logic in the one spot and not have to go hacking the internals of every single object's C backend to ensure correctness. Now, that'll probably work to an extent. I mean, after all, there are opcodes for all the things we'd be interested in instrumenting, LOAD_GLOBAL, STORE_GLOBAL, SETITEM etc. What becomes challenging is detecting arbitrary mutations via object calls, i.e. how do we know, during the ceval loop, that foo.append(x) needs to be treated specially if foo is a main-thread object and x is a parallel thread object? There may be no way to handle that *other* than hacking the internals of each object, unfortunately. So, the viability of this whole approach may rest on whether or that's deemed as an acceptable tradeoff (a necessary evil, even) to the Python developer community. If it's not, then it's unlikely this approach will ever see the light of day in CPython. If that turns out to be the case, then I see this project taking the path that Stackless took (forking off and becoming a separate interpreter). There's nothing wrong with that; I am really excited about the possibilities afforded by this approach, and I'm sure it will pique the interest of commercial entities out there that have problems perfectly suited to where this pattern excels (shared-nothing, highly concurrent), much like the relationship that developed between Stackless and Eve Online. So, it'd be great if it eventually saw the light of day in CPython, but that'll be a long way down the track (at least 4.x I'd say), and all these issues that allow you to instantly segfault or corrupt the interpreter will need to be addressed before it's even eligible for *discussion* about inclusion. Regards, Trent. From stefanrin at gmail.com Thu Mar 14 20:59:57 2013 From: stefanrin at gmail.com (Stefan Ring) Date: Thu, 14 Mar 2013 20:59:57 +0100 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314182352.GC24307@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> Message-ID: > Yup, in fact, if I hadn't come up with the __read[gf]sword() trick, > my only other option would have been TLS (or the GetCurrentThreadId > /pthread_self() approach in the presentation). 
TLS is fantastic, > and it's definitely an intrinsic part of the solution (the "Y" part > of "if we're a parallel thread, do Y"), but it definitely more > costly than a simple FS/GS register read. I think you should be able to just take the address of a static __thread variable to achieve the same thing in a more portable way. From trent at snakebite.org Thu Mar 14 22:30:14 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 14:30:14 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314184519.GD24307@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <20130314184519.GD24307@snakebite.org> Message-ID: <20130314212949.GE24307@snakebite.org> Cross-referenced to relevant bits of code where appropriate. (And just a quick reminder regarding the code quality disclaimer: I've been hacking away on this stuff relentlessly for a few months; the aim has been to make continual forward progress without getting bogged down in non-value-add busy work. Lots of wildly inconsistent naming conventions and dead code that'll be cleaned up down the track. And the relevance of any given struct will tend to be proportional to how many unused members it has (homeless hoarder + shopping cart analogy).) On Thu, Mar 14, 2013 at 11:45:20AM -0700, Trent Nelson wrote: > The basic premise is that parallel 'Context' objects (well, structs) > are allocated for each parallel thread callback. The 'Context' struct: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel_private.h#l546 Allocated via new_context(): http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l4211 ....also relevant, new_context_for_socket() (encapsulates a client/server instance within a context). http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l4300 Primary role of the context is to isolate the memory management. This is achieved via 'Heap': http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel_private.h#l281 (Which I sort of half started refactoring to use the _HEAD_EXTRA approach when I thought I'd need to have a separate heap type for some TLS avenue I explored -- turns out that wasn't necessary). > The context persists for the lifetime of the "parallel work". > > The "lifetime of the parallel work" depends on what you're doing. For > a simple ``async.submit_work(foo)``, the context is considered > complete once ``foo()`` has been called (presuming no exceptions were > raised). Managing context lifetime is one of the main responsibilities of async.run_once(): http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3841 > For an async client/server, the context will persist for the entirety > of the connection. Marking a socket context as 'finished' for servers is the job of PxServerSocket_ClientClosed(): http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l6885 > The context is responsible for encapsulating all resources related to > the parallel thread. So, it has its own heap, and all memory > allocations are taken from that heap. The heap is initialized in two steps during new_context(). 
First, a handle is allocated for the underlying system heap (via HeapCreate): http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l4224 The first "heap" is then initialized for use with our context via the Heap_Init(Context *c, size_t n, int page_size) call: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l1921 Heaps are actually linked together via a doubly-linked list. The first heap is a value member (not a pointer) of Context; however, the active heap is always accessed via the '*h' pointer which is updated as necessary. struct Heap { Heap *prev; Heap *next; void *base; void *next; int allocated; int remaining; ... struct Context { Heap heap; Heap *h; ... > For any given parallel thread, only one context can be executing at a > time, and this can be accessed via the ``__declspec(thread) Context > *ctx`` global (which is primed by some glue code as soon as the > parallel thread starts executing a callback). Glue entry point for all callbacks is _PyParallel_EnteredCallback: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3047 On the topic of callbacks, the main workhorse for the submit_(wait|work) callbacks is _PyParallel_WorkCallback: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3120 The interesting logic starts at start: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3251 The interesting part is the error handling. If the callback raises an exception, we check to see if an errback has been provided. If so, we call the errback with the error details. If the callback completes successfully (or it fails, but the errback completes successfully), that is treated as successful callback or errback completion, respectively: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3270 http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3294 If the errback fails, or no errback was provided, the exception percolates back to the main thread. This is handled at error: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3300 This should make the behavior of async.run_once() clearer. The first thing it does is check to see if any errors have been posted. http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3917 Errors are returned back to calling code on a first-error-wins basis. (This involves fiddling with the context's lifetime, as we're essentially propagating an object created in a parallel context (the (exception, value, traceback) tuple) back to a main thread context -- so, we can't blow away that context until the exception has had a chance to properly bubble back up and be dealt with.) If there are no errors, we then check to see if any "call from main thread" requests have been made: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3936 I added support for this in order to ease unit testing, but it has general usefulness. It's exposed via two decorators: @async.call_from_main_thread def foo(arg): ... def callback(): foo('abcd') async.submit_work(callback) That creates a parallel thread, invokes callback(), which then results in foo(arg) eventually being called from the main thread. This would be useful for synchronising access to a database or something like that. 
There's also @async.call_from_main_thread_and_wait, which I probably should have mentioned first: @async.call_from_main_thread_and_wait def update_login_details(login, details) db.update(login, details) def foo(): ... update_login_details(x, y) # execution will resume when the main thread finishes # update_login_details() ... async.submit_work(foo) Once all "main thread work requests" have been processed, completed callbacks and errbacks are processed. This basically just involves transitioning the associated context onto the "path to freedom" (the lifecycle that eventually results in the context being free()'d and the heap being destroyed). http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l4032 > No reference counting or garbage collection is done during parallel > thread execution. Instead, once the context is finished, it is > scheduled to be released, which means it'll be "processed" by the main > thread as part of its housekeeping work (during ``async.run()`` > (technically, ``async.run_once()``). > > The main thread simply destroys the entire heap in one fell swoop, > releasing all memory that was associated with that context. The "path to freedom" lifecycle is a bit complicated at the moment and could definitely use a review. But, basically, the main methods are _PxState_PurgeContexts() and _PxState_FreeContext(); the former checks that the context is ready to be freed, the latter does the actual freeing. _PxState_PurgeContexts: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3789 _PxState_FreeContext: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l3700 The reason for the separation is to maintain bubbling effect; a context only makes one transition per run_once() invocation. Putting this in place was a key step to stop wild crashes in the early days when unittest would keep hold of exceptions longer than I was expecting -- it should probably be reviewed in light of the new persistence support I implemented (much later). > There are a few side effects to this. First, the heap allocator > (basically, the thing that answers ``malloc()`` calls) is incredibly > simple. It allocates LARGE_PAGE_SIZE chunks of memory at a time (2MB > on x64), and simply returns pointers to that chunk for each memory > request (adjusting h->next and allocation stats as it goes along, > obviously). Once the 2MB has been exhausted, another 2MB is > allocated. _PyHeap_Malloc is the workhorse here: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l2183 Very simple, just keeps nudging along the h->next pointer for each request, allocating another heap when necessary. Nice side effect is that it's ridiculously fast and very cache friendly. Python code running within parallel contexts runs faster than normal main-thread code because of this (plus the boost from not doing any ref counting). The simplicity of this approach made the heap snapshot logic really simple to implement too; taking a snapshot and then rolling back is just a couple of memcpy's and some pointer fiddling. > That approach is fine for the ``submit_(work|timer|wait)`` callbacks, > which basically provide a way to run a presumably-finite-length > function in a parallel thread (and invoking callbacks/errbacks as > required). > > However, it breaks down when dealing with client/server stuff. 
Each > invocation of a callback (say, ``data_received(...)``) may only > consume, say, 500 bytes, but it might be called a million times before > the connection is terminated. You can't have cumulative memory usage > with possibly-infinite-length client/server-callbacks like you can > with the once-off ``submit_(work|wait|timer)`` stuff. > > So, enter heap snapshots. The logic that handles all client/server > connections is instrumented such that it takes a snapshot of the heap > (and all associated stats) prior to invoking a Python method (via > ``PyObject_Call()``, for example, i.e. the invocation of > ``data_received``). I came up with the heap snapshot stuff in a really perverse way. The first cut introduced a new 'TLS heap' concept; the idea was that before you'd call PyObject_CallObject(), you'd enable the TLS heap, then roll it back when you were done. i.e. the socket IO loop code had a lot of stuff like this: snapshot = ENABLE_TLS_HEAP(); if (!PyObject_CallObject(...)) { DISABLE_TLS_HEAP_AND_ROLLBACK(snapshot); ... } DISABLE_TLS_HEAP(); ... /* do stuff */ ROLLBACK_TLS_HEAP(snapshot); That was fine initially, until I had to deal with the (pretty common) case of allocating memory from the TLS heap (say, for an async send), and then having the callback picked up by a different thread. That thread then had to return the other thread's snapshot and, well, it just fell apart conceptually. Then it dawned on me to just add the snapshot/rollback stuff to normal Context objects. In retrospect, it's silly I didn't think of this in the first place -- the biggest advantage of the Context abstraction is that it's thread-local, but not bindingly so (as in, it'll only ever run on one thread at a time, but it doesn't matter which one, which is essential, because the ). Once I switched out all the TLS heap cruft for Context-specific heap snapshots, everything "Just Worked". (I haven't removed the TLS heap stuff yet as I'm still using it elsewhere (where it doesn't have the issue above). It's an xxx todo.) The main consumer of this heap snapshot stuff (at the moment) is the socket IO loop logic: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l5632 Typical usage now looks like this: snapshot = PxContext_HeapSnapshot(c, NULL); if (!PxSocket_LoadInitialBytes(s)) { PxContext_RollbackHeap(c, &snapshot); PxSocket_EXCEPTION(); } /* at some later point... */ PxContext_RollbackHeap(c, &snapshot); > When the method completes, we can simply roll back the snapshot. The > heap's stats and next pointers et al all get reset back to what they > were before the callback was invoked. > > The only issue with this approach is detecting when the callback has > done the unthinkable (from a shared-nothing perspective) and persisted > some random object it created outside of the parallel context it was > created in. > > That's actually a huge separate technical issue to tackle -- and it > applies just as much to the normal ``submit_(wait|work|timer)`` > callbacks as well. I've got a somewhat-temporary solution in place > for that currently: > > That'll result in two contexts being created, one for each callback > invocation. ``async.dict()`` is a "parallel safe" wrapper around a > normal PyDict. This is referred to as "protection". > > In fact, the code above could have been written as follows: > > d = async.protect(dict()) > > What ``protect()`` does is instrument the object such that we > intercept ``__getitem__``, ``__setitem__``, ``__getattr__`` and > ``__setattr__``. 
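(Conceptually, the interception amounts to something like the following C sketch: reads take the slim read/write lock shared, writes take it exclusive. The names here are illustrative, not the real _protect machinery, which is walked through next.)

    #include <Python.h>
    #include <windows.h>

    typedef struct {
        SRWLOCK   srwlock;      /* initialized with InitializeSRWLock()  */
        PyObject *wrapped;      /* the original dict (or other object)   */
    } ProtectedObject;

    static PyObject *
    protected_getitem(ProtectedObject *po, PyObject *key)
    {
        PyObject *value;
        AcquireSRWLockShared(&po->srwlock);     /* many readers at once  */
        value = PyObject_GetItem(po->wrapped, key);
        ReleaseSRWLockShared(&po->srwlock);
        return value;
    }

    static int
    protected_setitem(ProtectedObject *po, PyObject *key, PyObject *value)
    {
        int result;
        AcquireSRWLockExclusive(&po->srwlock);  /* one writer, no readers */
        result = PyObject_SetItem(po->wrapped, key, value);
        /* the write path is also where a parallel-context value would be
           detected and persisted/promoted, as described below */
        ReleaseSRWLockExclusive(&po->srwlock);
        return result;
    }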
The 'protect' details are pretty hairy. _protect does a few checks: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l1368 ....and then palms things off to _PyObject_PrepOrigType: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l1054 That method is where the magic happens. We basically clone the type object for the object we're protecting, then replace the setitem, getitem etc methods with our counterparts (described next): http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l1100 Note the voodoo involved in 'protecting' heap objects versus normal C-type objects, GC objects versus non-GC, etc. > We replace these methods with counterparts that > serve two purposes: > > 1. The read-only methods are wrapped in a read-lock, the write > methods are wrapped in a write lock (using underlying system slim > read/write locks, which are uber fast). (Basically, you can have > unlimited readers holding the read lock, but only one writer can hold > the write lock (excluding all the readers and other writers).) > > 2. Detecting when parallel objects (objects created from within a > parallel thread, and thus, backed by the parallel context's heap) > have been assigned outside the context (in this case, to a > "protected" dict object that was created from the main thread). This is handled via _Px_objobjargproc_ass: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l900 That is responsible for detecting when a parallel object is being assigned to a non-parallel object (and tries to persist the object where necessary). > The first point is important as it ensures concurrent access doesn't > corrupt the data structure. > > The second point is important because it allows us to prevent the > persisted object's context from automatically transitioning into the > complete->release->heapdestroy lifecycle when the callback completes. > > This is known as "persistence", as in, a context has been persisted. > All sorts of things happen to the object when we detect that it's been > persisted. The biggest thing is that reference counting is enabled > again for the object (from the perspective of the main thread; ref > counting is still a no-op within the parallel thread) -- however, once > the refcount hits zero, instead of free()ing the memory like we'd > normally do in the main thread (or garbage collecting it), we decref > the reference count of the owning context. That's the job of _Px_TryPersist (called via _Px_objobjargproc_ass as mentioned above): http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l861 That makes use of yet-another-incredibly-useful-Windows-feature called 'init once'; basically, underlying system support for ensuring something only gets done *once*. Perfect for avoiding race conditions. > Once the owning context's refcount goes to zero, we know that no more > references exist to objects created from that parallel thread's > execution, and we're free to release the context (and thus, destroy > the heap -> free the memory). All that magic is the unfortunate reason my lovely Py_INCREF/DECREF overrides when from very simple to quite-a-bit-more-involved. i.e. originally Py_INCREF was just: #define Py_INCREF(o) (Py_PXCTX ? 
(void)0; Py_REFCNT(o)++); With the advent of parallel object persistence and context-specific refcounts, things become less simple: Py_INCREF: http://hg.python.org/sandbox/trent/file/7148209d5490/Include/object.h#l890 890 __inline 891 void 892 _Py_IncRef(PyObject *op) 893 { 894 if ((!Py_PXCTX && (Py_ISPY(op) || Px_PERSISTED(op)))) { 895 _Py_INC_REFTOTAL; 896 (((PyObject*)(op))->ob_refcnt++); 897 } 898 } Py_DECREF: http://hg.python.org/sandbox/trent/file/7148209d5490/Include/object.h#l911 909 __inline 910 void 911 _Py_DecRef(PyObject *op) 912 { 913 if (!Py_PXCTX) { 914 if (Px_PERSISTED(op)) 915 Px_DECREF(op); 916 else if (!Px_ISPX(op)) { 917 _Py_DEC_REFTOTAL; 918 if ((--((PyObject *)(op))->ob_refcnt) != 0) { 919 _Py_CHECK_REFCNT(op); 920 } else 921 _Py_Dealloc((PyObject *)(op)); 922 } 923 } 924 } > That's currently implemented and works very well. There are a few > drawbacks: one, the user must only assign to an "async protected" > object. Use a normal dict and you're going to segfault or corrupt > things (or worse) pretty quickly. > > Second, we're persisting the entire context potentially for a single > object. The context may be huge; think of some data processing > callback that ran for ages, racked up a 100MB footprint, but only > generated a PyLong with the value 42 at the end, which consumes, like, > 50 bytes (or whatever the size of a PyLong is these days). > > It's crazy keeping a 100MB context around indefinitely until that > PyLong object goes away, so, we need another option. The idea I have > for that is "promotion". Rather than persist the context, the object > is "promoted"; basically, the parallel thread palms it off to the main > thread, which proceeds to deep-copy the object, and take over > ownership. This removes the need for the context to be persisted. > > Now, I probably shouldn't have said "deep-copy" there. Promotion is a > terrible option for anything other than simple objects (scalars). If > you've got a huge list that consumes 98% of your 100MB heap footprint, > well, persistence is perfect. If it's a 50 byte scalar, promotion is > perfect. (Also, deep-copy implies collection interrogation, which has > all sorts of complexities, so, err, I'll probably end up supporting > promotion if the object is a scalar that can be shallow-copied. Any > form of collection or non-scalar type will get persisted by default.) > > I haven't implemented promotion yet (persistence works well enough for > now). And none of this is integrated into the heap snapshot/rollback > logic -- i.e. we don't detect if a client/server callback assigned an > object created in the parallel context to a main-thread object -- we > just roll back blindly as soon as the callback completes. > > Before this ever has a chance of being eligible for adoption into > CPython, those problems will need to be addressed. As much as I'd > like to ignore those corner cases that violate the shared-nothing > approach -- it's inevitable someone, somewhere, will be assigning > parallel objects outside of the context, maybe for good reason, maybe > by accident, maybe because they don't know any better. Whatever the > reason, the result shouldn't be corruption. > > So, the remaining challenge is preventing the use case alluded to > earlier where someone tries to modify an object that hasn't been > "async protected". That's a bit harder. The idea I've got in mind is > to instrument the main CPython ceval loop, such that we do these > checks as part of opcode processing. 
That allows us to keep all the > logic in the one spot and not have to go hacking the internals of > every single object's C backend to ensure correctness. > > Now, that'll probably work to an extent. I mean, after all, there are > opcodes for all the things we'd be interested in instrumenting, > LOAD_GLOBAL, STORE_GLOBAL, SETITEM etc. What becomes challenging is > detecting arbitrary mutations via object calls, i.e. how do we know, > during the ceval loop, that foo.append(x) needs to be treated > specially if foo is a main-thread object and x is a parallel thread > object? > > There may be no way to handle that *other* than hacking the internals > of each object, unfortunately. So, the viability of this whole > approach may rest on whether or that's deemed as an acceptable > tradeoff (a necessary evil, even) to the Python developer community. Actually, I'd sort of forgotten that I started adding protection support for lists in _PyObject_PrepOrigType. Well, technically, support for intercepting PySequenceMethods: http://hg.python.org/sandbox/trent/file/7148209d5490/Python/pyparallel.c#l1126 I settled for just intercepting PyMappingMethods initially, which is why that chunk of code is commented out. Intercepting the mapping methods allowed me to implement the async protection for dicts and generic objects, which was sufficient for testing purposes at the time. So, er, I guess my point is that automatically detecting object mutation might not be as hard as I'm alluding to above. I'll be happy if we're able to simply raise an exception if you attempt to mutate a non-protected main-thread object. That's infinitely better than segfaulting or silent corruption. Trent. From trent at snakebite.org Thu Mar 14 22:49:02 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 14:49:02 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314212949.GE24307@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <20130314184519.GD24307@snakebite.org> <20130314212949.GE24307@snakebite.org> Message-ID: <20130314214901.GF24307@snakebite.org> On Thu, Mar 14, 2013 at 02:30:14PM -0700, Trent Nelson wrote: > Then it dawned on me to just add the snapshot/rollback stuff to > normal Context objects. In retrospect, it's silly I didn't think of > this in the first place -- the biggest advantage of the Context > abstraction is that it's thread-local, but not bindingly so (as in, > it'll only ever run on one thread at a time, but it doesn't matter > which one, which is essential, because the ). > > Once I switched ... $10 if you can guess when I took a break for lunch. "....but it doesn't matter which one, which is essential, because there are no guarantees with regards to which thread runs which context." Is along the lines of what I was going to say. Trent. From ani at aristanetworks.com Thu Mar 14 23:15:06 2013 From: ani at aristanetworks.com (Ani Sinha) Date: Thu, 14 Mar 2013 15:15:06 -0700 Subject: [Python-Dev] About issue 6560 Message-ID: Hi : I was looking into a mechanism to get the aux fields from recvmsg() in python and I came across this issue. Looks like this feature was added in python 3.3. Is there any reason why this feature was not added for python 2.7? I am now trying to backport the patch to python 2.7. any insight into this would be appreciated. 
Thanks ani From trent at snakebite.org Thu Mar 14 23:23:37 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 15:23:37 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> Message-ID: <20130314222337.GG24307@snakebite.org> On Thu, Mar 14, 2013 at 12:59:57PM -0700, Stefan Ring wrote: > > Yup, in fact, if I hadn't come up with the __read[gf]sword() trick, > > my only other option would have been TLS (or the GetCurrentThreadId > > /pthread_self() approach in the presentation). TLS is fantastic, > > and it's definitely an intrinsic part of the solution (the "Y" part > > of "if we're a parallel thread, do Y"), but it definitely more > > costly than a simple FS/GS register read. > > I think you should be able to just take the address of a static > __thread variable to achieve the same thing in a more portable way. Sure, but, uh, that's kinda' trivial in comparison to all the wildly unportable Windows-only functionality I'm using to achieve all of this at the moment :-) For the record, here are all the Windows calls I'm using that have no *direct* POSIX equivalent: Interlocked singly-linked lists: - InitializeSListHead() - InterlockedFlushSList() - QueryDepthSList() - InterlockedPushEntrySList() - InterlockedPushListSList() - InterlockedPopEntrySlist() Synchronisation and concurrency primitives: - Critical sections - InitializeCriticalSectionAndSpinCount() - EnterCriticalSection() - LeaveCriticalSection() - TryEnterCriticalSection() - Slim read/writer locks (some pthread implements have rwlocks)*: - InitializeSRWLock() - AcquireSRWLockShared() - AcquireSRWLockExclusive() - ReleaseSRWLockShared() - ReleaseSRWLockExclusive() - TryAcquireSRWLockExclusive() - TryAcquireSRWLockShared() - One-time initialization: - InitOnceBeginInitialize() - InitOnceComplete() - Generic event, signalling and wait facilities: - CreateEvent() - SetEvent() - WaitForSingleObject() - WaitForMultipleObjects() - SignalObjectAndWait() Native thread pool facilities: - TrySubmitThreadpoolCallback() - StartThreadpoolIo() - CloseThreadpoolIo() - CancelThreadpoolIo() - DisassociateCurrentThreadFromCallback() - CallbackMayRunLong() - CreateThreadpoolWait() - SetThreadpoolWait() Memory management: - HeapCreate() - HeapAlloc() - HeapDestroy() Structured Exception Handling (#ifdef Py_DEBUG): - __try/__except Sockets: - ConnectEx() - AcceptEx() - WSAEventSelect(FD_ACCEPT) - DisconnectEx(TF_REUSE_SOCKET) - Overlapped WSASend() - Overlapped WSARecv() Don't get me wrong, I grew up with UNIX and love it as much as the next guy, but you can't deny the usefulness of Windows' facilities for writing high-performance, multi-threaded IO code. It's decades ahead of POSIX. (Which is also why it bugs me when I see select() being used on Windows, or IOCP being used as if it were a poll-type "generic IO multiplexor" -- that's like having a Ferrari and speed limiting it to 5mph!) So, before any of this has a chance of working on Linux/BSD, a lot more scaffolding will need to be written to provide the things we get for free on Windows (threadpools being the biggest freebie). Trent. 
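(For readers mapping the list above onto POSIX: a few of the synchronisation primitives do have rough pthread counterparts, sketched below purely for illustration; the interlocked singly-linked lists and the native thread pool are the pieces with no direct equivalent, which is where the extra scaffolding comes in.)

    #include <pthread.h>

    static pthread_rwlock_t srw  = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_once_t   once = PTHREAD_ONCE_INIT;

    static void init_once(void) { /* one-time setup */ }

    static void example(void)
    {
        pthread_once(&once, init_once);   /* ~ InitOnceBeginInitialize/Complete */

        pthread_rwlock_rdlock(&srw);      /* ~ AcquireSRWLockShared    */
        /* ... unlimited readers ... */
        pthread_rwlock_unlock(&srw);      /* ~ ReleaseSRWLockShared    */

        pthread_rwlock_wrlock(&srw);      /* ~ AcquireSRWLockExclusive */
        /* ... single writer ... */
        pthread_rwlock_unlock(&srw);      /* ~ ReleaseSRWLockExclusive */
    }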
From martin at v.loewis.de Thu Mar 14 23:56:33 2013 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 14 Mar 2013 15:56:33 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314182352.GC24307@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> Message-ID: <514255A1.1020308@v.loewis.de> Am 14.03.13 11:23, schrieb Trent Nelson: >> ARM CPUs don't have segment registers because they have a simpler >> addressing model. The register CP15 came up after a couple of Google >> searches. > > Noted, thanks! > > Yeah that's my general sentiment too. I'm definitely curious to see > if other ISAs offer similar facilities (Sparc, IA64, POWER etc), but > the hierarchy will be x86/x64 > ARM > * for the foreseeable future. Most (in particular the RISC ones) do have a general-purpose register reserved for TLS. For ARM, the interesting thing is that CP15 apparently is not available on all ARM implementations, and Linux then emulates it on processors that don't have it (by handling the trap), which is costly. Additionally, it appears that Android fails to provide that emulation (in some versions, on some processors), so that seems to be tricky ground. > Porting the Py_PXCTX part is trivial compared to the work that is > going to be required to get this stuff working on POSIX where none > of the sublime Windows concurrency, synchronisation and async IO > primitives exist. I couldn't understand from your presentation why this is essential to your approach. IIUC, you are "just" relying on the OS providing a thread pool, (and the sublime concurrency and synchronization routines are nothing more than that, ISTM). Implementing a thread pool on top of select/poll/kqueue seems straight-forward. Regards, Martin From martin at v.loewis.de Thu Mar 14 23:48:33 2013 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 14 Mar 2013 15:48:33 -0700 Subject: [Python-Dev] About issue 6560 In-Reply-To: References: Message-ID: <514253C1.9060209@v.loewis.de> Am 14.03.13 15:15, schrieb Ani Sinha: > I was looking into a mechanism to get the aux fields from recvmsg() in > python and I came across this issue. Looks like this feature was added > in python 3.3. Is there any reason why this feature was not added for > python 2.7? Most certainly: Python 2.7 (and thus Python 2) is feature-frozen; no new features can be added to it. People wanting new features need to port to Python 3. Regards, Martin From trent at snakebite.org Fri Mar 15 00:00:09 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 16:00:09 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <51425433.1090700@v.loewis.de> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <51425433.1090700@v.loewis.de> Message-ID: <20130314230007.GA24799@snakebite.org> On Thu, Mar 14, 2013 at 03:50:27PM -0700, "Martin v. L?wis" wrote: > Am 14.03.13 12:59, schrieb Stefan Ring: > > I think you should be able to just take the address of a static > > __thread variable to achieve the same thing in a more portable way. > > That assumes that the compiler supports __thread variables, which > isn't that portable in the first place. FWIW, I make extensive use of __declspec(thread). I'm aware of GCC and Clang's __thread alternative. 
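(i.e., the portable spelling being discussed would be along these lines -- a sketch, not anything that exists in the tree:)

    /* Per-thread flag via the compiler's TLS keyword; C11 spells it
     * _Thread_local.  Set on entry to a parallel callback, cleared on
     * exit; the hot-path check is then just a TLS load and a compare. */
    #if defined(_MSC_VER)
    #  define PX_TLS __declspec(thread)
    #else
    #  define PX_TLS __thread
    #endif

    static PX_TLS int _px_in_parallel_ctx = 0;

    #define Py_PXCTX (_px_in_parallel_ctx != 0)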
No idea what IBM xlC, Sun Studio and others offer, if anything. Trent. From martin at v.loewis.de Thu Mar 14 23:50:27 2013 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 14 Mar 2013 15:50:27 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> Message-ID: <51425433.1090700@v.loewis.de> Am 14.03.13 12:59, schrieb Stefan Ring: > I think you should be able to just take the address of a static > __thread variable to achieve the same thing in a more portable way. That assumes that the compiler supports __thread variables, which isn't that portable in the first place. Regards, Martin From tjreedy at udel.edu Fri Mar 15 00:19:11 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 14 Mar 2013 19:19:11 -0400 Subject: [Python-Dev] About issue 6560 In-Reply-To: <514253C1.9060209@v.loewis.de> References: <514253C1.9060209@v.loewis.de> Message-ID: On 3/14/2013 6:48 PM, "Martin v. L?wis" wrote: > Am 14.03.13 15:15, schrieb Ani Sinha: >> I was looking into a mechanism to get the aux fields from recvmsg() in >> python and I came across this issue. Looks like this feature was added >> in python 3.3. Is there any reason why this feature was not added for >> python 2.7? > > Most certainly: Python 2.7 (and thus Python 2) is feature-frozen; no As are 3.2 and now 3.3. Every version is feature frozen when released. Bugfix releases only contain bugfixes. > new features can be added to it. People wanting new features need to > port to Python 3. In particular 3.3. -- Terry Jan Reedy From trent at snakebite.org Fri Mar 15 00:21:14 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 14 Mar 2013 16:21:14 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <514255A1.1020308@v.loewis.de> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <514255A1.1020308@v.loewis.de> Message-ID: <20130314232111.GB24799@snakebite.org> On Thu, Mar 14, 2013 at 03:56:33PM -0700, "Martin v. L?wis" wrote: > Am 14.03.13 11:23, schrieb Trent Nelson: > > Porting the Py_PXCTX part is trivial compared to the work that is > > going to be required to get this stuff working on POSIX where none > > of the sublime Windows concurrency, synchronisation and async IO > > primitives exist. > > I couldn't understand from your presentation why this is essential > to your approach. IIUC, you are "just" relying on the OS providing > a thread pool, (and the sublime concurrency and synchronization > routines are nothing more than that, ISTM). Right, there's nothing Windows* does that can't be achieved on Linux/BSD, it'll just take more scaffolding (i.e. we'll need to manage our own thread pool at the very least). [*]: actually, the interlocked singly-linked list stuff concerns me; the API seems straightforward enough but the implementation becomes deceptively complex once you factor in the ABA problem. (I'm not aware of a portable open source alternative for that stuff.) > Implementing a thread pool on top of select/poll/kqueue seems > straight-forward. Nod, that's exactly what I've got in mind. Spin up a bunch of threads that sit there and call poll/kqueue in an endless loop. That'll work just fine for Linux/BSD/OSX. Actually, what's really interesting is the new registered IO facilities in Windows 8/2012. 
The Microsoft recommendation for achieving the ultimate performance (least amount of jitter, lowest latency, highest throughput) is to do something like this: while (1) { if (!DequeueCompletionRequests(...)) { YieldProcessor(); continue; } else { /* Handle requests */ } } That pattern looks a lot more like what you'd do on Linux/BSD (spin up a thread per CPU and call epoll/kqueue endlessly) than any of the previous Windows IO patterns. Trent. From tjreedy at udel.edu Fri Mar 15 02:33:05 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 14 Mar 2013 21:33:05 -0400 Subject: [Python-Dev] Matching __all__ to doc: bugfix or enhancement? Message-ID: The timeit doc describes four public attributes. The current timeit.__all__ only lists one. http://bugs.python.org/issue17414 proposes to expand __all__ to include all four: -__all__ = ["Timer"] +__all__ = ["Timer", "timeit", "repeat", "default_timer"] The effect of the change is a) help(timit) will mention the three functions as well as the class; b) IDLE's attribute completion box* will list all four instead just Timer; c) unknow other users of .__all__ will see the expanded list, for better or worse. * Typing 'xxx.' and either waiting or typing cntl-space brings up a listbox of attributes to select from. Is the code change an all-version bugfix or a default-only enhancement? I can see it both ways, but a decision is required to act. PS: I think the devguide should gain a new 'Behavior versus Enhancement' section after the current "11.1.2. Type" to clarify issues like this. -- Terry Jan Reedy From fred at fdrake.net Fri Mar 15 02:47:52 2013 From: fred at fdrake.net (Fred Drake) Date: Thu, 14 Mar 2013 21:47:52 -0400 Subject: [Python-Dev] Matching __all__ to doc: bugfix or enhancement? In-Reply-To: References: Message-ID: On Thu, Mar 14, 2013 at 9:33 PM, Terry Reedy wrote: > Is the code change an all-version bugfix or a default-only enhancement? > I can see it both ways, but a decision is required to act. This is actually backward-incompatible, so should not be considered a simple bugfix. If determined to be desirable, it should not be applied to any version before 3.4. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From eliben at gmail.com Fri Mar 15 02:54:34 2013 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 14 Mar 2013 18:54:34 -0700 Subject: [Python-Dev] Matching __all__ to doc: bugfix or enhancement? In-Reply-To: References: Message-ID: On Thu, Mar 14, 2013 at 6:33 PM, Terry Reedy wrote: > The timeit doc describes four public attributes. > The current timeit.__all__ only lists one. > http://bugs.python.org/**issue17414 > proposes to expand __all__ to include all four: > -__all__ = ["Timer"] > +__all__ = ["Timer", "timeit", "repeat", "default_timer"] > > The effect of the change is > a) help(timit) will mention the three functions as well as the class; > b) IDLE's attribute completion box* will list all four instead just Timer; > c) unknow other users of .__all__ will see the expanded list, for better > or worse. > > Another effect is that existing code that does: from timeit import * May break. The above may not be the recommended best practice in Python, but it's perfectly valid and widely used. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Mar 15 05:15:08 2013 From: guido at python.org (Guido van Rossum) Date: Thu, 14 Mar 2013 21:15:08 -0700 Subject: [Python-Dev] Matching __all__ to doc: bugfix or enhancement? 
In-Reply-To: References: Message-ID: So it's a new feature, albeit a small one. I do see that it shouldn't be backported, but I don't see any worries about doing it in 3.4. Adding new functions/classes/constants to modules happens all the time, and we never give a second thought to users of import *. :-) On Thu, Mar 14, 2013 at 6:54 PM, Eli Bendersky wrote: > > > > On Thu, Mar 14, 2013 at 6:33 PM, Terry Reedy wrote: >> >> The timeit doc describes four public attributes. >> The current timeit.__all__ only lists one. >> http://bugs.python.org/issue17414 >> proposes to expand __all__ to include all four: >> -__all__ = ["Timer"] >> +__all__ = ["Timer", "timeit", "repeat", "default_timer"] >> >> The effect of the change is >> a) help(timit) will mention the three functions as well as the class; >> b) IDLE's attribute completion box* will list all four instead just Timer; >> c) unknow other users of .__all__ will see the expanded list, for better >> or worse. >> > > Another effect is that existing code that does: > > from timeit import * > > May break. The above may not be the recommended best practice in Python, but > it's perfectly valid and widely used. > > Eli > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From eliben at gmail.com Fri Mar 15 05:24:51 2013 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 14 Mar 2013 21:24:51 -0700 Subject: [Python-Dev] Matching __all__ to doc: bugfix or enhancement? In-Reply-To: References: Message-ID: On Thu, Mar 14, 2013 at 9:15 PM, Guido van Rossum wrote: > So it's a new feature, albeit a small one. I do see that it shouldn't > be backported, but I don't see any worries about doing it in 3.4. > Adding new functions/classes/constants to modules happens all the > time, and we never give a second thought to users of import *. :-) > > Oh yes, I agree there should be no problem in the default branch. My comment was mainly aimed at backporting it; I should've made it clearer. Eli > On Thu, Mar 14, 2013 at 6:54 PM, Eli Bendersky wrote: > > > > > > > > On Thu, Mar 14, 2013 at 6:33 PM, Terry Reedy wrote: > >> > >> The timeit doc describes four public attributes. > >> The current timeit.__all__ only lists one. > >> http://bugs.python.org/issue17414 > >> proposes to expand __all__ to include all four: > >> -__all__ = ["Timer"] > >> +__all__ = ["Timer", "timeit", "repeat", "default_timer"] > >> > >> The effect of the change is > >> a) help(timit) will mention the three functions as well as the class; > >> b) IDLE's attribute completion box* will list all four instead just > Timer; > >> c) unknow other users of .__all__ will see the expanded list, for better > >> or worse. > >> > > > > Another effect is that existing code that does: > > > > from timeit import * > > > > May break. The above may not be the recommended best practice in Python, > but > > it's perfectly valid and widely used. > > > > Eli > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... 
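To make the star-import concern concrete, here is a small self-contained sketch that emulates what "from module import *" does when __all__ is present (the fake module and its names are illustrative, not the real timeit contents):

    import types

    fake = types.ModuleType("fake_timeit")
    fake.Timer = type("Timer", (), {})
    fake.repeat = lambda stmt: "repeat() from the module"
    fake.__all__ = ["Timer"]                # the old, one-entry __all__

    def star_import(module):
        # With __all__ defined, "from module import *" binds exactly
        # these names in the importing namespace.
        return {name: getattr(module, name) for name in module.__all__}

    ns = {"repeat": lambda stmt: "my own repeat()"}   # user-defined name

    ns.update(star_import(fake))            # old __all__: the local name survives
    print(ns["repeat"]("x"))                # -> my own repeat()

    fake.__all__ = ["Timer", "repeat"]      # expanded, as proposed in issue 17414
    ns.update(star_import(fake))            # now the module's repeat() shadows it
    print(ns["repeat"]("x"))                # -> repeat() from the module

This shadowing is the (small) incompatibility being weighed against the documentation and completion benefits.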
URL: From nad at acm.org Fri Mar 15 06:55:10 2013 From: nad at acm.org (Ned Deily) Date: Thu, 14 Mar 2013 22:55:10 -0700 Subject: [Python-Dev] Followup - Re: Bad python 2.5 build on OSX 10.8 mountain lion References: <20121002073135.GA26567@sleipnir.bytereef.org> Message-ID: Way back on 2012-10-05 23:45:11 GMT in article , I wrote: > In article , > Ned Deily wrote: > > In article <20121002073135.GA26567 at sleipnir.bytereef.org>, > > Stefan Krah wrote: > > > Ned Deily wrote: > > > > > Forgot the link... > > > > > http://code.google.com/p/googleappengine/issues/detail?id=7885 > > > > > On Monday, October 1, 2012, Guido van Rossum wrote: > > > > > > As discussed here, the python 2.5 binary distributed by Apple on > > > > > > mountain > > > > > > lion is broken. Could someone file an official complaint? > > > > I've filed a bug against 10.8 python2.5. The 10.8 versions of Apple's > > > > pythons are compile with clang and we did see some sign extension > > > > issues > > > > with ctypes. The 10.7 version of Apple's python2.5 is compiled with > > > > llvm-gcc and handles 2**31 correctly. > > > Yes, this looks like http://bugs.python.org/issue11149 . > > Ah, right, thanks. I've updated the Apple issue accordingly. > > Update: the bug I filed has been closed as a duplicate of #11932488 > which apparently at the moment is still open. No other information is > available. FYI, today Apple finally released OS X 10.8.3, the next maintenance release of Mountain Lion, and it does include a recompiled version of Python 2.5.6 that appears to solve the sign-extension problem: 2**31-1 is now 2147483647L. -- Ned Deily, nad at acm.org From tjreedy at udel.edu Fri Mar 15 08:09:58 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 15 Mar 2013 03:09:58 -0400 Subject: [Python-Dev] Matching __all__ to doc: bugfix or enhancement? In-Reply-To: References: Message-ID: On 3/15/2013 12:15 AM, Guido van Rossum wrote: > So it's a new feature, albeit a small one. I do see that it shouldn't > be backported, but I don't see any worries about doing it in 3.4. > Adding new functions/classes/constants to modules happens all the > time, and we never give a second thought to users of import *. :-) Thanks all. Pushed to 3.4 only. -- Terry Jan Reedy From solipsis at pitrou.net Fri Mar 15 08:19:51 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 15 Mar 2013 08:19:51 +0100 Subject: [Python-Dev] Slides from today's parallel/async Python talk References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <514255A1.1020308@v.loewis.de> <20130314232111.GB24799@snakebite.org> Message-ID: <20130315081951.43e30adb@pitrou.net> On Thu, 14 Mar 2013 16:21:14 -0700 Trent Nelson wrote: > > Actually, what's really interesting is the new registered IO > facilities in Windows 8/2012. The Microsoft recommendation for > achieving the ultimate performance (least amount of jitter, lowest > latency, highest throughput) is to do something like this: > > while (1) { > > if (!DequeueCompletionRequests(...)) { > YieldProcessor(); > continue; > } else { > /* Handle requests */ > } > } Does Microsoft change their recommendations every couple of years? :) Regards Antoine. 
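For comparison with the Windows loop quoted above, here is a rough Python sketch -- purely illustrative, not from any patch under discussion -- of the "spin up a thread per CPU and call epoll/kqueue endlessly" pattern Trent describes. It is Linux-only as written because it uses select.epoll; a kqueue variant would look much the same:

    import multiprocessing
    import select
    import socket
    import threading

    def serve(port=8000):
        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", port))
        listener.listen(128)
        listener.setblocking(False)

        ep = select.epoll()
        ep.register(listener.fileno(), select.EPOLLIN)

        def loop():
            while True:
                for fd, events in ep.poll():      # blocks until something is ready
                    if fd == listener.fileno():
                        try:
                            conn, _ = listener.accept()
                        except BlockingIOError:
                            continue              # another polling thread won the race
                        conn.sendall(b"hello\n")
                        conn.close()

        # One polling thread per CPU, each sitting in epoll_wait().
        for _ in range(multiprocessing.cpu_count()):
            t = threading.Thread(target=loop)
            t.daemon = True
            t.start()

    # serve(); a real program would keep the main thread alive and hand the
    # ready file descriptors off to worker code instead of replying inline.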
From martin at v.loewis.de Fri Mar 15 14:09:49 2013 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 15 Mar 2013 06:09:49 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130315081951.43e30adb@pitrou.net> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <514255A1.1020308@v.loewis.de> <20130314232111.GB24799@snakebite.org> <20130315081951.43e30adb@pitrou.net> Message-ID: <51431D9D.5040103@v.loewis.de> Am 15.03.13 00:19, schrieb Antoine Pitrou: > Does Microsoft change their recommendations every couple of years? :) Indeed they do. In fact, it's not really the recommendation that changes, but APIs that are added to new Windows releases. In the specific case, Windows 8 adds an API called "Registered IO" (RIO). They (of course) do these API addition in expecting some gain, and then they (of course) promote these new APIs as actually achieving the gain. In the socket APIs, the Unix world went through a similar evolution, with select, poll, epoll, kqueue, and whatnot. The rate at which they change async APIs is actually low, compared to the rate at which they change relational-database APIs (ODBC, ADO, OLEDB, DAO, ADO.NET, LINQ, ... :-) Regards, Martin From status at bugs.python.org Fri Mar 15 18:07:26 2013 From: status at bugs.python.org (Python tracker) Date: Fri, 15 Mar 2013 18:07:26 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20130315170726.142F8568E9@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2013-03-08 - 2013-03-15) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3888 ( -7) closed 25316 (+51) total 29204 (+44) Open issues with patches: 1707 Issues opened (36) ================== #12466: sporadic failures of test_close_fds and test_pass_fds in test_ http://bugs.python.org/issue12466 reopened by ned.deily #13918: locale.atof documentation is missing func argument http://bugs.python.org/issue13918 reopened by ced #17386: Bring Doc/make.bat as close to Doc/Makefile as possible http://bugs.python.org/issue17386 opened by zach.ware #17387: Error in C API documentation of PySequenceMethods http://bugs.python.org/issue17387 opened by Alex.Orange #17389: Optimize Event.wait() http://bugs.python.org/issue17389 opened by pitrou #17390: display python version on idle title bar http://bugs.python.org/issue17390 opened by bagratte #17391: _cursesmodule Fails to Build on GCC 2.95 (static) http://bugs.python.org/issue17391 opened by Jeffrey.Armstrong #17393: stdlib import mistaken for local by import_fixer http://bugs.python.org/issue17393 opened by lregebro #17394: Add slicing support to collections.deque http://bugs.python.org/issue17394 opened by rhettinger #17396: modulefinder fails if module contains syntax error http://bugs.python.org/issue17396 opened by jgosmann #17397: ttk::themes missing from ttk.py http://bugs.python.org/issue17397 opened by klappnase #17398: document url argument of RobotFileParser http://bugs.python.org/issue17398 opened by tshepang #17399: test_multiprocessing hang on Windows, non-sockets http://bugs.python.org/issue17399 opened by terry.reedy #17400: ipaddress.is_private needs to take into account of rfc6598 http://bugs.python.org/issue17400 opened by leim #17401: io.FileIO closefd parameter is not documented nor shown in rep http://bugs.python.org/issue17401 opened 
by rbcollins #17403: Robotparser fails to parse some robots.txt http://bugs.python.org/issue17403 opened by benmezger #17404: ValueError: can't have unbuffered text I/O for io.open(1, 'wt' http://bugs.python.org/issue17404 opened by rbcollins #17405: Add _Py_memset_s() to securely clear memory http://bugs.python.org/issue17405 opened by christian.heimes #17408: second python execution fails when embedding http://bugs.python.org/issue17408 opened by theDarkBrainer #17409: resource.setrlimit doesn't respect -1 http://bugs.python.org/issue17409 opened by Paul.Price #17410: Generator-based HTMLParser http://bugs.python.org/issue17410 opened by flying sheep #17411: Build failures with non-NDEBUG, non-Py_DEBUG builds. http://bugs.python.org/issue17411 opened by twouters #17413: format_exception() breaks on exception tuples from trace funct http://bugs.python.org/issue17413 opened by inducer #17415: Clarify docs of os.path.normpath() http://bugs.python.org/issue17415 opened by gsingh #17416: Documentation Ambiguity 2 http://bugs.python.org/issue17416 opened by gsingh #17417: Documentation Modification Suggestion: os.walk, fwalk http://bugs.python.org/issue17417 opened by gsingh #17418: Documentation Bug http://bugs.python.org/issue17418 opened by gsingh #17419: bdist_wininst installer should allow install in user directory http://bugs.python.org/issue17419 opened by Sergio.Callegari #17420: bdist_wininst does not play well with unicode descriptions http://bugs.python.org/issue17420 opened by Sergio.Callegari #17421: Drop restriction that meta.__prepare__() must return a dict (s http://bugs.python.org/issue17421 opened by eric.snow #17422: language reference should specify restrictions on class namesp http://bugs.python.org/issue17422 opened by eric.snow #17423: libffi on 32bit is broken on linux http://bugs.python.org/issue17423 opened by fijall #17424: help() should use the class signature http://bugs.python.org/issue17424 opened by jafo #17425: Update OpenSSL versions in Windows builds http://bugs.python.org/issue17425 opened by pitrou #17428: replace readdir to readdir_r in function posix_listdir http://bugs.python.org/issue17428 opened by Rock #17429: platform.platform() can throw Unicode error http://bugs.python.org/issue17429 opened by a.badger Most recent 15 issues with no replies (15) ========================================== #17429: platform.platform() can throw Unicode error http://bugs.python.org/issue17429 #17428: replace readdir to readdir_r in function posix_listdir http://bugs.python.org/issue17428 #17424: help() should use the class signature http://bugs.python.org/issue17424 #17422: language reference should specify restrictions on class namesp http://bugs.python.org/issue17422 #17418: Documentation Bug http://bugs.python.org/issue17418 #17417: Documentation Modification Suggestion: os.walk, fwalk http://bugs.python.org/issue17417 #17416: Documentation Ambiguity 2 http://bugs.python.org/issue17416 #17411: Build failures with non-NDEBUG, non-Py_DEBUG builds. 
http://bugs.python.org/issue17411 #17403: Robotparser fails to parse some robots.txt http://bugs.python.org/issue17403 #17401: io.FileIO closefd parameter is not documented nor shown in rep http://bugs.python.org/issue17401 #17398: document url argument of RobotFileParser http://bugs.python.org/issue17398 #17396: modulefinder fails if module contains syntax error http://bugs.python.org/issue17396 #17394: Add slicing support to collections.deque http://bugs.python.org/issue17394 #17391: _cursesmodule Fails to Build on GCC 2.95 (static) http://bugs.python.org/issue17391 #17372: provide pretty printer for xml.etree.ElementTree http://bugs.python.org/issue17372 Most recent 15 issues waiting for review (15) ============================================= #17429: platform.platform() can throw Unicode error http://bugs.python.org/issue17429 #17428: replace readdir to readdir_r in function posix_listdir http://bugs.python.org/issue17428 #17423: libffi on 32bit is broken on linux http://bugs.python.org/issue17423 #17421: Drop restriction that meta.__prepare__() must return a dict (s http://bugs.python.org/issue17421 #17411: Build failures with non-NDEBUG, non-Py_DEBUG builds. http://bugs.python.org/issue17411 #17410: Generator-based HTMLParser http://bugs.python.org/issue17410 #17405: Add _Py_memset_s() to securely clear memory http://bugs.python.org/issue17405 #17397: ttk::themes missing from ttk.py http://bugs.python.org/issue17397 #17396: modulefinder fails if module contains syntax error http://bugs.python.org/issue17396 #17391: _cursesmodule Fails to Build on GCC 2.95 (static) http://bugs.python.org/issue17391 #17390: display python version on idle title bar http://bugs.python.org/issue17390 #17389: Optimize Event.wait() http://bugs.python.org/issue17389 #17386: Bring Doc/make.bat as close to Doc/Makefile as possible http://bugs.python.org/issue17386 #17375: Add docstrings to methods in the threading module http://bugs.python.org/issue17375 #17373: Add inspect.Signature.from_callable() http://bugs.python.org/issue17373 Top 10 most discussed issues (10) ================================= #17399: test_multiprocessing hang on Windows, non-sockets http://bugs.python.org/issue17399 12 msgs #17340: Handle malformed cookie http://bugs.python.org/issue17340 11 msgs #13564: ftplib and sendfile() http://bugs.python.org/issue13564 9 msgs #16895: Batch file to mimic 'make' on Windows http://bugs.python.org/issue16895 8 msgs #17410: Generator-based HTMLParser http://bugs.python.org/issue17410 8 msgs #17317: Benchmark driver should calculate actual benchmark count in -h http://bugs.python.org/issue17317 7 msgs #13918: locale.atof documentation is missing func argument http://bugs.python.org/issue13918 6 msgs #14468: Update cloning guidelines in devguide http://bugs.python.org/issue14468 6 msgs #15244: Support for opening files with FILE_SHARE_DELETE on Windows http://bugs.python.org/issue15244 6 msgs #16389: re._compiled_typed's lru_cache causes significant degradation http://bugs.python.org/issue16389 6 msgs Issues closed (48) ================== #3701: test_ntpath.test_relpath fails when launched from a different http://bugs.python.org/issue3701 closed by ezio.melotti #4099: dir on a compiled re does not show pattern as a part of the li http://bugs.python.org/issue4099 closed by ezio.melotti #5017: import suds help( suds ) fails http://bugs.python.org/issue5017 closed by ezio.melotti #6933: Threading issue with Tkinter Frame.insert http://bugs.python.org/issue6933 closed by terry.reedy #8318: Deprecation of 
multifile inappropriate or incomplete http://bugs.python.org/issue8318 closed by terry.reedy #9686: asyncore infinite loop on raise http://bugs.python.org/issue9686 closed by terry.reedy #11029: Crash, 2.7.1, Tkinter and threads and line drawing http://bugs.python.org/issue11029 closed by terry.reedy #11367: xml.etree.ElementTree.find(all): docs are wrong http://bugs.python.org/issue11367 closed by eli.bendersky #11656: Debug builds for Windows would be very helpful http://bugs.python.org/issue11656 closed by ezio.melotti #11869: Include information about the bug tracker Rietveld code review http://bugs.python.org/issue11869 closed by ezio.melotti #11963: Remove human verification from test suite (test_parser and tes http://bugs.python.org/issue11963 closed by ezio.melotti #12921: http.server.BaseHTTPRequestHandler.send_error , ability send a http://bugs.python.org/issue12921 closed by orsenthil #14639: Different behavior for urllib2 in Python 2.7 http://bugs.python.org/issue14639 closed by ezio.melotti #15121: devguide doesn't document all bug tracker components http://bugs.python.org/issue15121 closed by ezio.melotti #15158: Add support for multi-character delimiters in csv http://bugs.python.org/issue15158 closed by terry.reedy #15806: Add context manager for the "try: ... except: pass" pattern http://bugs.python.org/issue15806 closed by rhettinger #16004: Add `make touch` to 2.7 Makefile http://bugs.python.org/issue16004 closed by ezio.melotti #16471: upgrade to sphinx 1.1 http://bugs.python.org/issue16471 closed by terry.reedy #16643: Wrong documented default value for timefunc parameter in sched http://bugs.python.org/issue16643 closed by python-dev #16659: Pure Python implementation of random http://bugs.python.org/issue16659 closed by rhettinger #17047: Fix double double words words http://bugs.python.org/issue17047 closed by terry.reedy #17066: Fix test discovery for test_robotparser.py http://bugs.python.org/issue17066 closed by ezio.melotti #17099: Raise ValueError when __loader__ not defined for importlib.fin http://bugs.python.org/issue17099 closed by brett.cannon #17117: Update importlib.util.module_for_loader/set_loader to set when http://bugs.python.org/issue17117 closed by brett.cannon #17138: XPath error in xml.etree.ElementTree http://bugs.python.org/issue17138 closed by eli.bendersky #17176: Document imp.NullImporter is NOT used anymore by import http://bugs.python.org/issue17176 closed by brett.cannon #17222: py_compile.compile() explicitly sets st_mode for written files http://bugs.python.org/issue17222 closed by brett.cannon #17299: Test cPickle with real files http://bugs.python.org/issue17299 closed by serhiy.storchaka #17307: HTTP PUT request Example http://bugs.python.org/issue17307 closed by orsenthil #17332: typo in json docs - "convered" should be "converted" http://bugs.python.org/issue17332 closed by terry.reedy #17348: Unicode - encoding seems to be lost for inputs of unicode char http://bugs.python.org/issue17348 closed by terry.reedy #17351: Remove explicit "object" inheritance in Python 3 docs http://bugs.python.org/issue17351 closed by ezio.melotti #17368: Python version of JSON decoder does not work with object_pairs http://bugs.python.org/issue17368 closed by ezio.melotti #17370: PEP should note related PEPs http://bugs.python.org/issue17370 closed by barry #17376: TimedRotatingFileHandler documentation regarding 'Week day' la http://bugs.python.org/issue17376 closed by python-dev #17382: debugging with idle: current line not highlighted 
http://bugs.python.org/issue17382 closed by terry.reedy #17384: test_logging failures on Windows http://bugs.python.org/issue17384 closed by vinay.sajip #17385: Use deque instead of list the threading.Condition waiter queue http://bugs.python.org/issue17385 closed by rhettinger #17388: Providing invalid value to random.sample can result in incorre http://bugs.python.org/issue17388 closed by rhettinger #17392: Python installer for Windows packages wrong zipfile.py http://bugs.python.org/issue17392 closed by Simon.Wagner #17395: Wait for live children in test_multiprocessing http://bugs.python.org/issue17395 closed by ezio.melotti #17402: In mmap doc examples map() is shadowed http://bugs.python.org/issue17402 closed by ezio.melotti #17406: Upload Windows 9x/NT4 build http://bugs.python.org/issue17406 closed by ezio.melotti #17407: RotatingFileHandler issue when using multiple loggers instance http://bugs.python.org/issue17407 closed by vinay.sajip #17412: Windows make.bat fails on 2.7 http://bugs.python.org/issue17412 closed by terry.reedy #17414: timeit.timeit not in __all__ even though documented http://bugs.python.org/issue17414 closed by terry.reedy #17426: \0 in re.sub substitutes to space http://bugs.python.org/issue17426 closed by gvanrossum #17427: spam http://bugs.python.org/issue17427 closed by ezio.melotti From stefan at bytereef.org Sat Mar 16 10:17:50 2013 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 16 Mar 2013 10:17:50 +0100 Subject: [Python-Dev] [PEP 437] A DSL for specifying signatures, annotations and argument converters Message-ID: <20130316091750.GA24061@sleipnir.bytereef.org> This PEP hasn't been announced yet, so here's a short notice for all people (like myself) who are not at PyCon. This is the counter proposal for the argument parsing DSL from PEP 436: http://www.python.org/dev/peps/pep-0437/ Both PEPs were discussed at PyCon. The state of affairs is that a compromise is being worked upon and will be published by Larry in a revised PEP. Stefan Krah From ezio.melotti at gmail.com Sat Mar 16 22:09:08 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Sat, 16 Mar 2013 23:09:08 +0200 Subject: [Python-Dev] [Python-checkins] cpython (merge default -> default): Merge heads default. In-Reply-To: <3ZSvqs5jdLzSMT@mail.python.org> References: <3ZSvqs5jdLzSMT@mail.python.org> Message-ID: Hi, On Sat, Mar 16, 2013 at 10:08 PM, terry.reedy wrote: > http://hg.python.org/cpython/rev/9a2f4418e65a > changeset: 82699:9a2f4418e65a > parent: 82691:0a15a58ac4a1 > parent: 82695:533a60251b9d > user: Terry Jan Reedy > date: Sat Mar 16 16:08:12 2013 -0400 > summary: > Merge heads default. > > files: > Doc/library/functions.rst | 4 ++-- > 1 files changed, 2 insertions(+), 2 deletions(-) > You forgot a couple of merges here. If you look at the graph at http://hg.python.org/cpython/graph/9a2f4418e65a you will see that all the heads in the individual branches got merged, but then you forgot to merge 3.2 -> 3.3 -> default (i.e. steps 5 and 6 of http://bugs.python.org/issue14468#msg184130). 
Serhiy fixed that shortly after: http://hg.python.org/cpython/graph/5da005db8166 At least now that the worst case scenario that doesn't really happen often?[0] happened I can point to some actual graphs that will hopefully clarify why all these merges are necessary :) Best Regards, Ezio Melotti [0]: http://bugs.python.org/issue14468#msg184140 From ezio.melotti at gmail.com Sat Mar 16 23:02:35 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Sun, 17 Mar 2013 00:02:35 +0200 Subject: [Python-Dev] [Python-checkins] cpython (merge default -> default): Merge heads default. In-Reply-To: <5144E893.2070405@udel.edu> References: <3ZSvqs5jdLzSMT@mail.python.org> <5144E893.2070405@udel.edu> Message-ID: On Sat, Mar 16, 2013 at 11:48 PM, Terry Reedy wrote: > The FAQ says "... using hg merge 3.3 as usual." Serhiy's commit message > said 'Null merge', which to me is not 'as usual', as there are extra steps > given in the FAQ above. So, do he really do a 'null merge' and is that the > right thing to do in this situation? > It's probably just a matter of terminology. I assume he did a "usual merge" (i.e. "hg merge 3.2; hg ci -m '...';") and call it "null merge" because there was no code that changed. I prefer to use the term "null merge" when I explicitly revert the code before committing, and in this case I would have used "Merge with 3.x.". FWIW I might add http://bugs.python.org/issue15917 at some point, to prevent these situations. Best Regards, Ezio Melotti > I have no doubt the the extra merges are needed ;-). From glyph at twistedmatrix.com Sun Mar 17 01:58:10 2013 From: glyph at twistedmatrix.com (Glyph) Date: Sat, 16 Mar 2013 17:58:10 -0700 Subject: [Python-Dev] About issue 6560 In-Reply-To: <514253C1.9060209@v.loewis.de> References: <514253C1.9060209@v.loewis.de> Message-ID: On Mar 14, 2013, at 3:48 PM, Martin v. L?wis wrote: > Am 14.03.13 15:15, schrieb Ani Sinha: >> I was looking into a mechanism to get the aux fields from recvmsg() in >> python and I came across this issue. Looks like this feature was added >> in python 3.3. Is there any reason why this feature was not added for >> python 2.7? > > Most certainly: Python 2.7 (and thus Python 2) is feature-frozen; no > new features can be added to it. People wanting new features need to > port to Python 3. Or you can use Twisted: That module ought to have no dependencies outside of Twisted. We only use it for passing file descriptors between processes, but I believe it should be able to deal with whatever other types of auxiliary data that you need from recvmsg; if not, please file a bug (at ). -glyph From larry at hastings.org Sun Mar 17 08:14:04 2013 From: larry at hastings.org (Larry Hastings) Date: Sun, 17 Mar 2013 00:14:04 -0700 Subject: [Python-Dev] [PEP 437] A DSL for specifying signatures, annotations and argument converters In-Reply-To: <20130316091750.GA24061@sleipnir.bytereef.org> References: <20130316091750.GA24061@sleipnir.bytereef.org> Message-ID: <51456D3C.10608@hastings.org> On 03/16/2013 02:17 AM, Stefan Krah wrote: > Both PEPs were discussed at PyCon. The state of affairs is that a > compromise is being worked upon and will be published by Larry in > a revised PEP. I've pushed an update to PEP 436, the Argument Clinic PEP. It's now live on python.org. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... 
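Returning to the issue 6560 / recvmsg() question discussed earlier in this digest: on 3.3+ the ancillary data is available directly, and the SCM_RIGHTS case Glyph mentions (receiving file descriptors) looks roughly like the sketch below. It loosely follows the pattern shown in the socket module documentation; the helper name and buffer sizes are illustrative.

    import array
    import socket

    def recv_with_fds(sock, bufsize=4096, maxfds=4):
        """Read data plus any SCM_RIGHTS file descriptors sent with it."""
        fds = array.array("i")
        data, ancdata, msg_flags, address = sock.recvmsg(
            bufsize, socket.CMSG_LEN(maxfds * fds.itemsize))
        for cmsg_level, cmsg_type, cmsg_data in ancdata:
            if (cmsg_level == socket.SOL_SOCKET
                    and cmsg_type == socket.SCM_RIGHTS):
                # Keep only a whole number of native ints.
                usable = len(cmsg_data) - (len(cmsg_data) % fds.itemsize)
                fds.frombytes(cmsg_data[:usable])
        return data, list(fds)

    # Usage over an AF_UNIX socket pair (which is what SCM_RIGHTS requires):
    #   a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    #   b.sendmsg([b"payload"],
    #             [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
    #               array.array("i", [some_fd]))])
    #   data, fds = recv_with_fds(a)

On 2.7 the same job still needs a third-party route such as the Twisted support Glyph points to.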
URL: From benjamin at python.org Sun Mar 17 17:02:56 2013 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 17 Mar 2013 09:02:56 -0700 Subject: [Python-Dev] 2.7.4 is inevitable Message-ID: I am going to cut the 2.7.4 release branch next weekend (March 23, 24). Things which are breaking the buildbots at the point will be backed out. Owners of current release blockers will be poked, but nothing is going to hold up the release. The show must go on. Benjamin From christian at python.org Sun Mar 17 17:37:37 2013 From: christian at python.org (Christian Heimes) Date: Sun, 17 Mar 2013 17:37:37 +0100 Subject: [Python-Dev] Status of XML fixes Message-ID: <5145F151.3060302@python.org> Hello, I like to give an update on the XML vulnerability fixes. Brett has asked me a couple of days ago but I haven't had time to answer. I was/am busy with my daily job. Any attempt to fix the XML issues *will* change the behavior of the library and result into an incompatibility with older releases. Benjamin doesn't want to change the behavior of our XML libraries. IIRC Georg and Barry are +0. I think that we should keep the current and unsafe settings as default and add a simmple API to enable limitations and protections. What's available? ----------------- https://bitbucket.org/tiran/defusedexpat contains everything we need to fix the issues in the stdlib. All modifications to C code are available for all relevant Python versions. They have been tested on Linux and Windows, too. * modified expat library with checks and workarounds for entity expansion attacks. All fixes can be enabled or disabled by default at compile time. The default settings can also be configured globally (process wide, may be an issue for subinterpreters) and overwritten on the expat parser instance. * patched copies of pyexpat and _elementtree C extensions from Python 2.6, 2.7, 3.1, 3.2, 3.3 and 3.4 (a separate copy of each version). The patches provide the functions and attributes to modifiy the global and instance settings. * defusedexpat.py contains the patches for sax and dom parsers to disable external entity parsing. * http://bugs.python.org/issue17239 contains an old patch for the issues with a bunch of tests for each issue. What needs to be done? ---------------------- * agree on default settings: secure by default or backwards compatible by default? * review of the changes to expat, pyexpat and _elementtree. Antoine, Brett and Fred Drake have done some reviews. * design and implement an API to enable the protective restrictions. * documentation * perhaps more tests * finish the CVE reports In the mean time ... -------------------- https://pypi.python.org/pypi/defusedxml provides documentation, examples and fixes for all Python versions w/o any C extension. Christian From eliben at gmail.com Sun Mar 17 19:25:19 2013 From: eliben at gmail.com (Eli Bendersky) Date: Sun, 17 Mar 2013 11:25:19 -0700 Subject: [Python-Dev] Status of XML fixes In-Reply-To: <5145F151.3060302@python.org> References: <5145F151.3060302@python.org> Message-ID: I like to give an update on the XML vulnerability fixes. Brett has asked > me a couple of days ago but I haven't had time to answer. I was/am busy > with my daily job. > > Any attempt to fix the XML issues *will* change the behavior of the > library and result into an incompatibility with older releases. Benjamin > doesn't want to change the behavior of our XML libraries. IIRC Georg and > Barry are +0. 
I think that we should keep the current and unsafe > settings as default and add a simmple API to enable limitations and > protections. > > IMHO Benjamin is right, given that this attack has been known to exist since 2003. Moreover, as it appears that no changes whatsoever are going to make it into 2.7, I don't see why patching of 3.1, 3.2 and 3.3 is needed. As for 3.4, it can't hurt to add an opt-in option for a safe mode to the affected libraries. * review of the changes to expat, pyexpat and _elementtree. Antoine, > Brett and Fred Drake have done some reviews. > > I'll gladly review the _elementtree changes and can help with the expat & pyexpat changes as well. Until now I had the impression that the patches aren't ready for review yet. If they are, that's great. Do you have a patch in the issue tracker (so it can be reviewed with Rietveld)? ISTM the current form is just a file (say _elementtree.c) in your Bitbucket repo. Should that be just diffed with the trunk file to see the changes? Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sun Mar 17 20:00:19 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 17 Mar 2013 20:00:19 +0100 Subject: [Python-Dev] Status of XML fixes In-Reply-To: References: <5145F151.3060302@python.org> Message-ID: Eli Bendersky, 17.03.2013 19:25: > IMHO Benjamin is right, given that this attack has been known to exist > since 2003. Moreover, as it appears that no changes whatsoever are going to > make it into 2.7, I don't see why patching of 3.1, 3.2 and 3.3 is needed. > As for 3.4, it can't hurt to add an opt-in option for a safe mode to the > affected libraries. Why keep the libraries vulnerable for another year (3.4 final is expected for early 2014), if there is something we can do about them now? The fact that the attacks have been known for a decade doesn't mean an attacker will need another ten years to exploit them. Stefan From solipsis at pitrou.net Sun Mar 17 19:59:52 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 17 Mar 2013 19:59:52 +0100 Subject: [Python-Dev] Status of XML fixes References: <5145F151.3060302@python.org> Message-ID: <20130317195952.2bfa0dc8@pitrou.net> On Sun, 17 Mar 2013 20:00:19 +0100 Stefan Behnel wrote: > Eli Bendersky, 17.03.2013 19:25: > > IMHO Benjamin is right, given that this attack has been known to exist > > since 2003. Moreover, as it appears that no changes whatsoever are going to > > make it into 2.7, I don't see why patching of 3.1, 3.2 and 3.3 is needed. > > As for 3.4, it can't hurt to add an opt-in option for a safe mode to the > > affected libraries. > > Why keep the libraries vulnerable for another year (3.4 final is expected > for early 2014), if there is something we can do about them now? Well, Christian said that his stdlib patch wasn't ready yet. Regards Antoine. From eliben at gmail.com Sun Mar 17 21:03:21 2013 From: eliben at gmail.com (Eli Bendersky) Date: Sun, 17 Mar 2013 13:03:21 -0700 Subject: [Python-Dev] Status of XML fixes In-Reply-To: References: <5145F151.3060302@python.org> Message-ID: On Sun, Mar 17, 2013 at 12:00 PM, Stefan Behnel wrote: > Eli Bendersky, 17.03.2013 19:25: > > IMHO Benjamin is right, given that this attack has been known to exist > > since 2003. Moreover, as it appears that no changes whatsoever are going > to > > make it into 2.7, I don't see why patching of 3.1, 3.2 and 3.3 is needed. > > As for 3.4, it can't hurt to add an opt-in option for a safe mode to the > > affected libraries. 
> > Why keep the libraries vulnerable for another year (3.4 final is expected > for early 2014), if there is something we can do about them now? The fact > that the attacks have been known for a decade doesn't mean an attacker will > need another ten years to exploit them. > I'm using a conditional argument here. *If* we don't deem the changes important enough to go into 2.7, *then* they aren't important enough to go into 3.1 and 3.2; 3.3 is a question. That's because 2.7 is arguably more important in this respect, having no direct upgrade path, whereas for 3.x users the fix will be available with 3.4 anyway. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at bytereef.org Sun Mar 17 23:26:07 2013 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 17 Mar 2013 23:26:07 +0100 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <3ZTBVD26DlzR9T@mail.python.org> References: <3ZTBVD26DlzR9T@mail.python.org> Message-ID: <20130317222607.GA16540@sleipnir.bytereef.org> [PEP 436 revised syntax] While I like the syntax better and appreciate the option to condense the function declaration I still fear that the amount of implicitness will distract from what is important: programming in C. This applies especially if people start declaring converters using the [python] feature. So I hope that at least converters can be declared statically in a header file, like I suggested in PEP 437. A couple of comments: > As of CPython 3.3, builtin functions nearly always parse their arguments > with one of two functions: the original ``PyArg_ParseTuple()``, [1]_ and > the more modern ``PyArg_ParseTupleAndKeywords()``. [2]_ The former > only handles positional parameters; the latter also accommodates keyword > and keyword-only parameters, and is preferred for new code. What is the source for this? I seem to remember a discussion on python-ideas (but cannot find it now) where some developers preferred non-keyword functions for some use cases. For example it's strange to write div(x=10, y=3), or worse, div(y=3, x=10). Using positional-only arguments prevents this "feature". > /*[clinic] > os.stat as os_stat_fn -> stat result > > path: path_t(allow_fd=1) > Path to be examined; can be string, bytes, or open-file-descriptor int. I do not see where the C initialization or the cleanup are specified. Are they part of the converter specification? > /*[clinic] > curses.window.addch > > [ > x: int > X-coordinate. > > y: int > Y-coordinate. > ] The parameters appear to be in the wrong order. > The return annotation is also optional. If skipped, the arrow ("``->``") > must also be omitted. Why is it optional? Aren't type annotations important? > Clinic will ship with a number of built-in converters; new converters can > also be added dynamically. How are the converters specified? Inside the preprocessor source? Are initialization and cleanup part of the specification, e.g. is a converter represented by a class? I would prefer if the converters were in a header file, like I suggested in PEP 437. Any tool can read such a file and custom converters can be redeclared above their definition. > The default value is dynamically assigned, "live" in the generated C code, > and although it's specified as a Python value, it's translated into a native > C value in the generated C code. Few default values are permitted, owing to > this manual translation step. 
I think there should be a table that lists which values are converted and what the result of the conversion is. > ``[`` > Establishes the start of an optional "group" of parameters. > Note that "groups" may nest inside other "groups". > See `Functions With Positional-Only Parameters`_ below. I don't quite understand the terminology: Functions with the ``/`` are also "positional-only". Why not reserve this syntax exclusively for the legacy left-and-right optional case? > ``/`` > This hints to Argument Clinic that this function is performance-sensitive, > and that it's acceptable to forego supporting keyword parameters when parsing. > (In early implementations of Clinic, this will switch Clinic from generating > code using ``PyArg_ParseTupleAndKeywords`` to using ``PyArg_ParseTuple``. > The hope is that in the future there will be no appreciable speed difference, > rendering this syntax irrelevant and deprecated but harmless.) Here I would use "positional-only" and mention that the slash plays essentially the same role as the vertical bar in the existing syntax. If this isn't the intention, then I simply did not understand the paragraph. > types > > A list of strings representing acceptable Python types for this object. > There are also four strings which represent Python protocols: I don't quite follow: Aren't input types always specified by the converter function? > Argument Clinic also permits embedding Python code inside C files, which > is executed in-place when Argument Clinic processes the file. Embedded code > looks like this: The example in posixmodule.c takes up a lot of space and from the perspective of auditing the effects it's a little like following a longjmp. Stefan Krah From barry at python.org Mon Mar 18 04:48:23 2013 From: barry at python.org (Barry Warsaw) Date: Sun, 17 Mar 2013 20:48:23 -0700 Subject: [Python-Dev] Status of XML fixes In-Reply-To: <5145F151.3060302@python.org> References: <5145F151.3060302@python.org> Message-ID: <20130317204823.68c07a0e@anarchist> On Mar 17, 2013, at 05:37 PM, Christian Heimes wrote: >Any attempt to fix the XML issues *will* change the behavior of the >library and result into an incompatibility with older releases. Benjamin >doesn't want to change the behavior of our XML libraries. IIRC Georg and >Barry are +0. I think that we should keep the current and unsafe >settings as default and add a simmple API to enable limitations and >protections. I strongly believe that the decision must be the same for all stable versions. We can't impose the madness of version checks on people for them to know what to do. -Barry From v+python at g.nevcal.com Mon Mar 18 05:16:57 2013 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sun, 17 Mar 2013 21:16:57 -0700 Subject: [Python-Dev] Status of XML fixes In-Reply-To: <20130317204823.68c07a0e@anarchist> References: <5145F151.3060302@python.org> <20130317204823.68c07a0e@anarchist> Message-ID: <51469539.5030202@g.nevcal.com> On 3/17/2013 8:48 PM, Barry Warsaw wrote: > On Mar 17, 2013, at 05:37 PM, Christian Heimes wrote: > >> Any attempt to fix the XML issues *will* change the behavior of the >> library and result into an incompatibility with older releases. Benjamin >> doesn't want to change the behavior of our XML libraries. IIRC Georg and >> Barry are +0. I think that we should keep the current and unsafe >> settings as default and add a simmple API to enable limitations and >> protections. > I strongly believe that the decision must be the same for all stable versions. 
> We can't impose the madness of version checks on people for them to know what > to do. try: newSimpleXMLAPI() newapi = True except Exception: newapi = False -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.svetlov at gmail.com Mon Mar 18 06:00:55 2013 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sun, 17 Mar 2013 22:00:55 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <20130317222607.GA16540@sleipnir.bytereef.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> Message-ID: On Sun, Mar 17, 2013 at 3:26 PM, Stefan Krah wrote: > [PEP 436 revised syntax] > > While I like the syntax better and appreciate the option to condense the > function declaration I still fear that the amount of implicitness will > distract from what is important: programming in C. > > This applies especially if people start declaring converters using the > [python] feature. > > So I hope that at least converters can be declared statically in a header > file, like I suggested in PEP 437. > > > A couple of comments: > > >> As of CPython 3.3, builtin functions nearly always parse their arguments >> with one of two functions: the original ``PyArg_ParseTuple()``, [1]_ and >> the more modern ``PyArg_ParseTupleAndKeywords()``. [2]_ The former >> only handles positional parameters; the latter also accommodates keyword >> and keyword-only parameters, and is preferred for new code. > > What is the source for this? I seem to remember a discussion on python-ideas > (but cannot find it now) where some developers preferred non-keyword functions > for some use cases. > > For example it's strange to write div(x=10, y=3), or worse, div(y=3, x=10). > Using positional-only arguments prevents this "feature". IIRC objection was about functions like abs(5). If function has single and obvious argument why you need to name that parameter? The issue has related to documentation for existing one-argument functions only. > > >> /*[clinic] >> os.stat as os_stat_fn -> stat result >> >> path: path_t(allow_fd=1) >> Path to be examined; can be string, bytes, or open-file-descriptor int. > > I do not see where the C initialization or the cleanup are specified. Are > they part of the converter specification? > > >> /*[clinic] >> curses.window.addch >> >> [ >> x: int >> X-coordinate. >> >> y: int >> Y-coordinate. >> ] > > The parameters appear to be in the wrong order. > > >> The return annotation is also optional. If skipped, the arrow ("``->``") >> must also be omitted. > > Why is it optional? Aren't type annotations important? > > >> Clinic will ship with a number of built-in converters; new converters can >> also be added dynamically. > > How are the converters specified? Inside the preprocessor source? Are initialization > and cleanup part of the specification, e.g. is a converter represented by a class? > > I would prefer if the converters were in a header file, like I suggested in > PEP 437. Any tool can read such a file and custom converters can be redeclared > above their definition. > > >> The default value is dynamically assigned, "live" in the generated C code, >> and although it's specified as a Python value, it's translated into a native >> C value in the generated C code. Few default values are permitted, owing to >> this manual translation step. 
> > I think there should be a table that lists which values are converted and what > the result of the conversion is. > > >> ``[`` >> Establishes the start of an optional "group" of parameters. >> Note that "groups" may nest inside other "groups". >> See `Functions With Positional-Only Parameters`_ below. > > I don't quite understand the terminology: Functions with the ``/`` are also > "positional-only". Why not reserve this syntax exclusively for the legacy > left-and-right optional case? > > >> ``/`` >> This hints to Argument Clinic that this function is performance-sensitive, >> and that it's acceptable to forego supporting keyword parameters when parsing. >> (In early implementations of Clinic, this will switch Clinic from generating >> code using ``PyArg_ParseTupleAndKeywords`` to using ``PyArg_ParseTuple``. >> The hope is that in the future there will be no appreciable speed difference, >> rendering this syntax irrelevant and deprecated but harmless.) > > Here I would use "positional-only" and mention that the slash plays essentially > the same role as the vertical bar in the existing syntax. If this isn't the > intention, then I simply did not understand the paragraph. > > >> types >> >> A list of strings representing acceptable Python types for this object. >> There are also four strings which represent Python protocols: > > I don't quite follow: Aren't input types always specified by the converter > function? > > >> Argument Clinic also permits embedding Python code inside C files, which >> is executed in-place when Argument Clinic processes the file. Embedded code >> looks like this: > > The example in posixmodule.c takes up a lot of space and from the perspective > of auditing the effects it's a little like following a longjmp. > > > > Stefan Krah > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From barry at python.org Mon Mar 18 05:52:16 2013 From: barry at python.org (Barry Warsaw) Date: Sun, 17 Mar 2013 21:52:16 -0700 Subject: [Python-Dev] Status of XML fixes In-Reply-To: <51469539.5030202@g.nevcal.com> References: <5145F151.3060302@python.org> <20130317204823.68c07a0e@anarchist> <51469539.5030202@g.nevcal.com> Message-ID: <20130317215216.161280d3@anarchist> On Mar 17, 2013, at 09:16 PM, Glenn Linderman wrote: >try: > newSimpleXMLAPI() > newapi = True >except Exception: > newapi = False try: True except NameError: True = 1 False = 0 -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From benjamin at python.org Mon Mar 18 06:53:19 2013 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 17 Mar 2013 22:53:19 -0700 Subject: [Python-Dev] Status of XML fixes In-Reply-To: <20130317215216.161280d3@anarchist> References: <5145F151.3060302@python.org> <20130317204823.68c07a0e@anarchist> <51469539.5030202@g.nevcal.com> <20130317215216.161280d3@anarchist> Message-ID: 2013/3/17 Barry Warsaw : > On Mar 17, 2013, at 09:16 PM, Glenn Linderman wrote: > >>try: >> newSimpleXMLAPI() >> newapi = True >>except Exception: >> newapi = False > > try: > True > except NameError: > True = 1 > False = 0 > > -Barry I understand why your bedtime is 21:30. 
:) -- Regards, Benjamin From larry at hastings.org Mon Mar 18 08:16:56 2013 From: larry at hastings.org (Larry Hastings) Date: Mon, 18 Mar 2013 00:16:56 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <20130317222607.GA16540@sleipnir.bytereef.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> Message-ID: <5146BF68.8030903@hastings.org> On 03/17/2013 03:26 PM, Stefan Krah wrote: > While I like the syntax better and appreciate the option to condense the > function declaration I still fear that the amount of implicitness will > distract from what is important: programming in C. > > This applies especially if people start declaring converters using the > [python] feature. > > So I hope that at least converters can be declared statically in a header > file, like I suggested in PEP 437. The Argument Clinic prototype is written in Python; I don't know how "declared static in a header file" applies to a Python implementation. Currently the converters are declared directly in clinic.py, somewhere in the middle. For what it's worth, I think the new syntax (dictated more-or-less by Guido and Nick) makes things slightly less explicit. In my original syntax, you declared the *type* of each parameter in C, and the flags dictated how Clinic should perform the conversion. In the new syntax, you declare the *converter function* for each parameter, and the type of the resulting C variable is inferred. > A couple of comments: >> As of CPython 3.3, builtin functions nearly always parse their arguments >> with one of two functions: the original ``PyArg_ParseTuple()``, [1]_ and >> the more modern ``PyArg_ParseTupleAndKeywords()``. [2]_ The former >> only handles positional parameters; the latter also accommodates keyword >> and keyword-only parameters, and is preferred for new code. > What is the source for this? I seem to remember a discussion on python-ideas > (but cannot find it now) where some developers preferred non-keyword functions > for some use cases. > > For example it's strange to write div(x=10, y=3), or worse, div(y=3, x=10). > Using positional-only arguments prevents this "feature". (I don't know to what "div" function you refer, but its arguments are poorly named.) I thought this was obviously true. But realized I didn't have any justification for it. So I checked into it. Thomas Wouters found me a thread from python-ideas from last March about changing range() to support keyword-only parameters. Guido participated in the thread, and responded with this pronouncement: [...] let's forget about "fixing" range. And that's final. http://mail.python.org/pipermail/python-ideas/2012-March/014380.html I double-checked with him about this and he confirmed: positional parameters are here to stay. This has some consequences. For example, inspect.getfullargspec, inspect.Signature, and indeed types.FunctionObject and types.CodeObject have no currently defined mechanism for communicating that a parameter is positional-only. I strongly assert we need such a mechanism, though it could be as simple as having the parameter name be an empty string or None. Anyway, it turns out this "keyword-only is preferred" was a misconception on my part. Python supports, and will continue to support, positional-only parameters as part of the language. Currently it isn't possible to define functions in Python that have them. 
But builtins have them, and will continue to have them, and therefore Argument Clinic needs to support them. I'll amend my PEP soonish to reflect this. Specifically the semantics of the /, [, and ] lines in the parameter section. >> path: path_t(allow_fd=1) >> >> I do not see where the C initialization or the cleanup are specified. Are >> they part of the converter specification? The extension interface isn't yet well-enough defined to be in the PEP. But yes, the generation of cleanup code will be part of the job of the conversion functions. I'm not sure I have any provision for initialization apart from assignment, but I agree that it should have one. >> curses.window.addch > The parameters appear to be in the wrong order. Fixed. >> The return annotation is also optional. If skipped, the arrow ("``->``") >> must also be omitted. > Why is it optional? Aren't type annotations important? I'm not aware of any builtins that have annotations. In fact, I'm not sure there's any place in the API where you can attach annotations to a builtin. I expect this will change when we add reflection support for signatures, though I doubt many builtins will use the facility. This raises an interesting point, though: Argument Clinic provides a place to provide a function return annotation, but has never supported per-parameter annotations. I'll meditate on this and see if I can come up with a reasonable amendment to the current syntax. > How are the converters specified? Inside the preprocessor source? Are initialization > and cleanup part of the specification, e.g. is a converter represented by a class? Respectively: in Python, yes, and yes. The current prototype has an experimental extension interface; this is used to add support for "path_t" in Modules/posixmodule.c. It supports initialization and cleanup. > I think there should be a table that lists which values are converted and what > the result of the conversion is. The documentation is TBD. In the current prototype, it explicitly turns "None" into "Py_None", and otherwise flushes whatever you specified as the default value through from the DSL to the generated C code. >> types >> >> A list of strings representing acceptable Python types for this object. >> There are also four strings which represent Python protocols: > I don't quite follow: Aren't input types always specified by the converter > function? I'm not sure yet. The purpose of parameterizing the converter functions is to cut down on having eleventy-billion (in other words, "forty") different converter functions, some with only slight differences. For example, 'C' accepts a str of length 1, whereas 'c' accepts a bytes or bytearray of length 1. Should this be two converter functions, or one converter taking a list of types? I'll see what feels better. > The example in posixmodule.c takes up a lot of space and from the perspective > of auditing the effects it's a little like following a longjmp. I got strong feedback that I needed more examples. That was the logical place for them. Can you suggest a better spot, or spots? //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Mon Mar 18 09:48:37 2013 From: larry at hastings.org (Larry Hastings) Date: Mon, 18 Mar 2013 01:48:37 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. 
In-Reply-To: <5146BF68.8030903@hastings.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> Message-ID: <5146D4E5.1020302@hastings.org> On 03/18/2013 12:16 AM, Larry Hastings wrote: > I'll amend my PEP soonish to reflect this. Specifically the semantics > of the /, [, and ] lines in the parameter section. I've just posted this revision. I'd like to draw everyone's attention to the top entry in the Notes section, reproduced below: * The DSL currently makes no provision for specifying per-parameter type annotations. This is something explicitly supported in Python; it should be supported for builtins too, once we have reflection support. It seems to me that the syntax for parameter lines--dictated by Guido--suggests conversion functions are themselves type annotations. This makes intuitive sense. But my thought experiments in how to convert the conversion function specification into a per-parameter type annotation ranged from obnoxious to toxic; I don't think that line of thinking will bear fruit. Instead, I think we need to add a new syntax allowing functions to explicitly specify a per-parameter type annotation. The problem: what should that syntax be? I've only had one idea so far, and I don't find it all that appealing: allow a optional second colon on the parameter line, and the type annotation would be specified... somewhere, either between the first and second colons, or between the second colon and the (optional) default. Also, I don't think this could specify any arbitrary Python value. I suspect it would suffer heavy restrictions on what types and literals it could use. Perhaps the best solution would be to store the exact string in static data, and have Python evaluate it on demand? If so, it would be safest to restrict it to Python literal syntax, permitting no function calls (even to builtins). Syntax suggestions welcome, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Mon Mar 18 10:29:49 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 18 Mar 2013 10:29:49 +0100 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <5146BF68.8030903@hastings.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> Message-ID: <80206D7B-ACAB-4108-BC13-D6AC2BB2E7CB@mac.com> On 18 Mar, 2013, at 8:16, Larry Hastings wrote: > > This has some consequences. For example, inspect.getfullargspec, inspect.Signature, and indeed types.FunctionObject and types.CodeObject have no currently defined mechanism for communicating that a parameter is positional-only. I strongly assert we need such a mechanism, though it could be as simple as having the parameter name be an empty string or None. inspect.Signature does have support for positional-only arguments, they have inspect.Parameter.POSITIONAL_ONLY as their kind. The others probably don't have support for this kind of parameters because there is no Python syntax for creating them. Ronald From stefan at bytereef.org Mon Mar 18 11:13:11 2013 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 18 Mar 2013 11:13:11 +0100 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. 
In-Reply-To: <5146BF68.8030903@hastings.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> Message-ID: <20130318101311.GA21364@sleipnir.bytereef.org> Larry Hastings wrote: > So I hope that at least converters can be declared statically in a header > file, like I suggested in PEP 437. > > > The Argument Clinic prototype is written in Python; I don't know how "declared > static in a header file" applies to a Python implementation. Currently the > converters are declared directly in clinic.py, somewhere in the middle. It applies in the same way to a Python implementation as declaring the DSL comment blocks in a C file applies to a Python implementation. This is exactly the same. 1) I think that third party tools should be able to extract *all* required information from the DSL only. 2) After writing a new custom converter, I'd rather edit a header file and not the preprocessor source. 3) Likewise, I'd rather edit a header file than inserting a magic [python] block into the C source that registers the required information with the preprocessor in a completely implementation defined way. > > The example in posixmodule.c takes up a lot of space and from the perspective > of auditing the effects it's a little like following a longjmp. > > > I got strong feedback that I needed more examples. That was the logical place > for them. Can you suggest a better spot, or spots? I'm concerned about the whole concept (see above). Stefan Krah From stefan at bytereef.org Mon Mar 18 11:36:43 2013 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 18 Mar 2013 11:36:43 +0100 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <5146D4E5.1020302@hastings.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <5146D4E5.1020302@hastings.org> Message-ID: <20130318103643.GA21550@sleipnir.bytereef.org> Larry Hastings wrote: > * The DSL currently makes no provision for specifying per-parameter > type annotations. This is something explicitly supported in Python; > it should be supported for builtins too, once we have reflection support. > > It seems to me that the syntax for parameter lines--dictated by > Guido--suggests conversion functions are themselves type annotations. > This makes intuitive sense. Really, did you read PEP 437? It's all in there. > But my thought experiments in how to convert the conversion function > specification into a per-parameter type annotation ranged from obnoxious > to toxic; I don't think that > line of thinking will bear fruit. Did you look at the patch that I posted in issue #16612? It's already implemented: $ ./printsemant Tools/preprocess/testcases/posix_stat.c PROGRAM[ SOURCE[...], DEFINE[ CNAME posix_stat, SPEC[ DECLARATION { fun_fqname = os.stat, fun_name = stat, fun_cname = posix_stat, fun_kind = Keywords, fun_params = [ { param_name = path, param_type = [bytes, int, str], <== here it is param_default = NONE, param_kind = (PosOrKwd, Required), param_conv = path_converter, param_parseargs = [ ConvArg { arg_name = path_converter, arg_type = int (*converter)(PyObject *, void *) arg_use_ptr = false }, MainArg { arg_name = path, arg_type = path_t, arg_use_ptr = true }]}, [...] 
Stefan Krah From stefan at bytereef.org Mon Mar 18 12:05:52 2013 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 18 Mar 2013 12:05:52 +0100 Subject: [Python-Dev] [Python-checkins] peps: Update for 436, explicitly supporting positional parameters forever, amen. In-Reply-To: <3ZTrcp6hZCzSLM@mail.python.org> References: <3ZTrcp6hZCzSLM@mail.python.org> Message-ID: <20130318110552.GA21900@sleipnir.bytereef.org> larry.hastings wrote: > + Establishes that all the *proceeding* arguments are > + positional-only. For now, Argument Clinic does not > + support functions with both positional-only and > + non-positional-only arguments; therefore, if ``/`` > + is specified for a function, currently it must always > + be after the last parameter. Also, Argument Clinic > + does not currently support default values for > + positional-only parameters. > + > +(The semantics of ``/`` follow a syntax for positional-only > +parameters in Python once proposed by Guido. [5]_ ) I think the entire PEP would be easier to understand if the main sections only contained the envisaged end result and all current preprocessor deficiencies were listed in a single isolated section. Stefan Krah From ndbecker2 at gmail.com Mon Mar 18 14:50:17 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 18 Mar 2013 09:50:17 -0400 Subject: [Python-Dev] can't assign to function call Message-ID: def F(x): return x x = 2 F(x) = 3 F(x) = 3 SyntaxError: can't assign to function call Do we really need this restriction? There do exist other languages without it. From steve at pearwood.info Mon Mar 18 15:03:47 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 19 Mar 2013 01:03:47 +1100 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: Message-ID: <51471EC3.4020101@pearwood.info> On 19/03/13 00:50, Neal Becker wrote: > def F(x): > return x > > x = 2 > F(x) = 3 > > F(x) = 3 > SyntaxError: can't assign to function call > > Do we really need this restriction? There do exist other languages without it. What meaning would you give to "F(x) = 3", and why? -- Steven From jsbueno at python.org.br Mon Mar 18 15:04:51 2013 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Mon, 18 Mar 2013 11:04:51 -0300 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: Message-ID: On 18 March 2013 10:50, Neal Becker wrote: > def F(x): > return x > > x = 2 > F(x) = 3 > > F(x) = 3 > SyntaxError: can't assign to function call > > Do we really need this restriction? There do exist other languages without it. What? I mean...what are you even talking about? Assignments are to "names" - names are not Python objects and it is not something that can be returned from a function call. If you are meaning mathematical equation like functionality, I recommend you to try "SymPy" - the Library for symbolic mathematics. I can't make sense of what you want to perform by "assigning to a function call", and given the time without a reply to this e-mail, I think I am not the only one there. 
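(For completeness, the SymPy reading of "F(x) = 3" mentioned above would look roughly like this -- an untested sketch, with purely illustrative names:)

    # Untested sketch: treat F(x) = 3 as an equation to be solved for x,
    # rather than as an assignment to a function call.
    from sympy import Eq, solve, symbols

    def F(x):
        return x

    x = symbols('x')
    print(solve(Eq(F(x), 3), x))   # -> [3]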
js -><- > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br From rosuav at gmail.com Mon Mar 18 15:23:07 2013 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 19 Mar 2013 01:23:07 +1100 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 12:50 AM, Neal Becker wrote: > def F(x): > return x > > x = 2 > F(x) = 3 > > F(x) = 3 > SyntaxError: can't assign to function call > > Do we really need this restriction? There do exist other languages without it. The languages that permit you to assign to a function call all have some notion of a reference type. In C++, for instance, you can return a reference to an object, and assigning to the function call achieves the same thing as assigning to the referent. C has similar semantics with pointers; you can dereference a returned pointer: int somevalue; int *F() {return &somevalue;} *F() = 5; /* will assign to somevalue */ With Python, there are no pointers, there are no variables. But you can do something somewhat similar: >>> x = [0] >>> def F(): return x >>> F()[0]=3; >>> x[0] 3 If you think of x as a pointer to the value x[0], then Python lets you "dereference" the function return value. But this is fiddling with terminology; the concept of assigning to a function return value doesn't really make sense in Python. Further discussion on exactly _why_ this is the case can be found on python-list's or python-tutor's archives, such as this excellent post by Steven D'Aprano: http://mail.python.org/pipermail/tutor/2010-December/080505.html TLDR: Python != C. :) ChrisA From python-dev at masklinn.net Mon Mar 18 15:44:44 2013 From: python-dev at masklinn.net (Xavier Morel) Date: Mon, 18 Mar 2013 15:44:44 +0100 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: Message-ID: <9D32D1B2-4C08-4704-AC8E-F44FFC3203D1@masklinn.net> On 2013-03-18, at 15:23 , Chris Angelico wrote: > On Tue, Mar 19, 2013 at 12:50 AM, Neal Becker wrote: >> def F(x): >> return x >> >> x = 2 >> F(x) = 3 >> >> F(x) = 3 >> SyntaxError: can't assign to function call >> >> Do we really need this restriction? There do exist other languages without it. > > The languages that permit you to assign to a function call all have > some notion of a reference type. Alternatively they're functional language defining "match cases" e.g. in Haskell a function is defined as foo a b c = someOperation a b c which is functionally equivalent to Python's def foo(a, b, c): return someOperation(a, b, c) From christian at python.org Mon Mar 18 15:54:34 2013 From: christian at python.org (Christian Heimes) Date: Mon, 18 Mar 2013 15:54:34 +0100 Subject: [Python-Dev] Status of XML fixes In-Reply-To: References: <5145F151.3060302@python.org> Message-ID: Am 17.03.2013 19:25, schrieb Eli Bendersky: > I'll gladly review the _elementtree changes and can help with the expat > & pyexpat changes as well. Until now I had the impression that the > patches aren't ready for review yet. If they are, that's great. The modifications to expat, pyexpat and _elementtree are available for weeks. I just hadn't have time to create proper patches yet. > Do you have a patch in the issue tracker (so it can be reviewed with > Rietveld)? ISTM the current form is just a file (say _elementtree.c) in > your Bitbucket repo. 
Should that be just diffed with the trunk file to > see the changes? I have pushed all changes from defusedexpat to a clone of Python's hg repository. You can find the clone at https://bitbucket.org/tiran/xmlbomb/ . The repository also contains a quick draft for a XML security API. https://bitbucket.org/tiran/xmlbomb/commits/c033abd0f7747c5b215e1b32f90372dd96e397ba I have to port tests from my other branch and add tests for the new API, too. Christian From christian at python.org Mon Mar 18 16:00:21 2013 From: christian at python.org (Christian Heimes) Date: Mon, 18 Mar 2013 16:00:21 +0100 Subject: [Python-Dev] Status of XML fixes In-Reply-To: <20130317195952.2bfa0dc8@pitrou.net> References: <5145F151.3060302@python.org> <20130317195952.2bfa0dc8@pitrou.net> Message-ID: <51472C05.1050806@python.org> Am 17.03.2013 19:59, schrieb Antoine Pitrou: >> Why keep the libraries vulnerable for another year (3.4 final is expected >> for early 2014), if there is something we can do about them now? > > Well, Christian said that his stdlib patch wasn't ready yet. The patch is > 90% finished. All the hard work is already done. Christian From hrvoje.niksic at avl.com Mon Mar 18 16:01:12 2013 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Mon, 18 Mar 2013 16:01:12 +0100 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: Message-ID: <51472C38.9090408@avl.com> On 03/18/2013 03:23 PM, Chris Angelico wrote: > The languages that permit you to assign to a function call all have > some notion of a reference type. Assigning to function calls is orthogonal to reference types. For example, Python manages assignment to subscripts without having references just fine: val = obj[index] # val = obj.__getitem__(index) obj[index] = val # obj.__setitem__(index, val) In analogy with that, Python could implement what looks like assignment to function call like this: val = f(arg) # val = f.__call__(arg) f(arg) = val # f.__setcall__(arg, val) I am not arguing that this should be added, I'm only pointing out that Python's object customization is not fundamentally at odds with assignment to function calls. Having said that, I am in fact arguing that Python doesn't need them. All C++ uses of operator() overloads can be implemented with the subscript operator. Even if one needs more different assignments than there are operators, Python can provide it as easily as C++. For example, on std::vector::operator[] provides access to the container without error checking, and std::vector::at() checks bounds: vec[i] = val // no error checking vec.at(i) = val // error checking This is trivially translated to Python as: vec[i] = val # primary functionality, use __setitem__ vec.at[i] = val # secondary functionality, __setitem__ on a proxy From steve at pearwood.info Mon Mar 18 16:40:37 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 19 Mar 2013 02:40:37 +1100 Subject: [Python-Dev] can't assign to function call In-Reply-To: <51472C38.9090408@avl.com> References: <51472C38.9090408@avl.com> Message-ID: <51473575.7050406@pearwood.info> On 19/03/13 02:01, Hrvoje Niksic wrote: > On 03/18/2013 03:23 PM, Chris Angelico wrote: >> The languages that permit you to assign to a function call all have >> some notion of a reference type. > > Assigning to function calls is orthogonal to reference types. 
For example, Python manages assignment to subscripts without having references just fine: > > val = obj[index] # val = obj.__getitem__(index) > obj[index] = val # obj.__setitem__(index, val) > > In analogy with that, Python could implement what looks like assignment to function call like this: > > val = f(arg) # val = f.__call__(arg) > f(arg) = val # f.__setcall__(arg, val) That's all very well, but what would it do? It's not enough to say that the syntax could exist, we also need to have semantics. What's the use-case here? (That question is mostly aimed at the original poster.) Aside: I'd reverse the order of the arg, val in any such hypothetical __setcall__, so as to support functions with zero or more arguments: f(*args, **kwargs) = val <=> f.__setcall__(val, *args, **kwargs) -- Steven From ncoghlan at gmail.com Mon Mar 18 16:50:08 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 08:50:08 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <20130318101311.GA21364@sleipnir.bytereef.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <20130318101311.GA21364@sleipnir.bytereef.org> Message-ID: On Mon, Mar 18, 2013 at 3:13 AM, Stefan Krah wrote: > Larry Hastings wrote: >> So I hope that at least converters can be declared statically in a header >> file, like I suggested in PEP 437. >> >> >> The Argument Clinic prototype is written in Python; I don't know how "declared >> static in a header file" applies to a Python implementation. Currently the >> converters are declared directly in clinic.py, somewhere in the middle. > > It applies in the same way to a Python implementation as declaring the > DSL comment blocks in a C file applies to a Python implementation. This > is exactly the same. > > 1) I think that third party tools should be able to extract *all* required > information from the DSL only. > > 2) After writing a new custom converter, I'd rather edit a header file and > not the preprocessor source. > > 3) Likewise, I'd rather edit a header file than inserting a magic [python] > block into the C source that registers the required information with > the preprocessor in a completely implementation defined way. We didn't spend much time on this (we were focused on the per-function DSL), but I agree a DSL for converters would be highly desirable, and quite like the one in PEP 437. It would require some tweaks to correctly handle the converter parameterisation (for example, in the "es#" case, "encoding" is an input to the converter rather than an output. This is why PEP 436 now allows a callable notation for type converter references) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Mar 18 16:57:05 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 08:57:05 -0700 Subject: [Python-Dev] [Python-checkins] peps: Update for 436, explicitly supporting positional parameters forever, amen. In-Reply-To: <3ZTrcp6hZCzSLM@mail.python.org> References: <3ZTrcp6hZCzSLM@mail.python.org> Message-ID: On Mon, Mar 18, 2013 at 1:47 AM, larry.hastings wrote: > Notes / TBD > =========== > > +* The DSL currently makes no provision for specifying per-parameter > + type annotations. This is something explicitly supported in Python; > + it should be supported for builtins too, once we have reflection support. 
> + > + It seems to me that the syntax for parameter lines--dictated by > + Guido--suggests conversion functions are themselves type annotations. > + This makes intuitive sense. But my thought experiments in how to > + convert the conversion function specification into a per-parameter > + type annotation ranged from obnoxious to toxic; I don't think that > + line of thinking will bear fruit. > + > + Instead, I think wee need to add a new syntax allowing functions > + to explicitly specify a per-parameter type annotation. The problem: > + what should that syntax be? I've only had one idea so far, and I > + don't find it all that appealing: allow a optional second colon > + on the parameter line, and the type annotation would be specified... > + somewhere, either between the first and second colons, or between > + the second colon and the (optional) default. > + > + Also, I don't think this could specify any arbitrary Python value. > + I suspect it would suffer heavy restrictions on what types and > + literals it could use. Perhaps the best solution would be to > + store the exact string in static data, and have Python evaluate > + it on demand? If so, it would be safest to restrict it to Python > + literal syntax, permitting no function calls (even to builtins). > + I think the hack we're using for the default-as-shown-in-Python will work here as well: use the converter parameterisation notation. Then "pydefault" (I think that is a better name than the current "default") and "pynote" would control what is shown for the conversion in the first line of the docstring and in any future introspection support. If either is not given, then the C default would be passed through as the Python default and the annotation would be left blank. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From skip at pobox.com Mon Mar 18 16:57:03 2013 From: skip at pobox.com (Skip Montanaro) Date: Mon, 18 Mar 2013 10:57:03 -0500 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 8:50 AM, Neal Becker wrote: > def F(x): > return x > > x = 2 > F(x) = 3 > > F(x) = 3 > SyntaxError: can't assign to function call > > Do we really need this restriction? There do exist other languages without it. I think this belongs on python-ideas before launching into discussions of syntax and semantics on python-dev. Skip From ncoghlan at gmail.com Mon Mar 18 17:28:26 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 09:28:26 -0700 Subject: [Python-Dev] [PEP 437] A DSL for specifying signatures, annotations and argument converters In-Reply-To: <51456D3C.10608@hastings.org> References: <20130316091750.GA24061@sleipnir.bytereef.org> <51456D3C.10608@hastings.org> Message-ID: On Sun, Mar 17, 2013 at 12:14 AM, Larry Hastings wrote: > On 03/16/2013 02:17 AM, Stefan Krah wrote: > > Both PEPs were discussed at PyCon. The state of affairs is that a > compromise is being worked upon and will be published by Larry in > a revised PEP. > > > I've pushed an update to PEP 436, the Argument Clinic PEP. It's now live on > python.org. Thanks for that. A few comments. * I'm confused by the leading table - is it possible/expected to have a module declaration, class declaration *and* function declaration in the same block? If not, it seems more appropriate to have 3 tables, or else simplify the initial table to omit the declaration details, and explain the available options later. 
* Is it possible to use "module_name.class_name.method_name" when declaring a function that will be used as a method in a class? * To match the behaviour of Python functions, function docstrings should also be optional in Argument Clinic. We'll always include at least the function prototype, even if no other content is specified. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From hrvoje.niksic at avl.com Mon Mar 18 17:36:17 2013 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Mon, 18 Mar 2013 17:36:17 +0100 Subject: [Python-Dev] can't assign to function call In-Reply-To: <51473575.7050406@pearwood.info> References: <51472C38.9090408@avl.com> <51473575.7050406@pearwood.info> Message-ID: <51474281.7060501@avl.com> On 03/18/2013 04:40 PM, Steven D'Aprano wrote: >> In analogy with that, Python could implement what looks like assignment to function call like this: >> >> val = f(arg) # val = f.__call__(arg) >> f(arg) = val # f.__setcall__(arg, val) > > That's all very well, but what would it do? It's not enough to say > that the syntax could exist, we also need to have semantics. I am not the best person to answer because I go on to argue that this syntax is not needed in Python at all (anything it can do can be implemented with __setitem__ at no loss of clarity). Still, if such a feature existed in Python, I imagine people would use it to set the same resource that the function obtains, where such a thing is applicable. > Aside: I'd reverse the order of the arg, val in any such hypothetical > __setcall__, so as to support functions with zero or more arguments: > > f(*args, **kwargs) = val <=> f.__setcall__(val, *args, **kwargs) That would be a better design, I agree. From storchaka at gmail.com Mon Mar 18 18:25:02 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 18 Mar 2013 19:25:02 +0200 Subject: [Python-Dev] can't assign to function call In-Reply-To: <51473575.7050406@pearwood.info> References: <51472C38.9090408@avl.com> <51473575.7050406@pearwood.info> Message-ID: On 18.03.13 17:40, Steven D'Aprano wrote: > On 19/03/13 02:01, Hrvoje Niksic wrote: >> Assigning to function calls is orthogonal to reference types. For >> example, Python manages assignment to subscripts without having >> references just fine: >> >> val = obj[index] # val = obj.__getitem__(index) >> obj[index] = val # obj.__setitem__(index, val) >> >> In analogy with that, Python could implement what looks like >> assignment to function call like this: >> >> val = f(arg) # val = f.__call__(arg) >> f(arg) = val # f.__setcall__(arg, val) > > That's all very well, but what would it do? It's not enough to say that > the syntax could exist, we also need to have semantics. What's the > use-case here? (That question is mostly aimed at the original poster.) Python could use parenthesis instead of brackets for indexing and a dictionary lookup. However it is too late to discuss this idea. From guido at python.org Mon Mar 18 19:02:49 2013 From: guido at python.org (Guido van Rossum) Date: Mon, 18 Mar 2013 11:02:49 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. 
In-Reply-To: <20130318103643.GA21550@sleipnir.bytereef.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <5146D4E5.1020302@hastings.org> <20130318103643.GA21550@sleipnir.bytereef.org> Message-ID: On Mon, Mar 18, 2013 at 3:36 AM, Stefan Krah wrote: > Larry Hastings wrote: >> * The DSL currently makes no provision for specifying per-parameter >> type annotations. This is something explicitly supported in Python; >> it should be supported for builtins too, once we have reflection support. >> >> It seems to me that the syntax for parameter lines--dictated by >> Guido--suggests conversion functions are themselves type annotations. >> This makes intuitive sense. > > Really, did you read PEP 437? It's all in there. This attitude is unhelpful. Please stop being outright hostile. If you want to have any influence on the outcome at all, consider looking into compromises. >> But my thought experiments in how to convert the conversion function >> specification into a per-parameter type annotation ranged from obnoxious >> to toxic; I don't think that >> line of thinking will bear fruit. > > Did you look at the patch that I posted in issue #16612? It's already > implemented: > > $ ./printsemant Tools/preprocess/testcases/posix_stat.c > PROGRAM[ > SOURCE[...], > DEFINE[ > CNAME posix_stat, > SPEC[ > DECLARATION > { fun_fqname = os.stat, > fun_name = stat, > fun_cname = posix_stat, > fun_kind = Keywords, > fun_params = [ > { param_name = path, > param_type = [bytes, int, str], <== here it is > param_default = NONE, > param_kind = (PosOrKwd, Required), > param_conv = path_converter, > param_parseargs = [ > ConvArg { arg_name = path_converter, > arg_type = int (*converter)(PyObject *, void *) > arg_use_ptr = false }, > MainArg { arg_name = path, > arg_type = path_t, > arg_use_ptr = true }]}, > [...] I can assure you nobody downloaded your binaries. The security implications are just too scary. -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Mar 18 18:57:56 2013 From: guido at python.org (Guido van Rossum) Date: Mon, 18 Mar 2013 10:57:56 -0700 Subject: [Python-Dev] can't assign to function call In-Reply-To: References: <51472C38.9090408@avl.com> <51473575.7050406@pearwood.info> Message-ID: Move. This. Thread. Out. Of. Python-Dev. Now. (python-ideas is the right place.) On Mon, Mar 18, 2013 at 10:25 AM, Serhiy Storchaka wrote: > On 18.03.13 17:40, Steven D'Aprano wrote: >> >> On 19/03/13 02:01, Hrvoje Niksic wrote: >>> >>> Assigning to function calls is orthogonal to reference types. For >>> example, Python manages assignment to subscripts without having >>> references just fine: >>> >>> val = obj[index] # val = obj.__getitem__(index) >>> obj[index] = val # obj.__setitem__(index, val) >>> >>> In analogy with that, Python could implement what looks like >>> assignment to function call like this: >>> >>> val = f(arg) # val = f.__call__(arg) >>> f(arg) = val # f.__setcall__(arg, val) >> >> >> That's all very well, but what would it do? It's not enough to say that >> the syntax could exist, we also need to have semantics. What's the >> use-case here? (That question is mostly aimed at the original poster.) > > > Python could use parenthesis instead of brackets for indexing and a > dictionary lookup. However it is too late to discuss this idea. 
> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Mar 18 19:04:39 2013 From: guido at python.org (Guido van Rossum) Date: Mon, 18 Mar 2013 11:04:39 -0700 Subject: [Python-Dev] [Python-checkins] peps: Update for 436, explicitly supporting positional parameters forever, amen. In-Reply-To: References: <3ZTrcp6hZCzSLM@mail.python.org> Message-ID: On Mon, Mar 18, 2013 at 8:57 AM, Nick Coghlan wrote: > On Mon, Mar 18, 2013 at 1:47 AM, larry.hastings > wrote: >> Notes / TBD >> =========== >> >> +* The DSL currently makes no provision for specifying per-parameter >> + type annotations. This is something explicitly supported in Python; >> + it should be supported for builtins too, once we have reflection support. >> + >> + It seems to me that the syntax for parameter lines--dictated by >> + Guido--suggests conversion functions are themselves type annotations. >> + This makes intuitive sense. But my thought experiments in how to >> + convert the conversion function specification into a per-parameter >> + type annotation ranged from obnoxious to toxic; I don't think that >> + line of thinking will bear fruit. >> + >> + Instead, I think wee need to add a new syntax allowing functions >> + to explicitly specify a per-parameter type annotation. The problem: >> + what should that syntax be? I've only had one idea so far, and I >> + don't find it all that appealing: allow a optional second colon >> + on the parameter line, and the type annotation would be specified... >> + somewhere, either between the first and second colons, or between >> + the second colon and the (optional) default. >> + >> + Also, I don't think this could specify any arbitrary Python value. >> + I suspect it would suffer heavy restrictions on what types and >> + literals it could use. Perhaps the best solution would be to >> + store the exact string in static data, and have Python evaluate >> + it on demand? If so, it would be safest to restrict it to Python >> + literal syntax, permitting no function calls (even to builtins). >> + > > I think the hack we're using for the default-as-shown-in-Python will > work here as well: use the converter parameterisation notation. > > Then "pydefault" (I think that is a better name than the current > "default") and "pynote" would control what is shown for the conversion > in the first line of the docstring and in any future introspection > support. If either is not given, then the C default would be passed > through as the Python default and the annotation would be left blank. Right. In fact, I think the decision of what (if anything) should be put in the annotation should be up to the converter class. It can be a specific method on the converter object. -- --Guido van Rossum (python.org/~guido) From "ja...py" at farowl.co.uk Mon Mar 18 21:26:05 2013 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Mon, 18 Mar 2013 20:26:05 +0000 Subject: [Python-Dev] Recent changes to TextIOWrapper and its tests Message-ID: <5147785D.3070100@farowl.co.uk> I'm pulling recent changes in the io module across to Jython. 
I am looking for help understanding the changes in http://hg.python.org/cpython/rev/19a33ef3821d That change set is about what should happen if the underlying buffer does not return bytes when read, but instead, for example, unicode characters. The test test_read_nonbytes() constructs a pathological text stream reader t where the usual BytesIO or BufferedReader is replaced with a StringIO. It then checks that r.read(1) and t.readlines() raise a TypeError, that is, it tests that TextIOWrapper checks the type of what it reads from the buffer. The puzzle is that it requires t.read() to succeed. When I insert a check for bytes type in all the places it seems necessary in my code, I pass the first two conditions, but since t.read() also raises TypeError, the overall test fails. Is reading the stream with read() intended to succeed? Why is this desired? Jeff Allen From greg.ewing at canterbury.ac.nz Mon Mar 18 22:50:05 2013 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 19 Mar 2013 10:50:05 +1300 Subject: [Python-Dev] can't assign to function call In-Reply-To: <51474281.7060501@avl.com> References: <51472C38.9090408@avl.com> <51473575.7050406@pearwood.info> <51474281.7060501@avl.com> Message-ID: <51478C0D.4090706@canterbury.ac.nz> Hrvoje Niksic wrote: > I am not the best person to answer because I go on to argue that this > syntax is not needed in Python at all (anything it can do can be > implemented with __setitem__ at no loss of clarity). I would even argue that the proxy solution is even *better* for that particular use case, because it makes both operations look like forms of indexing, which they are. -- Greg From larry at hastings.org Mon Mar 18 23:43:10 2013 From: larry at hastings.org (Larry Hastings) Date: Mon, 18 Mar 2013 15:43:10 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: <80206D7B-ACAB-4108-BC13-D6AC2BB2E7CB@mac.com> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <80206D7B-ACAB-4108-BC13-D6AC2BB2E7CB@mac.com> Message-ID: <5147987E.9070902@hastings.org> On 03/18/2013 02:29 AM, Ronald Oussoren wrote: > On 18 Mar, 2013, at 8:16, Larry Hastings wrote: >> This has some consequences. For example, inspect.getfullargspec, inspect.Signature, and indeed types.FunctionObject and types.CodeObject have no currently defined mechanism for communicating that a parameter is positional-only. > inspect.Signature does have support for positional-only arguments, they have inspect.Parameter.POSITIONAL_ONLY as their kind. You're right! And I should have remembered that--I was one of the authors of the inspect.Signature PEP. It's funny, it can represent something that it has no way of inferring ;-) //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Mon Mar 18 23:50:21 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 18 Mar 2013 23:50:21 +0100 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. 
In-Reply-To: <5147987E.9070902@hastings.org> References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <80206D7B-ACAB-4108-BC13-D6AC2BB2E7CB@mac.com> <5147987E.9070902@hastings.org> Message-ID: <7422CE81-912B-4F81-BC69-6590839B63C6@mac.com> On 18 Mar, 2013, at 23:43, Larry Hastings wrote: > On 03/18/2013 02:29 AM, Ronald Oussoren wrote: >> On 18 Mar, 2013, at 8:16, Larry Hastings wrote: >>> This has some consequences. For example, inspect.getfullargspec, inspect.Signature, and indeed types.FunctionObject and types.CodeObject have no currently defined mechanism for communicating that a parameter is positional-only. >> inspect.Signature does have support for positional-only arguments, they have inspect.Parameter.POSITIONAL_ONLY as their kind. > > You're right! And I should have remembered that--I was one of the authors of the inspect.Signature PEP. It's funny, it can represent something that it has no way of inferring ;-) It doesn't necessarily have to, builtin functions could grow a __signature__ attribute that calculates the signature (possibly from the DSL data). I've done something like that in a pre-release version of PyObjC, and with some patching of pydoc and inspect (see #17053) I now have useful help information for what are basicly builtin functions with positional-only arguments. Ronald -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Mar 19 00:46:52 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 18 Mar 2013 16:46:52 -0700 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <5146D4E5.1020302@hastings.org> <20130318103643.GA21550@sleipnir.bytereef.org> Message-ID: On Mon, Mar 18, 2013 at 11:02 AM, Guido van Rossum wrote: > On Mon, Mar 18, 2013 at 3:36 AM, Stefan Krah wrote: >> Larry Hastings wrote: >>> * The DSL currently makes no provision for specifying per-parameter >>> type annotations. This is something explicitly supported in Python; >>> it should be supported for builtins too, once we have reflection support. >>> >>> It seems to me that the syntax for parameter lines--dictated by >>> Guido--suggests conversion functions are themselves type annotations. >>> This makes intuitive sense. >> >> Really, did you read PEP 437? It's all in there. > > This attitude is unhelpful. Please stop being outright hostile. If you > want to have any influence on the outcome at all, consider looking > into compromises. While I actually agree with Stefan that it's important to eventually have a converter DSL, I don't think it's necessary to have it in the initial implementation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From kristjan at ccpgames.com Tue Mar 19 00:40:32 2013 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 18 Mar 2013 23:40:32 +0000 Subject: [Python-Dev] newbuffer support in python 2.7 Message-ID: Hi python dev. I have two languishing defects regarding 2.7 and how buffer support isn't complete there. http://bugs.python.org/issue10211 http://bugs.python.org/issue10212 In both cases, the new style buffer support is incomplete, and the patches close usability holes having to do with memoryview objects. 
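(Purely to illustrate the kind of usability hole involved -- this is not a repro taken from either tracker entry, just a hypothetical probe -- the question is whether APIs that historically accepted str/buffer also accept a memoryview on an unpatched 2.7:)

    # Hypothetical probe only; the concrete APIs affected are the ones
    # discussed in the two issues linked above.
    import struct

    data = memoryview(b"\x01\x00\x00\x00")
    try:
        print(struct.unpack("<I", data))        # (1,) if memoryview is accepted
    except TypeError as exc:
        print("memoryview rejected: %s" % exc)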
It was suggested to me that I put it to python-dev to decide if we should consider this a bug to be fixed or not, and hopefully get a consensus before 2.7.4 freeze. I have been running a local patch of 2.7 with those fixes for two years now. Cheers, Kristj?n -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Tue Mar 19 00:56:51 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 19 Mar 2013 01:56:51 +0200 Subject: [Python-Dev] Recent changes to TextIOWrapper and its tests In-Reply-To: <5147785D.3070100@farowl.co.uk> References: <5147785D.3070100@farowl.co.uk> Message-ID: On 18.03.13 22:26, Jeff Allen wrote: > The puzzle is that it requires t.read() to succeed. > > When I insert a check for bytes type in all the places it seems > necessary in my code, I pass the first two conditions, but since > t.read() also raises TypeError, the overall test fails. Is reading the > stream with read() intended to succeed? Why is this desired? This is not desired. I just registered the current behavior. Python 3 is more strict and always raises an exception. Perhaps this test should be relaxed. I.e. use with self.maybeRaises(TypeError): t.read() and define maybeRaises() as: @contextlib.contextmanager def maybeRaises(self, *args, **kwds): try: yield except args: pass From tismer at stackless.com Tue Mar 19 01:27:33 2013 From: tismer at stackless.com (Christian Tismer) Date: Mon, 18 Mar 2013 17:27:33 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314184519.GD24307@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <20130314184519.GD24307@snakebite.org> Message-ID: <5147B0F5.2030107@stackless.com> Hi Trent, I just started to try to understand the idea and the implications. Removing almost all of your message since that is already too long to work with: The reference is http://mail.python.org/pipermail/python-dev/2013-March/124690.html On 3/14/13 11:45 AM, Trent Nelson wrote: > On Wed, Mar 13, 2013 at 07:05:41PM -0700, Trent Nelson wrote: >> Just posted the slides for those that didn't have the benefit of >> attending the language summit today: >> >> https://speakerdeck.com/trent/parallelizing-the-python-interpreter-an-alternate-approach-to-async > Someone on /r/python asked if I could elaborate on the "do Y" part > of "if we're in a parallel thread, do Y, if not, do X", which I > (inadvertently) ended up replying to in detail. I've included the > response below. (I'll work on converting this into a TL;DR set of > slides soon.) > >> Can you go into a bit of depth about "X" here? > That's a huge topic that I'm hoping to tackle ASAP. The basic premise > is that parallel 'Context' objects (well, structs) are allocated for > each parallel thread callback. The context persists for the lifetime of > the "parallel work". > So, the remaining challenge is preventing the use case alluded to > earlier where someone tries to modify an object that hasn't been "async > protected". That's a bit harder. The idea I've got in mind is to > instrument the main CPython ceval loop, such that we do these checks as > part of opcode processing. That allows us to keep all the logic in the > one spot and not have to go hacking the internals of every single > object's C backend to ensure correctness. > > Now, that'll probably work to an extent. I mean, after all, there are > opcodes for all the things we'd be interested in instrumenting, > LOAD_GLOBAL, STORE_GLOBAL, SETITEM etc. 
What becomes challenging is > detecting arbitrary mutations via object calls, i.e. how do we know, > during the ceval loop, that foo.append(x) needs to be treated specially > if foo is a main-thread object and x is a parallel thread object? > > There may be no way to handle that *other* than hacking the internals of > each object, unfortunately. So, the viability of this whole approach > may rest on whether or that's deemed as an acceptable tradeoff (a > necessary evil, even) to the Python developer community. This is pretty much my concern: In order to make this waterproof, as required for CPython, you will quite likely have to do something on very many objects, and this is hard to chime into CPython. > > If it's not, then it's unlikely this approach will ever see the light of > day in CPython. If that turns out to be the case, then I see this > project taking the path that Stackless took (forking off and becoming a > separate interpreter). We had that discussion quite often for Stackless, and I would love to find a solution that allows to add special versions and use cases to CPython in a way that avoids the forking as we did it. It would be a nice thing if we could come up with a way to keep CPython in place, but to swap the interpreter out and replace it with a specialized version, if the application needs it. I wonder to what extent that would be possible. What I would like to achieve, after having given up on Stackless integration is a way to let it piggyback onto CPython that works like an extension module, although it hat effectively replace larger parts of the interpreter. I wonder if that might be the superior way to have more flexibility, without forcing everything and all go into CPython. If we can make the interpreter somehow pluggable at runtime, a lot of issues would become much simpler. > > There's nothing wrong with that; I am really excited about the > possibilities afforded by this approach, and I'm sure it will pique the > interest of commercial entities out there that have problems perfectly > suited to where this pattern excels (shared-nothing, highly concurrent), > much like the relationship that developed between Stackless and Eve > Online. > What do you think: does it make sense to think of a framework that allows to replace the interpreter at runtime, without making normal CPython really slower? cheers - chris -- Christian Tismer :^) Software Consulting : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de phone +49 173 24 18 776 fax +49 (30) 700143-0023 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From trent at snakebite.org Tue Mar 19 02:26:45 2013 From: trent at snakebite.org (Trent Nelson) Date: Mon, 18 Mar 2013 18:26:45 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <5147B0F5.2030107@stackless.com> References: <20130314020540.GB22505@snakebite.org> <20130314184519.GD24307@snakebite.org> <5147B0F5.2030107@stackless.com> Message-ID: <20130319012644.GF29928@snakebite.org> On Mon, Mar 18, 2013 at 05:27:33PM -0700, Christian Tismer wrote: > Hi Trent, Hi Christian! Thanks for taking the time to read my walls of text ;-) > > So, the remaining challenge is preventing the use case alluded to > > earlier where someone tries to modify an object that hasn't been "async > > protected". That's a bit harder. 
The idea I've got in mind is to > > instrument the main CPython ceval loop, such that we do these checks as > > part of opcode processing. That allows us to keep all the logic in the > > one spot and not have to go hacking the internals of every single > > object's C backend to ensure correctness. > > > > Now, that'll probably work to an extent. I mean, after all, there are > > opcodes for all the things we'd be interested in instrumenting, > > LOAD_GLOBAL, STORE_GLOBAL, SETITEM etc. What becomes challenging is > > detecting arbitrary mutations via object calls, i.e. how do we know, > > during the ceval loop, that foo.append(x) needs to be treated specially > > if foo is a main-thread object and x is a parallel thread object? > > > > There may be no way to handle that *other* than hacking the internals of > > each object, unfortunately. So, the viability of this whole approach > > may rest on whether or that's deemed as an acceptable tradeoff (a > > necessary evil, even) to the Python developer community. > > This is pretty much my concern: > In order to make this waterproof, as required for CPython, you will quite > likely have to do something on very many objects, and this is hard > to chime into CPython. Actually, I think I was unnecessarily pessimistic here. When I sent that follow-up mail with cross-references, I realized I'd forgotten the nitty gritty details of how I implemented the async protection support. It turns out I'd already started on protecting lists (or rather, PySequenceMethods), but decided to stop as the work I'd done on the PyMappingMethods was sufficient for my needs at the time. All I *really* want to do is raise an exception if a parallel object gets assigned to a main-thread container object (list/dict etc) that hasn't been "async protected". (As opposed to now, where it'll either segfault or silently corrupt stuff, then segfault later.) I've already got all the infrastructure in place to test that (I use it extensively within pyparallel.c): Py_ISPY(obj) - detect a main-thread object Py_ISPX(obj) - detect a parallel-thread object Py_IS_PROTECTED(obj) - detect if a main-thread object has been protected* [*]: actually, this isn't in a macro form right now, it's a cheeky inline: __inline char _protected(PyObject *obj) { return (obj->px_flags & Py_PXFLAGS_RWLOCK); } As those macros are exposed in the public , they can be used in other parts of the code base. So, it's just a matter of finding the points where an `lvalue = rvalue` takes place; where: ``Py_ISPY(lvalue) && Py_ISPX(rvalue)``. Then a test to see if lvalue is protected; if not, raise an exception. If so, then nothing else needs to be done. And there aren't that many places where this happens. (It didn't take long to get the PyMappingMethods intercepts nailed down.) That's the idea anyway. I need to get back to coding to see how it all plays out in practice. "And there aren't many places where this happens" might be my famous last words. > > If it's not, then it's unlikely this approach will ever see the light of > > day in CPython. If that turns out to be the case, then I see this > > project taking the path that Stackless took (forking off and becoming a > > separate interpreter). > > We had that discussion quite often for Stackless, and I would love to find > a solution that allows to add special versions and use cases to CPython > in a way that avoids the forking as we did it. 
> > It would be a nice thing if we could come up with a way to keep CPython > in place, but to swap the interpreter out and replace it with a specialized > version, if the application needs it. I wonder to what extent that would be > possible. > What I would like to achieve, after having given up on Stackless integration > is a way to let it piggyback onto CPython that works like an extension > module, although it hat effectively replace larger parts of the interpreter. > I wonder if that might be the superior way to have more flexibility, > without forcing > everything and all go into CPython. > If we can make the interpreter somehow pluggable at runtime, a lot of issues > would become much simpler. > > > > > There's nothing wrong with that; I am really excited about the > > possibilities afforded by this approach, and I'm sure it will pique the > > interest of commercial entities out there that have problems perfectly > > suited to where this pattern excels (shared-nothing, highly concurrent), > > much like the relationship that developed between Stackless and Eve > > Online. > > > > What do you think: does it make sense to think of a framework that > allows to replace the interpreter at runtime, without making normal > CPython really slower? I think there may actually be some interest in what you're suggesting. I had a chat with various other groups over PyCon that had some interesting things in the pipeline, and being able to call back out to CPython internals like you mentioned would be useful to them. I don't want to take my pyparallel work in that direction yet; I'm still hoping I can fix all the show stoppers and have the whole thing eligible for CPython inclusion one day ;-) (So, I'm like you, 10 years ago? :P) Regards, Trent. From larry at hastings.org Tue Mar 19 05:45:09 2013 From: larry at hastings.org (Larry Hastings) Date: Mon, 18 Mar 2013 21:45:09 -0700 Subject: [Python-Dev] Rough idea for adding introspection information for builtins Message-ID: <5147ED55.30605@hastings.org> The original impetus for Argument Clinic was adding introspection information for builtins--it seemed like any manual approach I came up with would push the builtins maintenance burden beyond the pale. Assuming that we have Argument Clinic or something like it, we don't need to optimize for ease of use from the API end--we can optimize for data size. So the approach writ large: store a blob of data associated with each entry point, as small as possible. Reconstitute the appropriate inspect.Signature on demand by reading that blob. Where to store the data? PyMethodDef is the obvious spot, but I think that structure is part of the stable ABI. So we'd need a new PyMethodDefEx and that'd be a little tiresome. Less violent to the ABI would be defining a new array of pointers-to-introspection-blobs, parallel to the PyMethodDef array, passed in via a new entry point. On to the representation. Consider the function def foo(arg, b=3, *, kwonly='a'): pass I considered four approaches, each listed below along with its total size if it was stored as C static data. 1. A specialized bytecode format, something like pickle, like this: bytes([ PARAMETER_START_LENGTH_3, 'a', 'r', 'g', PARAMETER_START_LENGTH_1, 'b', PARAMETER_DEFAULT_LENGTH_1, '3', KEYWORD_ONLY, PARAMETER_START_LENGTH_6, 'k', 'w', 'o', 'n', 'l', 'y', PARAMETER_DEFAULT_LENGTH_3, '\'', 'a', '\'', END ]) Length: 20 bytes. 2. Just use pickle--pickle the result of inspect.signature() run on a mocked-up signature, just store that. Length: 130 bytes. 
(Assume a two-byte size stored next to it.) 3. Store a string that, if eval'd, would produce the inspect.Signature. Length: 231 bytes. (This could be made smaller if we could assume "from inspect import *" or "p = inspect.Parameter" or something, but it'd still be easily the heaviest.) 4. Store a string that looks like the Python declaration of the signature, and parse it (Nick's suggestion). For foo above, this would be "(arg,b=3,*,kwonly='a')". Length: 23 bytes. Of those, Nick's suggestion seems best. It's slightly bigger than the specialized bytecode format, but it's human-readable (and human-writable!), and it'd be the easiest to implement. My first idea for implementation: add a "def x" to the front and ": pass" to the end, then run it through ast.parse. Iterate over the tree, converting parameters into inspect.Parameters and handling the return annotation if present. Default values and annotations would be turned into values by ast.eval_literal. (It wouldn't surprise me if there's a cleaner way to do it than the fake function definition; I'm not familiar with the ast module.) We'd want one more mild hack: the DSL will support positional parameters, and inspect.Signature supports positional parameters, so it'd be nice to render that information. But we can't represent that in Python syntax (or at least not yet!), so we can't let ast.parse see it. My suggestion: run it through ast.parse, and if it throws a SyntaxError see if the problem was a slash. If it was, remove the slash, reprocess through ast.parse, and remember that all parameters are positional-only (and barf if there are kwonly, args, or kwargs). Thoughts? //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Tue Mar 19 07:08:48 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 19 Mar 2013 07:08:48 +0100 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: <5147ED55.30605@hastings.org> References: <5147ED55.30605@hastings.org> Message-ID: Larry Hastings, 19.03.2013 05:45: > The original impetus for Argument Clinic was adding introspection > information for builtins [...] > On to the representation. Consider the function > > def foo(arg, b=3, *, kwonly='a'): > pass > [...] > 4. Store a string that looks like the Python declaration of the signature, > and parse it (Nick's suggestion). For foo above, this would be > "(arg,b=3,*,kwonly='a')". I had already noted that this would be generally useful, specifically for Cython, so I'm all for going this route. No need to invent something new here. > Length: 23 bytes. I can't see why the size would matter in any way. > Of those, Nick's suggestion seems best. It's slightly bigger than the > specialized bytecode format, but it's human-readable (and human-writable!), > and it'd be the easiest to implement. Plus, if it becomes the format how C level signatures are expressed anyway, it wouldn't require any additional build time preprocessing. > My first idea for implementation: add a "def x" to the front and ": pass" > to the end Why not require it to be there already? Maybe more like def foo(arg, b=3, *, kwonly='a'): ... (i.e. using Ellipsis instead of pass, so that it's clear that it's not an empty function but one the implementation of which is hidden) > then run it through ast.parse. Iterate over the tree, > converting parameters into inspect.Parameters and handling the return > annotation if present. 
Default values and annotations would be turned into > values by ast.eval_literal. (It wouldn't surprise me if there's a cleaner > way to do it than the fake function definition; I'm not familiar with the > ast module.) IMHO, if there is no straight forward way currently to convert a function header from a code blob into a Signature object in Python code, preferably using the ast module (either explicitly or implicitly through inspect.py), then that's a bug. > We'd want one more mild hack: the DSL will support positional parameters, > and inspect.Signature supports positional parameters, so it'd be nice to > render that information. But we can't represent that in Python syntax (or > at least not yet!), so we can't let ast.parse see it. My suggestion: run > it through ast.parse, and if it throws a SyntaxError see if the problem was > a slash. If it was, remove the slash, reprocess through ast.parse, and > remember that all parameters are positional-only (and barf if there are > kwonly, args, or kwargs). Is sounds simpler to me to just make it a Python syntax feature. Or at least an optional one, supported by the ast module with a dedicated compiler flag. Stefan From ncoghlan at gmail.com Tue Mar 19 08:23:47 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Mar 2013 00:23:47 -0700 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: References: <5147ED55.30605@hastings.org> Message-ID: On Mon, Mar 18, 2013 at 11:08 PM, Stefan Behnel wrote: > I can't see why the size would matter in any way. We're mildly concerned about the possible impact on the size of the ever-growing CPython binaries. However, it turns out that this is a case where readability and brevity are allies rather than enemies, so we don't need to choose one or the other. > > >> Of those, Nick's suggestion seems best. It's slightly bigger than the >> specialized bytecode format, but it's human-readable (and human-writable!), >> and it'd be the easiest to implement. > > Plus, if it becomes the format how C level signatures are expressed anyway, > it wouldn't require any additional build time preprocessing. > > >> My first idea for implementation: add a "def x" to the front and ": pass" >> to the end > > Why not require it to be there already? Maybe more like > > def foo(arg, b=3, *, kwonly='a'): > ... > > (i.e. using Ellipsis instead of pass, so that it's clear that it's not an > empty function but one the implementation of which is hidden) I like this notion. The groups notation and '/' will still cause the parser to choke and require special handling, but OTOH, they have deliberately been chosen as potentially acceptable notations for providing the same features in actual Python function declarations. > > >> then run it through ast.parse. Iterate over the tree, >> converting parameters into inspect.Parameters and handling the return >> annotation if present. Default values and annotations would be turned into >> values by ast.eval_literal. (It wouldn't surprise me if there's a cleaner >> way to do it than the fake function definition; I'm not familiar with the >> ast module.) > > IMHO, if there is no straight forward way currently to convert a function > header from a code blob into a Signature object in Python code, preferably > using the ast module (either explicitly or implicitly through inspect.py), > then that's a bug. The complexity here is that Larry would like to limit the annotations to compatibility with ast.literal_eval. 
If we drop that restriction, then the inspect module could handle the task directly. Given the complexity of implementing it, I believe the restriction needs more justification than is currently included in the PEP. > > >> We'd want one more mild hack: the DSL will support positional parameters, >> and inspect.Signature supports positional parameters, so it'd be nice to >> render that information. But we can't represent that in Python syntax (or >> at least not yet!), so we can't let ast.parse see it. My suggestion: run >> it through ast.parse, and if it throws a SyntaxError see if the problem was >> a slash. If it was, remove the slash, reprocess through ast.parse, and >> remember that all parameters are positional-only (and barf if there are >> kwonly, args, or kwargs). > > Is sounds simpler to me to just make it a Python syntax feature. Or at > least an optional one, supported by the ast module with a dedicated > compiler flag. Agreed. Guido had previously decided "not worth the hassle", but this may be enough to make him change his mind. Also, Larry's "simple" solution here isn't enough, since it doesn't handle optional groups correctly. While the support still has some odd limitations under the covers, I think an explicit compiler flag is a good compromise between a lot of custom hacks and exposing an unfinished implementation of a new language feature. Cheers, Nick. > > Stefan > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From storchaka at gmail.com Tue Mar 19 08:37:35 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 19 Mar 2013 09:37:35 +0200 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: <5147ED55.30605@hastings.org> References: <5147ED55.30605@hastings.org> Message-ID: On 19.03.13 06:45, Larry Hastings wrote: > 4. Store a string that looks like the Python declaration of the > signature, and parse it (Nick's suggestion). For foo above, this would > be "(arg,b=3,*,kwonly='a')". Length: 23 bytes. Strip parenthesis and it will be only 21 bytes long. > We'd want one more mild hack: the DSL will support positional > parameters, and inspect.Signature supports positional parameters, so > it'd be nice to render that information. But we can't represent that in > Python syntax (or at least not yet!), so we can't let ast.parse see it. > My suggestion: run it through ast.parse, and if it throws a SyntaxError > see if the problem was a slash. If it was, remove the slash, reprocess > through ast.parse, and remember that all parameters are positional-only > (and barf if there are kwonly, args, or kwargs). It will be simpler to use some one-character separator which shouldn't be used unquoted in the signature. I.e. LF. From storchaka at gmail.com Tue Mar 19 08:52:51 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 19 Mar 2013 09:52:51 +0200 Subject: [Python-Dev] Tarfile CLI Message-ID: There is a proposition to add a command line interface to tarfile module. It will be useful on platforms where tar is not included in the base system. The question is about interface. 
Should it be a subset of tar options (-x as --extract, -t as --list, -f to specify an archive name) or be similar to the zipfile module interface (-e as --extract, -l as --list, the first positional parameter is an archive name)? There are different opinions. From storchaka at gmail.com Tue Mar 19 08:56:43 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 19 Mar 2013 09:56:43 +0200 Subject: [Python-Dev] Tarfile CLI In-Reply-To: References: Message-ID: On 19.03.13 09:52, Serhiy Storchaka wrote: > There is a proposition to add a command line interface to tarfile > module. Link: http://bugs.python.org/issue13477 From storchaka at gmail.com Tue Mar 19 09:03:35 2013 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 19 Mar 2013 10:03:35 +0200 Subject: [Python-Dev] Recent changes to TextIOWrapper and its tests In-Reply-To: <5147785D.3070100@farowl.co.uk> References: <5147785D.3070100@farowl.co.uk> Message-ID: On 18.03.13 22:26, Jeff Allen wrote: > The puzzle is that it requires t.read() to succeed. > > When I insert a check for bytes type in all the places it seems > necessary in my code, I pass the first two conditions, but since > t.read() also raises TypeError, the overall test fails. Is reading the > stream with read() intended to succeed? Why is this desired? An alternative option is to change the C implementation of TextIOWrapper.read() to raise an exception in this case. However I worry that it can break backward compatibility. Are there other tests (in other test files) which fail with a new Jython TextIOWrapper? From larry at hastings.org Tue Mar 19 10:24:53 2013 From: larry at hastings.org (Larry Hastings) Date: Tue, 19 Mar 2013 02:24:53 -0700 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: References: <5147ED55.30605@hastings.org> Message-ID: <51482EE5.3010605@hastings.org> On 03/19/2013 12:37 AM, Serhiy Storchaka wrote: > On 19.03.13 06:45, Larry Hastings wrote: >> 4. Store a string that looks like the Python declaration of the >> signature, and parse it (Nick's suggestion). For foo above, this would >> be "(arg,b=3,*,kwonly='a')". Length: 23 bytes. > > Strip parenthesis and it will be only 21 bytes long. I left the parentheses there because the return annotation is outside them. If we strip the parentheses, I would have to restore them, and if there was a return annotation I would have to parse the string to know where to put it, because there could be arbitrary Python rvalues on either side of it with quotes and everything, and now I can no longer use ast.parse because it's not legal Python because the parentheses are missing ;-) We could omit the /left/ parenthesis and save one byte per builtin. I honestly don't know how many builtins there are, but my guess is one extra byte per builtin isn't a big deal. Let's leave it in for readability's sake. >> We'd want one more mild hack: the DSL will support positional >> parameters, and inspect.Signature supports positional parameters, so >> it'd be nice to render that information. But we can't represent that in >> Python syntax (or at least not yet!), so we can't let ast.parse see it. >> My suggestion: run it through ast.parse, and if it throws a SyntaxError >> see if the problem was a slash. If it was, remove the slash, reprocess >> through ast.parse, and remember that all parameters are positional-only >> (and barf if there are kwonly, args, or kwargs). > > It will be simpler to use some one-character separator which shouldn't > be used unquoted in the signature. I.e. 
LF. I had trouble understanding what you're suggesting. What I think you're saying is, "normally these generated strings won't have LF in them. So let's use LF as a harmless extra character that means 'this is a positional-only signature'." At one point Guido suggested / as syntax for exactly this case. And while the LF approach is simpler programmatically, removing the slash and reparsing isn't terribly complicated; this part will be in Python, after all. Meanwhile, I suggest that for human readability the slash is way more obvious--having a LF in the string mean this is awfully subtle. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Tue Mar 19 10:42:16 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 19 Mar 2013 10:42:16 +0100 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: <51482EE5.3010605@hastings.org> References: <5147ED55.30605@hastings.org> <51482EE5.3010605@hastings.org> Message-ID: <4E35883F-9F1F-4B93-80E7-9C3019A650FF@mac.com> On 19 Mar, 2013, at 10:24, Larry Hastings wrote: > > >>> We'd want one more mild hack: the DSL will support positional >>> parameters, and inspect.Signature supports positional parameters, so >>> it'd be nice to render that information. But we can't represent that in >>> Python syntax (or at least not yet!), so we can't let ast.parse see it. >>> My suggestion: run it through ast.parse, and if it throws a SyntaxError >>> see if the problem was a slash. If it was, remove the slash, reprocess >>> through ast.parse, and remember that all parameters are positional-only >>> (and barf if there are kwonly, args, or kwargs). >> >> It will be simpler to use some one-character separator which shouldn't be used unquoted in the signature. I.e. LF. > > I had trouble understanding what you're suggesting. What I think you're saying is, "normally these generated strings won't have LF in them. So let's use LF as a harmless extra character that means 'this is a positional-only signature'." > > At one point Guido suggested / as syntax for exactly this case. And while the LF approach is simpler programmatically, removing the slash and reparsing isn't terribly complicated; this part will be in Python, after all. Meanwhile, I suggest that for human readability the slash is way more obvious--having a LF in the string mean this is awfully subtle. You could also add the slash to the start of the signature, for example "/(arg1, arg2)", that way the positional only can be detected without trying to parse it first and removing a slash at the start is easier than removing it somewhere along a signature with arbitrary default values, such as "(arg1='/', arg2=4 /) -> 'arg1/arg2'". The disadvantage is that you can't specify that only some of the arguments are positional-only, but that's not supported by PyArg_Parse... anyway. Ronald From larry at hastings.org Tue Mar 19 11:00:45 2013 From: larry at hastings.org (Larry Hastings) Date: Tue, 19 Mar 2013 03:00:45 -0700 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: References: <5147ED55.30605@hastings.org> Message-ID: <5148374D.6050402@hastings.org> On 03/19/2013 12:23 AM, Nick Coghlan wrote: > On Mon, Mar 18, 2013 at 11:08 PM, Stefan Behnel wrote: >>> My first idea for implementation: add a "def x" to the front and ": pass" >>> to the end >> Why not require it to be there already? Maybe more like >> >> def foo(arg, b=3, *, kwonly='a'): >> ... >> >> (i.e. 
using Ellipsis instead of pass, so that it's clear that it's not an >> empty function but one the implementation of which is hidden) > I like this notion. The groups notation and '/' will still cause the > parser to choke and require special handling, but OTOH, they have > deliberately been chosen as potentially acceptable notations for > providing the same features in actual Python function declarations. I don't see the benefit of including the "def foo" and ":\n ...". The name doesn't help; inspect.Signature pointedly does /not/ contain the name of the function, so it's irrelevant to this purpose. And why have unnecessary boilerplate? And if I can go one further: what we're talking about is essentially a textual representation of a Signature object. I assert that the stuff inside the parentheses, and the return annotation, *is* the signature. The name isn't part of the signature, and the colon and what lies afterwards is definitely not part of its signature. So I think it's entirely appropriate, and a happy coincidence, that it happens to reflect the minimum amount of text you need to communicate the signature. >> IMHO, if there is no straight forward way currently to convert a function >> header from a code blob into a Signature object in Python code, preferably >> using the ast module (either explicitly or implicitly through inspect.py), >> then that's a bug. > The complexity here is that Larry would like to limit the annotations > to compatibility with ast.literal_eval. If we drop that restriction, > then the inspect module could handle the task directly. Given the > complexity of implementing it, I believe the restriction needs more > justification than is currently included in the PEP. I concede that it's totally unjustified in the PEP. It's more playing a hunch at the moment, a combination of YAGNI and that it'd be hard to put the genie back in the bottle if we let people use arbitrary values. Let me restate what we're talking about. We're debating what types of data should be permissible to use for a datum that so far is not only unused, but is /required/ to be unused. PEP 8 states " The Python standard library will not use function annotations". I don't know who among us has any experience using function annotations--or, at least, for their intended purpose. It's hard to debate what are reasonable vs unreasonable restrictions on data we might be permitted to specify in the future for uses we don't know about. Restricting it to Python's rich set of safe literal values seems entirely reasonable; if we get there and need to relax the restriction, we can do so there. Also, you and I discussed this evening whether there was a credible attack vector here. I figured, if you're running an untrustworthy extension, it's already game over. You suggested that a miscreant could easily edit static data on a trusted shared library without having to recompile it to achieve their naughtiness. I'm not sure I necessarily buy it, I just wanted to point out you were the one making the case for restricting it to ast.literal_eval. ;-) >> Is sounds simpler to me to just make it a Python syntax feature. Or at >> least an optional one, supported by the ast module with a dedicated >> compiler flag. > Agreed. Guido had previously decided "not worth the hassle", but this > may be enough to make him change his mind. Also, Larry's "simple" > solution here isn't enough, since it doesn't handle optional groups > correctly. 
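To make the thing being debated concrete, a rough sketch of the "remove the slash and reparse" fallback could look something like this (the helper name and the details of the error handling here are illustrative only, not anything from the PEP):

    import ast

    def parse_clinic_signature(sig_text):
        # Illustrative sketch only.  Try to parse the signature string as a
        # normal Python function header; if the '/' marker makes the parse
        # fail (it is not valid syntax for the compiler here), strip it and
        # reparse, remembering that every parameter is positional-only.
        # A real implementation would only strip a bare '/' parameter rather
        # than every slash in the string (think default values like '/').
        positional_only = False
        try:
            tree = ast.parse("def _stub" + sig_text + ": ...")
        except SyntaxError:
            if "/" not in sig_text:
                raise
            tree = ast.parse("def _stub" + sig_text.replace("/", "") + ": ...")
            positional_only = True
        args = tree.body[0].args
        if positional_only and (args.vararg or args.kwarg or args.kwonlyargs):
            raise ValueError("positional-only signatures cannot use *args, "
                             "**kwargs or keyword-only parameters")
        return args, positional_only

Turning the resulting arguments node into inspect.Parameter objects is then a plain walk over args.args, args.defaults and friends.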
I certainly don't agree that "remove the slash and reparse" is more complicated than "add a new parameter metaphor to the Python language". Adding support for it may be worth doing--don't ask me, I'm still nursing my "positional-only arguments are part of Python and forever will be" Kool-aid. I'm just dealing with cold harsh reality as I understand it. As for handling optional argument groups, my gut feeling is that we're better off not leaking it out of Argument Clinic--don't expose it in this string we're talking about, and don't add support for it in the inspect.Parameter object. I'm not going to debate range(), the syntax of which predates one of our release managers. But I suggest option groups are simply a misfeature of the curses module. There are some other possible uses in builtins (I forgot to dig those out this evening) but so far we're talking adding complexity to an array of technologies (this representation, the parser, the Parameter object) to support a handful of uses of something we shouldn't have done in the first place, for consumers who I think won't care and won't appreciate the added conceptual complexity. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Mar 19 11:13:00 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 19 Mar 2013 11:13:00 +0100 Subject: [Python-Dev] Rough idea for adding introspection information for builtins References: <5147ED55.30605@hastings.org> <5148374D.6050402@hastings.org> Message-ID: <20130319111300.46b70aa3@pitrou.net> Le Tue, 19 Mar 2013 03:00:45 -0700, Larry Hastings a ?crit : > > As for handling optional argument groups, my gut feeling is that > we're better off not leaking it out of Argument Clinic--don't expose > it in this string we're talking about, and don't add support for it > in the inspect.Parameter object. I'm not going to debate range(), > the syntax of which predates one of our release managers. But I > suggest option groups are simply a misfeature of the curses module. > There are some other possible uses in builtins (I forgot to dig those > out this evening) but so far we're talking adding complexity to an > array of technologies (this representation, the parser, the Parameter > object) to support a handful of uses of something we shouldn't have > done in the first place, for consumers who I think won't care and > won't appreciate the added conceptual complexity. Agreed with Larry. Regards Antoine. From stefan at bytereef.org Tue Mar 19 15:23:35 2013 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 19 Mar 2013 15:23:35 +0100 Subject: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL. In-Reply-To: References: <3ZTBVD26DlzR9T@mail.python.org> <20130317222607.GA16540@sleipnir.bytereef.org> <5146BF68.8030903@hastings.org> <5146D4E5.1020302@hastings.org> <20130318103643.GA21550@sleipnir.bytereef.org> Message-ID: <20130319142334.GA3546@sleipnir.bytereef.org> Guido van Rossum wrote: > On Mon, Mar 18, 2013 at 3:36 AM, Stefan Krah wrote: > > Larry Hastings wrote: > >> * The DSL currently makes no provision for specifying per-parameter > >> type annotations. This is something explicitly supported in Python; > >> it should be supported for builtins too, once we have reflection support. > >> > >> It seems to me that the syntax for parameter lines--dictated by > >> Guido--suggests conversion functions are themselves type annotations. > >> This makes intuitive sense. 
> > > > Really, did you read PEP 437? It's all in there. > > This attitude is unhelpful. Please stop being outright hostile. If you > want to have any influence on the outcome at all, consider looking > into compromises. My apologies, I agree that wasn't very constructive. In case there's a misunderstanding: This wasn't an attempt to push the whole of PEP 437 again. Type-specifying converters were first mentioned in issue #16612 and are central to PEP 437, so my response should have been something like "I think that's already covered in section X of ...". Regarding compromises: I'm not at all after getting as many parts of PEP 437 into the end result as possible. Apparently PEP 437 as a whole is unacceptable, so if Larry's original PEP 436 turns out to be more coherent than the revised PEP 436, then I'd actually favor Larry's original. I'm getting the impression though that the proposal that Guido, Larry and Nick worked out at PyCon *is* reasonably coherent. So my position is the same as Nick's in http://mail.python.org/pipermail/python-dev/2013-March/124757.html Stefan Krah From ncoghlan at gmail.com Tue Mar 19 15:34:30 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Mar 2013 07:34:30 -0700 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: <5148374D.6050402@hastings.org> References: <5147ED55.30605@hastings.org> <5148374D.6050402@hastings.org> Message-ID: On Tue, Mar 19, 2013 at 3:00 AM, Larry Hastings wrote: > Why not require it to be there already? Maybe more like > > def foo(arg, b=3, *, kwonly='a'): > ... > > (i.e. using Ellipsis instead of pass, so that it's clear that it's not an > empty function but one the implementation of which is hidden) > > I like this notion. The groups notation and '/' will still cause the > parser to choke and require special handling, but OTOH, they have > deliberately been chosen as potentially acceptable notations for > providing the same features in actual Python function declarations. > > > I don't see the benefit of including the "def foo" and ":\n ...". The > name doesn't help; inspect.Signature pointedly does not contain the name of > the function, so it's irrelevant to this purpose. And why have unnecessary > boilerplate? Also, we can already easily produce the extended form through: "def {}{}:\n ...".format(f.__name__, inspect.signature(f)) So, agreed, capturing just the signature info is fine. > Let me restate what we're talking about. We're debating what types of data > should be permissible to use for a datum that so far is not only unused, but > is required to be unused. PEP 8 states " The Python standard library will > not use function annotations". I don't know who among us has any experience > using function annotations--or, at least, for their intended purpose. It's > hard to debate what are reasonable vs unreasonable restrictions on data we > might be permitted to specify in the future for uses we don't know about. > Restricting it to Python's rich set of safe literal values seems entirely > reasonable; if we get there and need to relax the restriction, we can do so > there. > > Also, you and I discussed this evening whether there was a credible attack > vector here. I figured, if you're running an untrustworthy extension, it's > already game over. You suggested that a miscreant could easily edit static > data on a trusted shared library without having to recompile it to achieve > their naughtiness. 
I'm not sure I necessarily buy it, I just wanted to > point out you were the one making the case for restricting it to > ast.literal_eval. ;-) IIRC, I was arguing against allowing *pickle* because you can't audit that just by looking at the generated source code. OTOH, I'm a big fan of locking this kind of thing down by default and letting people make the case for additional permissiveness, so I agree it's best to start with literals only. Here's a thought, though: instead of doing an Argument Clinic specific hack, let's instead design a proper whitelist API for ast.literal_eval that lets you accept additional constructs. As a general sketch, the long if/elif chain in ast.literal_eval could be replaced by:

    for converter in converters:
        ok, converted = converter(node)
        if ok:
            return converted
    raise ValueError('malformed node or string: ' + repr(node))

The _convert function would need to be lifted out and made public as "ast.convert_node", so conversion functions could recurse appropriately. Both ast.literal_eval and ast.convert_node would accept a keyword-only "allow" parameter that accepted an iterable of callables that return a 2-tuple to whitelist additional expressions beyond those normally allowed. So, assuming we don't add it by default, you could allow empty sets by doing:

    _empty_set = ast.dump(ast.parse("set()").body[0].value)

    def convert_empty_set(node):
        if ast.dump(node) == _empty_set:
            return True, set()
        return False, None

    ast.literal_eval(some_str, allow=[convert_empty_set])

This is quite powerful as a general tool to allow constrained execution, since it could be used to whitelist builtins that accept parameters, as well as to process class and function header lines without executing their bodies. In the case of Argument Clinic, that would mean writing a converter for the FunctionDef node. > I certainly don't agree that "remove the slash and reparse" is more > complicated than "add a new parameter metaphor to the Python language". > Adding support for it may be worth doing--don't ask me, I'm still nursing my > "positional-only arguments are part of Python and forever will be" Kool-aid. > I'm just dealing with cold harsh reality as I understand it. > > As for handling optional argument groups, my gut feeling is that we're > better off not leaking it out of Argument Clinic--don't expose it in this > string we're talking about, and don't add support for it in the > inspect.Parameter object. I'm not going to debate range(), the syntax of > which predates one of our release managers. But I suggest option groups are > simply a misfeature of the curses module. There are some other possible > uses in builtins (I forgot to dig those out this evening) but so far we're > talking adding complexity to an array of technologies (this representation, > the parser, the Parameter object) to support a handful of uses of something > we shouldn't have done in the first place, for consumers who I think won't > care and won't appreciate the added conceptual complexity. Agreed on both points, but this should be articulated in the PEP. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From barry at python.org Tue Mar 19 18:01:06 2013 From: barry at python.org (Barry Warsaw) Date: Tue, 19 Mar 2013 10:01:06 -0700 Subject: [Python-Dev] Rough idea for adding introspection information for builtins In-Reply-To: <5147ED55.30605@hastings.org> References: <5147ED55.30605@hastings.org> Message-ID: <20130319100106.0b32527e@anarchist> On Mar 18, 2013, at 09:45 PM, Larry Hastings wrote: >4. 
Store a string that looks like the Python declaration of the signature, >and parse it (Nick's suggestion). For foo above, this would be >"(arg,b=3,*,kwonly='a')". Length: 23 bytes. Very nice. -Barry From senthil at uthcode.com Tue Mar 19 20:17:20 2013 From: senthil at uthcode.com (Senthil Kumaran) Date: Tue, 19 Mar 2013 12:17:20 -0700 Subject: [Python-Dev] [Python-checkins] cpython: ../bug-fixes/http_error_interface/.hg/last-message.txt In-Reply-To: <3ZVkG913khzS5n@mail.python.org> References: <3ZVkG913khzS5n@mail.python.org> Message-ID: Looks like I used hg commit -m /path/to/.hg/last-message.txt instead of hg commit -l /path/to/.hg/last-message.txt I have amended it, merged it and pushed it again. On Tue, Mar 19, 2013 at 12:04 PM, senthil.kumaran wrote: > http://hg.python.org/cpython/rev/4f2080e9eee2 > changeset: 82765:4f2080e9eee2 > parent: 82763:4c6463b96a2c > user: Senthil Kumaran > date: Tue Mar 19 12:07:43 2013 -0700 > summary: > ../bug-fixes/http_error_interface/.hg/last-message.txt > > files: > Lib/test/test_urllib2.py | 39 +++++++++++++-------------- > 1 files changed, 19 insertions(+), 20 deletions(-) > > > diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py > --- a/Lib/test/test_urllib2.py > +++ b/Lib/test/test_urllib2.py > @@ -1387,6 +1387,10 @@ > > class MiscTests(unittest.TestCase): > > + def opener_has_handler(self, opener, handler_class): > + self.assertTrue(any(h.__class__ == handler_class > + for h in opener.handlers)) > + > def test_build_opener(self): > class MyHTTPHandler(urllib.request.HTTPHandler): pass > class FooHandler(urllib.request.BaseHandler): > @@ -1439,10 +1443,22 @@ > self.assertEqual(b"1234567890", request.data) > self.assertEqual("10", request.get_header("Content-length")) > > + def test_HTTPError_interface(self): > + """ > + Issue 13211 reveals that HTTPError didn't implement the URLError > + interface even though HTTPError is a subclass of URLError. > > - def opener_has_handler(self, opener, handler_class): > - self.assertTrue(any(h.__class__ == handler_class > - for h in opener.handlers)) > + >>> msg = 'something bad happened' > + >>> url = code = fp = None > + >>> hdrs = 'Content-Length: 42' > + >>> err = urllib.error.HTTPError(url, code, msg, hdrs, fp) > + >>> assert hasattr(err, 'reason') > + >>> err.reason > + 'something bad happened' > + >>> assert hasattr(err, 'headers') > + >>> err.headers > + 'Content-Length: 42' > + """ > > class RequestTests(unittest.TestCase): > > @@ -1514,23 +1530,6 @@ > req = Request(url) > self.assertEqual(req.get_full_url(), url) > > -def test_HTTPError_interface(): > - """ > - Issue 13211 reveals that HTTPError didn't implement the URLError > - interface even though HTTPError is a subclass of URLError. 
> - > - >>> msg = 'something bad happened' > - >>> url = code = fp = None > - >>> hdrs = 'Content-Length: 42' > - >>> err = urllib.error.HTTPError(url, code, msg, hdrs, fp) > - >>> assert hasattr(err, 'reason') > - >>> err.reason > - 'something bad happened' > - >>> assert hasattr(err, 'headers') > - >>> err.headers > - 'Content-Length: 42' > - """ > - > def test_main(verbose=None): > from test import test_urllib2 > support.run_doctest(test_urllib2, verbose) > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From kristjan at ccpgames.com Tue Mar 19 20:34:21 2013 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Tue, 19 Mar 2013 19:34:21 +0000 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout In-Reply-To: References: <3ZVj1911wHzSrl@mail.python.org> Message-ID: Yes, it is a symbol problem on unix. Working on it. -----Original Message----- From: Python-checkins [mailto:python-checkins-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Senthil Kumaran Sent: 19. mars 2013 12:28 To: sweskman at gmail.com Cc: python-checkins at python.org Subject: Re: [Python-checkins] cpython (2.7): Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout Looks like RHEL 2.7 buildbots are unhappy with this change. -- Senthil On Tue, Mar 19, 2013 at 11:08 AM, kristjan.jonsson wrote: > http://hg.python.org/cpython/rev/8ec39bfd1f01 > changeset: 82764:8ec39bfd1f01 > branch: 2.7 > parent: 82740:b10ec5083a53 > user: Kristj?n Valur J?nsson > date: Tue Mar 19 10:58:59 2013 -0700 > summary: > Issue #9090 : Error code 10035 calling socket.recv() on a socket > with a timeout (WSAEWOULDBLOCK - A non-blocking socket operation > could not be completed > immediately) > > files: > Misc/NEWS | 5 + > Modules/socketmodule.c | 104 ++++++++++++++++++++++++---- > Modules/timemodule.c | 7 + > 3 files changed, 101 insertions(+), 15 deletions(-) > > > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -214,6 +214,11 @@ > Library > ------- > > +- Issue #9090: When a socket with a timeout fails with EWOULDBLOCK or > +EAGAIN, > + retry the select() loop instead of bailing out. This is because > +select() > + can incorrectly report a socket as ready for reading (for example, > +if it > + received some data with an invalid checksum). > + > - Issue #1285086: Get rid of the refcounting hack and speed up urllib.unquote(). > > - Issue #17368: Fix an off-by-one error in the Python JSON decoder > that caused diff --git a/Modules/socketmodule.c > b/Modules/socketmodule.c > --- a/Modules/socketmodule.c > +++ b/Modules/socketmodule.c > @@ -473,6 +473,17 @@ > return NULL; > } > > +#ifdef MS_WINDOWS > +#ifndef WSAEAGAIN > +#define WSAEAGAIN WSAEWOULDBLOCK > +#endif > +#define CHECK_ERRNO(expected) \ > + (WSAGetLastError() == WSA ## expected) #else #define > +CHECK_ERRNO(expected) \ > + (errno == expected) > +#endif > + > /* Convenience function to raise an error according to errno > and return a NULL pointer from a function. */ > > @@ -661,7 +672,7 @@ > after they've reacquired the interpreter lock. > Returns 1 on timeout, -1 on error, 0 otherwise. 
*/ static int > -internal_select(PySocketSockObject *s, int writing) > +internal_select_ex(PySocketSockObject *s, int writing, double > +interval) > { > int n; > > @@ -673,6 +684,10 @@ > if (s->sock_fd < 0) > return 0; > > + /* Handling this condition here simplifies the select loops */ > + if (interval < 0.0) > + return 1; > + > /* Prefer poll, if available, since you can poll() any fd > * which can't be done with select(). */ #ifdef HAVE_POLL @@ > -684,7 +699,7 @@ > pollfd.events = writing ? POLLOUT : POLLIN; > > /* s->sock_timeout is in seconds, timeout in ms */ > - timeout = (int)(s->sock_timeout * 1000 + 0.5); > + timeout = (int)(interval * 1000 + 0.5); > n = poll(&pollfd, 1, timeout); > } > #else > @@ -692,8 +707,8 @@ > /* Construct the arguments to select */ > fd_set fds; > struct timeval tv; > - tv.tv_sec = (int)s->sock_timeout; > - tv.tv_usec = (int)((s->sock_timeout - tv.tv_sec) * 1e6); > + tv.tv_sec = (int)interval; > + tv.tv_usec = (int)((interval - tv.tv_sec) * 1e6); > FD_ZERO(&fds); > FD_SET(s->sock_fd, &fds); > > @@ -712,6 +727,49 @@ > return 0; > } > > +static int > +internal_select(PySocketSockObject *s, int writing) { > + return internal_select_ex(s, writing, s->sock_timeout); } > + > +/* > + Two macros for automatic retry of select() in case of false positives > + (for example, select() could indicate a socket is ready for reading > + but the data then discarded by the OS because of a wrong checksum). > + Here is an example of use: > + > + BEGIN_SELECT_LOOP(s) > + Py_BEGIN_ALLOW_THREADS > + timeout = internal_select_ex(s, 0, interval); > + if (!timeout) > + outlen = recv(s->sock_fd, cbuf, len, flags); > + Py_END_ALLOW_THREADS > + if (timeout == 1) { > + PyErr_SetString(socket_timeout, "timed out"); > + return -1; > + } > + END_SELECT_LOOP(s) > +*/ > +PyAPI_FUNC(double) _PyTime_floattime(void); /* defined in timemodule.c */ > +#define BEGIN_SELECT_LOOP(s) \ > + { \ > + double deadline, interval = s->sock_timeout; \ > + int has_timeout = s->sock_timeout > 0.0; \ > + if (has_timeout) { \ > + deadline = _PyTime_floattime() + s->sock_timeout; \ > + } \ > + while (1) { \ > + errno = 0; \ > + > +#define END_SELECT_LOOP(s) \ > + if (!has_timeout || \ > + (!CHECK_ERRNO(EWOULDBLOCK) && !CHECK_ERRNO(EAGAIN))) \ > + break; \ > + interval = deadline - _PyTime_floattime(); \ > + } \ > + } \ > + > /* Initialize a new socket object. 
*/ > > static double defaulttimeout = -1.0; /* Default timeout for new sockets */ > @@ -1656,8 +1714,9 @@ > if (!IS_SELECTABLE(s)) > return select_error(); > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) > newfd = accept(s->sock_fd, SAS2SA(&addrbuf), &addrlen); > Py_END_ALLOW_THREADS > @@ -1666,6 +1725,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > > #ifdef MS_WINDOWS > if (newfd == INVALID_SOCKET) > @@ -2355,8 +2415,9 @@ > } > > #ifndef __VMS > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) > outlen = recv(s->sock_fd, cbuf, len, flags); > Py_END_ALLOW_THREADS > @@ -2365,6 +2426,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return -1; > } > + END_SELECT_LOOP(s) > if (outlen < 0) { > /* Note: the call to errorhandler() ALWAYS indirectly returned > NULL, so ignore its return value */ > @@ -2386,8 +2448,9 @@ > segment = remaining; > } > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) > nread = recv(s->sock_fd, read_buf, segment, flags); > Py_END_ALLOW_THREADS > @@ -2396,6 +2459,8 @@ > PyErr_SetString(socket_timeout, "timed out"); > return -1; > } > + END_SELECT_LOOP(s) > + > if (nread < 0) { > s->errorhandler(); > return -1; > @@ -2559,9 +2624,10 @@ > return -1; > } > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > memset(&addrbuf, 0, addrlen); > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) { > #ifndef MS_WINDOWS > #if defined(PYOS_OS2) && !defined(PYCC_GCC) > @@ -2582,6 +2648,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return -1; > } > + END_SELECT_LOOP(s) > if (n < 0) { > s->errorhandler(); > return -1; > @@ -2719,8 +2786,9 @@ > buf = pbuf.buf; > len = pbuf.len; > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 1); > + timeout = internal_select_ex(s, 1, interval); > if (!timeout) > #ifdef __VMS > n = sendsegmented(s->sock_fd, buf, len, flags); > @@ -2728,13 +2796,14 @@ > n = send(s->sock_fd, buf, len, flags); > #endif > Py_END_ALLOW_THREADS > - > - PyBuffer_Release(&pbuf); > - > if (timeout == 1) { > + PyBuffer_Release(&pbuf); > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > + > + PyBuffer_Release(&pbuf); > if (n < 0) > return s->errorhandler(); > return PyInt_FromLong((long)n); > @@ -2768,8 +2837,9 @@ > } > > do { > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 1); > + timeout = internal_select_ex(s, 1, interval); > n = -1; > if (!timeout) { > #ifdef __VMS > @@ -2784,6 +2854,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > /* PyErr_CheckSignals() might change errno */ > saved_errno = errno; > /* We must run our signal handlers before looping again. 
> @@ -2863,17 +2934,20 @@ > return NULL; > } > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 1); > + timeout = internal_select_ex(s, 1, interval); > if (!timeout) > n = sendto(s->sock_fd, buf, len, flags, SAS2SA(&addrbuf), addrlen); > Py_END_ALLOW_THREADS > > - PyBuffer_Release(&pbuf); > if (timeout == 1) { > + PyBuffer_Release(&pbuf); > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > + PyBuffer_Release(&pbuf); > if (n < 0) > return s->errorhandler(); > return PyInt_FromLong((long)n); > diff --git a/Modules/timemodule.c b/Modules/timemodule.c > --- a/Modules/timemodule.c > +++ b/Modules/timemodule.c > @@ -1055,3 +1055,10 @@ > > return 0; > } > + > +/* export floattime to socketmodule.c */ > +PyAPI_FUNC(double) > +_PyTime_floattime(void) > +{ > + return floattime(); > +} > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > _______________________________________________ Python-checkins mailing list Python-checkins at python.org http://mail.python.org/mailman/listinfo/python-checkins From kristjan at ccpgames.com Tue Mar 19 20:49:16 2013 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Tue, 19 Mar 2013 19:49:16 +0000 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout In-Reply-To: References: <3ZVj1911wHzSrl@mail.python.org> Message-ID: Apparently timemodule is not a built-in module on linux. But it is on windows. Funny! -----Original Message----- From: Python-Dev [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Kristj?n Valur J?nsson Sent: 19. mars 2013 12:34 To: python-dev at python.org Subject: Re: [Python-Dev] [Python-checkins] cpython (2.7): Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout Yes, it is a symbol problem on unix. Working on it. -----Original Message----- From: Python-checkins [mailto:python-checkins-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Senthil Kumaran Sent: 19. mars 2013 12:28 To: sweskman at gmail.com Cc: python-checkins at python.org Subject: Re: [Python-checkins] cpython (2.7): Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout Looks like RHEL 2.7 buildbots are unhappy with this change. -- Senthil On Tue, Mar 19, 2013 at 11:08 AM, kristjan.jonsson wrote: > http://hg.python.org/cpython/rev/8ec39bfd1f01 > changeset: 82764:8ec39bfd1f01 > branch: 2.7 > parent: 82740:b10ec5083a53 > user: Kristj?n Valur J?nsson > date: Tue Mar 19 10:58:59 2013 -0700 > summary: > Issue #9090 : Error code 10035 calling socket.recv() on a socket > with a timeout (WSAEWOULDBLOCK - A non-blocking socket operation > could not be completed > immediately) > > files: > Misc/NEWS | 5 + > Modules/socketmodule.c | 104 ++++++++++++++++++++++++---- > Modules/timemodule.c | 7 + > 3 files changed, 101 insertions(+), 15 deletions(-) > > > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -214,6 +214,11 @@ > Library > ------- > > +- Issue #9090: When a socket with a timeout fails with EWOULDBLOCK or > +EAGAIN, > + retry the select() loop instead of bailing out. 
This is because > +select() > + can incorrectly report a socket as ready for reading (for example, > +if it > + received some data with an invalid checksum). > + > - Issue #1285086: Get rid of the refcounting hack and speed up urllib.unquote(). > > - Issue #17368: Fix an off-by-one error in the Python JSON decoder > that caused diff --git a/Modules/socketmodule.c > b/Modules/socketmodule.c > --- a/Modules/socketmodule.c > +++ b/Modules/socketmodule.c > @@ -473,6 +473,17 @@ > return NULL; > } > > +#ifdef MS_WINDOWS > +#ifndef WSAEAGAIN > +#define WSAEAGAIN WSAEWOULDBLOCK > +#endif > +#define CHECK_ERRNO(expected) \ > + (WSAGetLastError() == WSA ## expected) #else #define > +CHECK_ERRNO(expected) \ > + (errno == expected) > +#endif > + > /* Convenience function to raise an error according to errno > and return a NULL pointer from a function. */ > > @@ -661,7 +672,7 @@ > after they've reacquired the interpreter lock. > Returns 1 on timeout, -1 on error, 0 otherwise. */ static int > -internal_select(PySocketSockObject *s, int writing) > +internal_select_ex(PySocketSockObject *s, int writing, double > +interval) > { > int n; > > @@ -673,6 +684,10 @@ > if (s->sock_fd < 0) > return 0; > > + /* Handling this condition here simplifies the select loops */ > + if (interval < 0.0) > + return 1; > + > /* Prefer poll, if available, since you can poll() any fd > * which can't be done with select(). */ #ifdef HAVE_POLL @@ > -684,7 +699,7 @@ > pollfd.events = writing ? POLLOUT : POLLIN; > > /* s->sock_timeout is in seconds, timeout in ms */ > - timeout = (int)(s->sock_timeout * 1000 + 0.5); > + timeout = (int)(interval * 1000 + 0.5); > n = poll(&pollfd, 1, timeout); > } > #else > @@ -692,8 +707,8 @@ > /* Construct the arguments to select */ > fd_set fds; > struct timeval tv; > - tv.tv_sec = (int)s->sock_timeout; > - tv.tv_usec = (int)((s->sock_timeout - tv.tv_sec) * 1e6); > + tv.tv_sec = (int)interval; > + tv.tv_usec = (int)((interval - tv.tv_sec) * 1e6); > FD_ZERO(&fds); > FD_SET(s->sock_fd, &fds); > > @@ -712,6 +727,49 @@ > return 0; > } > > +static int > +internal_select(PySocketSockObject *s, int writing) { > + return internal_select_ex(s, writing, s->sock_timeout); } > + > +/* > + Two macros for automatic retry of select() in case of false positives > + (for example, select() could indicate a socket is ready for reading > + but the data then discarded by the OS because of a wrong checksum). > + Here is an example of use: > + > + BEGIN_SELECT_LOOP(s) > + Py_BEGIN_ALLOW_THREADS > + timeout = internal_select_ex(s, 0, interval); > + if (!timeout) > + outlen = recv(s->sock_fd, cbuf, len, flags); > + Py_END_ALLOW_THREADS > + if (timeout == 1) { > + PyErr_SetString(socket_timeout, "timed out"); > + return -1; > + } > + END_SELECT_LOOP(s) > +*/ > +PyAPI_FUNC(double) _PyTime_floattime(void); /* defined in > +timemodule.c */ #define BEGIN_SELECT_LOOP(s) \ > + { \ > + double deadline, interval = s->sock_timeout; \ > + int has_timeout = s->sock_timeout > 0.0; \ > + if (has_timeout) { \ > + deadline = _PyTime_floattime() + s->sock_timeout; \ > + } \ > + while (1) { \ > + errno = 0; \ > + > +#define END_SELECT_LOOP(s) \ > + if (!has_timeout || \ > + (!CHECK_ERRNO(EWOULDBLOCK) && !CHECK_ERRNO(EAGAIN))) \ > + break; \ > + interval = deadline - _PyTime_floattime(); \ > + } \ > + } \ > + > /* Initialize a new socket object. 
*/ > > static double defaulttimeout = -1.0; /* Default timeout for new > sockets */ @@ -1656,8 +1714,9 @@ > if (!IS_SELECTABLE(s)) > return select_error(); > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) > newfd = accept(s->sock_fd, SAS2SA(&addrbuf), &addrlen); > Py_END_ALLOW_THREADS > @@ -1666,6 +1725,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > > #ifdef MS_WINDOWS > if (newfd == INVALID_SOCKET) > @@ -2355,8 +2415,9 @@ > } > > #ifndef __VMS > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) > outlen = recv(s->sock_fd, cbuf, len, flags); > Py_END_ALLOW_THREADS > @@ -2365,6 +2426,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return -1; > } > + END_SELECT_LOOP(s) > if (outlen < 0) { > /* Note: the call to errorhandler() ALWAYS indirectly returned > NULL, so ignore its return value */ @@ -2386,8 +2448,9 @@ > segment = remaining; > } > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) > nread = recv(s->sock_fd, read_buf, segment, flags); > Py_END_ALLOW_THREADS > @@ -2396,6 +2459,8 @@ > PyErr_SetString(socket_timeout, "timed out"); > return -1; > } > + END_SELECT_LOOP(s) > + > if (nread < 0) { > s->errorhandler(); > return -1; > @@ -2559,9 +2624,10 @@ > return -1; > } > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > memset(&addrbuf, 0, addrlen); > - timeout = internal_select(s, 0); > + timeout = internal_select_ex(s, 0, interval); > if (!timeout) { > #ifndef MS_WINDOWS > #if defined(PYOS_OS2) && !defined(PYCC_GCC) @@ -2582,6 +2648,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return -1; > } > + END_SELECT_LOOP(s) > if (n < 0) { > s->errorhandler(); > return -1; > @@ -2719,8 +2786,9 @@ > buf = pbuf.buf; > len = pbuf.len; > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 1); > + timeout = internal_select_ex(s, 1, interval); > if (!timeout) > #ifdef __VMS > n = sendsegmented(s->sock_fd, buf, len, flags); @@ -2728,13 > +2796,14 @@ > n = send(s->sock_fd, buf, len, flags); #endif > Py_END_ALLOW_THREADS > - > - PyBuffer_Release(&pbuf); > - > if (timeout == 1) { > + PyBuffer_Release(&pbuf); > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > + > + PyBuffer_Release(&pbuf); > if (n < 0) > return s->errorhandler(); > return PyInt_FromLong((long)n); > @@ -2768,8 +2837,9 @@ > } > > do { > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 1); > + timeout = internal_select_ex(s, 1, interval); > n = -1; > if (!timeout) { > #ifdef __VMS > @@ -2784,6 +2854,7 @@ > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > /* PyErr_CheckSignals() might change errno */ > saved_errno = errno; > /* We must run our signal handlers before looping again. 
> @@ -2863,17 +2934,20 @@ > return NULL; > } > > + BEGIN_SELECT_LOOP(s) > Py_BEGIN_ALLOW_THREADS > - timeout = internal_select(s, 1); > + timeout = internal_select_ex(s, 1, interval); > if (!timeout) > n = sendto(s->sock_fd, buf, len, flags, SAS2SA(&addrbuf), addrlen); > Py_END_ALLOW_THREADS > > - PyBuffer_Release(&pbuf); > if (timeout == 1) { > + PyBuffer_Release(&pbuf); > PyErr_SetString(socket_timeout, "timed out"); > return NULL; > } > + END_SELECT_LOOP(s) > + PyBuffer_Release(&pbuf); > if (n < 0) > return s->errorhandler(); > return PyInt_FromLong((long)n); > diff --git a/Modules/timemodule.c b/Modules/timemodule.c > --- a/Modules/timemodule.c > +++ b/Modules/timemodule.c > @@ -1055,3 +1055,10 @@ > > return 0; > } > + > +/* export floattime to socketmodule.c */ > +PyAPI_FUNC(double) > +_PyTime_floattime(void) > +{ > + return floattime(); > +} > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > _______________________________________________ Python-checkins mailing list Python-checkins at python.org http://mail.python.org/mailman/listinfo/python-checkins _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From guido at python.org Tue Mar 19 20:55:53 2013 From: guido at python.org (Guido van Rossum) Date: Tue, 19 Mar 2013 12:55:53 -0700 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Issue #9090 : Error code 10035 calling socket.recv() on a socket with a timeout In-Reply-To: <3ZVj1911wHzSrl@mail.python.org> References: <3ZVj1911wHzSrl@mail.python.org> Message-ID: On Tue, Mar 19, 2013 at 11:08 AM, kristjan.jonsson < python-checkins at python.org> wrote: > http://hg.python.org/cpython/rev/8ec39bfd1f01 > changeset: 82764:8ec39bfd1f01 > branch: 2.7 > parent: 82740:b10ec5083a53 > user: Kristj?n Valur J?nsson > date: Tue Mar 19 10:58:59 2013 -0700 > summary: > Issue #9090 : Error code 10035 calling socket.recv() on a socket with a > timeout > (WSAEWOULDBLOCK - A non-blocking socket operation could not be completed > immediately) > [...] > +- Issue #9090: When a socket with a timeout fails with EWOULDBLOCK or > EAGAIN, > + retry the select() loop instead of bailing out. This is because > select() > + can incorrectly report a socket as ready for reading (for example, if it > + received some data with an invalid checksum). > Might I recommend treating EINTR the same way? It has the same issue of popping up, rarely, when you least expect it, and messing with your code. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Wed Mar 20 01:14:07 2013 From: larry at hastings.org (Larry Hastings) Date: Tue, 19 Mar 2013 17:14:07 -0700 Subject: [Python-Dev] Early results from Argument Clinic automation discussion Message-ID: <5148FF4F.8020704@hastings.org> Mark Shannon, Dmitry Jemerov (from PyCharm) and I sat down to talk about rearchitecting the Argument Clinic prototype to make it easily to interact with. We came up with the following. The DSL will now produce an intermediate representation. The output will consume this intermediate representation. The two defined outputs from the IR so far: 1. 
The in-place generated C code (what it has right now), and 2. Argument Clinic DSL code itself. The intermediate representation will be subclasses of inspect.Signature and inspect.Parameter that add the extra bits of information needed by Argument Clinic, as follows:

    class Function(inspect.Signature):
        name = 'function name'
        module = 'module name if any'
        class = 'class name if any, can't actually call this class'
        docstring = 'function docstring'
        c_id = 'stub id to use for generated c functions'
        return_converter = SeeBelow()

    class Parameter(inspect.Parameter):
        docstring = 'per-parameter docstring'
        group = ...  # an int or None, see below
        converter = ConverterFunction()

Parameter.group is an integer, specifying which "option group" the parameter is in. This must be an integer for positional-only arguments, and None for other argument types. 0 indicates "required positional-only parameter". Left-optional groups get negative numbers, decreasing as they get further from the required parameters; right-optional get positive numbers, increasing as they get further from the required parameters. (Groups cannot nest, so a parameter cannot be in more than one group.) Function.return_converter was suggested by Mark. This is the inverse of the per-parameter converter function: it defines the return type of the impl function, and the conversion process to turn it into the PyObject * that gets returned to Python. And, like the converter functions, it will define the actual return annotation of the function (if any). Mark wants to write something that parses C code implementing builtins and produces the IR; he can then use that to write Argument Clinic DSL. This will make the conversion process go much more quickly. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Wed Mar 20 01:26:33 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Mar 2013 01:26:33 +0100 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to References: <3ZVsKZ6MKPzPF3@mail.python.org> Message-ID: <20130320012633.57bcb451@pitrou.net> On Wed, 20 Mar 2013 01:22:58 +0100 (CET) michael.foord wrote: > http://hg.python.org/cpython/rev/684b75600fa9 > changeset: 82811:684b75600fa9 > user: Michael Foord > date: Tue Mar 19 17:22:51 2013 -0700 > summary: > Closes issue 17467. Add readline and readlines support to unittest.mock.mock_open Wasn't it possible to re-use an existing implementation (such as TextIOBase or StringIO) rather than re-write your own? (it's not even obvious your implementation is correct, BTW. How about universal newlines?) Regards Antoine. 
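As an illustration of that suggestion, delegating the line-oriented methods to a real io.StringIO would give mock_open the io module's newline handling for free. The helper below is only a sketch of the idea (the name open_mock is made up here); it is not the implementation that was committed:

    import io
    from unittest import mock

    def open_mock(read_data=''):
        # Sketch: back the mock handle with a real StringIO so that read(),
        # readline() and readlines() behave like the io module's own code.
        # newline=None asks StringIO for universal-newline translation when
        # reading, which addresses the universal-newlines question above.
        m = mock.mock_open(read_data=read_data)
        handle = m.return_value
        buf = io.StringIO(read_data, newline=None)
        handle.read.side_effect = buf.read
        handle.readline.side_effect = buf.readline
        handle.readlines.side_effect = buf.readlines
        return m

Used as, e.g., "with mock.patch('builtins.open', open_mock('a\nb\n')): ..." the patched open('f').readline() then returns 'a\n'.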
From ezio.melotti at gmail.com Wed Mar 20 02:31:05 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Wed, 20 Mar 2013 03:31:05 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #17385: Fix quadratic behavior in threading.Condition In-Reply-To: References: <3ZPLXL0jVYzS7C@mail.python.org> Message-ID: On Mon, Mar 11, 2013 at 3:14 AM, Ezio Melotti wrote: > Hi, > > On Mon, Mar 11, 2013 at 2:58 AM, raymond.hettinger > wrote: >> http://hg.python.org/cpython/rev/0f86b51f8f8b >> changeset: 82592:0f86b51f8f8b >> user: Raymond Hettinger >> date: Sun Mar 10 17:57:28 2013 -0700 >> summary: >> Issue #17385: Fix quadratic behavior in threading.Condition >> >> files: >> Lib/threading.py | 10 ++++++++-- >> Misc/NEWS | 3 +++ >> 2 files changed, 11 insertions(+), 2 deletions(-) >> >> >> diff --git a/Lib/threading.py b/Lib/threading.py >> --- a/Lib/threading.py >> +++ b/Lib/threading.py >> @@ -10,6 +10,12 @@ >> from time import time as _time >> from traceback import format_exc as _format_exc >> from _weakrefset import WeakSet >> +try: >> + from _itertools import islice as _slice >> + from _collections import deque as _deque >> +except ImportError: >> + from itertools import islice as _islice >> + from collections import deque as _deque >> > > Shouldn't the one in the 'try' be _islice too? > Also I don't seem to have an _itertools module. Is this something used by the other VMs? > Best Regards, > Ezio Melotti From kristjan at ccpgames.com Wed Mar 20 04:16:53 2013 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Wed, 20 Mar 2013 03:16:53 +0000 Subject: [Python-Dev] [Python-checkins] cpython: #15927: Fix cvs.reader parsing of escaped \r\n with quoting off. In-Reply-To: <3ZVwQ43t3lzSHt@mail.python.org> References: <3ZVwQ43t3lzSHt@mail.python.org> Message-ID: The compiler complains about this line: if (c == '\n' | c=='\r') { Perhaps you wanted a Boolean operator? -----Original Message----- From: Python-checkins [mailto:python-checkins-bounces+kristjan=ccpgames.com at python.org] On Behalf Of r.david.murray Sent: 19. mars 2013 19:42 To: python-checkins at python.org Subject: [Python-checkins] cpython: #15927: Fix cvs.reader parsing of escaped \r\n with quoting off. http://hg.python.org/cpython/rev/940748853712 changeset: 82815:940748853712 parent: 82811:684b75600fa9 user: R David Murray date: Tue Mar 19 22:41:47 2013 -0400 summary: #15927: Fix cvs.reader parsing of escaped \r\n with quoting off. This fix means that such values are correctly roundtripped, since cvs.writer already does the correct escaping. Patch by Michael Johnson. 
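As a standalone sketch of the roundtrip behaviour this csv patch restores (it mirrors the new test quoted just below), the following passes only on an interpreter that has the fix applied; the row data comes from that test:

import csv
from tempfile import TemporaryFile

rows = [['a\nb', 'b'], ['c', 'x\r\nd']]   # embedded newlines, quoting disabled
with TemporaryFile('w+', newline='') as f:
    csv.writer(f, quoting=csv.QUOTE_NONE, escapechar='\\').writerows(rows)
    f.seek(0)
    # Before the fix, the escaped \r\n was not parsed back into the field.
    assert list(csv.reader(f, quoting=csv.QUOTE_NONE, escapechar='\\')) == rows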
files: Lib/test/test_csv.py | 9 +++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ Modules/_csv.c | 13 ++++++++++++- 4 files changed, 25 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_csv.py b/Lib/test/test_csv.py --- a/Lib/test/test_csv.py +++ b/Lib/test/test_csv.py @@ -308,6 +308,15 @@ for i, row in enumerate(csv.reader(fileobj)): self.assertEqual(row, rows[i]) + def test_roundtrip_escaped_unquoted_newlines(self): + with TemporaryFile("w+", newline='') as fileobj: + writer = csv.writer(fileobj,quoting=csv.QUOTE_NONE,escapechar="\\") + rows = [['a\nb','b'],['c','x\r\nd']] + writer.writerows(rows) + fileobj.seek(0) + for i, row in enumerate(csv.reader(fileobj,quoting=csv.QUOTE_NONE,escapechar="\\")): + self.assertEqual(row,rows[i]) + class TestDialectRegistry(unittest.TestCase): def test_registry_badargs(self): self.assertRaises(TypeError, csv.list_dialects, None) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -591,6 +591,7 @@ Fredrik Johansson Gregory K. Johnson Kent Johnson +Michael Johnson Simon Johnston Matt Joiner Thomas Jollans diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -289,6 +289,9 @@ Library ------- +- Issue #15927: CVS now correctly parses escaped newlines and carriage + when parsing with quoting turned off. + - Issue #17467: add readline and readlines support to mock_open in unittest.mock. diff --git a/Modules/_csv.c b/Modules/_csv.c --- a/Modules/_csv.c +++ b/Modules/_csv.c @@ -51,7 +51,7 @@ typedef enum { START_RECORD, START_FIELD, ESCAPED_CHAR, IN_FIELD, IN_QUOTED_FIELD, ESCAPE_IN_QUOTED_FIELD, QUOTE_IN_QUOTED_FIELD, - EAT_CRNL + EAT_CRNL,AFTER_ESCAPED_CRNL } ParserState; typedef enum { @@ -644,6 +644,12 @@ break; case ESCAPED_CHAR: + if (c == '\n' | c=='\r') { + if (parse_add_char(self, c) < 0) + return -1; + self->state = AFTER_ESCAPED_CRNL; + break; + } if (c == '\0') c = '\n'; if (parse_add_char(self, c) < 0) @@ -651,6 +657,11 @@ self->state = IN_FIELD; break; + case AFTER_ESCAPED_CRNL: + if (c == '\0') + break; + /*fallthru*/ + case IN_FIELD: /* in unquoted field */ if (c == '\n' || c == '\r' || c == '\0') { -- Repository URL: http://hg.python.org/cpython From rdmurray at bitdance.com Wed Mar 20 04:59:25 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 19 Mar 2013 23:59:25 -0400 Subject: [Python-Dev] [Python-checkins] cpython: #15927: Fix cvs.reader parsing of escaped \r\n with quoting off. In-Reply-To: References: <3ZVwQ43t3lzSHt@mail.python.org> Message-ID: <20130320035926.75D5A2500B3@webabinitio.net> On Wed, 20 Mar 2013 03:16:53 -0000, =?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?= wrote: > The compiler complains about this line: > if (c == '\n' | c=='\r') { > > Perhaps you wanted a Boolean operator? Indeed, yes. --David From andrew.svetlov at gmail.com Wed Mar 20 05:25:00 2013 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 19 Mar 2013 21:25:00 -0700 Subject: [Python-Dev] [Python-checkins] peps: update 2.7.4 release dates In-Reply-To: <3ZVyV50rrYzSFs@mail.python.org> References: <3ZVyV50rrYzSFs@mail.python.org> Message-ID: Are you sure about 2.7.4 2012-04-06? I mean 2012 year. 
On Tue, Mar 19, 2013 at 9:15 PM, benjamin.peterson wrote: > http://hg.python.org/peps/rev/ce17779c395c > changeset: 4810:ce17779c395c > user: Benjamin Peterson > date: Tue Mar 19 23:15:23 2013 -0500 > summary: > update 2.7.4 release dates > > files: > pep-0373.txt | 4 ++-- > 1 files changed, 2 insertions(+), 2 deletions(-) > > > diff --git a/pep-0373.txt b/pep-0373.txt > --- a/pep-0373.txt > +++ b/pep-0373.txt > @@ -56,8 +56,8 @@ > > Planned future release dates: > > -- 2.7.4rc1 2013-02-02 > -- 2.7.4 2012-02-16 > +- 2.7.4rc1 2013-03-23 > +- 2.7.4 2012-04-06 > > Dates of previous maintenance releases: > > > -- > Repository URL: http://hg.python.org/peps > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -- Thanks, Andrew Svetlov From benjamin at python.org Wed Mar 20 05:30:33 2013 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 19 Mar 2013 21:30:33 -0700 Subject: [Python-Dev] [Python-checkins] peps: update 2.7.4 release dates In-Reply-To: References: <3ZVyV50rrYzSFs@mail.python.org> Message-ID: Good catch. 2013/3/19 Andrew Svetlov : > Are you sure about 2.7.4 2012-04-06? I mean 2012 year. > > On Tue, Mar 19, 2013 at 9:15 PM, benjamin.peterson > wrote: >> http://hg.python.org/peps/rev/ce17779c395c >> changeset: 4810:ce17779c395c >> user: Benjamin Peterson >> date: Tue Mar 19 23:15:23 2013 -0500 >> summary: >> update 2.7.4 release dates >> >> files: >> pep-0373.txt | 4 ++-- >> 1 files changed, 2 insertions(+), 2 deletions(-) >> >> >> diff --git a/pep-0373.txt b/pep-0373.txt >> --- a/pep-0373.txt >> +++ b/pep-0373.txt >> @@ -56,8 +56,8 @@ >> >> Planned future release dates: >> >> -- 2.7.4rc1 2013-02-02 >> -- 2.7.4 2012-02-16 >> +- 2.7.4rc1 2013-03-23 >> +- 2.7.4 2012-04-06 >> >> Dates of previous maintenance releases: >> >> >> -- >> Repository URL: http://hg.python.org/peps >> >> _______________________________________________ >> Python-checkins mailing list >> Python-checkins at python.org >> http://mail.python.org/mailman/listinfo/python-checkins >> > > > > -- > Thanks, > Andrew Svetlov > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins -- Regards, Benjamin From fuzzyman at voidspace.org.uk Wed Mar 20 05:44:15 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 19 Mar 2013 21:44:15 -0700 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: <20130320012633.57bcb451@pitrou.net> References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> Message-ID: On 19 Mar 2013, at 17:26, Antoine Pitrou wrote: > On Wed, 20 Mar 2013 01:22:58 +0100 (CET) > michael.foord wrote: >> http://hg.python.org/cpython/rev/684b75600fa9 >> changeset: 82811:684b75600fa9 >> user: Michael Foord >> date: Tue Mar 19 17:22:51 2013 -0700 >> summary: >> Closes issue 17467. Add readline and readlines support to unittest.mock.mock_open > > Wasn't it possible to re-use an existing implementation (such as > TextIOBase or StringIO) rather than re-write your own? > > (it's not even obvious your implementation is correct, BTW. How about > universal newlines?) mock_open makes it easy to put a StringIO in place if that's what you want. It's just a simple helper function for providing some known data *along with the Mock api* to make asserts that it was used correctly. 
It isn't presenting a full file-system. My suggestion to the implementor of the patch was that read / readline / readlines be disconnected - but the patch provided allows them to be interleaved and I saw no reason to undo that. If users want more complex behaviour (like universal newline support) they can use mock_open along with a StringIO. Michael > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From greg at krypto.org Wed Mar 20 07:09:22 2013 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 19 Mar 2013 23:09:22 -0700 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> Message-ID: On Tue, Mar 19, 2013 at 9:44 PM, Michael Foord wrote: > > On 19 Mar 2013, at 17:26, Antoine Pitrou wrote: > > > On Wed, 20 Mar 2013 01:22:58 +0100 (CET) > > michael.foord wrote: > >> http://hg.python.org/cpython/rev/684b75600fa9 > >> changeset: 82811:684b75600fa9 > >> user: Michael Foord > >> date: Tue Mar 19 17:22:51 2013 -0700 > >> summary: > >> Closes issue 17467. Add readline and readlines support to > unittest.mock.mock_open > > > > Wasn't it possible to re-use an existing implementation (such as > > TextIOBase or StringIO) rather than re-write your own? > > > > (it's not even obvious your implementation is correct, BTW. How about > > universal newlines?) > > mock_open makes it easy to put a StringIO in place if that's what you > want. It's just a simple helper function for providing some known data > *along with the Mock api* to make asserts that it was used correctly. It > isn't presenting a full file-system. My suggestion to the implementor of > the patch was that read / readline / readlines be disconnected - but the > patch provided allows them to be interleaved and I saw no reason to undo > that. > > If users want more complex behaviour (like universal newline support) they > can use mock_open along with a StringIO. > It'd be good to mention that in the unittest.mock.rst docs. > > Michael > > > > > > Regards > > > > Antoine. > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > > > -- > http://www.voidspace.org.uk/ > > > May you do good and not evil > May you find forgiveness for yourself and forgive others > May you share freely, never taking more than you give. > -- the sqlite blessing > http://www.sqlite.org/different.html > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Wed Mar 20 08:09:29 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Mar 2013 08:09:29 +0100 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> Message-ID: <20130320080929.2cdbab56@pitrou.net> On Tue, 19 Mar 2013 21:44:15 -0700 Michael Foord wrote: > > mock_open makes it easy to put a StringIO in place if that's what you want. It's just a simple helper function for providing some known data *along with the Mock api* to make asserts that it was used correctly. It isn't presenting a full file-system. My suggestion to the implementor of the patch was that read / readline / readlines be disconnected - but the patch provided allows them to be interleaved and I saw no reason to undo that. > > If users want more complex behaviour (like universal newline support) they can use mock_open along with a StringIO. This is not about complex behaviour but simply correct behaviour. For the record, universal newlines are enabled by default in Python 3: >>> with open("foo", "wb") as f: f.write(b"a\r\nb\rc\n") ... 7 >>> with open("foo", "r") as f: print(list(f)) ... ['a\n', 'b\n', 'c\n'] Regards Antoine. From fuzzyman at voidspace.org.uk Wed Mar 20 08:50:27 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 20 Mar 2013 00:50:27 -0700 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: <20130320080929.2cdbab56@pitrou.net> References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> <20130320080929.2cdbab56@pitrou.net> Message-ID: On 20 Mar 2013, at 00:09, Antoine Pitrou wrote: > On Tue, 19 Mar 2013 21:44:15 -0700 > Michael Foord wrote: >> >> mock_open makes it easy to put a StringIO in place if that's what you want. It's just a simple helper function for providing some known data *along with the Mock api* to make asserts that it was used correctly. It isn't presenting a full file-system. My suggestion to the implementor of the patch was that read / readline / readlines be disconnected - but the patch provided allows them to be interleaved and I saw no reason to undo that. >> >> If users want more complex behaviour (like universal newline support) they can use mock_open along with a StringIO. > > This is not about complex behaviour but simply correct behaviour. > For the record, universal newlines are enabled by default in Python 3: > >>>> with open("foo", "wb") as f: f.write(b"a\r\nb\rc\n") > ... > 7 >>>> with open("foo", "r") as f: print(list(f)) > ... > ['a\n', 'b\n', 'c\n'] > mock_open is ?not? presenting a mock filesystem, but is about providing a mock object to avoid ?either? reading or writing to the real filesystem. You don't ?tend? to do both with a single file handle - I know it's possible but mock_open is a convenience function for the common case. This commit simply adds support for readline and readlines, whereas before only read was supported. If you want to add support for additional functionality feel free to propose a patch. Michael > > Regards > > Antoine. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk From solipsis at pitrou.net Wed Mar 20 09:03:20 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Mar 2013 09:03:20 +0100 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> <20130320080929.2cdbab56@pitrou.net> Message-ID: <20130320090320.6c5a22be@pitrou.net> On Wed, 20 Mar 2013 00:50:27 -0700 Michael Foord wrote: > > If you want to add support for additional functionality feel free to propose a patch. This isn't about additional functionality, this is about correctness. You don't want to write multiple, slightly different, implementations of readline() and friends (which is what Python 2 did). Regards Antoine. From fuzzyman at voidspace.org.uk Wed Mar 20 09:14:36 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 20 Mar 2013 01:14:36 -0700 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: <20130320090320.6c5a22be@pitrou.net> References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> <20130320080929.2cdbab56@pitrou.net> <20130320090320.6c5a22be@pitrou.net> Message-ID: <59A3CDC7-2D7E-4F28-AB14-01023144F637@voidspace.org.uk> On 20 Mar 2013, at 01:03, Antoine Pitrou wrote: > On Wed, 20 Mar 2013 00:50:27 -0700 > Michael Foord wrote: >> >> If you want to add support for additional functionality feel free to propose a patch. > > This isn't about additional functionality, this is about > correctness. You don't want to write multiple, slightly different, > implementations of readline() and friends (which is what Python 2 did). > This change allows you to set a series of return values for readlines when mocking open. We are not reading data from anywhere the user is supplying it pre-canned. Do you have a specific problem with it? Michael > Regards > > Antoine. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk From fuzzyman at voidspace.org.uk Wed Mar 20 10:14:46 2013 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 20 Mar 2013 02:14:46 -0700 Subject: [Python-Dev] cpython: Closes issue 17467. Add readline and readlines support to In-Reply-To: References: <3ZVsKZ6MKPzPF3@mail.python.org> <20130320012633.57bcb451@pitrou.net> Message-ID: On 19 Mar 2013, at 23:09, "Gregory P. Smith" wrote: > > On Tue, Mar 19, 2013 at 9:44 PM, Michael Foord wrote: >> >> On 19 Mar 2013, at 17:26, Antoine Pitrou wrote: >> >> > On Wed, 20 Mar 2013 01:22:58 +0100 (CET) >> > michael.foord wrote: >> >> http://hg.python.org/cpython/rev/684b75600fa9 >> >> changeset: 82811:684b75600fa9 >> >> user: Michael Foord >> >> date: Tue Mar 19 17:22:51 2013 -0700 >> >> summary: >> >> Closes issue 17467. Add readline and readlines support to unittest.mock.mock_open >> > >> > Wasn't it possible to re-use an existing implementation (such as >> > TextIOBase or StringIO) rather than re-write your own? >> > >> > (it's not even obvious your implementation is correct, BTW. How about >> > universal newlines?) 
>> >> mock_open makes it easy to put a StringIO in place if that's what you want. It's just a simple helper function for providing some known data *along with the Mock api* to make asserts that it was used correctly. It isn't presenting a full file-system. My suggestion to the implementor of the patch was that read / readline / readlines be disconnected - but the patch provided allows them to be interleaved and I saw no reason to undo that. >> >> If users want more complex behaviour (like universal newline support) they can use mock_open along with a StringIO. > > It'd be good to mention that in the unittest.mock.rst docs. > I'll look at clarifying the intent and limitations of mock_open in the docs - plus an example of using it with a StringIO. It maybe that the support for interleaving of read and readline (etc) is just unnecessary and setting them separately is enough (which simplifies the implementation). I know Toshio had a specific use case needing readline support so I'll check with him. Michael >> >> Michael >> >> >> > >> > Regards >> > >> > Antoine. >> > >> > >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > http://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk >> >> >> -- >> http://www.voidspace.org.uk/ >> >> >> May you do good and not evil >> May you find forgiveness for yourself and forgive others >> May you share freely, never taking more than you give. >> -- the sqlite blessing >> http://www.sqlite.org/different.html >> >> >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Wed Mar 20 13:23:43 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 05:23:43 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #13248: removed deprecated and undocumented difflib.isbjunk, isbpopular. In-Reply-To: <3ZVrTH23v4zPCm@mail.python.org> References: <3ZVrTH23v4zPCm@mail.python.org> Message-ID: A mention in Misc/NEWS can't hurt here, Terry. Even though it's undocumented, some old code could rely on it being there and this code will break with the transition to 3.4 Eli On Tue, Mar 19, 2013 at 4:44 PM, terry.reedy wrote: > http://hg.python.org/cpython/rev/612d8bbcfa3a > changeset: 82807:612d8bbcfa3a > user: Terry Jan Reedy > date: Tue Mar 19 19:44:04 2013 -0400 > summary: > Issue #13248: removed deprecated and undocumented difflib.isbjunk, > isbpopular. > > files: > Lib/difflib.py | 14 -------------- > 1 files changed, 0 insertions(+), 14 deletions(-) > > > diff --git a/Lib/difflib.py b/Lib/difflib.py > --- a/Lib/difflib.py > +++ b/Lib/difflib.py > @@ -336,20 +336,6 @@ > for elt in popular: # ditto; as fast for 1% deletion > del b2j[elt] > > - def isbjunk(self, item): > - "Deprecated; use 'item in SequenceMatcher().bjunk'." > - warnings.warn("'SequenceMatcher().isbjunk(item)' is deprecated;\n" > - "use 'item in SMinstance.bjunk' instead.", > - DeprecationWarning, 2) > - return item in self.bjunk > - > - def isbpopular(self, item): > - "Deprecated; use 'item in SequenceMatcher().bpopular'." 
> - warnings.warn("'SequenceMatcher().isbpopular(item)' is > deprecated;\n" > - "use 'item in SMinstance.bpopular' instead.", > - DeprecationWarning, 2) > - return item in self.bpopular > - > def find_longest_match(self, alo, ahi, blo, bhi): > """Find longest matching block in a[alo:ahi] and b[blo:bhi]. > > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Wed Mar 20 17:41:53 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 09:41:53 -0700 Subject: [Python-Dev] IDLE in the stdlib Message-ID: Interesting writeup about PyCon 2013 young coder education: http://therealkatie.net/blog/2013/mar/19/pycon-2013-young-coders/ Quote: "We used IDLE because it's already on Raspian's desktop. Personally, I like IDLE as a teaching tool. It's included in the standard library, it does tab completion and color coding, and it even has a text editor included so you don't have to start your class off by teaching everyone about paths. Too bad it's broke as hell." Personally, I think that IDLE reflects badly on Python in more ways than one. It's badly maintained, quirky and ugly. It serves a very narrow set of uses, and does it badly. Being part of Python *distributions* and being part of core Python standard library are two different things. The former may make sense, the latter IMHO makes no sense whatsoever. Outside the Python core IDLE can be maintained more freely, with less restrictions on contributors and hopefully become a better tool. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From rovitotv at gmail.com Wed Mar 20 18:11:18 2013 From: rovitotv at gmail.com (Todd Rovito) Date: Wed, 20 Mar 2013 13:11:18 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 12:41 PM, Eli Bendersky wrote: > Interesting writeup about PyCon 2013 young coder > education:http://therealkatie.net/blog/2013/mar/19/pycon-2013-young-coders/ > > Quote: > > "We used IDLE because it's already on Raspian's desktop. Personally, I like > IDLE as a teaching tool. It's included in the standard library, it does tab > completion and color coding, and it even has a text editor included so you > don't have to start your class off by teaching everyone about paths. > > Too bad it's broke as hell." > > Personally, I think that IDLE reflects badly on Python in more ways than > one. It's badly maintained, quirky and ugly. It serves a very narrow set of > uses, and does it badly. > > Being part of Python *distributions* and being part of core Python standard > library are two different things. The former may make sense, the latter IMHO > makes no sense whatsoever. Outside the Python core IDLE can be maintained > more freely, with less restrictions on contributors and hopefully become a > better tool. Eli, Thanks for sharing that article it was a fun read. I think the next paragraph from the article is important as well: "I believe my first contribution to the Python Standard Library will be fixes to IDLE. I really do like it that much. Happily, the kids were flexible. If they needed to do a workaround, or ignore something on our slides (they were written with the standard shell in mind), they did so. They were total champs. 
My adult students would have been much more upset." Having an IDE that ships with Python is powerful and follows Python's mantra "batteries included". Personally I think removing IDLE from the Python Standard Library is a mistake. IDLE helps the novice get started as demonstrated by this article. What is frustrating is many patches already exist for IDLE in the bug tracker they simply have not been committed. PEP-434 (http://www.python.org/dev/peps/pep-0434/) is designed to make it easier to get these patches committed. I would ask that you give PEP-434 some time and let the process work before we start a in-depth discussion on if IDLE should stay or go. From eliben at gmail.com Wed Mar 20 18:16:31 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 10:16:31 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 10:11 AM, Todd Rovito wrote: > On Wed, Mar 20, 2013 at 12:41 PM, Eli Bendersky wrote: > > Interesting writeup about PyCon 2013 young coder > > education: > http://therealkatie.net/blog/2013/mar/19/pycon-2013-young-coders/ > > > > Quote: > > > > "We used IDLE because it's already on Raspian's desktop. Personally, I > like > > IDLE as a teaching tool. It's included in the standard library, it does > tab > > completion and color coding, and it even has a text editor included so > you > > don't have to start your class off by teaching everyone about paths. > > > > Too bad it's broke as hell." > > > > Personally, I think that IDLE reflects badly on Python in more ways than > > one. It's badly maintained, quirky and ugly. It serves a very narrow set > of > > uses, and does it badly. > > > > Being part of Python *distributions* and being part of core Python > standard > > library are two different things. The former may make sense, the latter > IMHO > > makes no sense whatsoever. Outside the Python core IDLE can be maintained > > more freely, with less restrictions on contributors and hopefully become > a > > better tool. > Eli, > Thanks for sharing that article it was a fun read. I think the > next paragraph from the article is important as well: > "I believe my first contribution to the Python Standard Library will > be fixes to IDLE. I really do like it that much. Happily, the kids > were flexible. If they needed to do a workaround, or ignore something > on our slides (they were written with the standard shell in mind), > they did so. They were total champs. My adult students would have been > much more upset." > > Having an IDE that ships with Python is powerful and follows Python's > mantra "batteries included". Personally I think removing IDLE from > the Python Standard Library is a mistake. IDLE helps the novice get > started as demonstrated by this article. What is frustrating is many > patches already exist for IDLE in the bug tracker they simply have not > been committed. PEP-434 (http://www.python.org/dev/peps/pep-0434/) is > designed to make it easier to get these patches committed. I would > ask that you give PEP-434 some time and let the process work before we > start a in-depth discussion on if IDLE should stay or go. > Todd, note that I did not propose to remove IDLE from Python distributions, just from the Python core (Mercurial repository, to be technically precise). This is a big difference. Outside the Python core a more free-moving community can be built around developing IDLE. I've seen PEP 434, but it's far from being enough. 
I just don't think there are enough core devs with the time and desire to review IDLE patches (especially non-trivial ones). Outside the Python code, this can be relaxed. And Python distributions can still bundle some stable IDLE release. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Mar 20 18:18:48 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 10:18:48 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 9:41 AM, Eli Bendersky wrote: > Interesting writeup about PyCon 2013 young coder > education:http://therealkatie.net/blog/2013/mar/19/pycon-2013-young-coders/ > > Quote: > > "We used IDLE because it's already on Raspian's desktop. Personally, I like > IDLE as a teaching tool. It's included in the standard library, it does tab > completion and color coding, and it even has a text editor included so you > don't have to start your class off by teaching everyone about paths. > > Too bad it's broke as hell." > > Personally, I think that IDLE reflects badly on Python in more ways than > one. It's badly maintained, quirky and ugly. It serves a very narrow set of > uses, and does it badly. > > Being part of Python *distributions* and being part of core Python standard > library are two different things. The former may make sense, the latter IMHO > makes no sense whatsoever. Outside the Python core IDLE can be maintained > more freely, with less restrictions on contributors and hopefully become a > better tool. Unfortunately, this cannot change until we have a usable installation tool shipping with CPython. Thus, it can only be on the agenda for serious consideration in Python 3.5 at the earliest. In the meantime, any core developers concerned that IDLE reflects badly on Python could go through the tracker issues on bugs.python.org and try to improve the situation for 3.4 (and 3.3.2 and 2.7.5). Feedback on Terry's PEP 434 (explicitly pushing IDLE towards "application that ships with Python that may receive minor enhancements in maintenance releases" status, rather than "no new features whatsoever in maintenance releases") would also be appreciated. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rdmurray at bitdance.com Wed Mar 20 19:09:41 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 20 Mar 2013 14:09:41 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: <20130320180942.20A6A2500B3@webabinitio.net> On Wed, 20 Mar 2013 09:41:53 -0700, Eli Bendersky wrote: > Personally, I think that IDLE reflects badly on Python in more ways than > one. It's badly maintained, quirky and ugly. It serves a very narrow set of > uses, and does it badly. > > Being part of Python *distributions* and being part of core Python standard > library are two different things. The former may make sense, the latter > IMHO makes no sense whatsoever. Outside the Python core IDLE can be > maintained more freely, with less restrictions on contributors and > hopefully become a better tool. On the other hand, after several years of almost complete neglect, we have some people interested in and actively contributing to making it better *in the stdib*. Terry has proposed a PEP for allowing it to see more rapid changes than a "normal" stdlib package, and I haven't perceived a lot of opposition to this. 
I think Terry's PEP represents less of change to how we do things than bundling an externally maintained IDLE would be, especially with respect to Linux. FYI I talked to someone at PyCon who is not a current contributor to IDLE but who is very interested in helping with it, and it sounded like he had the backing of his organization to do this (it was a quick hall conversation and unfortunately I did not get his name). So we may be approaching an inflection point where IDLE will start getting the love that it needs. That said, there is something important in the argument that more contributors could be attracted to an external project. I'm wondering, however, if this is more a reflection of a general issue we might want to look at, than anything specific to IDLE. Python is a growing project, and it may be time to start thinking about better ways to encourage and coordinate more contributions to various pieces of Python and its standard library. But that is a much bigger conversation. --David From rdmurray at bitdance.com Wed Mar 20 19:13:20 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 20 Mar 2013 14:13:20 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #13248: removed deprecated and undocumented difflib.isbjunk, isbpopular. In-Reply-To: References: <3ZVrTH23v4zPCm@mail.python.org> Message-ID: <20130320181320.5B45C2500B3@webabinitio.net> On Wed, 20 Mar 2013 05:23:43 -0700, Eli Bendersky wrote: > A mention in Misc/NEWS can't hurt here, Terry. Even though it's > undocumented, some old code could rely on it being there and this code will > break with the transition to 3.4 Note that we also have a list of deprecated things that were removed in What's New. Aside: given the 3.3 experience, I think people should be thinking in terms of always updating What's New when appropriate, at the time a commit is made. --David From eliben at gmail.com Wed Mar 20 19:22:12 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 11:22:12 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320180942.20A6A2500B3@webabinitio.net> References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: On Wed, Mar 20, 2013 at 11:09 AM, R. David Murray wrote: > On Wed, 20 Mar 2013 09:41:53 -0700, Eli Bendersky > wrote: > > Personally, I think that IDLE reflects badly on Python in more ways than > > one. It's badly maintained, quirky and ugly. It serves a very narrow set > of > > uses, and does it badly. > > > > Being part of Python *distributions* and being part of core Python > standard > > library are two different things. The former may make sense, the latter > > IMHO makes no sense whatsoever. Outside the Python core IDLE can be > > maintained more freely, with less restrictions on contributors and > > hopefully become a better tool. > > On the other hand, after several years of almost complete neglect, > we have some people interested in and actively contributing to making > it better *in the stdib*. Terry has proposed a PEP for allowing it > to see more rapid changes than a "normal" stdlib package, and I haven't > perceived a lot of opposition to this. I think Terry's PEP represents > less of change to how we do things than bundling an externally maintained > IDLE would be, especially with respect to Linux. 
> > FYI I talked to someone at PyCon who is not a current contributor to > IDLE but who is very interested in helping with it, and it sounded like > he had the backing of his organization to do this (it was a quick hall > conversation and unfortunately I did not get his name). So we may be > approaching an inflection point where IDLE will start getting the love > that it needs. > The "choke point" is going to be core devs with the time and desire to review such contributions though. We have a relatively strict process in the Python core, which makes a lot of since *because* it's Python core. Getting things committed in Python is not easy, and even if we get a sudden influx of good patches (which I doubt) these will take time to review and get committed. In an outside project there's much less friction. IDLE would be a great first foray into this "separate project" world, because it is many ways a separate project. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Mar 20 18:09:07 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 20 Mar 2013 10:09:07 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: <20130320100907.22a24586@anarchist> On Mar 20, 2013, at 11:22 AM, Eli Bendersky wrote: >IDLE would be a great first foray into this "separate project" world, >because it is many ways a separate project. I really think that's true. A separate project, occasionally sync'd back into the stdlib by a core dev seems like the right way to manage IDLE. -Barry From guido at python.org Wed Mar 20 19:47:03 2013 From: guido at python.org (Guido van Rossum) Date: Wed, 20 Mar 2013 11:47:03 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: On Wed, Mar 20, 2013 at 11:22 AM, Eli Bendersky wrote: > > > > On Wed, Mar 20, 2013 at 11:09 AM, R. David Murray wrote: > >> On Wed, 20 Mar 2013 09:41:53 -0700, Eli Bendersky >> wrote: >> > Personally, I think that IDLE reflects badly on Python in more ways than >> > one. It's badly maintained, quirky and ugly. It serves a very narrow >> set of >> > uses, and does it badly. >> > >> > Being part of Python *distributions* and being part of core Python >> standard >> > library are two different things. The former may make sense, the latter >> > IMHO makes no sense whatsoever. Outside the Python core IDLE can be >> > maintained more freely, with less restrictions on contributors and >> > hopefully become a better tool. >> >> On the other hand, after several years of almost complete neglect, >> we have some people interested in and actively contributing to making >> it better *in the stdib*. Terry has proposed a PEP for allowing it >> to see more rapid changes than a "normal" stdlib package, and I haven't >> perceived a lot of opposition to this. I think Terry's PEP represents >> less of change to how we do things than bundling an externally maintained >> IDLE would be, especially with respect to Linux. >> >> FYI I talked to someone at PyCon who is not a current contributor to >> IDLE but who is very interested in helping with it, and it sounded like >> he had the backing of his organization to do this (it was a quick hall >> conversation and unfortunately I did not get his name). So we may be >> approaching an inflection point where IDLE will start getting the love >> that it needs. 
>> > > The "choke point" is going to be core devs with the time and desire to > review such contributions though. We have a relatively strict process in > the Python core, which makes a lot of since *because* it's Python core. > Getting things committed in Python is not easy, and even if we get a sudden > influx of good patches (which I doubt) these will take time to review and > get committed. In an outside project there's much less friction. > > IDLE would be a great first foray into this "separate project" world, > because it is many ways a separate project. > +1 -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Wed Mar 20 20:14:09 2013 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 20 Mar 2013 14:14:09 -0500 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320100907.22a24586@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> Message-ID: 2013/3/20 Barry Warsaw : > On Mar 20, 2013, at 11:22 AM, Eli Bendersky wrote: > >>IDLE would be a great first foray into this "separate project" world, >>because it is many ways a separate project. > > I really think that's true. A separate project, occasionally sync'd back into > the stdlib by a core dev seems like the right way to manage IDLE. I would advise against this. Basically, every "externally-maintained" package with have causes pain. For example, the stdlib now has some long-diverged fork of simplejson. With xml.etree, it was not clear for years whether core developers could touch it even though the external project had died. Either the stdlib and IDLE should go separate ways or development has to happen in the stdlib with CPython release schedule and policies. -- Regards, Benjamin From dan at woz.io Wed Mar 20 20:04:19 2013 From: dan at woz.io (Daniel Wozniak) Date: Wed, 20 Mar 2013 12:04:19 -0700 Subject: [Python-Dev] PyCon Sprints - Thank you Message-ID: <514A0833.7000101@woz.io> David and Senthil, I won't make it to the sprints today because my ride wants to go into San Francisco to do touristy things. I'll be flying back to Arizona this evening. I still have a fair amount of code that has not been submitted to the issue tracker. I will sprint from my house tomorrow and will submit whatever else I have ready tomorrow night. In the future, I'd also like to continue contributing by continuing to improve test coverage since that is helping me get familiar with both the code base, style, and general processes of contributing. Thank you both for all your help and patience. You have been very welcoming. I am sure Python will continue to grow and attract new contributors so long as there are mentors such as yourselves there to support n00bs like me. Thanks again, Daniel Wozniak From rdmurray at bitdance.com Wed Mar 20 20:24:47 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 20 Mar 2013 15:24:47 -0400 Subject: [Python-Dev] PyCon Sprints - Thank you In-Reply-To: <514A0833.7000101@woz.io> References: <514A0833.7000101@woz.io> Message-ID: <20130320192447.4F188250069@webabinitio.net> Thank you for your contributions, and we look forward to anything else you may choose to contribute! 
--David From guido at python.org Wed Mar 20 20:31:44 2013 From: guido at python.org (Guido van Rossum) Date: Wed, 20 Mar 2013 12:31:44 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> Message-ID: On Wed, Mar 20, 2013 at 12:14 PM, Benjamin Peterson wrote: > 2013/3/20 Barry Warsaw : > > On Mar 20, 2013, at 11:22 AM, Eli Bendersky wrote: > > > >>IDLE would be a great first foray into this "separate project" world, > >>because it is many ways a separate project. > > > > I really think that's true. A separate project, occasionally sync'd > back into > > the stdlib by a core dev seems like the right way to manage IDLE. > > I would advise against this. Basically, every "externally-maintained" > package with have causes pain. For example, the stdlib now has some > long-diverged fork of simplejson. With xml.etree, it was not clear for > years whether core developers could touch it even though the external > project had died. Either the stdlib and IDLE should go separate ways > or development has to happen in the stdlib with CPython release > schedule and policies. > Agreed that the "sync into stdlib" think should not happen, or should at best be a temporary measure until we can remove idle from the source tarball (maybe at the 3.4 release, otherwise at 3.5). The main thing I like about the separate project idea is that, given that only a small group of people care about IDLE, it is much more satisfying for them to be able to release IDLE separately to their user community regularly (every month if they want to) rather than being held to the core Python release schedule and practices. We should deal with compatibility obligations of the stdlib in the usual way, though maybe we can just delete it in 3.4, since few people presumably use idlelib apart from IDLE itself. Binary distributions from python.org should still include IDLE (and Tcl/Tk) -- however we should switch to bundling the separate project's output rather than bundling the increasingly broken version in the stdlib. What other distributors do is outside our control, but we ought to recommend them to do the same. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Mar 20 20:38:33 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 20 Mar 2013 12:38:33 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> Message-ID: <20130320123833.14886a86@anarchist> On Mar 20, 2013, at 12:31 PM, Guido van Rossum wrote: >Agreed that the "sync into stdlib" think should not happen, or should at >best be a temporary measure until we can remove idle from the source >tarball (maybe at the 3.4 release, otherwise at 3.5). Right. Ultimately, I think IDLE should be a separate project entirely, but I guess there's push back against that too. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From guido at python.org Wed Mar 20 20:40:07 2013 From: guido at python.org (Guido van Rossum) Date: Wed, 20 Mar 2013 12:40:07 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320123833.14886a86@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On Wed, Mar 20, 2013 at 12:38 PM, Barry Warsaw wrote: > On Mar 20, 2013, at 12:31 PM, Guido van Rossum wrote: > > >Agreed that the "sync into stdlib" think should not happen, or should at > >best be a temporary measure until we can remove idle from the source > >tarball (maybe at the 3.4 release, otherwise at 3.5). > > Right. Ultimately, I think IDLE should be a separate project entirely, > but I > guess there's push back against that too. I didn't hear any at the sprint here. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Wed Mar 20 20:46:25 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 12:46:25 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> Message-ID: On Wed, Mar 20, 2013 at 12:14 PM, Benjamin Peterson wrote: > 2013/3/20 Barry Warsaw : > > On Mar 20, 2013, at 11:22 AM, Eli Bendersky wrote: > > > >>IDLE would be a great first foray into this "separate project" world, > >>because it is many ways a separate project. > > > > I really think that's true. A separate project, occasionally sync'd > back into > > the stdlib by a core dev seems like the right way to manage IDLE. > > I would advise against this. Basically, every "externally-maintained" > package with have causes pain. For example, the stdlib now has some > long-diverged fork of simplejson. With xml.etree, it was not clear for > years whether core developers could touch it even though the external > project had died. Either the stdlib and IDLE should go separate ways > or development has to happen in the stdlib with CPython release > schedule and policies. > There are other dependencies like libffi, but I really think IDLE is different. xml.etree and libffi are building blocks upon which a lot of users' code depends. So we have to keep maintaining them (unless there's some sort of agreed deprecation process). IDLE is really a stand-alone project built on Python. It's unique in this respect. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From python-dev at masklinn.net Wed Mar 20 20:51:36 2013 From: python-dev at masklinn.net (Xavier Morel) Date: Wed, 20 Mar 2013 20:51:36 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320123833.14886a86@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On 2013-03-20, at 20:38 , Barry Warsaw wrote: > On Mar 20, 2013, at 12:31 PM, Guido van Rossum wrote: > >> Agreed that the "sync into stdlib" think should not happen, or should at >> best be a temporary measure until we can remove idle from the source >> tarball (maybe at the 3.4 release, otherwise at 3.5). > > Right. Ultimately, I think IDLE should be a separate project entirely, but I > guess there's push back against that too. 
The problem with it is, well, that it's a separate project so unless it is still packaged in (in which case it's not quite separate project, just a separate source tree) it's got to be downloaded and installed separately. That would be a blow to educators, but also Windows users: while the CLI works very nicely in unices, that's not the case with the win32 console which is as best as I can describe it a complete turd, making IDLE a very nice proposition there (I never use IDLE on Linux or OSX, but do all the time in Windows). It also provides a rather capable (and in many case sufficient) code editor for a platform which lacks any form of native text editor allowing sane edition of code. Installing the Python windows packages and having everything "work" (in the sense that you can immediately start writing and running python code) is ? I think ? a pretty big feature. From barry at python.org Wed Mar 20 20:54:45 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 20 Mar 2013 12:54:45 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: <20130320125445.64676161@anarchist> On Mar 20, 2013, at 12:40 PM, Guido van Rossum wrote: >I didn't hear any at the sprint here. JFDI! :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From brian at python.org Wed Mar 20 20:59:37 2013 From: brian at python.org (Brian Curtin) Date: Wed, 20 Mar 2013 14:59:37 -0500 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On Wed, Mar 20, 2013 at 2:51 PM, Xavier Morel wrote: > That would be a blow to educators, but also Windows users: while the CLI > works very nicely in unices, that's not the case with the win32 console > which is as best as I can describe it a complete turd, making IDLE a > very nice proposition there (I never use IDLE on Linux or OSX, but do > all the time in Windows). Can you explain this a bit more? I've been using the CLI python.exe on Windows, Mac, and Linux for years and I don't know what you're talking about. From eliben at gmail.com Wed Mar 20 21:00:35 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 13:00:35 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On Wed, Mar 20, 2013 at 12:51 PM, Xavier Morel wrote: > On 2013-03-20, at 20:38 , Barry Warsaw wrote: > > > On Mar 20, 2013, at 12:31 PM, Guido van Rossum wrote: > > > >> Agreed that the "sync into stdlib" think should not happen, or should at > >> best be a temporary measure until we can remove idle from the source > >> tarball (maybe at the 3.4 release, otherwise at 3.5). > > > > Right. Ultimately, I think IDLE should be a separate project entirely, > but I > > guess there's push back against that too. > > The problem with it is, well, that it's a separate project so unless it > is still packaged in (in which case it's not quite separate project, > just a separate source tree) it's got to be downloaded and installed > separately. 
> > That would be a blow to educators, but also Windows users: while the CLI > works very nicely in unices, that's not the case with the win32 console > which is as best as I can describe it a complete turd, making IDLE a > very nice proposition there (I never use IDLE on Linux or OSX, but do > all the time in Windows). It also provides a rather capable (and in many > case sufficient) code editor for a platform which lacks any form of > native text editor allowing sane edition of code. > > Installing the Python windows packages and having everything "work" (in > the sense that you can immediately start writing and running python > code) is ? I think ? a pretty big feature. > _____________________________ > FWIW, I specifically suggested that IDLE still gets packaged with Python releases for Windows. This shouldn't be hard, because IDLE depends on Python rather than the other way around. Packaging is not what I'm against. Maintaining this project's source within the Python core *is*. I would be interested to hear Martin's opinion on this, as he's producing the Windows installers. Eli P.S. other Python distributions like ActiveState already bundle additional projects with their Python releases (pywin32 if I'm not mistaken). -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Wed Mar 20 21:14:00 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 13:14:00 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: >> Agreed that the "sync into stdlib" think should not happen, or should at > >> best be a temporary measure until we can remove idle from the source > >> tarball (maybe at the 3.4 release, otherwise at 3.5). > > > > Right. Ultimately, I think IDLE should be a separate project entirely, > but I > > guess there's push back against that too. > > The problem with it is, well, that it's a separate project so unless it > is still packaged in (in which case it's not quite separate project, > just a separate source tree) it's got to be downloaded and installed > separately. > > That would be a blow to educators, but also Windows users: while the CLI > works very nicely in unices, that's not the case with the win32 console > which is as best as I can describe it a complete turd, making IDLE a > very nice proposition there (I never use IDLE on Linux or OSX, but do > all the time in Windows). It also provides a rather capable (and in many > case sufficient) code editor for a platform which lacks any form of > native text editor allowing sane edition of code. > > Installing the Python windows packages and having everything "work" (in > the sense that you can immediately start writing and running python > code) is ? I think ? a pretty big feature. Oh, and another thing. If a Windows user wants a good Python shell, IDLE should be his last choice. There's Spyder, there's IPython, there's probably a bunch of others I'm not aware of. This is for IDLE as a shell. The same can be said for IDLE as an editor. This is precisely my main gripe with IDLE: it does a lot of things, but neither of them it does well. It stomps in place while "competition" moves fast forward. If someone cares about it, that someone should fix it and improve it, quickly. The speed required for such improvements is unrealistic for Python core. Eli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Wed Mar 20 21:14:40 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 20 Mar 2013 16:14:40 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320125445.64676161@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <20130320125445.64676161@anarchist> Message-ID: On Wed, Mar 20, 2013 at 3:54 PM, Barry Warsaw wrote: > On Mar 20, 2013, at 12:40 PM, Guido van Rossum wrote: > >>I didn't hear any at the sprint here. > > JFDI! :) > > -Barry +1 why are we still talking show me the patches From doko at ubuntu.com Wed Mar 20 21:18:19 2013 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 20 Mar 2013 13:18:19 -0700 Subject: [Python-Dev] How to fix the incorrect shared library extension on linux for 3.2 and newer? Message-ID: <514A198B.500@ubuntu.com> This is http://bugs.python.org/issue16754, affecting Linux systems only, and only those which don't provide static libraries. PEP 3149 did change the SO macro to include the ABI tag, although the SO macro is used to search for shared system libraries too. E.g. searching for the jpeg library search for a file libjpeg.cpython3.3m.so, which is not found. If the static library libjpeg.a is found, it is taken, and linked as -ljpeg, which then links with the shared library. The patch in the issue now makes a distinction between EXT_SUFFIX and SHLIB_SUFFIX, and restores the value of SO to SHLIB_SUFFIX. Now this could break users of sysconfig.get_config_var('SO'), however I don't see a better way than to restore the original behaviour and advise people to use the new config variables. Matthias From python-dev at masklinn.net Wed Mar 20 21:50:33 2013 From: python-dev at masklinn.net (Xavier Morel) Date: Wed, 20 Mar 2013 21:50:33 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On 2013-03-20, at 20:59 , Brian Curtin wrote: > On Wed, Mar 20, 2013 at 2:51 PM, Xavier Morel wrote: >> That would be a blow to educators, but also Windows users: while the CLI >> works very nicely in unices, that's not the case with the win32 console >> which is as best as I can describe it a complete turd, making IDLE a >> very nice proposition there (I never use IDLE on Linux or OSX, but do >> all the time in Windows). > > Can you explain this a bit more? I've been using the CLI python.exe on > Windows, Mac, and Linux for years and I don't know what you're talking > about. Windows's terminal emulator (the "win32 console")'s deficiencies don't break it for running existing script, but make using it interactively a rather thankless task, at least as far as I'm concerned: no readline keybinding (e.g. C-a & C-e), very limited scrollback, fixed width, non-handling of signals (C-d will simply print ^D, a syntax error), odd copy behavior (rectangle copies *only*), etc? 
From python-dev at masklinn.net Wed Mar 20 21:59:58 2013 From: python-dev at masklinn.net (Xavier Morel) Date: Wed, 20 Mar 2013 21:59:58 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: <3EAD9253-17A5-45CD-AB32-0BE37BE72E48@masklinn.net> On 2013-03-20, at 21:14 , Eli Bendersky wrote: >>> Agreed that the "sync into stdlib" think should not happen, or should at > >>>> best be a temporary measure until we can remove idle from the source >>>> tarball (maybe at the 3.4 release, otherwise at 3.5). >>> >>> Right. Ultimately, I think IDLE should be a separate project entirely, >> but I >>> guess there's push back against that too. >> >> The problem with it is, well, that it's a separate project so unless it >> is still packaged in (in which case it's not quite separate project, >> just a separate source tree) it's got to be downloaded and installed >> separately. >> >> That would be a blow to educators, but also Windows users: while the CLI >> works very nicely in unices, that's not the case with the win32 console >> which is as best as I can describe it a complete turd, making IDLE a >> very nice proposition there (I never use IDLE on Linux or OSX, but do >> all the time in Windows). It also provides a rather capable (and in many >> case sufficient) code editor for a platform which lacks any form of >> native text editor allowing sane edition of code. >> >> Installing the Python windows packages and having everything "work" (in >> the sense that you can immediately start writing and running python >> code) is ? I think ? a pretty big feature. > > > Oh, and another thing. If a Windows user wants a good Python shell, IDLE > should be his last choice. There's Spyder, there's IPython, there's > probably a bunch of others I'm not aware of. Sure, there are plenty of tools for the experienced python developer with reasons to invest time in a windows development setup, but IDLE provides an acceptable low-cost and low-investment base which is *there*: it does not require spending a day downloading, trying out and getting familiar with a dozen different Python IDEs, it's simple and for the most part it works. I view it as an mg, not an emacs, if you see what I mean. From brett at python.org Wed Mar 20 22:05:10 2013 From: brett at python.org (Brett Cannon) Date: Wed, 20 Mar 2013 17:05:10 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On Wed, Mar 20, 2013 at 3:51 PM, Xavier Morel wrote: > On 2013-03-20, at 20:38 , Barry Warsaw wrote: > > > On Mar 20, 2013, at 12:31 PM, Guido van Rossum wrote: > > > >> Agreed that the "sync into stdlib" think should not happen, or should at > >> best be a temporary measure until we can remove idle from the source > >> tarball (maybe at the 3.4 release, otherwise at 3.5). > > > > Right. Ultimately, I think IDLE should be a separate project entirely, > but I > > guess there's push back against that too. > > The problem with it is, well, that it's a separate project so unless it > is still packaged in (in which case it's not quite separate project, > just a separate source tree) it's got to be downloaded and installed > separately. 
> > That would be a blow to educators, but also Windows users: while the CLI > works very nicely in unices, that's not the case with the win32 console > which is as best as I can describe it a complete turd, making IDLE a > very nice proposition there (I never use IDLE on Linux or OSX, but do > all the time in Windows). It also provides a rather capable (and in many > case sufficient) code editor for a platform which lacks any form of > native text editor allowing sane edition of code. > > Installing the Python windows packages and having everything "work" (in > the sense that you can immediately start writing and running python > code) is ? I think ? a pretty big feature. First, a clarification since people seem to have missed it a couple of times: both Eli and Guido said IDLE would continue to be bundled with binary distributions from python.org, just developed independently. Now Guido's comment may have just been to handle deprecation of IDLE from the stdlib, but at least he wasn't saying "leave it out tomorrow", just "take it out of the stdlib to be developed independently". That still allows it to come with Python and alleviate any installation issues by shifting the load to release managers. Second, I hear the "it will hurt educators" argument every time this topic comes up, and so I want to know exactly where the challenge comes from. Is it from people coming to a class with their own laptop where they can't install anything due to lack of knowledge but can bring up a shell and type "idle"? If that were the case can't you also just as easily teach them to type "pysetup pip", "pip install idle", "idle"? Or heck, if Nick's dead-simple bootstrap installer could handle "pysetup idle" then you can cut that down a step. Notice that none of this suggests the removal of tkinter since that never changes and the hard work is already done (although if we could get the binary wheel for the various OSs to just include tkinter w/ IDLE then that could also potentially move out and shrink the binary even further). Is it lack of administrative access on machines in a computer lab? Then I would ask if IDLE is even included in Linux distributions by default typically or can be removed separately? Is it lack of administrative access on personal laptops? In that case you should be able to ask them to find out the password before they come. You can also ask them to install into their user-level site-packages directory (we have PEP 370 for a reason). Regardless of the answers to the questions above, I support the idea of at least releasing IDLE more often and that mostly means developing it externally from the stdlib. If we want to continue to bundle it in the binary, then we should pull in the latest stable release when we cut a new version of Python. But regardless the current situation should not continue. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eliben at gmail.com Wed Mar 20 22:15:38 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 14:15:38 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <3EAD9253-17A5-45CD-AB32-0BE37BE72E48@masklinn.net> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <3EAD9253-17A5-45CD-AB32-0BE37BE72E48@masklinn.net> Message-ID: On Wed, Mar 20, 2013 at 1:59 PM, Xavier Morel wrote: > > On 2013-03-20, at 21:14 , Eli Bendersky wrote: > > >>> Agreed that the "sync into stdlib" think should not happen, or should > at > > > >>>> best be a temporary measure until we can remove idle from the source > >>>> tarball (maybe at the 3.4 release, otherwise at 3.5). > >>> > >>> Right. Ultimately, I think IDLE should be a separate project entirely, > >> but I > >>> guess there's push back against that too. > >> > >> The problem with it is, well, that it's a separate project so unless it > >> is still packaged in (in which case it's not quite separate project, > >> just a separate source tree) it's got to be downloaded and installed > >> separately. > >> > >> That would be a blow to educators, but also Windows users: while the CLI > >> works very nicely in unices, that's not the case with the win32 console > >> which is as best as I can describe it a complete turd, making IDLE a > >> very nice proposition there (I never use IDLE on Linux or OSX, but do > >> all the time in Windows). It also provides a rather capable (and in many > >> case sufficient) code editor for a platform which lacks any form of > >> native text editor allowing sane edition of code. > >> > >> Installing the Python windows packages and having everything "work" (in > >> the sense that you can immediately start writing and running python > >> code) is ? I think ? a pretty big feature. > > > > > > Oh, and another thing. If a Windows user wants a good Python shell, IDLE > > should be his last choice. There's Spyder, there's IPython, there's > > probably a bunch of others I'm not aware of. > > Sure, there are plenty of tools for the experienced python developer > with reasons to invest time in a windows development setup, but IDLE > provides an acceptable low-cost and low-investment base which is > *there*: it does not require spending a day downloading, trying out and > getting familiar with a dozen different Python IDEs, it's simple and > for the most part it works. > > I view it as an mg, not an emacs, if you see what I mean. > _______________________________________________ > This seems more like an education & documentation issue than a technical problem. We can explicitly recommend Python Windows users to install IDLE or Spyder or IPython in some friendly "get started on Windows" guide. But we seem to be talking about different things, really. I'm not saying we shouldn't distribute IDLE with Python on Windows, at this point (I think this will be a good idea in the future, but let's make it gradual). All I'm saying is that IDLE should be developed outside the CPython core project. This has the potential of making both CPython and IDLE better. Eli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From senthil at uthcode.com Wed Mar 20 22:23:06 2013 From: senthil at uthcode.com (Senthil Kumaran) Date: Wed, 20 Mar 2013 14:23:06 -0700 Subject: [Python-Dev] PyCon Sprints - Thank you In-Reply-To: <20130320192447.4F188250069@webabinitio.net> References: <514A0833.7000101@woz.io> <20130320192447.4F188250069@webabinitio.net> Message-ID: Thanks Daniel, for all the patches and improving the test coverage. Hope you had a good time and will you enjoy contributing further. Happy touring SF. -- Senthil On Wed, Mar 20, 2013 at 12:24 PM, R. David Murray wrote: > Thank you for your contributions, and we look forward to anything else > you may choose to contribute! > > --David From barry at python.org Wed Mar 20 22:36:48 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 20 Mar 2013 14:36:48 -0700 Subject: [Python-Dev] How to fix the incorrect shared library extension on linux for 3.2 and newer? In-Reply-To: <514A198B.500@ubuntu.com> References: <514A198B.500@ubuntu.com> Message-ID: <20130320143648.6df30a2f@anarchist> On Mar 20, 2013, at 01:18 PM, Matthias Klose wrote: >The patch in the issue now makes a distinction between EXT_SUFFIX and >SHLIB_SUFFIX, and restores the value of SO to SHLIB_SUFFIX. Now this could >break users of sysconfig.get_config_var('SO'), however I don't see a better >way than to restore the original behaviour and advise people to use the new >config variables. It should probably be considered a bug that we changed the meaning of SO in PEP 3149, but I don't think anybody realized it was being used for both purposes (otherwise I'm sure we wouldn't have done it that way). I suppose Georg should make the final determination for 3.2 and 3.3, but the solution you propose seems about the best you can do. As we discussed at Pycon, you'll post a diff to the PEP in the tracker issue and I'll commit that when I figure out the best way to indicate that a PEP has been updated post-Final status. -Barry From ncoghlan at gmail.com Wed Mar 20 23:05:40 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Mar 2013 15:05:40 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320125445.64676161@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <20130320125445.64676161@anarchist> Message-ID: On Wed, Mar 20, 2013 at 12:54 PM, Barry Warsaw wrote: > On Mar 20, 2013, at 12:40 PM, Guido van Rossum wrote: > >>I didn't hear any at the sprint here. > > JFDI! :) Please don't rush this. We have Roger Serwy being given commit privileges specifically to work on Idle, Terry's PEP proposing to make it explicit that we consider IDLE an application bundled with Python that can receive new features in maintenance releases and several people expressing interest in helping to make IDLE better (primarily educators, including Katie Cunningham, one of the teachers who ran the Raspberry Pi based Young Coders sessions for teens and pre-teens here at PyCon). These are the people who care about Idle, we should be recruiting them to work on it *as it is now*, and then letting them decide if they wish to continue working on it as it is now, or if they prefer to move to a more inclusive development platform which allows them to accept pull requests rather than requiring patches to be generated locally and uploaded to our tracker. 
It's not as simple as saying "let's split it out to a separate repo and then bundle it", because bundling still means python-dev is placing it's stamp of approval on the application, which means we should be satisfied that the developers leading the project are people we trust as stewards of software we distribute. Yes, the status quo of Idle is not something we should allow to continue indefinitely, but decisions about its future development should be made by active maintainers that are already trusted to make changes to it (such as Terry and Roger), rather than those of us that don't use it, and aren't interested in maintaining it. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From "ja...py" at farowl.co.uk Wed Mar 20 22:27:05 2013 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Wed, 20 Mar 2013 21:27:05 +0000 Subject: [Python-Dev] Recent changes to TextIOWrapper and its tests In-Reply-To: References: <5147785D.3070100@farowl.co.uk> Message-ID: <514A29A9.3020500@farowl.co.uk> On 19/03/2013 08:03, Serhiy Storchaka wrote: > On 18.03.13 22:26, Jeff Allen wrote: >> The puzzle is that it requires t.read() to succeed. >> >> When I insert a check for bytes type in all the places it seems >> necessary in my code, I pass the first two conditions, but since >> t.read() also raises TypeError, the overall test fails. Is reading the >> stream with read() intended to succeed? Why is this desired? > > An alternative option is to change C implementation of > TextIOWrapper.read() to raise an exception in this case. However I > worry that it can break backward compatibility. Thanks for this and the previous note. It is good to get it from the horse's mouth. I was surprised that the Python 3 version of the test was different here. I'd looked at the source of textio.c and found no test for bytes type in the n<0 branch of textiowrapper_read. Having tested it just now, I see that the TypeError is raised by the decoder in Py3k, because the input (when it is a unicode string) does not bear the buffer API, and not by a type test in TextIOWrapper.read() at all. For Jython, I shall make TextIOWrapper raise TypeError and our version of test_io will check for it. Any incompatibility relates only to whether a particular mistake sometimes goes undetected, so I feel pretty compatible. Added to which, this is the behaviour of Python 3 and we feel safe anticipating Py3k in small ways. > Are there other tests (in other test files) which fail with a new > Jython TextIOWrapper? I don't think there is anything else specific to TextIOWrapper, but if there is I will first treat that as a fault in our implementation. This is the general approach, to emulate the CPython implementation unless a test is clearly specific to arbitrary implementation choices. (There's a general exclusion for garbage collection.) In this case the test appeared to reflect an accident of implementation, but might just have been deliberate. Parts of the Jython implementation of io not yet implemented in Java are supplied by a Python module _jyio. This is essentially a copy of the corresponding parts of _pyio, except that it has to pass the C* tests, not the Py* tests. In places _jyio is therefore closer to _io than is _pyio. For example, it makes the type tests just discussed, and passes CTextIOWrapperTest.test_illegal_decoder and test_initialization. _jyio.StringIO has getstate and setstate methods lacking in _pyio counterparts to pass pickling tests in test_memoryio. This might be of interest to CPython for _pyio. 
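A minimal sketch of the behaviour under discussion, assuming CPython 3.x;
StrReturningBuffer below is a hypothetical stand-in for a misbehaving buffer
object, not anything in the stdlib. Wrapping an object whose read() returns
str rather than bytes makes TextIOWrapper.read() fail with TypeError, because
the incremental decoder rejects a str that does not support the buffer API:

    import io

    class StrReturningBuffer(io.RawIOBase):
        # A deliberately broken "buffer" whose read() returns str, not bytes.
        def readable(self):
            return True
        def read(self, size=-1):
            return "not bytes"

    t = io.TextIOWrapper(StrReturningBuffer(), encoding="utf-8")
    try:
        t.read()
    except TypeError as exc:
        print("TypeError from the decoder:", exc)

So, as described above, the error surfaces from the decoder rather than from
an explicit type check in TextIOWrapper.read() itself.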
Jeff Allen From solipsis at pitrou.net Wed Mar 20 23:11:47 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 20 Mar 2013 23:11:47 +0100 Subject: [Python-Dev] IDLE in the stdlib References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <20130320125445.64676161@anarchist> Message-ID: <20130320231147.665b3adc@pitrou.net> On Wed, 20 Mar 2013 15:05:40 -0700 Nick Coghlan wrote: > > Yes, the status quo of Idle is not something we should allow to > continue indefinitely, but decisions about its future development > should be made by active maintainers that are already trusted to make > changes to it (such as Terry and Roger), rather than those of us that > don't use it, and aren't interested in maintaining it. Definitely. People shouldn't remain quiescently torpid about the idle status quo. Regards Antoine. From kbk at shore.net Wed Mar 20 23:48:15 2013 From: kbk at shore.net (Kurt B. Kaiser) Date: Wed, 20 Mar 2013 18:48:15 -0400 Subject: [Python-Dev] IDLE in the stdlib References: Message-ID: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> [Barry] > On Mar 20, 2013, at 11:22 AM, Eli Bendersky wrote: >>IDLE would be a great first foray into this "separate project" world, because it is many ways a separate project. > I really think that's true. A separate project, occasionally sync'd back > into > the stdlib by a core dev seems like the right way to manage IDLE. It seems to me that we are seeing increasing use of IDLE for beginner training. I've seen several recent Python books that use IDLE as their programming environment, and which include IDLE screen captures in the text. I've always felt that IDLE should be targeted to an eight year old beginner, and should work uniformly across the major platforms. That now includes the Raspberry Pi!! I believe it's very important that Python come with an IDE as part of the "batteries" - it's very awkward for a beginner to write code in something like Notepad and then run and debug it in a Windows command shell. Just getting the paths right is problematic (and I'm not talking about backslashes). It's very helpful for an instructor to be able to deal with a single application that runs on all the major platforms, and not have to spend a lot of time getting the tools up to speed before the actual Python training can begin. And, while an instructor can walk a student through downloading and installing some IDE, it's very helpful IMHO for a beginner working alone on Windows or Mac to be able to just click on IDLE. A Raspberry Pi might not even have a web connection! IDLE has a single keystroke round trip - it's an IDE, not just an editor like Sublime Text or Notepad. In the 21st century, people expect some sort of IDE. Or, they should! IDLE forked nearly a decade ago to introduce subprocess execution and the configuration dialog. Subsequently, I merged it back into core and it played a useful role in Python 3 development. Scheme hackers write new Scheme implementations. Python hackers tend to write editors and IDEs, it seems. I think, considering all the competition, that IDLE would have died if it hadn't been merged back. Instead, many of its competitors died. So, although I'm pretty agnostic regarding where development is done, I think Python should continue to release a simple native IDE in its binaries, and I worry that IDLE will be eventually dropped from the binaries if it's separate. 
Right now, Apple is delivering IDLE along with Python (though there are issues with the current installation) and I hope that will continue. OTOH, development is likely to be more vigorous if it's separate. I'd also like to make a plea to keep IDLE's interface clean and basic. There are lots of complex IDEs available for those who want them. It's natural for developers to add features, that's what they do :-), but you don't hand a novice a Ferrari (or emacs) and expect good results. IMHO some of the feature patches on the tracker should be rejected on that basis. It's sometimes said that IDLE is "ugly" or "broken". These terms are subjective! If it's truly broken, then we should fix it. If it's "broken" because a feature is missing, maybe that's an intentional part of Guido's design of a simple Python IDE. -- KBK From kbk at shore.net Thu Mar 21 00:10:32 2013 From: kbk at shore.net (Kurt B. Kaiser) Date: Wed, 20 Mar 2013 19:10:32 -0400 Subject: [Python-Dev] IDLE in the stdlib Message-ID: <1363821032.7658.140661207069089.6C4712FB@webmail.messagingengine.com> I apologize for that formatting mess - the Barracuda rejected my original email for some reason (squirrelmail directly from shore.net) and I had to resend it from rejects - it got re-wrapped on transmission. -- KBK From barry at python.org Thu Mar 21 00:47:29 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 20 Mar 2013 16:47:29 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320231147.665b3adc@pitrou.net> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <20130320125445.64676161@anarchist> <20130320231147.665b3adc@pitrou.net> Message-ID: <20130320164729.715c9e10@anarchist> On Mar 20, 2013, at 11:11 PM, Antoine Pitrou wrote: >On Wed, 20 Mar 2013 15:05:40 -0700 >Nick Coghlan wrote: >> >> Yes, the status quo of Idle is not something we should allow to >> continue indefinitely, but decisions about its future development >> should be made by active maintainers that are already trusted to make >> changes to it (such as Terry and Roger), rather than those of us that >> don't use it, and aren't interested in maintaining it. > >Definitely. People shouldn't remain quiescently torpid about the idle >status quo. The release managers should have a say in the matter, since it does cause some amount of pain there. -Barry From tjreedy at udel.edu Thu Mar 21 01:15:56 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 20 Mar 2013 20:15:56 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: On 3/20/2013 2:22 PM, Eli Bendersky wrote: > On Wed, Mar 20, 2013 at 11:09 AM, R. David Murray > wrote: > > On Wed, 20 Mar 2013 09:41:53 -0700, Eli Bendersky > wrote: >> Personally, I think that IDLE reflects badly on Python in more ways than >> one. It's badly maintained, quirky and ugly. It serves a very narrow set of >> uses, and does it badly. Personally, I think running Python with the imitation-MSDOS Windows Command Prompt (CP) reflects badly on Python in more ways than one. It's anti-maintained, quirky and ugly, and for some uses is badly broken. I am serious and am not just being sarcastic. I suggested years ago that we try to find an alternative. Some details: Ugly (and quirky): it imitates ancient white type on black background CRT monitors. I do not know of anything else on Windows that looks like that. 
Broken (and quirky): it has an absurdly limited output buffer (under a
thousand lines) that is overflowed, for instance, by "python -m test". In
other words, by the time the test suite has finished, the early results and
error messages have scrolled off the top. The same can be true of verbose
repository pulls, doc builds, external dependency fetch and compile, help
messages, and any other substantial printing. For example, start the standard
interpreter, enter 'help(str)', page down to the bottom, scroll back up, and
the top of the help message is gone. *That* is real 'broken'.

Quirky: Windows uses cntl-C to copy selected text to the clipboard and (where
appropriate) cntl-V to insert clipboard text at the cursor pretty much
everywhere.

(Anti-maintained: I think Microsoft wants people to abandon unix-like command
processing and that they are intentionally keeping CP ugly and dysfunctional.)

Now let's look at IDLE:

Standard black (+ other colors) on white and close to standard gui. *Much*
prettier than CP.

'Unlimited' lines of code and output in windows. *Much* more functional in
this respect. I cannot think of *anything* in IDLE that compares to the
brokenness of CP in throwing output away.

Standard ^C,^V behavior. To me, IDLE is much less quirky than CP. Oh, but it
did have one quirk -- the right-click context menus lacked the standard Copy
and Paste entries of Windows applications. When that was added for all
versions, a couple of people challenged it as a default-only 'enhancement'
rather than an all-versions 'bugfix'. That challenge directly led to PEP434.
In the two months since, IDLE work has mostly stopped.

I will say more about maintenance problems, but getting back to CP versus
IDLE... From IDLE:
>>> print('\x80')
?
>>> print('\xc8')
?

Impressed? You should be. Open Start menu / Python33 / Python (command line)
and both of those result (modulo the specific character) in

UnicodeEncodeError: 'charmap' codec can't encode character '\xc8' in
position 0: character maps to <undefined>

and we have not even gotten beyond latin-1, into the 'real' unicode char set,
which IDLE supports MUCH better. For display, default United States CP is
close to ascii-only.

In summary: IDLE makes Python interactive mode tolerable and useful on Windows
in ways where CP completely fails. If you are worried about something making
Python look bad on Windows, target CP before IDLE.

> Getting things committed in Python is not easy, and even if we get a
> sudden influx of good patches (which I doubt)

Given recent history, this is silly. A year and a few months ago (Dec 2011, I
think), I asked Roger Serwy to start submitting patches, with the promise to
review and possibly commit some. Consequently, in the last year, we have
*already* gotten a 'sudden influx of good patches'. It is true that I have not
yet kept up my end of the bargain. One problem for me has been an inability to
test IDLE patches on Windows with repository builds, due to the need for a
working tkinter. It turns out that the pc build files are buggy and the
devguide and the (to me confusing) pcbuild/readme do not have the workaround
that was communicated to me only about 10 days ago.

> these will take time to review and get committed.

In spite of my slowness, others have been active. Searching cpython/Misc/NEWS
for ' IDLE ' (case-insensitive) returns 31 hits. For 4 months (Oct-Jan), that
is pretty active, certainly much more active than in the years prior to 2012.
As soon as PEP434 is resolved and Roger is fully on board, the pace will pick
up again.
At least a few people have been given commit privileges but never (or essentially never) exercised them. I think a couple were specifically for working on IDLE. I have not asked any of these people why, but I can imagine from my own experience that uncertainty over what is and is not allowed is one reason. I will discuss repository separation in another response, but request here that the still vague idea of doing something in the *future* not be used to stop PEP434, in whatever form, and current IDLE development *now*. -- Terry Jan Reedy From tjreedy at udel.edu Thu Mar 21 01:30:23 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 20 Mar 2013 20:30:23 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: On 3/20/2013 3:59 PM, Brian Curtin wrote: > On Wed, Mar 20, 2013 at 2:51 PM, Xavier Morel wrote: >> That would be a blow to educators, but also Windows users: while the CLI >> works very nicely in unices, that's not the case with the win32 console >> which is as best as I can describe it a complete turd, making IDLE a >> very nice proposition there (I never use IDLE on Linux or OSX, but do >> all the time in Windows). > > Can you explain this a bit more? I've been using the CLI python.exe on > Windows, Mac, and Linux for years and I don't know what you're talking > about. I gave examples in my response to the original post: ^C,^V do not work (but do in IDLE); the output buffer cannot even hold the entire 'help(str)' response (IDLE can, and much more) ; by default, print() cannot even print all latin-1 chars, let alone the BMP (IDLE will at least print a box for the entire BMP); CP is frozen in time 30 years ago, while IDLE is being maintained and modernized. -- Terry Jan Reedy From tjreedy at udel.edu Thu Mar 21 02:17:08 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 20 Mar 2013 21:17:08 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> References: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> Message-ID: On 3/20/2013 6:48 PM, Kurt B. Kaiser wrote: > It seems to me that we are seeing increasing use of IDLE for beginner > training. I've seen several recent Python books that use IDLE as their > programming environment, and which include IDLE screen captures in the > text. Well, one can hardly use Command Prompt captures, unless one were to flip black and white within the window (but not its frame). > I've always felt that IDLE should be targeted to an eight year old > beginner, and should work uniformly across the major platforms. That > now includes the Raspberry Pi!! I think it should also work uniformly across Python versions. That is the gist of PEP434. > I believe it's very important that Python come with an IDE as part of > the "batteries" - it's very awkward for a beginner to write code in > something like Notepad and then run and debug it in a Windows command shell. I cut and pasted when I began ;-). > IDLE has a single keystroke round trip - it's an IDE, not just an editor > like Sublime Text or Notepad. In the 21st century, people expect some > sort of IDE. Or, they should! I have never understood those who suggest that an editor, even a super editor, can replace IDLE's one key F5-run, with one click return to the spot of the foul on syntax errors. > OTOH, development is likely to be more vigorous if it's separate. 
Perhaps, perhaps not, or perhaps it would become 'too' vigorous if too many
developers pushed multiple 'kitchen sinks'.

> I'd also like to make a plea to keep IDLE's interface clean and basic.
> There are lots of complex IDEs available for those who want them. It's
> natural for developers to add features, that's what they do :-), but you
> don't hand a novice a Ferrari (or emacs) and expect good results. IMHO
> some of the feature patches on the tracker should be rejected on that
> basis.

Have you commented on those issues? I so far have mostly concentrated on
fixing current features. I agree that major new features should be considered
carefully and perhaps discussed on a revived idle-sig list. I have never used
some of the existing features, like breakpoints, that seem pretty advanced. I
first opened a debugger window only recently, in order to comment on an issue
about a possible bug. We should document how to use that before adding
anything else comparable.

> It's sometimes said that IDLE is "ugly" or "broken". These terms are
> subjective!

When IDLE-closing bugs are all fixed, I would like to see how much difference
themed widgets would make to appearance. Then we could debate whether IDLE
should look 'native' on each platform or have a common 'Python' theme -- or
have both and let users choose.

> If it's truly broken, then we should fix it. If it's
> "broken" because a feature is missing, maybe that's an intentional part
> of Guido's design of a simple Python IDE.

Without a vision and design document, it is sometimes hard for someone like me
to know which is which.

-- 
Terry Jan Reedy

From nyamatongwe at me.com  Thu Mar 21 01:38:37 2013
From: nyamatongwe at me.com (Neil Hodgson)
Date: Thu, 21 Mar 2013 11:38:37 +1100
Subject: [Python-Dev] IDLE in the stdlib
In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net>
Message-ID: <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com>

Terry Reedy:

> Broken (and quirky): it has an absurdly limited output buffer (under a thousand lines)

The limit is actually 9999 lines.

> Quirky: Windows uses cntl-C to copy selected text to the clipboard and (where appropriate) cntl-V to insert clipboard text at the cursor pretty much everywhere.

CP uses Ctrl+C to interrupt programs, similar to Unix. Therefore it moves copy
to a different key in a similar way to Unix consoles like GNOME Terminal and
MATE Terminal, which use Shift+Ctrl+C for copy despite Ctrl+C being the
standard for other applications.

Neil

From cben at users.sf.net  Thu Mar 21 03:30:10 2013
From: cben at users.sf.net (Beni Paskin-Cherniavsky)
Date: Thu, 21 Mar 2013 02:30:10 +0000 (UTC)
Subject: [Python-Dev] IDLE in the stdlib
References: <20130320180942.20A6A2500B3@webabinitio.net>
	<20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist>
Message-ID: 

Eli Bendersky <eliben at gmail.com> writes:

> Oh, and another thing. If a Windows user wants a good Python shell,
> IDLE should be his last choice. There's Spyder, there's IPython,
> there's probably a bunch of others I'm not aware of. This is for IDLE
> as a shell. The same can be said for IDLE as an editor. This is
> precisely my main gripe with IDLE: it does a lot of things, but
> neither of them it does well.

Actually, there is surprisingly little competition for IDLE as a shell!
IDLE has mostly working multi-line editing and history, while most
"sophisticated" environments (including Spyder) work line-by-line, which makes
defining a function (let alone a class) in the shell prohibitively painful.
The only other shells I could recommend to a beginner are: 1. IPython, which of course does multi-line editing superbly. Its non-standard extensions are a distraction and it's too far into power-user end of the spectrum to ever become a fits-all recommendation. The notebook is enough of a win to tip the scales for some educators, but the jury is still out if that's a smooth beginner experience. 2. Dreampie, designed by an ex-IDLE-contributor for the sole purpose of being a better shell than IDLE. However, lack of an editor makes it less practical as an introductory tool. From tjreedy at udel.edu Thu Mar 21 03:30:33 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 20 Mar 2013 22:30:33 -0400 Subject: [Python-Dev] A 'common' respository? (was Re: IDLE in the stdlib) In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: On 3/20/2013 8:15 PM, Terry Reedy wrote: > I will discuss repository separation in another response Here is a radical idea I have been toying with: set up a 'common' repository to 'factor out' files that are, could be, or should be the same across versions. The 'common' files would be declared (especially to packagers, when relevant) to be a part of each branch. Each release would (somehow - not my department) incorporate the latest version of everything in 'common'. What would go here? Misc/ACKS: the sensible idea that there should only be one copy of this file has been discussed before. LICENSE: I believe this is the same across current versions and must be edited in parallel for all future branches. xxx: others that I have not thought of. Doc/tools (sphinx and dependencies): setting this up separately but identically for each branch is a bit silly if it could be avoided. The sphinx versions should, of course, be the new one that runs on both python 2 and 3. idlelib: already discussed. Having only one IDLE version would partially speed up development. (surely controvesial) tkinter and _tkinter: I think the _tkinter and tkinter for each release should work with and be tested with the most recent tcl/tk release. Having only one tkinter version might make having one version of IDLE even easier. (probably even more controversial) tcl/tk (or at least the files needed to fetch and build - but as long as the sources are on python.org anyway, the sources could also be moved here from svn): For IDLE to really work the same across versions, it needs to run on the same tcl/tk version with the same bugfixes. For example, over a year ago, a French professor wrote python-list or idle-sig or maybe both saying that he would like to use IDLE in a class in Sept 2012, but there was a bug keeping it from working properly with French keyboards. He wanted to know if we were likely to fix it. The first answer (provided by Kevin Walzer) was that it was a tcl/tk bug that he (Kevin) was working on. The fix made it into 8.5.9 a year ago and hence into 3.3 but 2.7.3 or 3.2.3, released a month after the fix. So I later told him he could use IDLE, but, at least on Windows, only with the then upcoming 3.3. (I don't know the tcl/tk version policy for the non-Apple builds.) I do not know if tcl/tk 8.5.z releases have added many features or are primarily bugfixes like our micro releases. If the latter, the case for distributing at least the most recent 8.5.z with windows would seem pretty strong. I also do not know what 8.6.z adds. But an tcl/tk 'enhancement' of supporting astral characters might look like a bugfix for IDLE. 
(Running from IDLE, print(astral_char) raises, but I believe the same code works in some Linux interpreters.) yyy: any other external dependencies that we update on all versions. --- Terry Jan Reedy From raymond.hettinger at gmail.com Thu Mar 21 03:57:54 2013 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 20 Mar 2013 19:57:54 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320123833.14886a86@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> Message-ID: <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> On Mar 20, 2013, at 12:38 PM, Barry Warsaw wrote: > Right. Ultimately, I think IDLE should be a separate project entirely, but I > guess there's push back against that too. The most important feature of IDLE is that it ships with the standard library. Everyone who clicks on the Windows MSI on the python.org webpage automatically has IDLE. That is why I frequently teach Python with IDLE. If this thread results in IDLE being ripped out of the standard distribution, then I would likely never use it again. > JFDI! :) That is a comment from a person who uses Emacs every day. For those of us who have to support people with basic installs, it is essential that they have some Python aware editor on their machine. Without IDLE, a shocking number of people would create Python files using notepad. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Thu Mar 21 03:53:44 2013 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 21 Mar 2013 13:53:44 +1100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: <514A7638.4090205@pearwood.info> On 21/03/13 11:15, Terry Reedy wrote: > getting back to CP versus IDLE... > > From IDLE: >>>> print('\x80') > ? >>>> print('\xc8') > ? > > Impressed? You should be. Open Start menu / Python33 / Python (command line) and both of those result (modulo the specific character) in > UnicodeEncodeError: 'charmap' codec can't encode character > '\xc8' in position 0: character maps to Terry, you have just done something I didn't think was possible: you've changed my personal opinion about IDLE. On the rare, rare occasions where I've had to use Python interactively on Windows, I use the standard python.exe command prompt, which I thought was easier than learning the (to me) quirks of IDLE's UI. You've just given me a reason to use IDLE. I also note that in the last few weeks, I've seen at least two instances that I recall of a beginner on the tutor at python.org mailing list being utterly confused by Python's Unicode handling because the Windows command prompt is unable to print Unicode strings. Thanks Terry. -- Steven From dreamingforward at gmail.com Thu Mar 21 04:03:28 2013 From: dreamingforward at gmail.com (Mark Janssen) Date: Wed, 20 Mar 2013 20:03:28 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> Message-ID: On Wed, Mar 20, 2013 at 7:57 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > On Mar 20, 2013, at 12:38 PM, Barry Warsaw wrote: > > Right. Ultimately, I think IDLE should be a separate project entirely, > but I > guess there's push back against that too. 
> > > The most important feature of IDLE is that it ships with the standard > library. > Everyone who clicks on the Windows MSI on the python.org webpage > automatically has IDLE. That is why I frequently teach Python with IDLE. > > If this thread results in IDLE being ripped out of the standard > distribution, > then I would likely never use it again. > > +1, FWIW MarkJ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Mar 21 04:32:38 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 20 Mar 2013 23:32:38 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On 3/20/2013 12:41 PM, Eli Bendersky wrote: > Personally, I think that IDLE reflects badly on Python in more ways than > one. It's badly maintained, quirky and ugly. Ugly is subjective: by what standard and compared to what? I suggested in my previous response why I think 'badly maintained' is untrue and/or unfair. Dismissing the recent work that has been done does not help. There are 20 open issues with smtp(lib) in the title. It is 37 kb, making .54 issues per kb. For idlelib, with 786 kb, there are 104 issues, or .13 issues per kb, which is one fourth as many. I could claim that smtplib, based on 1990s RFCs is much worse maintained. It certainly could use somee positive attention. What current quirks, not already the subject of a tracker issue, are you thinking of? > It serves a very narrow set of uses, Relative to the computing universe, yes. It focuses on editing and running Python code. > and does it badly. As a user, I rate it at least 'good'. Most of the tracker issues hardly affect me, and many or most of the worst problems for me have already been fixed. What IDE would you suggest as a simple, install and go, alternative? It should have the following features or something close: * One-key saves the file and runs it with the -i option (enter interactive mode after running the file) so one can enter additional statements interactively. * Syntax errors cause a message display; one click returns to the spot the error was detected. * Error tracebacks are displayed unmodified, without extra garbage or censorship. # Right click on a line like File "C:\Programs\Python33\lib\difflib.py", line 1759, ... and then left click on the goto popup to go to that line in that file, opening the file if necessary. As of 3.3.0, this last feature was not documented, at least not in the Idle Help file. Since then, it has been. -- Terry Jan Reedy From eliben at gmail.com Thu Mar 21 04:36:39 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 20:36:39 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> Message-ID: On Wed, Mar 20, 2013 at 7:57 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > > On Mar 20, 2013, at 12:38 PM, Barry Warsaw wrote: > > Right. Ultimately, I think IDLE should be a separate project entirely, > but I > guess there's push back against that too. > > > The most important feature of IDLE is that it ships with the standard > library. > Everyone who clicks on the Windows MSI on the python.org webpage > automatically has IDLE. That is why I frequently teach Python with IDLE. 
> > If this thread results in IDLE being ripped out of the standard > distribution, > then I would likely never use it again. > Why is it necessary to conflate distribution and development. "standard library" != "Python distribution". Take the ActivePython distribution for example. They ship with extra packages for Windows (pywin32, etc) and our Python installer doesn't. This is a reason many Windows people prefer ActivePython. That's their right, but this preference is not the point. The point is that it's perfectly conceivable to ship IDLE with Python releases on Windows, while managing it as a separate project outside the CPython core Mercurial repository. This seems to me to combine benefits from both worlds: 1. IDLE keeps being shipped to end users. I have to admit the reasons made in favor of this in the thread so far are convincing. 2. IDLE is developed as a standalone project. As such, it's much easier to contribute to, which will hopefully result in a quicker pace of improvement. The only demand is that it keeps working with a release version of Python, and this is pretty easy. It's even possible and easy to have a single IDLE version for Python 3.x instead of contributors having to propose patches for 3.2, 3.3 and 3.4 simultaneously. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Thu Mar 21 04:54:25 2013 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 20 Mar 2013 20:54:25 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy wrote: > On 3/20/2013 12:41 PM, Eli Bendersky wrote: > > Personally, I think that IDLE reflects badly on Python in more ways than >> one. It's badly maintained, quirky and ugly. >> > > Ugly is subjective: by what standard and compared to what? > > Compared to other existing Python IDEs and shells which are layered on top of modern GUI toolkits that are actively developed to keep with modern standards, unlike Tk which is frozen in the 1990s. > I suggested in my previous response why I think 'badly maintained' is > untrue and/or unfair. Dismissing the recent work that has been done does > not help. > I did not intend to dismiss your work Terry, and I'm sorry if it came out this way. You know that I also contributed bug fixes to IDLE in the past so I'm not a complete outsider. I see the value of IDLE being distributed with Python. However, especially in view of the recent developments in the area of alternative Python implementations, I think it's important to clearly mark the boundaries between things that belong in the core CPython code repository and things that don't. > > There are 20 open issues with smtp(lib) in the title. It is 37 kb, making > .54 issues per kb. For idlelib, with 786 kb, there are 104 issues, or .13 > issues per kb, which is one fourth as many. I could claim that smtplib, > based on 1990s RFCs is much worse maintained. It certainly could use somee > positive attention. > You know better than I do that the number of open issues is not really the only factor for determining the quality of a module. Eli -- > What current quirks, not already the subject of a tracker issue, are you > thinking of? > > > > It serves a very narrow set of uses, > > Relative to the computing universe, yes. It focuses on editing and running > Python code. > > > and does it badly. > > As a user, I rate it at least 'good'. 
Most of the tracker issues hardly > affect me, and many or most of the worst problems for me have already been > fixed. What IDE would you suggest as a simple, install and go, alternative? > It should have the following features or something close: > * One-key saves the file and runs it with the -i option (enter interactive > mode after running the file) so one can enter additional statements > interactively. > * Syntax errors cause a message display; one click returns to the spot the > error was detected. > * Error tracebacks are displayed unmodified, without extra garbage or > censorship. > # Right click on a line like > File "C:\Programs\Python33\lib\**difflib.py", line 1759, ... > and then left click on the goto popup to go to that line in that file, > opening the file if necessary. > > As of 3.3.0, this last feature was not documented, at least not in the > Idle Help file. Since then, it has been. > > > -- > Terry Jan Reedy > > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > eliben%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbk at shore.net Thu Mar 21 05:32:50 2013 From: kbk at shore.net (Kurt B. Kaiser) Date: Thu, 21 Mar 2013 00:32:50 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> Message-ID: <1363840370.26832.140661207112409.77D53C31@webmail.messagingengine.com> On Wed, Mar 20, 2013, at 09:17 PM, Terry Reedy wrote: > On 3/20/2013 6:48 PM, Kurt B. Kaiser wrote: > > Well, one can hardly use Command Prompt captures, unless one were to > flip black and white within the window (but not its frame). And yet many books do - it's really ugly. When I see a book with a bunch of DOS windows, white on black, printed on cheap paper, 900 pages, I just put it back on the shelf. Quickly. > > I've always felt that IDLE should be targeted to an eight year old > > beginner, and should work uniformly across the major platforms. That > > now includes the Raspberry Pi!! > > I think it should also work uniformly across Python versions. That is > the gist of PEP434. Well, spending a lot of time backporting new features is not my idea of fun. OTOH, I have no objection. Maybe we could automate checkpoints of IDLE into PyPI or somewhere for those who don't care to pull from hg, yet want to be on the cutting edge? Along those lines, I've thought that IDLE should refrain from using the newest features in Python, to allow people running released versions of Python to access the newest IDLE. i.e. Python 3.4 innovations would not be used in IDLE 3.4. Only Python 3.3 and earlier innovations. [...] > > OTOH, development is likely to be more vigorous if it's separate. > > Perhaps, perhaps not, or perhaps it would become 'too' vigorous if too > many developers pushed multiple 'kitchen sinks'. Oh, I agree with that. Distributed version control can lead to chaos and dilution of effort. I don't know if this is the place to comment on PEP 434 (is it?), but I've always taken the approach that IDLE development should be less formal than the rest of stdlib, since it's not a dependency. Big changes should be reserved for the tip, but I don't see why something like the right click menu change shouldn't be backported. 
I think we should try a more relaxed idlelib development process inside core before we move it out, and should be generous about adding checkin permissions for that purpose. Rietveld will help. It's a good way to habilitate new developers. > > > I'd also like to make a plea to keep IDLE's interface clean and > > basic. There are lots of complex IDEs available for those who want > > them. It's natural for developers to add features, that's what they > > do :-), but you don't hand a novice a Ferrari (or emacs) and expect > > good results. IMHO some of the feature patches on the tracker > > should be rejected on that basis. > > Have you commented on those issues? I so far have mostly concentrated > on fixing current features. I agree that major new features should be > considered carefully and perhaps discussed on a revived idle-sig list. > I have never used some of the existing features, like breakpoints, > that seem pretty advanced. I first opened a debugger window only > recently, in order to comment on a issue about a possible bug. We > should document how to use that before adding anything else > comparable. I have commented over the years, but lately I've been so distracted by the Treasurer job that I haven't found much time for IDLE. And as I was telling Ned here at PyCon, when you step off the train, it can be hard to get back on. I've tried to channel Guido over the years. I looked at what he did, and tried to project forward. IDLE-dev is still active. Would anyone else like to be a moderator? > > It's sometimes said that IDLE is "ugly" or "broken". These terms > > are subjective! > > When IDLE-closing bugs are all fixed, I would like to see how much > difference themed widgets would make to appearance. Then we could > debate whether IDLE should look 'native' on each platform or have a > common 'Python' theme -- or have both and let users choose. > > > If it's truly broken, then we should fix it. If it's "broken" > > because a feature is missing, maybe that's an intentional part of > > Guido's design of a simple Python IDE. > > Without a vision and design document, it is sometimes hard for someone > like me to know which is which. IDLE development has always been organic, as opposed to the formal approach of the PEPs. What we need is a Zen of IDLE. When I look at it, I see a simple IDE. I try to adhere to the principle of least surprise, and to maintain an uncluttered interface suitable for beginners. Expert features can be there, but somewhat hidden (though they should be documented!) or implemented as disabled extensions. IDLE tries to promote Pythonic style. For example, the lack of a horizontal scroll bar was deliberate, I think. We should implement the patch that adds an extension selector to the options dialog, and keep the expert features as disabled extensions. That way, an instructor could distribute an idlerc file which would set IDLE up exactly as desired, including links to course specific help files on the web. BTW, I'll take this chance to promote the use of idlelib/NEWS.txt for IDLE news, instead of Misc/NEWS. That way, if IDLE is used outside of the main release, the NEWS.txt will go with it. Stability is paramount. No surprises beats cool. KISS is better than full featured. Uniform trumps native look. Experts go to the rear. Where they can find what they want if they know where to look. IDLE tests tkinter. 
-- KBK From v+python at g.nevcal.com Thu Mar 21 05:36:05 2013 From: v+python at g.nevcal.com (Glenn Linderman) Date: Wed, 20 Mar 2013 21:36:05 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: <514A8E35.8090204@g.nevcal.com> On 3/20/2013 5:15 PM, Terry Reedy wrote: > Broken (and quirky): it has an absurdly limited output buffer (under a > thousand lines) People keep claiming that Windows CMD has a limited output buffer. It is configurable, at least to 9999 lines, which is where I have mine set. That is far too much to actually scroll back through for most practical purposes, although sometimes I do :) I'm not trying to claim that Windows CMD is wonderful, perfect, or has large numbers of redeeming values, but let's keep to the facts. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Mar 21 06:19:42 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 01:19:42 -0400 Subject: [Python-Dev] cpython: Issue #13248: removed deprecated and undocumented difflib.isbjunk, isbpopular. In-Reply-To: <20130320181320.5B45C2500B3@webabinitio.net> References: <3ZVrTH23v4zPCm@mail.python.org> <20130320181320.5B45C2500B3@webabinitio.net> Message-ID: On 3/20/2013 2:13 PM, R. David Murray wrote: > On Wed, 20 Mar 2013 05:23:43 -0700, Eli Bendersky wrote: >> A mention in Misc/NEWS can't hurt here, Terry. Even though it's >> undocumented, some old code could rely on it being there and this code will >> break with the transition to 3.4 Will do. > Note that we also have a list of deprecated things that were removed in > What's New. > > Aside: given the 3.3 experience, I think people should be thinking in > terms of always updating What's New when appropriate, at the time a > commit is made. How does this look? Is ``replacement`` right? Should the subsequent sm by itself be marked? If so, how? * :meth:`difflib.SequenceMatcher.isbjunk` and :meth:`difflib.SequenceMatcher.isbpopular`: use ``x in sm.bjunk`` and ``x in sm.bpopular``, where sm is a SequenceMatcher object. -- Terry Jan Reedy From pjj at philipjohnjames.com Thu Mar 21 07:06:31 2013 From: pjj at philipjohnjames.com (Philip James) Date: Wed, 20 Mar 2013 23:06:31 -0700 Subject: [Python-Dev] A 'common' respository? (was Re: IDLE in the stdlib) In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: I hope I'm not coming across as pedantic, because I think you have some good arguments listed above, but shouldn't discussion like this go in python-ideas rather than python-dev? I'm very new to these lists, so forgive me if I'm stepping on any toes, I'm just trying to grok what kind of content should go in each list. PJJ http://philipjohnjames.com On Wed, Mar 20, 2013 at 7:30 PM, Terry Reedy wrote: > On 3/20/2013 8:15 PM, Terry Reedy wrote: > > I will discuss repository separation in another response >> > > Here is a radical idea I have been toying with: set up a 'common' > repository to 'factor out' files that are, could be, or should be the same > across versions. The 'common' files would be declared (especially to > packagers, when relevant) to be a part of each branch. Each release would > (somehow - not my department) incorporate the latest version of everything > in 'common'. > > What would go here? > > Misc/ACKS: the sensible idea that there should only be one copy of this > file has been discussed before. 
> > LICENSE: I believe this is the same across current versions and must be > edited in parallel for all future branches. > > xxx: others that I have not thought of. > > Doc/tools (sphinx and dependencies): setting this up separately but > identically for each branch is a bit silly if it could be avoided. The > sphinx versions should, of course, be the new one that runs on both python > 2 and 3. > > idlelib: already discussed. Having only one IDLE version would partially > speed up development. > > (surely controversial) tkinter and _tkinter: I think the _tkinter and > tkinter for each release should work with and be tested with the most > recent tcl/tk release. Having only one tkinter version might make having > one version of IDLE even easier. > > (probably even more controversial) tcl/tk (or at least the files needed to > fetch and build - but as long as the sources are on python.org anyway, > the sources could also be moved here from svn): For IDLE to really work the > same across versions, it needs to run on the same tcl/tk version with the > same bugfixes. For example, over a year ago, a French professor wrote > python-list or idle-sig or maybe both saying that he would like to use IDLE > in a class in Sept 2012, but there was a bug keeping it from working > properly with French keyboards. He wanted to know if we were likely to fix > it. The first answer (provided by Kevin Walzer) was that it was a tcl/tk > bug that he (Kevin) was working on. The fix made it into 8.5.9 a year ago > and hence into 3.3, but not into 2.7.3 or 3.2.3, which were released a month after the fix. So I > later told him he could use IDLE, but, at least on Windows, only with the > then upcoming 3.3. (I don't know the tcl/tk version policy for the > non-Apple builds.) > > I do not know if tcl/tk 8.5.z releases have added many features or are > primarily bugfixes like our micro releases. If the latter, the case for > distributing at least the most recent 8.5.z with Windows would seem pretty > strong. I also do not know what 8.6.z adds. But a tcl/tk 'enhancement' of > supporting astral characters might look like a bugfix for IDLE. (Running > from IDLE, print(astral_char) raises, but I believe the same code works in > some Linux interpreters.) > > yyy: any other external dependencies that we update on all versions. > > --- > Terry Jan Reedy > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > pjj%40philipjohnjames.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Mar 21 07:18:05 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 02:18:05 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> References: <20130320180942.20A6A2500B3@webabinitio.net> <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> Message-ID: On 3/20/2013 8:38 PM, Neil Hodgson wrote: > Terry Reedy: > >> Broken (and quirky): it has an absurdly limited output buffer >> (under a thousand lines) > > The limit is actually 9999 lines. I clicked Start / All programs / Python 3.3 / Python (command line) >>> help(str) (several times) and scrolled back up and the result was as I described. Help was gone above the 'c' methods. That is not 9999 lines. I believe I later specified 'as installed'. But you are right.
If one knows to right click on the blue and yellow Python snake, select Properties, Layout, and find Screen Buffer Size and Height, then one can increase the miserly default of 300 to 9999, at least on Win 7. 'Under a thousand lines' may be a vague memory from XP. I am also sure that with XP, the settings would revert for non-admin users after closing. Maybe MS did upgrade Command Prompt a bit. Oh, but we are not done with the stupidity of Command Prompt. If one does set the buffer to 9999 lines, it pads the output to 9999 lines and shrinks the movable scroll bar down to an eighth inch. If you move it. say, 1/20 of the screen, you jump 500 lines, which initially is way past your actually output. The standard 'modern' convenience of dynamically resized buffers and bars, such as found in the nearly 20 year old Notepad, is not for CP. (There is an idea for MS: junk CP and re-build a modern version on top of Notepad.) Setting properties by right clicking the icon is not standard on Windows. There is no help available from the window that I could find. I also could not find anything about the properties dialog in Windows help. If you can find an official entry for 'QuickEdit Mode' and 'Insert Mode', please let me know. Python Setup and Usage also says nothing about using the Command Prompt interpreter. -- Terry Jan Reedy From g.brandl at gmx.net Thu Mar 21 07:34:37 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 21 Mar 2013 07:34:37 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130320164729.715c9e10@anarchist> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <20130320125445.64676161@anarchist> <20130320231147.665b3adc@pitrou.net> <20130320164729.715c9e10@anarchist> Message-ID: Am 21.03.2013 00:47, schrieb Barry Warsaw: > On Mar 20, 2013, at 11:11 PM, Antoine Pitrou wrote: > >>On Wed, 20 Mar 2013 15:05:40 -0700 >>Nick Coghlan wrote: >>> >>> Yes, the status quo of Idle is not something we should allow to >>> continue indefinitely, but decisions about its future development >>> should be made by active maintainers that are already trusted to make >>> changes to it (such as Terry and Roger), rather than those of us that >>> don't use it, and aren't interested in maintaining it. >> >>Definitely. People shouldn't remain quiescently torpid about the idle >>status quo. > > The release managers should have a say in the matter, since it does cause some > amount of pain there. I don't really understand what Antoine's "quiescently torpid" means, but splitting IDLE out to a separate repo and then merging it back every time a release rolls around sounds stupid. Either split it off completely or develop it here (my preferred solution). It's really not that hard to get CPython commit bits. Georg From tjreedy at udel.edu Thu Mar 21 07:42:33 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 02:42:33 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On 3/20/2013 11:54 PM, Eli Bendersky wrote: > On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy Ugly is subjective: by what standard and compared to what? > > Compared to other existing Python IDEs and shells which are layered on > top of modern GUI toolkits that are actively developed to keep with > modern standards, unlike Tk which is frozen in the 1990s. I think being frozen in the late 1990s is better than being frozen in the early 1980s, like Command Prompt is. 
In fact, I think we should 'deprecate' the Command Prompt interpreter as the standard interactive interpreter and finish polishing and de-glitching IDLE's Python Shell, which runs on top of the windowless version of CP with a true GUI. Then we can promote and present the latter as the preferred interface, which for many people, it already is. > There are 20 open issues with smtp(lib) in the title. It is 37 kb, > making .54 issues per kb. For idlelib, with 786 kb, there are 104 > issues, or .13 issues per kb, which is one fourth as many. I could > claim that smtplib, based on 1990s RFCs is much worse maintained. It > certainly could use somee positive attention. Repeat: based on the 1990s RFCs, needing to be updated to the 2008 RFC, itself in the process of being superseded by a more unicode aware RFC. > You know better than I do that the number of open issues is not really > the only factor for determining the quality of a module. And you should notice that I did not present that as the only factor for what I said I *could* claim. Actually, I think the comparison would be fairer if enhancements were not counted. I am pretty sure this would favor IDLE even more (depending on what one counted as a bug). Let me repeat this question. What IDE might be a simple, install and go, alternative to IDLE that I might investigate, even if just as a source of ideas for IDLE? > It should have the following features or something close: > * One-key saves the file and runs it with the -i option (enter > interactive mode after running the file) so one can enter additional > statements interactively. > * Syntax errors cause a message display; one click returns to the > spot the error was detected. > * Error tracebacks are displayed unmodified, without extra garbage > or censorship. > # Right click on a line like > File "C:\Programs\Python33\lib\__difflib.py", line 1759, ... > and then left click on the goto popup to go to that line in that > file, opening the file if necessary. -- Terry Jan Reedy From jeanpierreda at gmail.com Thu Mar 21 07:54:01 2013 From: jeanpierreda at gmail.com (Devin Jeanpierre) Date: Thu, 21 Mar 2013 02:54:01 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Thu, Mar 21, 2013 at 2:42 AM, Terry Reedy wrote: > I think being frozen in the late 1990s is better than being frozen in the > early 1980s, like Command Prompt is. In fact, I think we should 'deprecate' > the Command Prompt interpreter as the standard interactive interpreter and > finish polishing and de-glitching IDLE's Python Shell, which runs on top of > the windowless version of CP with a true GUI. Then we can promote and > present the latter as the preferred interface, which for many people, it > already is. Please don't cease supporting the command line interface. I use the command line interactive interpreter plenty. That way I can use git, grep, the unit test suite, etc. ... and the interactive interpreter, all from one place: the console. That can't happen with IDLE, by design. 
-- Devin From tjreedy at udel.edu Thu Mar 21 08:03:34 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 03:03:34 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <514A8E35.8090204@g.nevcal.com> References: <20130320180942.20A6A2500B3@webabinitio.net> <514A8E35.8090204@g.nevcal.com> Message-ID: On 3/21/2013 12:36 AM, Glenn Linderman wrote: > On 3/20/2013 5:15 PM, Terry Reedy wrote: >> Broken (and quirky): it has an absurdly limited output buffer (under a >> thousand lines) > > People keep claiming that Windows CMD has a limited output buffer. It is > configurable, at least to 9999 lines, which is where I have mine set. > That is far too much to actually scroll back through for most practical > purposes, although sometimes I do :) See my response to the same point by Neil Hodgson, where I noticed that setting to 9999 lines somewhat disables easy scrolling. (I checked and it was 9999 also on XP.) > I'm not trying to claim that Windows CMD is wonderful, perfect, or has > large numbers of redeeming values, but let's keep to the facts. Yes, lets do. I tried to. A person who installs Python on Windows and runs Python (command prompt) instead of IDLE is confronted with a 300 line default. Someone not familiar with Command Prompt will not know that the limit can be increased. I use CP so seldom, until recently, that I had forgotten how to do so. I fiddled with the history buffer setting on the first page. That did not work so I gave up. -- Terry Jan Reedy From larry at hastings.org Thu Mar 21 08:42:14 2013 From: larry at hastings.org (Larry Hastings) Date: Thu, 21 Mar 2013 00:42:14 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <20130320125445.64676161@anarchist> <20130320231147.665b3adc@pitrou.net> <20130320164729.715c9e10@anarchist> Message-ID: <514AB9D6.3070406@hastings.org> On 03/20/2013 11:34 PM, Georg Brandl wrote: > I don't really understand what Antoine's "quiescently torpid" means, "quiescent" = "peaceful, quiet, still" "torpid" = "lethargic, not moving" "antoine" = "thesaurus owner" //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Mar 21 08:59:52 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 03:59:52 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <1363840370.26832.140661207112409.77D53C31@webmail.messagingengine.com> References: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> <1363840370.26832.140661207112409.77D53C31@webmail.messagingengine.com> Message-ID: On 3/21/2013 12:32 AM, Kurt B. Kaiser wrote: > Well, spending a lot of time backporting new features is not my idea of > fun. OTOH, I have no objection. I intentionally did not say in the PEP that it should be mandatory. > Along those lines, I've thought that IDLE should refrain from using the > newest features in Python, to allow people running released versions of > Python to access the newest IDLE. i.e. Python 3.4 innovations would not > be used in IDLE 3.4. Only Python 3.3 and earlier innovations. So far, since 3.0/1, that has been a moot point. 3.2 was syntax frozen. 3.3 has 'yield from', but I do not know if there is much of anywhere that *could* be used, and for simple uses, the old 'for i in it: yield i; is good enough. 
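To make the 'yield from' point concrete, here is a minimal sketch (the chain_* helpers are invented for illustration and are not idlelib code); for plain re-yielding, the pre-3.3 loop and the 3.3 delegation syntax give the same result:

    def chain_old(*iterables):
        # Pre-3.3 spelling: re-yield each item explicitly.
        for it in iterables:
            for item in it:
                yield item

    def chain_new(*iterables):
        # 3.3+ spelling: delegate to each sub-iterable.
        for it in iterables:
            yield from it

    # Both produce the same output for simple delegation:
    assert list(chain_old("ab", [1, 2])) == list(chain_new("ab", [1, 2])) == ["a", "b", 1, 2]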
> I don't know if this is the place to comment on PEP 434 (is it?), but > I've always taken the approach that IDLE development should be less > formal than the rest of stdlib, since it's not a dependency. Big > changes should be reserved for the tip, but I don't see why something > like the right click menu change shouldn't be backported. Thank you for confirming my impression of your approach. When the right click changed was challenged, I claimed that an *informal* relaxed approach was the defacto rule. Antoine said that that was not OK and Nick agreed and said that a relaxed approach would have to be *formalized* in a PEP and approved. Then there would be no (less ;-) arguments about IDLE pushes. Todd took the initiative to write a first draft and I revised it. > I think we should try a more relaxed idlelib development process inside > core before we move it out, and should be generous about adding checkin > permissions for that purpose. Rietveld will help. It's a good way to > habilitate new developers. > I have commented over the years, but lately I've been so distracted by > the Treasurer job that I haven't found much time for IDLE. So I should be selective in specifically asking for comment. > > I've tried to channel Guido over the years. I looked at what he did, > and tried to project forward. > > IDLE-dev is still active. Would anyone else like to be a moderator? Yes. I presume that is the one mirrored as gmane.comp.pythong.idle. It has been mostly quiet since September. When we have an accepted ground rule for making decisions, I expect more discussions. For instance, I think it might be the appropriate place to get input from interested users about preferred behavior on a specific issue. http://mail.python.org/mailman/listinfo/idle-dev is out of date in places. >> Without a vision and design document, it is sometimes hard for someone >> like me to know which is which. > > IDLE development has always been organic, as opposed to the formal > approach of the PEPs. What we need is a Zen of IDLE. When I look at > it, I see a simple IDE. I try to adhere to the principle of least > surprise, and to maintain an uncluttered interface suitable for > beginners. Expert features can be there, but somewhat hidden (though > they should be documented!) or implemented as disabled extensions. > > IDLE tries to promote Pythonic style. For example, the lack of a > horizontal scroll bar was deliberate, I think. > > We should implement the patch that adds an extension selector to the > options dialog, and keep the expert features as disabled extensions. In all versions, I presume ;-). I have been wondering if there are any beginner features implemented as extensions that should be directly incorporated and planned to ask Roger on idle-dev. Anyway, I like that idea as it lets us somewhat have the cake of simplicity and also eat the cupcakes of expert features as desired. > That way, an instructor could distribute an idlerc file which would set > IDLE up exactly as desired, including links to course specific help > files on the web. That would be excellent. > BTW, I'll take this chance to promote the use of idlelib/NEWS.txt for > IDLE news, instead of Misc/NEWS. That way, if IDLE is used outside of > the main release, the NEWS.txt will go with it. Thanks for the reminder. That came up here on pydev recently and it was agreed that IDLE should indeed go in the idlelib/NEWS and that the release manager or someone could then *copy* it into Misc/NEWS as a separate *** IDLE *** section. 
Or maybe the idea was vice versa. Either way, this would avoid NEWS conflicts for IDLE patches (unless multiple IDLE developers pushed nearly simultaneously). http://bugs.python.org/issue17506 > Stability is paramount. > > No surprises beats cool. > > KISS is better than full featured. > > Uniform trumps native look. > > Experts go to the rear. > > Where they can find what they want if they know where to look. > > IDLE tests tkinter. Thanks. I am glad I asked. -- Terry Jan Reedy From solipsis at pitrou.net Thu Mar 21 10:20:23 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Mar 2013 10:20:23 +0100 Subject: [Python-Dev] IDLE in the stdlib References: Message-ID: <20130321102023.31b78c7f@pitrou.net> Le Thu, 21 Mar 2013 02:42:33 -0400, Terry Reedy a écrit : > On 3/20/2013 11:54 PM, Eli Bendersky wrote: > > On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy > > Ugly is subjective: by what standard and compared to what? > > > > Compared to other existing Python IDEs and shells which are layered > > on top of modern GUI toolkits that are actively developed to keep > > with modern standards, unlike Tk which is frozen in the 1990s. > > I think being frozen in the late 1990s is better than being frozen in > the early 1980s, like Command Prompt is. In fact, I think we should > 'deprecate' the Command Prompt interpreter as the standard > interactive interpreter and finish polishing and de-glitching IDLE's > Python Shell, which runs on top of the windowless version of CP with > a true GUI. > > And this may indeed be reasonable under Windows, where the command-line > is a PITA! But the Linux command-line is actually quite usable > these days, especially if you configure your Python interpreter to use > readline for tab-completion of identifiers (which should be done by > default, see http://bugs.python.org/issue5845). Regards Antoine. From p.f.moore at gmail.com Thu Mar 21 10:27:40 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Mar 2013 09:27:40 +0000 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> References: <20130320180942.20A6A2500B3@webabinitio.net> <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> Message-ID: On 21 March 2013 00:38, Neil Hodgson wrote: > Terry Reedy: > >> Broken (and quirky): it has an absurdly limited output buffer (under a thousand lines) > > The limit is actually 9999 lines. > >> Quirky: Windows uses cntl-C to copy selected text to the clipboard and (where appropriate) cntl-V to insert clipboard text at the cursor pretty much everywhere. > > CP uses Ctrl+C to interrupt programs similar to Unix. Therefore it moves copy to a different key in a similar way to Unix consoles like GNOME Terminal and MATE Terminal which use Shift+Ctrl+C for copy despite Ctrl+C being the standard for other applications. Can I suggest that debates about the capability of Windows command line programming are off-topic here? Whether it is good or bad (and in my view, it is perfectly adequate, and in some ways better than Unix) it is what Windows users who use the command line are used to. The experience with Python is *identical* to what people see with other scripting languages like Perl, Ruby, etc. It's even similar to something like Java (I know everyone uses something like Eclipse for Java, but that's a 3rd party download). And there's Python Tools for Visual Studio if people want a "real Windows IDE"...
If people teaching Python have problems with the current environment (and I know we've had some very good feedback on that score) then that's fine, let's address it. But simply saying "Windows users have no usable command line so they need GUI support" is neither productive nor true. (Apologies if this sounds grumpy. I'll go and get my first cup of tea of the day now...) Paul. From solipsis at pitrou.net Thu Mar 21 10:37:50 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Mar 2013 10:37:50 +0100 Subject: [Python-Dev] IDLE in the stdlib References: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> Message-ID: <20130321103750.311cb6f2@pitrou.net> Le Wed, 20 Mar 2013 18:48:15 -0400, "Kurt B. Kaiser" a ?crit : > > IDLE has a single keystroke round trip - it's an IDE, not just an > editor like Sublime Text or Notepad. In the 21st century, people > expect some sort of IDE. Or, they should! I don't think I've used an IDE in years (not seriously anyway). I also don't think beginners "expect some sort of IDE", since they don't know what it is. They probably don't even expect a text editor at first. > I'd also like to make a plea to keep IDLE's interface clean and > basic. There are lots of complex IDEs available for those who want > them. It's natural for developers to add features, that's what they > do :-), but you don't hand a novice a Ferrari (or emacs) and expect > good results. What is the point of an IDE without features? Also, this is touching another issue: IDLE needs active maintainers, who will obviously be experienced Python developers. But if they are experienced Python developers, they will certainly want the additional features, otherwise's they'll stop using and maintaining IDLE. In other words, if IDLE were actually usable *and* pleasant for experienced developers, I'm sure more developers would be motivated to improve and maintain it. > It's sometimes said that IDLE is "ugly" or "broken". These terms are > subjective! Subjective statements are not baseless and idiotic. They come from the experience of people actually wanting to like a piece of software, you shouldn't discard them at face value. Regards Antoine. From p.f.moore at gmail.com Thu Mar 21 10:41:43 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Mar 2013 09:41:43 +0000 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On 21 March 2013 06:54, Devin Jeanpierre wrote: > On Thu, Mar 21, 2013 at 2:42 AM, Terry Reedy wrote: >> I think being frozen in the late 1990s is better than being frozen in the >> early 1980s, like Command Prompt is. In fact, I think we should 'deprecate' >> the Command Prompt interpreter as the standard interactive interpreter and >> finish polishing and de-glitching IDLE's Python Shell, which runs on top of >> the windowless version of CP with a true GUI. Then we can promote and >> present the latter as the preferred interface, which for many people, it >> already is. > > Please don't cease supporting the command line interface. I use the > command line interactive interpreter plenty. That way I can use git, > grep, the unit test suite, etc. ... and the interactive interpreter, > all from one place: the console. > > That can't happen with IDLE, by design. Agreed. Command line Python is 100% of my usage, and removing it would make Python unusable for me. 
If what is being suggested is removing the "Python Command Line" *shortcuts* that are installed by default, but leaving the console "python.exe" program, then I have no view on that, as I don't use those shortcuts (and if I did, I could set them up myself). But before removing them, why not consider setting the defaults to be more helpful (larger scrollback buffer, things like quick edit set on, etc) if that's the real issue here? I'm not saying it is, but some of the complaints about "the default experience" *can* be fixed simply by changing the defaults. Paul. From tjreedy at udel.edu Thu Mar 21 11:07:03 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 06:07:03 -0400 Subject: [Python-Dev] A 'common' respository? (was Re: IDLE in the stdlib) In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> Message-ID: On 3/21/2013 2:06 AM, Philip James wrote: > I hope I'm not coming across as pedantic, because I think you have some > good arguments listed above, but shouldn't discussion like this go in > python-ideas rather than python-dev? Normally yes. But since this is a counter-proposal or an alternate proposal to proposals already made in this thread, it belongs as an offshoot of this thread. This thread in turn is a continuation of similar threads here in the past, and involves some people who are less active in python-ideas. Also, it is more about technical development matters than about future *language* changes. > I'm very new to these lists, so > forgive me if I'm stepping on any toes, I'm just trying to grok what > kind of content should go in each list. You are not stepping on my toes, and that is a good question to think about. -- Terry Jan Reedy From jsbueno at python.org.br Thu Mar 21 11:08:33 2013 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 21 Mar 2013 07:08:33 -0300 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <514A7638.4090205@pearwood.info> References: <20130320180942.20A6A2500B3@webabinitio.net> <514A7638.4090205@pearwood.info> Message-ID: On 20 March 2013 23:53, Steven D'Aprano wrote: > I also note that in the last few weeks, I've seen at least two instances > that I recall of a beginner on the tutor at python.org mailing list being > utterly confused by Python's Unicode handling because the Windows command > prompt is unable to print Unicode strings. It can be worse than that - in internationalized Windows installs, the DOS Prompt sometimes has a different encoding than the rest of Windows - for example, for pt_BR Windows, all UI apps run using latin1, but the CP uses the CP850 encoding, which generates _different_ characters for the same codes. As someone who from time to time lectures an introductory workshop on Python to people running Windows, I second Terry's long message - and I highlight Raymond's """ Without IDLE, a shocking number of people would create Python files using notepad. """ js -><- > > > Thanks Terry. From tjreedy at udel.edu Thu Mar 21 11:32:46 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 06:32:46 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> Message-ID: On 3/21/2013 5:27 AM, Paul Moore wrote: > Can I suggest that debates about the capability of Windows command > line programming are off-topic here? I respectfully disagree, unless you say that the whole thread is off topic.
If it is okay for people to say that IDLE, including the IDLE interactive interpreter shell is ugly, quirky, broken, badly maintained, and disfunctional, without giving hardly any facts or details to back up or explain the claims, and then claim it is so bad that it should be banished, then to me it is perfectly on topic for me to point out that the alternative, the CP shell, is objectively far worse in multiple respects. Yes, I gave some facts for the benefit of those who were willing to consider my claim that IDLE is better. > it is what Windows users who use the command line are used to. I bet that 'Windows users who use the command line' are less than 10% of Windows users. I am willing to accept that you find it adequate. Why can't you accept that I find it wretched, especially when I explained some of why rather than just throwing it out as an opinion, and consider IDLE to be wonderfully better? -- Terry Jan Reedy From p.f.moore at gmail.com Thu Mar 21 12:02:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Mar 2013 11:02:38 +0000 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> Message-ID: On 21 March 2013 10:32, Terry Reedy wrote: > On 3/21/2013 5:27 AM, Paul Moore wrote: > >> Can I suggest that debates about the capability of Windows command >> line programming are off-topic here? > > > I respectfully disagree, unless you say that the whole thread is off topic. > If it is okay for people to say that IDLE, including the IDLE interactive > interpreter shell is ugly, quirky, broken, badly maintained, and > disfunctional, without giving hardly any facts or details to back up or > explain the claims, and then claim it is so bad that it should be banished, > then to me it is perfectly on topic for me to point out that the > alternative, the CP shell, is objectively far worse in multiple respects. > Yes, I gave some facts for the benefit of those who were willing to consider > my claim that IDLE is better. I agree entirely that unsubstantiated claims of "it's dreadful" are just as unacceptable when referring to IDLE. And I'm happy to accept that you find IDLE better - although I'd take issue with a claim that it's objectively better for everyone. I'd rather that we focus on dealing with genuine issues as reported by users (e.g., the comments from people working with IDLE in training courses). One difference, though, is that the quality of IDLE is within our control, so comments about how it can be improved are valid - whereas comments about the Windows console are simply statements of a reality we have no means of addressing. >> it is what Windows users who use the command line are used to. > > I bet that 'Windows users who use the command line' are less than 10% of > Windows users. I have no figures one way or the other on that. You may well be right. Are we aiming at "all Windows users" here? All I can say is that my experience (in a corporate Windows-based environment) is that people who have any interest in learning or using Python are nearly always *also* command line users, and comfortable with it. They are often looking at Python as a step up from Windows batch files - and that is such a huge improvement that trivia like the way you select text in the console are completely irrelevant by comparison. My experience may well be atypical, I can't say. > I am willing to accept that you find it adequate. 
Why can't you accept that > I find it wretched, especially when I explained some of why rather than just > throwing it out as an opinion, and consider IDLE to be wonderfully better? I'm happy to accept that. What I don't know (no criticism here, but it's something I mentioned at the start of my post) is whether you use Windows as your main platform, and so what you're implying in terms of how to generalise the fact that you prefer IDLE (I assume you want me to generalise, and not just take your comments as a statement of your personal preference). Much of the comments about the Windows experience *seem to me* to come from Unix users who only occasionally use Windows and find the experience unpleasant. A bit more clarity as to where the people advocating particular actions are coming from would help (because I, and possibly others, may be wrong in that impression). Anyway, I was the one who claimed things were getting off-topic, so I'll stop at this point. I hope I haven't offended anyone - I didn't mean to. Paul From tjreedy at udel.edu Thu Mar 21 12:03:32 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 07:03:32 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On 3/21/2013 5:41 AM, Paul Moore wrote: > On 21 March 2013 06:54, Devin Jeanpierre wrote: >> On Thu, Mar 21, 2013 at 2:42 AM, Terry Reedy wrote: >>> I think being frozen in the late 1990s is better than being frozen in the >>> early 1980s, like Command Prompt is. In fact, I think we should 'deprecate' >>> the Command Prompt interpreter as the standard interactive interpreter and >>> finish polishing and de-glitching IDLE's Python Shell, which runs on top of >>> the windowless version of CP with a true GUI. Then we can promote and >>> present the latter as the preferred interface, which for many people, it >>> already is. I should have prefaced that with 'on Windows'. I presume the command line on *nix is better with most of the issues I discussed than on Windows. >> Please don't cease supporting the command line interface. My one person opinion and counter-proposal is not going to change anything. Anyway, my two points were this: if late 1990s is bad, isn't early 1980s worse? And *if* we are going to downplay/demote one of the two interactive shells, should not it be the worse one? >> I use the >> command line interactive interpreter plenty. That way I can use git, >> grep, the unit test suite, etc. ... and the interactive interpreter, >> all from one place: the console. >> >> That can't happen with IDLE, by design. It is Microsoft, not me, that is a threat to Command Prompt. I have the impression that it is not part of the Win 8 not-Metro tablet interface that they would like everyone to use even on the desktop. To push beginners away from the desktop to the pane interface, they were initially going to limit the free Visual Express IDE and compilers to the new interface. You can use idle from the command line almost as easily as the CP interpreter: 'python -m idlelib' instead of just 'python' (I just tried it to verify). Unlike bare 'python', IDLE includes a grep. Right click on any 'hit' and it opens the file at the specified line. Unlike bare 'python', you can run tests and collect all the output, from as many tests as you want, in a dynamically right-sized buffer. > Agreed. Command line Python is 100% of my usage, and removing it would > make Python unusable for me. Whereas others would say the same about removing IDLE.
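For anyone who wants to script the 'python -m idlelib' launch mentioned above (say, from a test harness or a course setup script), here is a minimal sketch, assuming a Python 3 installation where that command works:

    import subprocess
    import sys

    # Start IDLE in a child process; this mirrors typing "python -m idlelib"
    # at a prompt, but reuses whichever interpreter runs this script.
    subprocess.Popen([sys.executable, "-m", "idlelib"])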
> If what is being suggested is removing the "Python Command Line" > *shortcuts* I did not suggest that. > those shortcuts (and if I did, I could set them up myself). But before > removing them, why not consider setting the defaults to be more > helpful (larger scrollback buffer, things like quick edit set on, etc) > if that's the real issue here? I'm not saying it is, but some of the > complaints about "the default experience" *can* be fixed simply by > changing the defaults. If that is so easily possible, then it should have been done already. -- Terry Jan Reedy From tjreedy at udel.edu Thu Mar 21 12:19:09 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 07:19:09 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130321102023.31b78c7f@pitrou.net> References: <20130321102023.31b78c7f@pitrou.net> Message-ID: On 3/21/2013 5:20 AM, Antoine Pitrou wrote: > Le Thu, 21 Mar 2013 02:42:33 -0400, > Terry Reedy a ?crit : >> On 3/20/2013 11:54 PM, Eli Bendersky wrote: >>> On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy > >>> Ugly is subjective: by what standard and compared to what? >>> >>> Compared to other existing Python IDEs and shells which are layered >>> on top of modern GUI toolkits that are actively developed to keep >>> with modern standards, unlike Tk which is frozen in the 1990s. >> >> I think being frozen in the late 1990s is better than being frozen in >> the early 1980s, like Command Prompt is. In fact, I think we should >> 'deprecate' the Command Prompt interpreter as the standard >> interactive interpreter and finish polishing and de-glitching IDLE's >> Python Shell, which runs on top of the windowless version of CP with >> a true GUI. > > And this may indeed be reasonable under Windows, where the command-line > is a PITA! Which is the only context I was talking about. > But the Linux command-line is actually quite very usable > these days, especially if you configure your Python interpreter to use > readline for tab-completion of identifiers (which should be done by > default, see http://bugs.python.org/issue5845). IDLE has tab-completion for both identifiers and attributes, in both shell and editor windows. It is probably under-documented; I am still learning to use it effectively. I am curious if the readline version works better in any way that IDLE could imitate. -- Terry Jan Reedy From eliben at gmail.com Thu Mar 21 13:22:40 2013 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 21 Mar 2013 05:22:40 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <514AAAF0.7050303@gmx.net> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> <514AAAF0.7050303@gmx.net> Message-ID: > > On Mar 20, 2013, at 12:38 PM, Barry Warsaw > > wrote: > > > >> Right. Ultimately, I think IDLE should be a separate project > entirely, but I > >> guess there's push back against that too. > > > > The most important feature of IDLE is that it ships with the > standard library. > > Everyone who clicks on the Windows MSI on the python.org < > http://python.org> > > webpage > > automatically has IDLE. That is why I frequently teach Python with > IDLE. > > > > If this thread results in IDLE being ripped out of the standard > distribution, > > then I would likely never use it again. > > > > > > Why is it necessary to conflate distribution and development. "standard > library" > > != "Python distribution". 
> > Because that's how CPython defines its distribution. We distribute things > that > are in the CPython/standard library repo, and nothing else. > Yes, I realize this is the case. I was wondering whether it's hard to change. > > > Take the ActivePython distribution for example. They ship with extra > packages > > for Windows (pywin32, etc) and our Python installer doesn't. This is a > reason > > many Windows people prefer ActivePython. That's their right, but this > preference > > is not the point. The point is that it's perfectly conceivable to ship > IDLE with > > Python releases on Windows, while managing it as a separate project > outside the > > CPython core Mercurial repository. > > And what's the benefit? I just don't see it. It just makes it harder to > create > a Python release. > > This is the feedback I was looking for. If this will make Python distribution non-trivially harder, then it's a point against the proposal. > > This seems to me to combine benefits from both worlds: > > > > 1. IDLE keeps being shipped to end users. I have to admit the reasons > made in > > favor of this in the thread so far are convincing. > > 2. IDLE is developed as a standalone project. As such, it's much easier > to > > contribute to, which will hopefully result in a quicker pace of > improvement. > > Why? You won't magically gather more people that are interested in IDLE > development. > But that's the point - If there are not enough people interested in it, it should then die. Right now it's a burden of Python core developers to keep it functional even if no one else cares (and if anything, the low amount of open issues Terry quoted elsewhere may be a sign that indeed not many care). > > > The > > only demand is that it keeps working with a release version of Python, > and this > > is pretty easy. It's even possible and easy to have a single IDLE > version for > > Python 3.x instead of contributors having to propose patches for 3.2, > 3.3 and > > 3.4 simultaneously. > > They don't anyway. > But we know perfectly well that a core dev is expected to backport reasonably. In an outside repo, it can have a single-code base. It's not hard to avoid the new features of 3.3 and 3.4 and be compatible with all active Python 3 versions. Note that even if the same is done in our Mercurial repo, each commit still needs to be triplicated in the push dance. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Thu Mar 21 13:25:57 2013 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 21 Mar 2013 05:25:57 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 11:42 PM, Terry Reedy wrote: > On 3/20/2013 11:54 PM, Eli Bendersky wrote: > >> On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy > > > Ugly is subjective: by what standard and compared to what? >> >> Compared to other existing Python IDEs and shells which are layered on >> top of modern GUI toolkits that are actively developed to keep with >> modern standards, unlike Tk which is frozen in the 1990s. >> > > I think being frozen in the late 1990s is better than being frozen in the > early 1980s, like Command Prompt is. In fact, I think we should 'deprecate' > the Command Prompt interpreter as the standard interactive interpreter and > finish polishing and de-glitching IDLE's Python Shell, which runs on top of > the windowless version of CP with a true GUI. 
Then we can promote and > present the latter as the preferred interface, which for many people, it > already is. There are two discussions being held in parallel here: 1. Whether IDLE should be developed separately from the core Python repository (while still being shipped). 2. Whether IDLE is bad in a general sense and should die. I've stated several times now that it's (1) that I'm interested in. (2) is too subjective, and frankly I'm not in a good position to argue from an objective point of view. As Antoine mentioned, "feeling" should not be dismissed (especially if it resonates from a number of developers). That said, I'm not going to continue to talk about (2). I really want to constructively focus on (1). Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Mar 21 13:26:35 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 21 Mar 2013 08:26:35 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321102023.31b78c7f@pitrou.net> Message-ID: I showed IDLE to my 6-year-old on the Raspberry Pi and I'm convinced it is cool. Gave up on trying to (slowly) install bpython. We were multiplying large numbers and counting to 325,000 in no time. It might not be for *me* but I'm not going to teach my daughter a large IDE any time soon. From tjreedy at udel.edu Thu Mar 21 13:34:24 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 08:34:24 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On 3/20/2013 12:41 PM, Eli Bendersky wrote: > Interesting writeup about PyCon 2013 young coder > education:http://therealkatie.net/blog/2013/mar/19/pycon-2013-young-coders/ > > Quote: > > "We used IDLE because it's already on Raspian's desktop. Personally, I > like IDLE as a teaching tool. It's included in the standard library, it > does tab completion and color coding, and it even has a text editor > included so you don't have to start your class off by teaching everyone > about paths. > > Too bad it's broke as hell." A typical blog exaggeration, which is not a good basis for serious decision making. He continued "I believe my first contribution to the Python Standard Library will be fixes to IDLE. I really do like it that much. " -- Terry Jan Reedy From sturla at molden.no Thu Mar 21 13:53:06 2013 From: sturla at molden.no (Sturla Molden) Date: Thu, 21 Mar 2013 13:53:06 +0100 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314222337.GG24307@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <20130314222337.GG24307@snakebite.org> Message-ID: <6BE748A8-4F59-43CF-BD18-AF116A764A60@molden.no> Den 14. mars 2013 kl. 
23:23 skrev Trent Nelson : > > For the record, here are all the Windows calls I'm using that have > no *direct* POSIX equivalent: > > Interlocked singly-linked lists: > - InitializeSListHead() > - InterlockedFlushSList() > - QueryDepthSList() > - InterlockedPushEntrySList() > - InterlockedPushListSList() > - InterlockedPopEntrySlist() > > Synchronisation and concurrency primitives: > - Critical sections > - InitializeCriticalSectionAndSpinCount() > - EnterCriticalSection() > - LeaveCriticalSection() > - TryEnterCriticalSection() > - Slim read/writer locks (some pthread implements have > rwlocks)*: > - InitializeSRWLock() > - AcquireSRWLockShared() > - AcquireSRWLockExclusive() > - ReleaseSRWLockShared() > - ReleaseSRWLockExclusive() > - TryAcquireSRWLockExclusive() > - TryAcquireSRWLockShared() > - One-time initialization: > - InitOnceBeginInitialize() > - InitOnceComplete() > - Generic event, signalling and wait facilities: > - CreateEvent() > - SetEvent() > - WaitForSingleObject() > - WaitForMultipleObjects() > - SignalObjectAndWait() > > Native thread pool facilities: > - TrySubmitThreadpoolCallback() > - StartThreadpoolIo() > - CloseThreadpoolIo() > - CancelThreadpoolIo() > - DisassociateCurrentThreadFromCallback() > - CallbackMayRunLong() > - CreateThreadpoolWait() > - SetThreadpoolWait() > > Memory management: > - HeapCreate() > - HeapAlloc() > - HeapDestroy() > > Structured Exception Handling (#ifdef Py_DEBUG): > - __try/__except > > Sockets: > - ConnectEx() > - AcceptEx() > - WSAEventSelect(FD_ACCEPT) > - DisconnectEx(TF_REUSE_SOCKET) > - Overlapped WSASend() > - Overlapped WSARecv() > > > Don't get me wrong, I grew up with UNIX and love it as much as the > next guy, but you can't deny the usefulness of Windows' facilities > for writing high-performance, multi-threaded IO code. It's decades > ahead of POSIX. (Which is also why it bugs me when I see select() > being used on Windows, or IOCP being used as if it were a poll-type > "generic IO multiplexor" -- that's like having a Ferrari and speed > limiting it to 5mph!) > > So, before any of this has a chance of working on Linux/BSD, a lot > more scaffolding will need to be written to provide the things we > get for free on Windows (threadpools being the biggest freebie). > > > Have you considered using OpenMP instead of Windows API or POSIX threads directly? OpenMP gives you a thread pool and synchronization primitives for free as well, with no special code needed for Windows or POSIX. OpenBLAS (and GotoBLAS2) uses OpenMP to produce a thread pool on POSIX systems (and actually Windows API on Windows). The OpenMP portion of the C code is wrapped so it looks like sending an asynch task to a thread pool; the C code is not littered with OpenMP pragmas. If you need something like Windows threadpools on POSIX, just look at the BSD licensed OpenBLAS code. It is written to be scalable for the world's largest supercomputers (but also beautifully written and very easy to read). Cython has code to register OpenMP threads as Python threads, in case that is needed. So that problem is also solved. 
Sturla From solipsis at pitrou.net Thu Mar 21 14:17:25 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Mar 2013 14:17:25 +0100 Subject: [Python-Dev] Slides from today's parallel/async Python talk References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <20130314222337.GG24307@snakebite.org> Message-ID: <20130321141725.5a1e9d0d@pitrou.net> Le Thu, 14 Mar 2013 15:23:37 -0700, Trent Nelson a ?crit : > > Don't get me wrong, I grew up with UNIX and love it as much as the > next guy, but you can't deny the usefulness of Windows' facilities > for writing high-performance, multi-threaded IO code. It's > decades ahead of POSIX. I suppose that's why all high-performance servers run under Windows. Regards Antoine. From stephen at xemacs.org Thu Mar 21 14:18:34 2013 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 21 Mar 2013 22:18:34 +0900 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130320180942.20A6A2500B3@webabinitio.net> <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> Message-ID: <87y5dgdh79.fsf@uwakimon.sk.tsukuba.ac.jp> Paul Moore writes: > I have no figures one way or the other on that. You may well be > right. Are we aiming at "all Windows users" here? We need to be careful about this. ISTM that IDLE is aiming at the subset of users on any platform who for some reason need/want a simple development environment that is consistent across Python versions and platforms and immediately available when they install Python, but don't have one yet. I think that there's been sufficient testimony to demonstrate that there are a fair number of folks in that boat. Educators (acting as proxies for a couple of orders of magnitude more students) are one identifiable group. Beginning Python users on Windows who don't use English in their daily lives and therefore need an environment that deals with the nightmare of "code pages" and "POSIX locales" are another. > My experience may well be atypical, I can't say. I suppose it is reasonably typical. I'm sure everybody (by now, including Guido!) have many parts of the stdlib they just never need to use, nor does anybody around them. Most of us rarely to never want IDLE. That's not the point. The "batteries included" slogan is inaccurate in the sense that everybody needs batteries or their toys won't run. The batteries that slogan refers to aren't for everyone, rather the hope is that there are enough different batteries in the kit that everyone can get started. They can always get Twisted later. From rosuav at gmail.com Thu Mar 21 14:31:29 2013 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 22 Mar 2013 00:31:29 +1100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <87y5dgdh79.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20130320180942.20A6A2500B3@webabinitio.net> <0FD3F001-CA03-4FE8-A088-736A819BA252@me.com> <87y5dgdh79.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Fri, Mar 22, 2013 at 12:18 AM, Stephen J. Turnbull wrote: > Paul Moore writes: > > > I have no figures one way or the other on that. You may well be > > right. Are we aiming at "all Windows users" here? > > We need to be careful about this. ISTM that IDLE is aiming at the > subset of users on any platform who for some reason need/want a simple > development environment that is consistent across Python versions and > platforms and immediately available when they install Python, but > don't have one yet. 
When I'm on Windows, I use IDLE as my interactive interpreter, but SciTE for actual development. Even on Linux, there's one feature that CLI interactive mode lacks: multi-line command recall. To tinker with a function definition in IDLE, just recall it and edit it. To tinker with it in command-line Python, fetch back each of its lines, or edit it elsewhere and paste it. IDLE isn't a program editor for me, it's the face of Python. ChrisA From barry at python.org Thu Mar 21 14:22:55 2013 From: barry at python.org (Barry Warsaw) Date: Thu, 21 Mar 2013 06:22:55 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: <20130321062255.5da0dadf@anarchist> On Mar 21, 2013, at 05:25 AM, Eli Bendersky wrote: >1. Whether IDLE should be developed separately from the core Python >repository (while still being shipped). > >I really want to constructively focus on (1). In fact, solving (1) should help move along the discussions about separating the stdlib into a separate repo, for the benefit of alternative implementations. Agreed that there are many other issues to resolve in that latter discussion. But I think Eli is right that we should be thinking about how to develop code in separate repos and still ship a combined release. -Barry From ncoghlan at gmail.com Thu Mar 21 16:21:44 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 21 Mar 2013 08:21:44 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321102023.31b78c7f@pitrou.net> Message-ID: On Thu, Mar 21, 2013 at 5:26 AM, Daniel Holth wrote: > I showed IDLE to my 6-year-old on the Raspberry Pi and I'm convinced > it is cool. Gave up on trying to (slowly) install bpython. We were > multiplying large numbers and counting to 325,000 in no time. It might > not be for *me* but I'm not going to teach my daughter a large IDE any > time soon. This, 1000x this. It was helping out at the Young Coders tutorials that convinced me we need to continue shipping IDLE, or something like it, for use by *people learning to use computers as more than just passive consumers for the first time*. This means running well on Windows and the Raspberry Pi at this point. Keeping IDLE in the core represents a commitment to the use of Python as a teaching language both inside and outside of formal educational settings. We can refactor IDLE to make aspects of it easier to test with the buildbots, especially now that we have unittest.mock in the standard library to mock out some of the UI interaction in the test suite. (I'm happy to help coach the IDLE devs on that if they want to start improving the test suite coverage for the IDLE code) I think we should commit to making "start with IDLE" the recommended teaching experience, and then focus on *making that experience awesome*. Once people are already familiar with the language and what it can do for them, they may choose to move on to other tools, or they may decide to stick with IDLE. But deciding on "What is IDLE?" and "Why is it part of the CPython development repo?" is a necessary step to revitalising it and stopping the recurring discussions about taking it out. If Terry is willing to recast his PEP in that light, I think that would be a wonderful thing to do. Regards, Nick. 
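As a rough illustration of the unittest.mock approach Nick mentions for exercising dialog-driven code without a display, here is a minimal sketch; confirm_quit is an invented stand-in for illustration, not an actual idlelib API:

    import unittest
    from unittest import mock
    from tkinter import messagebox

    def confirm_quit():
        # Stand-in for an IDLE-style helper that pops up a yes/no dialog.
        return messagebox.askyesno("Quit", "Really quit?")

    class ConfirmQuitTest(unittest.TestCase):
        def test_user_says_no(self):
            # Patch the dialog so no window appears and no Tk root is needed,
            # which lets the test run on a headless buildbot.
            with mock.patch.object(messagebox, "askyesno", return_value=False):
                self.assertFalse(confirm_quit())

    if __name__ == "__main__":
        unittest.main()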
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Mar 21 16:23:59 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 21 Mar 2013 08:23:59 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130321062255.5da0dadf@anarchist> References: <20130321062255.5da0dadf@anarchist> Message-ID: On Thu, Mar 21, 2013 at 6:22 AM, Barry Warsaw wrote: > On Mar 21, 2013, at 05:25 AM, Eli Bendersky wrote: > >>1. Whether IDLE should be developed separately from the core Python >>repository (while still being shipped). >> >>I really want to constructively focus on (1). > > In fact, solving (1) should help move along the discussions about separating > the stdlib into a separate repo, for the benefit of alternative > implementations. Agreed that there are many other issues to resolve in that > latter discussion. > > But I think Eli is right that we should be thinking about how to develop code > in separate repos and still ship a combined release. I think a federated repo model in general is something we need to consider, it's not something we should consider IDLE specific. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From baptiste.lepilleur at gmail.com Thu Mar 21 17:30:34 2013 From: baptiste.lepilleur at gmail.com (Baptiste Lepilleur) Date: Thu, 21 Mar 2013 17:30:34 +0100 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130314230007.GA24799@snakebite.org> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <51425433.1090700@v.loewis.de> <20130314230007.GA24799@snakebite.org> Message-ID: 2013/3/15 Trent Nelson > On Thu, Mar 14, 2013 at 03:50:27PM -0700, "Martin v. L?wis" wrote: > > Am 14.03.13 12:59, schrieb Stefan Ring: > > > I think you should be able to just take the address of a static > > > __thread variable to achieve the same thing in a more portable way. > > > > That assumes that the compiler supports __thread variables, which > > isn't that portable in the first place. > > FWIW, I make extensive use of __declspec(thread). I'm aware of GCC > and Clang's __thread alternative. No idea what IBM xlC, Sun Studio > and others offer, if anything. > IBM xlC and Sun Studio also support this feature. From memory, it's also __thread keyword. This features is also supported by the new C11/C++11 standards. Baptiste. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Mar 21 17:32:14 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Mar 2013 16:32:14 +0000 Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows Message-ID: PEP 405 has this note: """ On Windows, it is necessary to also copy or symlink DLLs and pyd files from compiled stdlib modules into the env, because if the venv is created from a non-system-wide Python installation, Windows won't be able to find the Python installation's copies of those files when Python is run from the venv. """ I don't understand what this is saying - can someone clarify the reason behind this statement? What is different about a "non-system-wide installation" that causes this issue (I assume "non-system-wide" means "not All Users")? The reason I ask is that virtualenv doesn't do this, and I'm not clear if this is because of a potential bug lurking in virtualenv (in which case, I'd like to find out how to reproduce it) or because virtualenv takes a different approach which avoids this issue somehow. 
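One small sketch that can help when investigating this from inside a PEP 405 venv (assuming Python 3.3+, where sys.base_prefix exists): compare the venv prefix with the base installation the venv was created from, which is where the interpreter's DLLs and stdlib pyd files normally live:

    import sys

    # Inside a PEP 405 venv, sys.prefix is the venv directory while
    # sys.base_prefix is the installation the venv was created from.
    print("venv prefix: ", sys.prefix)
    print("base install:", getattr(sys, "base_prefix", sys.prefix))
    print("running in a venv:", sys.prefix != getattr(sys, "base_prefix", sys.prefix))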
Thanks, Paul. From trent at snakebite.org Thu Mar 21 17:55:51 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 21 Mar 2013 09:55:51 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <51425433.1090700@v.loewis.de> <20130314230007.GA24799@snakebite.org> Message-ID: <906BF18E-918D-4018-9840-27A2AD0A83EF@snakebite.org> That's good to hear :-) (It's a fantastic facility, I couldn't imagine having to go back to manual TLS API stuff after using __thread/__declspec(thread).) This e-mail was sent from a wireless device. On 21 Mar 2013, at 09:30, "Baptiste Lepilleur" > wrote: 2013/3/15 Trent Nelson > On Thu, Mar 14, 2013 at 03:50:27PM -0700, "Martin v. L?wis" wrote: > Am 14.03.13 12:59, schrieb Stefan Ring: > > I think you should be able to just take the address of a static > > __thread variable to achieve the same thing in a more portable way. > > That assumes that the compiler supports __thread variables, which > isn't that portable in the first place. FWIW, I make extensive use of __declspec(thread). I'm aware of GCC and Clang's __thread alternative. No idea what IBM xlC, Sun Studio and others offer, if anything. IBM xlC and Sun Studio also support this feature. From memory, it's also __thread keyword. This features is also supported by the new C11/C++11 standards. Baptiste. -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.brandl at gmx.net Thu Mar 21 18:39:00 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 21 Mar 2013 18:39:00 +0100 Subject: [Python-Dev] How to fix the incorrect shared library extension on linux for 3.2 and newer? In-Reply-To: <20130320143648.6df30a2f@anarchist> References: <514A198B.500@ubuntu.com> <20130320143648.6df30a2f@anarchist> Message-ID: Am 20.03.2013 22:36, schrieb Barry Warsaw: > On Mar 20, 2013, at 01:18 PM, Matthias Klose wrote: > >>The patch in the issue now makes a distinction between EXT_SUFFIX and >>SHLIB_SUFFIX, and restores the value of SO to SHLIB_SUFFIX. Now this could >>break users of sysconfig.get_config_var('SO'), however I don't see a better >>way than to restore the original behaviour and advise people to use the new >>config variables. I agree. This looks like a seriously broken behavior to me. > It should probably be considered a bug that we changed the meaning of SO in > PEP 3149, but I don't think anybody realized it was being used for both > purposes (otherwise I'm sure we wouldn't have done it that way). I suppose > Georg should make the final determination for 3.2 and 3.3, but the solution > you propose seems about the best you can do. > > As we discussed at Pycon, you'll post a diff to the PEP in the tracker issue > and I'll commit that when I figure out the best way to indicate that a PEP has > been updated post-Final status. Sounds good. Georg From g.brandl at gmx.net Thu Mar 21 18:40:27 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 21 Mar 2013 18:40:27 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321062255.5da0dadf@anarchist> Message-ID: Am 21.03.2013 16:23, schrieb Nick Coghlan: > On Thu, Mar 21, 2013 at 6:22 AM, Barry Warsaw wrote: >> On Mar 21, 2013, at 05:25 AM, Eli Bendersky wrote: >> >>>1. Whether IDLE should be developed separately from the core Python >>>repository (while still being shipped). >>> >>>I really want to constructively focus on (1). 
>> >> In fact, solving (1) should help move along the discussions about separating >> the stdlib into a separate repo, for the benefit of alternative >> implementations. Agreed that there are many other issues to resolve in that >> latter discussion. >> >> But I think Eli is right that we should be thinking about how to develop code >> in separate repos and still ship a combined release. > > I think a federated repo model in general is something we need to > consider, it's not something we should consider IDLE specific. Right. Without a coordinated plan this will go the road of elementtree or simplejson. Georg From fwierzbicki at gmail.com Thu Mar 21 18:44:16 2013 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Thu, 21 Mar 2013 10:44:16 -0700 Subject: [Python-Dev] Federated repo model [was: IDLE in the stdlib] Message-ID: On Thu, Mar 21, 2013 at 8:23 AM, Nick Coghlan wrote: > I think a federated repo model in general is something we need to > consider, it's not something we should consider IDLE specific. I would love to have a federated repo model. I have recently made the attempt to port the devguide for CPython to Jython with some reasonable success. Part of that success has come because the devguide is in its own repo and so forking it and continuing to merge improvements from the original has been very easy. I'd love to be able to do the same for the Doc/ directory at the root of the CPython repo, but currently would have to fork the entire code and doc repository etc. This would mean that merging the Doc/ improvements would be a big pain, with lots and lots of useless merges where it would be hard to pick out the Doc changes. To a lesser extent the same would hold for the Lib/ area - though in that case I don't mind just pushing our changes to the CPython Lib/ (through the tracker and with code reviews of course) in the medium term. Still, a separate repo for Lib would definitely be nice down the road. -Frank From solipsis at pitrou.net Thu Mar 21 19:13:00 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Mar 2013 19:13:00 +0100 Subject: [Python-Dev] IDLE in the stdlib References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> Message-ID: <20130321191300.4ffe831f@pitrou.net> On Wed, 20 Mar 2013 19:57:54 -0700 Raymond Hettinger wrote: > > On Mar 20, 2013, at 12:38 PM, Barry Warsaw wrote: > > > Right. Ultimately, I think IDLE should be a separate project entirely, but I > > guess there's push back against that too. > > The most important feature of IDLE is that it ships with the standard library. > Everyone who clicks on the Windows MSI on the python.org webpage > automatically has IDLE. That is why I frequently teach Python with IDLE. > > If this thread results in IDLE being ripped out of the standard distribution, > then I would likely never use it again. Which says a lot about its usefulness, if the only reason you use it is that it's bundled with the standard distribution. Regards Antoine. From rdmurray at bitdance.com Thu Mar 21 19:23:49 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 21 Mar 2013 14:23:49 -0400 Subject: [Python-Dev] cpython: Issue #13248: removed deprecated and undocumented difflib.isbjunk, isbpopular. 
In-Reply-To: References: <3ZVrTH23v4zPCm@mail.python.org> <20130320181320.5B45C2500B3@webabinitio.net> Message-ID: <20130321182350.4240A250069@webabinitio.net> On Thu, 21 Mar 2013 01:19:42 -0400, Terry Reedy wrote: > On 3/20/2013 2:13 PM, R. David Murray wrote: > > On Wed, 20 Mar 2013 05:23:43 -0700, Eli Bendersky wrote: > >> A mention in Misc/NEWS can't hurt here, Terry. Even though it's > >> undocumented, some old code could rely on it being there and this code will > >> break with the transition to 3.4 > > Will do. > > > Note that we also have a list of deprecated things that were removed in > > What's New. > > > > Aside: given the 3.3 experience, I think people should be thinking in > > terms of always updating What's New when appropriate, at the time a > > commit is made. > > How does this look? Is ``replacement`` right? Should the subsequent sm > by itself be marked? If so, how? > > * :meth:`difflib.SequenceMatcher.isbjunk` and > :meth:`difflib.SequenceMatcher.isbpopular`: use ``x in sm.bjunk`` and > ``x in sm.bpopular``, where sm is a SequenceMatcher object. Looks fine to me. A link on SequenceMatcher probably isn't necessary, but might be handy. If you want to add it it would be :class:`~difflib.SequenceMatcher`. --David From doko at ubuntu.com Thu Mar 21 21:14:59 2013 From: doko at ubuntu.com (Matthias Klose) Date: Thu, 21 Mar 2013 13:14:59 -0700 Subject: [Python-Dev] backporting the _sysconfigdata.py module to 2.7 Message-ID: <514B6A43.20102@ubuntu.com> I'd like to backport issue13150, the _sysconfigdata.py module to 2.7. My motivation is not the improved startup time, but the ability to cross-build extension modules using distutils/setuptools. The basic idea is to use the python interpreter of the build machine (the machine you build on), and the _sysconfigdata.py for the host machine (the machine you build for). This kind of setup works fine as long as the setup.py for a third party package gets all build related information from the sysconfig.py module, and not directly from os or sys (e.g. sys.platform). The patch for issue13150 doesn't change any API's, but only moves the computation of the config vars from runtime to build time. I'd like to avoid backporting this to 3.2 as well, because the cross-build support is currently only found in 2.7, 3.3 and the trunk. Matthias From g.brandl at gmx.net Thu Mar 21 21:38:41 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 21 Mar 2013 21:38:41 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130321191300.4ffe831f@pitrou.net> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> <20130321191300.4ffe831f@pitrou.net> Message-ID: Am 21.03.2013 19:13, schrieb Antoine Pitrou: > On Wed, 20 Mar 2013 19:57:54 -0700 > Raymond Hettinger wrote: >> >> On Mar 20, 2013, at 12:38 PM, Barry Warsaw wrote: >> >> > Right. Ultimately, I think IDLE should be a separate project entirely, but I >> > guess there's push back against that too. >> >> The most important feature of IDLE is that it ships with the standard library. >> Everyone who clicks on the Windows MSI on the python.org webpage >> automatically has IDLE. That is why I frequently teach Python with IDLE. >> >> If this thread results in IDLE being ripped out of the standard distribution, >> then I would likely never use it again. > > Which says a lot about its usefulness, if the only reason you use it is > that it's bundled with the standard distribution. 
Just like a lot of the stdlib, it *gets* a lot of usefulness from being a battery. But just because there are better/more comprehensive/prettier replacements out there is not reason enough to remove standard libraries. Georg From dreamingforward at gmail.com Thu Mar 21 22:19:33 2013 From: dreamingforward at gmail.com (Mark Janssen) Date: Thu, 21 Mar 2013 14:19:33 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy wrote: > On 3/20/2013 12:41 PM, Eli Bendersky wrote: > > Personally, I think that IDLE reflects badly on Python in more ways than >> one. It's badly maintained, quirky and ugly. >> > > Ugly is subjective: by what standard and compared to what? > I might be jumping in late here, but... The *only* thing I find "ugly" about it is that it doesn't have a white-on-black color scheme. Look at any hacker console and you won't find a white screen. Otherwise its fine. Fixing that issue is simple, I can upload my color scheme if anyone wants. > It serves a very narrow set of uses, > > IDLE serves a very important "narrow use" purpose -- helping the plethora of beginning programmers. Anyone who wants to criticize it can slap themselves. Python attracts many beginners, and if you don't remember, installing a separate "fancy" editor was never on the priority list until several years later. Give me a break. > > and does it badly. > > Come on. It gets even a strong programmer 80% of the way to what he/she needs. And in any case, I think the interpreter environment is the place to keep the programmer's focus. That is the arena where the community has been and it's what has kept programming in Python fun. And although this goes against decades(?) of programming history, the future of programming, is not in the editor. The "editor-centric paradigm" has not created a community of re-usable code, despite all the promises. I'll argue that the *interpreter environment* will be the future and the editor will be relegated to a simple memory-saving device. Mark Tacoma, Washington -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Thu Mar 21 22:31:28 2013 From: phd at phdru.name (Oleg Broytman) Date: Fri, 22 Mar 2013 01:31:28 +0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: <20130321213128.GA6934@iskra.aviel.ru> On Thu, Mar 21, 2013 at 02:19:33PM -0700, Mark Janssen wrote: > The *only* thing I find "ugly" about it is that it doesn't have a > white-on-black color scheme. Look at any hacker console and you won't find > a white screen. Call me a bad hacker or not hacker at all -- I hate black backgrounds. My windows are always black-on-lightgrey, sometimes on dark grey, never black. I have been spending 16 hours a day at the screen for last 25 years -- and never understood black background. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From benjamin at python.org Thu Mar 21 22:48:50 2013 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 21 Mar 2013 16:48:50 -0500 Subject: [Python-Dev] backporting the _sysconfigdata.py module to 2.7 In-Reply-To: <514B6A43.20102@ubuntu.com> References: <514B6A43.20102@ubuntu.com> Message-ID: 2013/3/21 Matthias Klose : > I'd like to backport issue13150, the _sysconfigdata.py module to 2.7. My > motivation is not the improved startup time, but the ability to cross-build > extension modules using distutils/setuptools. 
The basic idea is to use the > python interpreter of the build machine (the machine you build on), and the > _sysconfigdata.py for the host machine (the machine you build for). This kind > of setup works fine as long as the setup.py for a third party package gets all > build related information from the sysconfig.py module, and not directly from os > or sys (e.g. sys.platform). > > The patch for issue13150 doesn't change any API's, but only moves the > computation of the config vars from runtime to build time. I'd like to avoid > backporting this to 3.2 as well, because the cross-build support is currently > only found in 2.7, 3.3 and the trunk. This is a fairly small non-userfacing change, so okay. -- Regards, Benjamin From dreamingforward at gmail.com Thu Mar 21 23:36:58 2013 From: dreamingforward at gmail.com (Mark Janssen) Date: Thu, 21 Mar 2013 15:36:58 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130321213128.GA6934@iskra.aviel.ru> References: <20130321213128.GA6934@iskra.aviel.ru> Message-ID: On Thu, Mar 21, 2013 at 2:31 PM, Oleg Broytman wrote: > On Thu, Mar 21, 2013 at 02:19:33PM -0700, Mark Janssen < > dreamingforward at gmail.com> wrote: > > The *only* thing I find "ugly" about it is that it doesn't have a > > white-on-black color scheme. Look at any hacker console and you won't > find > > a white screen. > > Call me a bad hacker or not hacker at all -- I hate black > backgrounds. My windows are always black-on-lightgrey, sometimes on dark > grey, never black. I have been spending 16 hours a day at the screen for > last 25 years -- and never understood black background. Lol, funny. It takes energy to display a phosphor, but none for black. So I don't know how it could be harder for the eyes. Plus, it's "hacker nostalgia" for me, going back to assembler and BASIC on an Apple II. But I think this thread discussion happened decades ago. Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at python.org Thu Mar 21 23:49:39 2013 From: thomas at python.org (Thomas Wouters) Date: Thu, 21 Mar 2013 23:49:39 +0100 Subject: [Python-Dev] backporting the _sysconfigdata.py module to 2.7 In-Reply-To: References: <514B6A43.20102@ubuntu.com> Message-ID: On Thu, Mar 21, 2013 at 10:48 PM, Benjamin Peterson wrote: > 2013/3/21 Matthias Klose : > > I'd like to backport issue13150, the _sysconfigdata.py module to 2.7. My > > motivation is not the improved startup time, but the ability to > cross-build > > extension modules using distutils/setuptools. The basic idea is to use > the > > python interpreter of the build machine (the machine you build on), and > the > > _sysconfigdata.py for the host machine (the machine you build for). > This kind > > of setup works fine as long as the setup.py for a third party package > gets all > > build related information from the sysconfig.py module, and not directly > from os > > or sys (e.g. sys.platform). > > > > The patch for issue13150 doesn't change any API's, but only moves the > > computation of the config vars from runtime to build time. I'd like to > avoid > > backporting this to 3.2 as well, because the cross-build support is > currently > > only found in 2.7, 3.3 and the trunk. > > This is a fairly small non-userfacing change, so okay. 
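To make the mechanism concrete for readers who haven't looked at the patch: instead of parsing the Makefile and pyconfig.h at interpreter startup, the build step dumps every build-time variable into an importable module, and sysconfig just imports that. The following is a minimal runnable sketch of the idea only -- the module name and the values are placeholders here (the real module is _sysconfigdata.py, generated by the Makefile as shown in the issue13150 patch):

    import os
    import pprint
    import sys

    # A handful of placeholder entries; the generated file carries every
    # variable that used to be parsed out of the Makefile and pyconfig.h.
    build_time_vars = {
        'CC': 'gcc -pthread',
        'SO': '.so',
        'LDSHARED': 'gcc -pthread -shared',
    }

    # "Build time": write the dict into an importable module.
    outdir = os.getcwd()
    with open(os.path.join(outdir, '_sysconfigdata_demo.py'), 'w') as f:
        f.write('# generated at build time\n')
        f.write('build_time_vars = ')
        pprint.pprint(build_time_vars, stream=f)

    # "Run time": sysconfig only has to import the module.  Cross-building
    # then means putting the *host* machine's copy on sys.path, so that
    # sysconfig.get_config_var() reports the host's values -- which is why
    # a setup.py should ask sysconfig rather than sys.platform.
    sys.path.insert(0, outdir)
    from _sysconfigdata_demo import build_time_vars as host_vars
    print(host_vars['LDSHARED'])
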
> FWIW, we do the exact same thing in our (internal) Google Python 2.7 builds (because we're forbidden from having files named 'Makefile' and '*.h' in our production environment -- and also because of the startup time) and while we've seen the most obscure, internal changes cause failures in the most unexpected ways, we haven't seen anything broken or failing in any way by that change. (Not that I was expecting it, I'm just saying even I think this is a good idea ;) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at python.org Fri Mar 22 00:17:38 2013 From: thomas at python.org (Thomas Wouters) Date: Fri, 22 Mar 2013 00:17:38 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321213128.GA6934@iskra.aviel.ru> Message-ID: I expressed this opinion at the sprints (right before I left) in the group discussion with Guido and Nick, but I'm not sure if it's been represented in this thread yet (I'm jetlagged and talk about Windows command prompts depresses me) -- so I'll just rehash it: distributing IDLE in the binary packages people download from python.org means *python-dev is still responsible IDLE*. We can't distribute something that we don't support. Even for the third-party libraries we're wrapping we're taking responsibility for updating them, fixing specific bugs or working around the bugs in the wrappers. Removing IDLE from the source tarballs isn't a way to disown it, or shed responsibility. The benefits of having IDLE in a separate repository, as I see it, would be that we can have separate access control for the repositories, and possibly make it more approachable for new developers, and easier to re-use by other Python implementations. We couldn't even sensibly stop accepting bugs for it on bugs.python.org. It may well be that moving IDLE to a separate repository is the right thing, but only if there's an active team of people working on it that would prefer it that way. And only if we realize that if IDLE languishes again, python-dev is *still* on the hook for it, even in the separate repository. I don't know if excluding it from the source tarball gains us anything on top of that -- although I do think we should move 'idlelib' out of the standard library :) -- Thomas Wouters Hi! I'm an email virus! Think twice before sending your email to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Fri Mar 22 00:37:57 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 19:37:57 -0400 Subject: [Python-Dev] cpython: Issue #13248: removed deprecated and undocumented difflib.isbjunk, isbpopular. In-Reply-To: <20130321182350.4240A250069@webabinitio.net> References: <3ZVrTH23v4zPCm@mail.python.org> <20130320181320.5B45C2500B3@webabinitio.net> <20130321182350.4240A250069@webabinitio.net> Message-ID: On 3/21/2013 2:23 PM, R. David Murray wrote: > On Thu, 21 Mar 2013 01:19:42 -0400, Terry Reedy wrote: >> How does this look? Is ``replacement`` right? Should the subsequent sm >> by itself be marked? If so, how? >> >> * :meth:`difflib.SequenceMatcher.isbjunk` and >> :meth:`difflib.SequenceMatcher.isbpopular`: use ``x in sm.bjunk`` and >> ``x in sm.bpopular``, where sm is a SequenceMatcher object. > > Looks fine to me. A link on SequenceMatcher probably isn't necessary, > but might be handy. 
If you want to add it it would be > :class:`~difflib.SequenceMatcher`. Committed and pushed. -- Terry Jan Reedy From tjreedy at udel.edu Fri Mar 22 01:26:59 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 20:26:59 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: On 3/21/2013 5:19 PM, Mark Janssen wrote: > On Wed, Mar 20, 2013 at 8:32 PM, Terry Reedy > wrote: > I might be jumping in late here, but... Not at all. Thank you for the enlightening post. > The *only* thing I find "ugly" about it is that it doesn't have a > white-on-black color scheme. Oh. The wonderful thing about computers is the possibility of customizing to taste. The think I do not like about .pdf is the motivation to 'control the user experience'. > Look at any hacker console and you won't > find a white screen. Otherwise its fine. Fixing that issue is simple, > I can upload my color scheme if anyone wants. If you want, open an IDLE enhancement issue "IDLE: add alternate high-light schemes", select Componemts: IDLE, and attach your 'Console' scheme. Many skinnable apps either have a place for users to share their custom skins or come with alternatives builtin. There might be other emulation schemes worth adding. -- Terry Jan Reedy From tjreedy at udel.edu Fri Mar 22 01:27:26 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 20:27:26 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321102023.31b78c7f@pitrou.net> Message-ID: On 3/21/2013 11:21 AM, Nick Coghlan wrote: > On Thu, Mar 21, 2013 at 5:26 AM, Daniel Holth wrote: >> I showed IDLE to my 6-year-old on the Raspberry Pi and I'm convinced >> it is cool. Gave up on trying to (slowly) install bpython. We were >> multiplying large numbers and counting to 325,000 in no time. It might >> not be for *me* but I'm not going to teach my daughter a large IDE any >> time soon. > > This, 1000x this. > > It was helping out at the Young Coders tutorials that convinced me we > need to continue shipping IDLE, or something like it, for use by > *people learning to use computers as more than just passive consumers > for the first time*. This means running well on Windows and the > Raspberry Pi at this point. > > Keeping IDLE in the core represents a commitment to the use of Python > as a teaching language both inside and outside of formal educational > settings. > > We can refactor IDLE to make aspects of it easier to test with the > buildbots, especially now that we have unittest.mock in the standard > library to mock out some of the UI interaction in the test suite. (I'm > happy to help coach the IDLE devs on that if they want to start > improving the test suite coverage for the IDLE code) Thank for for the offer to help. I added you to the IDLE test issue. http://bugs.python.org/issue15392 Improving tests is one of the main things I personally want to do. Roger is expert at tkinter code, so I will focus on other things. I want to work toward IDLE patches following the standard rule of adding at least one test with every patch. A permanent exemption from that rule is *not* part of the PEP. > I think we should commit to making "start with IDLE" the recommended > teaching experience, and then focus on *making that experience > awesome*. Once people are already familiar with the language and what > it can do for them, they may choose to move on to other tools, or they > may decide to stick with IDLE. But deciding on "What is IDLE?" and > "Why is it part of the CPython development repo?" 
is a necessary step > to revitalising it and stopping the recurring discussions about taking > it out. > If Terry is willing to recast his PEP in that light, I think that > would be a wonderful thing to do. I completely agree ;-). I asked Todd to help with this, and perhaps you can give me some more concrete hints as to what you would like to see where. -- Terry Jan Reedy From tjreedy at udel.edu Fri Mar 22 02:24:58 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 21 Mar 2013 21:24:58 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321213128.GA6934@iskra.aviel.ru> Message-ID: On 3/21/2013 7:17 PM, Thomas Wouters wrote: > although I do think we should move 'idlelib' out of the standard library :) Currently, 'python -m idlelib' start idle from the command line. If idlelib/ were moved out of /Lib, idle.py should be added so 'python -m idle' would work. I may suggest that anyway. -- Terry Jan Reedy From trent at snakebite.org Fri Mar 22 04:00:18 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 21 Mar 2013 20:00:18 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <6BE748A8-4F59-43CF-BD18-AF116A764A60@molden.no> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <20130314222337.GG24307@snakebite.org> <6BE748A8-4F59-43CF-BD18-AF116A764A60@molden.no> Message-ID: <81018940-7EEA-48DA-879B-DEBE158B1338@snakebite.org> No, I haven't. I'd lose the excellent Windows pairing of thread pool IO and overlapped IO facilities if I did that. Not saying it isn't an option down the track for the generic "submit work" API though; that stuff will work against any thread pool without too much effort. But for now, the fact that all I need to call is TrySubmitThreadpoolCallback and Windows does *everything* else is pretty handy. Lets me concentrate on the problem instead of getting distracted by scaffolding. This e-mail was sent from a wireless device. On 21 Mar 2013, at 05:53, "Sturla Molden" wrote: > Den 14. mars 2013 kl. 
23:23 skrev Trent Nelson : > >> >> For the record, here are all the Windows calls I'm using that have >> no *direct* POSIX equivalent: >> >> Interlocked singly-linked lists: >> - InitializeSListHead() >> - InterlockedFlushSList() >> - QueryDepthSList() >> - InterlockedPushEntrySList() >> - InterlockedPushListSList() >> - InterlockedPopEntrySlist() >> >> Synchronisation and concurrency primitives: >> - Critical sections >> - InitializeCriticalSectionAndSpinCount() >> - EnterCriticalSection() >> - LeaveCriticalSection() >> - TryEnterCriticalSection() >> - Slim read/writer locks (some pthread implements have >> rwlocks)*: >> - InitializeSRWLock() >> - AcquireSRWLockShared() >> - AcquireSRWLockExclusive() >> - ReleaseSRWLockShared() >> - ReleaseSRWLockExclusive() >> - TryAcquireSRWLockExclusive() >> - TryAcquireSRWLockShared() >> - One-time initialization: >> - InitOnceBeginInitialize() >> - InitOnceComplete() >> - Generic event, signalling and wait facilities: >> - CreateEvent() >> - SetEvent() >> - WaitForSingleObject() >> - WaitForMultipleObjects() >> - SignalObjectAndWait() >> >> Native thread pool facilities: >> - TrySubmitThreadpoolCallback() >> - StartThreadpoolIo() >> - CloseThreadpoolIo() >> - CancelThreadpoolIo() >> - DisassociateCurrentThreadFromCallback() >> - CallbackMayRunLong() >> - CreateThreadpoolWait() >> - SetThreadpoolWait() >> >> Memory management: >> - HeapCreate() >> - HeapAlloc() >> - HeapDestroy() >> >> Structured Exception Handling (#ifdef Py_DEBUG): >> - __try/__except >> >> Sockets: >> - ConnectEx() >> - AcceptEx() >> - WSAEventSelect(FD_ACCEPT) >> - DisconnectEx(TF_REUSE_SOCKET) >> - Overlapped WSASend() >> - Overlapped WSARecv() >> >> >> Don't get me wrong, I grew up with UNIX and love it as much as the >> next guy, but you can't deny the usefulness of Windows' facilities >> for writing high-performance, multi-threaded IO code. It's decades >> ahead of POSIX. (Which is also why it bugs me when I see select() >> being used on Windows, or IOCP being used as if it were a poll-type >> "generic IO multiplexor" -- that's like having a Ferrari and speed >> limiting it to 5mph!) >> >> So, before any of this has a chance of working on Linux/BSD, a lot >> more scaffolding will need to be written to provide the things we >> get for free on Windows (threadpools being the biggest freebie). >> >> >> > > > Have you considered using OpenMP instead of Windows API or POSIX threads directly? OpenMP gives you a thread pool and synchronization primitives for free as well, with no special code needed for Windows or POSIX. > > OpenBLAS (and GotoBLAS2) uses OpenMP to produce a thread pool on POSIX systems (and actually Windows API on Windows). The OpenMP portion of the C code is wrapped so it looks like sending an asynch task to a thread pool; the C code is not littered with OpenMP pragmas. If you need something like Windows threadpools on POSIX, just look at the BSD licensed OpenBLAS code. It is written to be scalable for the world's largest supercomputers (but also beautifully written and very easy to read). > > Cython has code to register OpenMP threads as Python threads, in case that is needed. So that problem is also solved. 
> > > Sturla > > > > > > > > From trent at snakebite.org Fri Mar 22 04:11:02 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 21 Mar 2013 20:11:02 -0700 Subject: [Python-Dev] Slides from today's parallel/async Python talk In-Reply-To: <20130321141725.5a1e9d0d@pitrou.net> References: <20130314020540.GB22505@snakebite.org> <5141C0B5.6060904@python.org> <20130314182352.GC24307@snakebite.org> <20130314222337.GG24307@snakebite.org> <20130321141725.5a1e9d0d@pitrou.net> Message-ID: <0FE526DD-12B6-4A30-A68C-CBDB390B54C5@snakebite.org> http://c2.com/cgi/wiki?BlubParadox ;-) Sent from my iPhone On 21 Mar 2013, at 06:18, "Antoine Pitrou" wrote: > Le Thu, 14 Mar 2013 15:23:37 -0700, > Trent Nelson a ?crit : >> >> Don't get me wrong, I grew up with UNIX and love it as much as the >> next guy, but you can't deny the usefulness of Windows' facilities >> for writing high-performance, multi-threaded IO code. It's >> decades ahead of POSIX. > > I suppose that's why all high-performance servers run under Windows. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/trent%40snakebite.org From rovitotv at gmail.com Fri Mar 22 05:26:36 2013 From: rovitotv at gmail.com (Todd Rovito) Date: Fri, 22 Mar 2013 00:26:36 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: <20130321102023.31b78c7f@pitrou.net> Message-ID: On Thu, Mar 21, 2013 at 11:21 AM, Nick Coghlan wrote: > We can refactor IDLE to make aspects of it easier to test with the > buildbots, especially now that we have unittest.mock in the standard > library to mock out some of the UI interaction in the test suite. (I'm > happy to help coach the IDLE devs on that if they want to start > improving the test suite coverage for the IDLE code) Count me in where/when do we start? Can we perhaps setup a IRC chat with you? From andrew.svetlov at gmail.com Fri Mar 22 05:54:30 2013 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 21 Mar 2013 21:54:30 -0700 Subject: [Python-Dev] [Python-checkins] cpython (2.7): - Issue #13150: sysconfig no longer parses the Makefile and config.h files In-Reply-To: <3ZX26R49RDzSn4@mail.python.org> References: <3ZX26R49RDzSn4@mail.python.org> Message-ID: Great! On Thu, Mar 21, 2013 at 3:02 PM, matthias.klose wrote: > http://hg.python.org/cpython/rev/66e30c4870bb > changeset: 82872:66e30c4870bb > branch: 2.7 > parent: 82843:71adf21421d9 > user: doko at ubuntu.com > date: Thu Mar 21 15:02:16 2013 -0700 > summary: > - Issue #13150: sysconfig no longer parses the Makefile and config.h files > when imported, instead doing it at build time. This makes importing > sysconfig faster and reduces Python startup time by 20%. 
> > files: > Lib/distutils/sysconfig.py | 63 +-------------------- > Lib/pprint.py | 5 +- > Lib/sysconfig.py | 75 ++++++++++++++++++++++++- > Makefile.pre.in | 12 +++- > Misc/NEWS | 4 + > 5 files changed, 93 insertions(+), 66 deletions(-) > > > diff --git a/Lib/distutils/sysconfig.py b/Lib/distutils/sysconfig.py > --- a/Lib/distutils/sysconfig.py > +++ b/Lib/distutils/sysconfig.py > @@ -387,66 +387,11 @@ > > def _init_posix(): > """Initialize the module as appropriate for POSIX systems.""" > - g = {} > - # load the installed Makefile: > - try: > - filename = get_makefile_filename() > - parse_makefile(filename, g) > - except IOError, msg: > - my_msg = "invalid Python installation: unable to open %s" % filename > - if hasattr(msg, "strerror"): > - my_msg = my_msg + " (%s)" % msg.strerror > - > - raise DistutilsPlatformError(my_msg) > - > - # load the installed pyconfig.h: > - try: > - filename = get_config_h_filename() > - parse_config_h(file(filename), g) > - except IOError, msg: > - my_msg = "invalid Python installation: unable to open %s" % filename > - if hasattr(msg, "strerror"): > - my_msg = my_msg + " (%s)" % msg.strerror > - > - raise DistutilsPlatformError(my_msg) > - > - # On AIX, there are wrong paths to the linker scripts in the Makefile > - # -- these paths are relative to the Python source, but when installed > - # the scripts are in another directory. > - if python_build: > - g['LDSHARED'] = g['BLDSHARED'] > - > - elif get_python_version() < '2.1': > - # The following two branches are for 1.5.2 compatibility. > - if sys.platform == 'aix4': # what about AIX 3.x ? > - # Linker script is in the config directory, not in Modules as the > - # Makefile says. > - python_lib = get_python_lib(standard_lib=1) > - ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') > - python_exp = os.path.join(python_lib, 'config', 'python.exp') > - > - g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp) > - > - elif sys.platform == 'beos': > - # Linker script is in the config directory. In the Makefile it is > - # relative to the srcdir, which after installation no longer makes > - # sense. > - python_lib = get_python_lib(standard_lib=1) > - linkerscript_path = string.split(g['LDSHARED'])[0] > - linkerscript_name = os.path.basename(linkerscript_path) > - linkerscript = os.path.join(python_lib, 'config', > - linkerscript_name) > - > - # XXX this isn't the right place to do this: adding the Python > - # library to the link, if needed, should be in the "build_ext" > - # command. (It's also needed for non-MS compilers on Windows, and > - # it's taken care of for them by the 'build_ext.get_libraries()' > - # method.) 
> - g['LDSHARED'] = ("%s -L%s/lib -lpython%s" % > - (linkerscript, PREFIX, get_python_version())) > - > + # _sysconfigdata is generated at build time, see the sysconfig module > + from _sysconfigdata import build_time_vars > global _config_vars > - _config_vars = g > + _config_vars = {} > + _config_vars.update(build_time_vars) > > > def _init_nt(): > diff --git a/Lib/pprint.py b/Lib/pprint.py > --- a/Lib/pprint.py > +++ b/Lib/pprint.py > @@ -37,7 +37,10 @@ > import sys as _sys > import warnings > > -from cStringIO import StringIO as _StringIO > +try: > + from cStringIO import StringIO as _StringIO > +except ImportError: > + from StringIO import StringIO as _StringIO > > __all__ = ["pprint","pformat","isreadable","isrecursive","saferepr", > "PrettyPrinter"] > diff --git a/Lib/sysconfig.py b/Lib/sysconfig.py > --- a/Lib/sysconfig.py > +++ b/Lib/sysconfig.py > @@ -278,9 +278,10 @@ > return os.path.join(_PROJECT_BASE, "Makefile") > return os.path.join(get_path('platstdlib'), "config", "Makefile") > > - > -def _init_posix(vars): > - """Initialize the module as appropriate for POSIX systems.""" > +def _generate_posix_vars(): > + """Generate the Python module containing build-time variables.""" > + import pprint > + vars = {} > # load the installed Makefile: > makefile = _get_makefile_filename() > try: > @@ -308,6 +309,49 @@ > if _PYTHON_BUILD: > vars['LDSHARED'] = vars['BLDSHARED'] > > + # There's a chicken-and-egg situation on OS X with regards to the > + # _sysconfigdata module after the changes introduced by #15298: > + # get_config_vars() is called by get_platform() as part of the > + # `make pybuilddir.txt` target -- which is a precursor to the > + # _sysconfigdata.py module being constructed. Unfortunately, > + # get_config_vars() eventually calls _init_posix(), which attempts > + # to import _sysconfigdata, which we won't have built yet. In order > + # for _init_posix() to work, if we're on Darwin, just mock up the > + # _sysconfigdata module manually and populate it with the build vars. > + # This is more than sufficient for ensuring the subsequent call to > + # get_platform() succeeds. 
> + name = '_sysconfigdata' > + if 'darwin' in sys.platform: > + import imp > + module = imp.new_module(name) > + module.build_time_vars = vars > + sys.modules[name] = module > + > + pybuilddir = 'build/lib.%s-%s' % (get_platform(), sys.version[:3]) > + if hasattr(sys, "gettotalrefcount"): > + pybuilddir += '-pydebug' > + try: > + os.makedirs(pybuilddir) > + except OSError: > + pass > + destfile = os.path.join(pybuilddir, name + '.py') > + > + with open(destfile, 'wb') as f: > + f.write('# system configuration generated and used by' > + ' the sysconfig module\n') > + f.write('build_time_vars = ') > + pprint.pprint(vars, stream=f) > + > + # Create file used for sys.path fixup -- see Modules/getpath.c > + with open('pybuilddir.txt', 'w') as f: > + f.write(pybuilddir) > + > +def _init_posix(vars): > + """Initialize the module as appropriate for POSIX systems.""" > + # _sysconfigdata is generated at build time, see _generate_posix_vars() > + from _sysconfigdata import build_time_vars > + vars.update(build_time_vars) > + > def _init_non_posix(vars): > """Initialize the module as appropriate for NT""" > # set basic install directories > @@ -565,3 +609,28 @@ > > def get_python_version(): > return _PY_VERSION_SHORT > + > + > +def _print_dict(title, data): > + for index, (key, value) in enumerate(sorted(data.items())): > + if index == 0: > + print '%s: ' % (title) > + print '\t%s = "%s"' % (key, value) > + > + > +def _main(): > + """Display all information sysconfig detains.""" > + if '--generate-posix-vars' in sys.argv: > + _generate_posix_vars() > + return > + print 'Platform: "%s"' % get_platform() > + print 'Python version: "%s"' % get_python_version() > + print 'Current installation scheme: "%s"' % _get_default_scheme() > + print > + _print_dict('Paths', get_paths()) > + print > + _print_dict('Variables', get_config_vars()) > + > + > +if __name__ == '__main__': > + _main() > diff --git a/Makefile.pre.in b/Makefile.pre.in > --- a/Makefile.pre.in > +++ b/Makefile.pre.in > @@ -437,15 +437,20 @@ > Modules/python.o \ > $(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST) > > -platform: $(BUILDPYTHON) > +platform: $(BUILDPYTHON) pybuilddir.txt > $(RUNSHARED) $(PYTHON_FOR_BUILD) -c 'import sys ; from sysconfig import get_platform ; print get_platform()+"-"+sys.version[0:3]' >platform > > +# Create build directory and generate the sysconfig build-time data there. > +# pybuilddir.txt contains the name of the build dir and is used for > +# sys.path fixup -- see Modules/getpath.c. > +pybuilddir.txt: $(BUILDPYTHON) > + $(RUNSHARED) $(PYTHON_FOR_BUILD) -S -m sysconfig --generate-posix-vars > > # Build the shared modules > # Under GNU make, MAKEFLAGS are sorted and normalized; the 's' for > # -s, --silent or --quiet is always the first char. > # Under BSD make, MAKEFLAGS might be " -s -v x=y". 
> -sharedmods: $(BUILDPYTHON) > +sharedmods: $(BUILDPYTHON) pybuilddir.txt > @case "$$MAKEFLAGS" in \ > *\ -s*|s*) quiet="-q";; \ > *) quiet="";; \ > @@ -955,7 +960,7 @@ > else true; \ > fi; \ > done > - @for i in $(srcdir)/Lib/*.py $(srcdir)/Lib/*.doc $(srcdir)/Lib/*.egg-info ; \ > + @for i in $(srcdir)/Lib/*.py `cat pybuilddir.txt`/_sysconfigdata.py $(srcdir)/Lib/*.doc $(srcdir)/Lib/*.egg-info ; \ > do \ > if test -x $$i; then \ > $(INSTALL_SCRIPT) $$i $(DESTDIR)$(LIBDEST); \ > @@ -1133,6 +1138,7 @@ > --install-scripts=$(BINDIR) \ > --install-platlib=$(DESTSHARED) \ > --root=$(DESTDIR)/ > + -rm $(DESTDIR)$(DESTSHARED)/_sysconfigdata.py* > > # Here are a couple of targets for MacOSX again, to install a full > # framework-based Python. frameworkinstall installs everything, the > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -216,6 +216,10 @@ > Library > ------- > > +- Issue #13150: sysconfig no longer parses the Makefile and config.h files > + when imported, instead doing it at build time. This makes importing > + sysconfig faster and reduces Python startup time by 20%. > + > - Issue #10212: cStringIO and struct.unpack support new buffer objects. > > - Issue #12098: multiprocessing on Windows now starts child processes > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -- Thanks, Andrew Svetlov From rakeshgk21 at hotmail.com Fri Mar 22 10:28:37 2013 From: rakeshgk21 at hotmail.com (rakesh karanth) Date: Fri, 22 Mar 2013 14:58:37 +0530 Subject: [Python-Dev] Increase the code coverage of "OS" module Message-ID: Hi python-dev, I'm interested in increasing the code coverage of the Python stdlib library "OS"Can some one who is already working on this or on a similar issue enlighten me on this? Thanks in advance. RegardsRakesh.G.K -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Fri Mar 22 10:48:46 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Mar 2013 10:48:46 +0100 Subject: [Python-Dev] IDLE in the stdlib References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> <20130321191300.4ffe831f@pitrou.net> Message-ID: <20130322104846.28b4eac9@pitrou.net> Le Thu, 21 Mar 2013 21:38:41 +0100, Georg Brandl a ?crit : > Am 21.03.2013 19:13, schrieb Antoine Pitrou: > > On Wed, 20 Mar 2013 19:57:54 -0700 > > Raymond Hettinger wrote: > >> > >> On Mar 20, 2013, at 12:38 PM, Barry Warsaw > >> wrote: > >> > >> > Right. Ultimately, I think IDLE should be a separate project > >> > entirely, but I guess there's push back against that too. > >> > >> The most important feature of IDLE is that it ships with the > >> standard library. Everyone who clicks on the Windows MSI on the > >> python.org webpage automatically has IDLE. That is why I > >> frequently teach Python with IDLE. > >> > >> If this thread results in IDLE being ripped out of the standard > >> distribution, then I would likely never use it again. > > > > Which says a lot about its usefulness, if the only reason you use > > it is that it's bundled with the standard distribution. > > Just like a lot of the stdlib, it *gets* a lot of usefulness from > being a battery. 
But just because there are better/more > comprehensive/prettier replacements out there is not reason enough to > remove standard libraries. That's a good point. I guess it's difficult for me to think of IDLE as an actual library. Regards Antoine. From g.brandl at gmx.net Fri Mar 22 17:22:08 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 22 Mar 2013 17:22:08 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130322104846.28b4eac9@pitrou.net> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> <20130321191300.4ffe831f@pitrou.net> <20130322104846.28b4eac9@pitrou.net> Message-ID: Am 22.03.2013 10:48, schrieb Antoine Pitrou: > Le Thu, 21 Mar 2013 21:38:41 +0100, > Georg Brandl a ?crit : > >> Am 21.03.2013 19:13, schrieb Antoine Pitrou: >> > On Wed, 20 Mar 2013 19:57:54 -0700 >> > Raymond Hettinger wrote: >> >> >> >> On Mar 20, 2013, at 12:38 PM, Barry Warsaw >> >> wrote: >> >> >> >> > Right. Ultimately, I think IDLE should be a separate project >> >> > entirely, but I guess there's push back against that too. >> >> >> >> The most important feature of IDLE is that it ships with the >> >> standard library. Everyone who clicks on the Windows MSI on the >> >> python.org webpage automatically has IDLE. That is why I >> >> frequently teach Python with IDLE. >> >> >> >> If this thread results in IDLE being ripped out of the standard >> >> distribution, then I would likely never use it again. >> > >> > Which says a lot about its usefulness, if the only reason you use >> > it is that it's bundled with the standard distribution. >> >> Just like a lot of the stdlib, it *gets* a lot of usefulness from >> being a battery. But just because there are better/more >> comprehensive/prettier replacements out there is not reason enough to >> remove standard libraries. > > That's a good point. I guess it's difficult for me to think of IDLE as > an actual library. You're right, "library" is not a good term, but "battery" certainly is. Georg From status at bugs.python.org Fri Mar 22 18:07:30 2013 From: status at bugs.python.org (Python tracker) Date: Fri, 22 Mar 2013 18:07:30 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20130322170730.9E9F3568E8@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2013-03-15 - 2013-03-22) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3908 (+20) closed 25389 (+73) total 29297 (+93) Open issues with patches: 1735 Issues opened (56) ================== #12098: Child process running as debug on Windows http://bugs.python.org/issue12098 reopened by kristjan.jonsson #17430: missed peephole optimization http://bugs.python.org/issue17430 opened by Neal.Norwitz #17432: PyUnicode_ functions not accessible in Limited API on Windows http://bugs.python.org/issue17432 opened by bdirks #17433: stdlib generator-like iterators don't forward send/throw http://bugs.python.org/issue17433 opened by twouters #17435: threading.Timer.__init__() should use immutable argument defau http://bugs.python.org/issue17435 opened by denversc #17436: hashlib: add a method to hash the content of a file http://bugs.python.org/issue17436 opened by techtonik #17437: Difference between open and codecs.open http://bugs.python.org/issue17437 opened by giampaolo.rodola #17438: json.load docs should mention that it always return unicode http://bugs.python.org/issue17438 opened by techtonik #17441: Do not cache re.compile http://bugs.python.org/issue17441 opened by serhiy.storchaka #17442: code.InteractiveInterpreter doesn't display the exception caus http://bugs.python.org/issue17442 opened by pjenvey #17444: multiprocessing.cpu_count() should use hw.availcpu on Mac OS X http://bugs.python.org/issue17444 opened by jszakmeister #17445: Handle bytes comparisons in difflib.Differ http://bugs.python.org/issue17445 opened by barry #17446: doctest test finder doesnt find line numbers of properties http://bugs.python.org/issue17446 opened by Ronny.Pfannschmidt #17447: str.identifier shouldn't accept Python keywords http://bugs.python.org/issue17447 opened by rhettinger #17449: dev guide appears not to cover the benchmarking suite http://bugs.python.org/issue17449 opened by dmalcolm #17453: logging.config.fileConfig error http://bugs.python.org/issue17453 opened by Alzakath #17454: ld_so_aix not used when linking c++ (scipy) http://bugs.python.org/issue17454 opened by alef #17457: Unittest discover fails with namespace packages and builtin mo http://bugs.python.org/issue17457 opened by Claudiu.Popa #17462: argparse FAQ: how it is different from optparse http://bugs.python.org/issue17462 opened by techtonik #17468: Generator memory leak http://bugs.python.org/issue17468 opened by Anssi.K????ri??inen #17469: Fix sys.getallocatedblocks() when running on valgrind http://bugs.python.org/issue17469 opened by piotr #17473: -m is not universally applicable http://bugs.python.org/issue17473 opened by Devin Jeanpierre #17475: Better doc on using python-gdb.py http://bugs.python.org/issue17475 opened by cben #17477: update the bsddb module do build with db 5.x versions http://bugs.python.org/issue17477 opened by doko #17478: Tkinter's split() inconsistent for bytes and unicode strings http://bugs.python.org/issue17478 opened by serhiy.storchaka #17479: Fix test discovery for test_io.py http://bugs.python.org/issue17479 opened by zach.ware #17480: pyvenv should be installed someplace more obvious on Windows http://bugs.python.org/issue17480 opened by jason.coombs #17481: inspect.getfullargspec could use __signature__ http://bugs.python.org/issue17481 opened by michael.foord #17482: functools.update_wrapper mishandles __wrapped__ http://bugs.python.org/issue17482 opened by ncoghlan #17483: Can not tell urlopen not to check the hostname for https conne http://bugs.python.org/issue17483 opened by dwoz #17484: add tests for getpass 
http://bugs.python.org/issue17484 opened by Thomas Fenzl #17486: datetime.timezone returns the wrong tzname() http://bugs.python.org/issue17486 opened by lregebro #17487: wave.Wave_read.getparams should be more user friendly http://bugs.python.org/issue17487 opened by Claudiu.Popa #17488: subprocess.Popen bufsize=0 parameter behaves differently in Py http://bugs.python.org/issue17488 opened by gregory.p.smith #17489: random.Random implements __getstate__() and __reduce__() http://bugs.python.org/issue17489 opened by vterron #17490: Improve ast.literal_eval test suite coverage http://bugs.python.org/issue17490 opened by ncoghlan #17491: Consolidate traceback.format_tb and traceback.print_tb http://bugs.python.org/issue17491 opened by raduv #17492: Increase test coverage for random (up to 99%) http://bugs.python.org/issue17492 opened by vterron #17496: OS X test for Tk availability in runtktests.py doesn't work http://bugs.python.org/issue17496 opened by alex #17498: error responses from server are masked in smtplib when server http://bugs.python.org/issue17498 opened by r.david.murray #17500: move PC/icons/source.xar to http://www.python.org/community/lo http://bugs.python.org/issue17500 opened by doko #17502: unittest.mock: side_effect iterators ignore DEFAULT http://bugs.python.org/issue17502 opened by michael.foord #17504: Dropping duplicated docstring explanation of what Mocks' side_ http://bugs.python.org/issue17504 opened by raduv #17505: email.header.Header.__unicode__ does not decode header http://bugs.python.org/issue17505 opened by hniksic #17506: Improve IDLE news handling http://bugs.python.org/issue17506 opened by terry.reedy #17507: To add history time format in readline http://bugs.python.org/issue17507 opened by Zulu #17510: assertEquals deprecated in test_program.py (unittest) http://bugs.python.org/issue17510 opened by grooverdan #17511: Idle find function closes after each find operation http://bugs.python.org/issue17511 opened by Kuchinsky #17512: backport of the _sysconfigdata.py module (issue 13150) breaks http://bugs.python.org/issue17512 opened by doko #17514: Add the license to argparse.py http://bugs.python.org/issue17514 opened by David.James #17515: Add sys.setasthook() to allow to use a custom AST optimizer http://bugs.python.org/issue17515 opened by haypo #17516: Dead code should be removed http://bugs.python.org/issue17516 opened by haypo #17518: urllib2 cannnot handle https and BasicAuth via Proxy. http://bugs.python.org/issue17518 opened by masato kawamura #17519: unittest should not try to run abstract classes http://bugs.python.org/issue17519 opened by ??ric.Piel #17521: fileConfig() disables any previously-used "named" loggers, eve http://bugs.python.org/issue17521 opened by Bob.Igo #17522: Add api PyGILState_Check http://bugs.python.org/issue17522 opened by kristjan.jonsson Most recent 15 issues with no replies (15) ========================================== #17522: Add api PyGILState_Check http://bugs.python.org/issue17522 #17521: fileConfig() disables any previously-used "named" loggers, eve http://bugs.python.org/issue17521 #17519: unittest should not try to run abstract classes http://bugs.python.org/issue17519 #17518: urllib2 cannnot handle https and BasicAuth via Proxy. 
http://bugs.python.org/issue17518 #17511: Idle find function closes after each find operation http://bugs.python.org/issue17511 #17510: assertEquals deprecated in test_program.py (unittest) http://bugs.python.org/issue17510 #17504: Dropping duplicated docstring explanation of what Mocks' side_ http://bugs.python.org/issue17504 #17502: unittest.mock: side_effect iterators ignore DEFAULT http://bugs.python.org/issue17502 #17492: Increase test coverage for random (up to 99%) http://bugs.python.org/issue17492 #17491: Consolidate traceback.format_tb and traceback.print_tb http://bugs.python.org/issue17491 #17488: subprocess.Popen bufsize=0 parameter behaves differently in Py http://bugs.python.org/issue17488 #17486: datetime.timezone returns the wrong tzname() http://bugs.python.org/issue17486 #17481: inspect.getfullargspec could use __signature__ http://bugs.python.org/issue17481 #17479: Fix test discovery for test_io.py http://bugs.python.org/issue17479 #17478: Tkinter's split() inconsistent for bytes and unicode strings http://bugs.python.org/issue17478 Most recent 15 issues waiting for review (15) ============================================= #17522: Add api PyGILState_Check http://bugs.python.org/issue17522 #17516: Dead code should be removed http://bugs.python.org/issue17516 #17515: Add sys.setasthook() to allow to use a custom AST optimizer http://bugs.python.org/issue17515 #17512: backport of the _sysconfigdata.py module (issue 13150) breaks http://bugs.python.org/issue17512 #17510: assertEquals deprecated in test_program.py (unittest) http://bugs.python.org/issue17510 #17504: Dropping duplicated docstring explanation of what Mocks' side_ http://bugs.python.org/issue17504 #17498: error responses from server are masked in smtplib when server http://bugs.python.org/issue17498 #17496: OS X test for Tk availability in runtktests.py doesn't work http://bugs.python.org/issue17496 #17492: Increase test coverage for random (up to 99%) http://bugs.python.org/issue17492 #17491: Consolidate traceback.format_tb and traceback.print_tb http://bugs.python.org/issue17491 #17490: Improve ast.literal_eval test suite coverage http://bugs.python.org/issue17490 #17487: wave.Wave_read.getparams should be more user friendly http://bugs.python.org/issue17487 #17484: add tests for getpass http://bugs.python.org/issue17484 #17483: Can not tell urlopen not to check the hostname for https conne http://bugs.python.org/issue17483 #17479: Fix test discovery for test_io.py http://bugs.python.org/issue17479 Top 10 most discussed issues (10) ================================= #17409: resource.setrlimit doesn't respect -1 http://bugs.python.org/issue17409 16 msgs #17445: Handle bytes comparisons in difflib.Differ http://bugs.python.org/issue17445 16 msgs #16475: Support object instancing and recursion in marshal http://bugs.python.org/issue16475 10 msgs #17206: Py_XDECREF() expands its argument multiple times http://bugs.python.org/issue17206 10 msgs #17429: platform.platform() can throw Unicode error http://bugs.python.org/issue17429 8 msgs #17444: multiprocessing.cpu_count() should use hw.availcpu on Mac OS X http://bugs.python.org/issue17444 8 msgs #17487: wave.Wave_read.getparams should be more user friendly http://bugs.python.org/issue17487 8 msgs #5051: test_update2 in test_os.py invalid due to os.environ.clear() f http://bugs.python.org/issue5051 7 msgs #10652: test___all_ + test_tcl fails (Windows installed binary) http://bugs.python.org/issue10652 7 msgs #17436: hashlib: add a method to hash the content of a file 
http://bugs.python.org/issue17436 7 msgs Issues closed (72) ================== #3840: if TESTFN == "/tmp/@test", some tests fail http://bugs.python.org/issue3840 closed by r.david.murray #5024: sndhdr.whathdr returns -1 for WAV file frame count http://bugs.python.org/issue5024 closed by r.david.murray #5045: imaplib should remove length of literal strings http://bugs.python.org/issue5045 closed by terry.reedy #5713: smtplib gets out of sync if server returns a 421 status http://bugs.python.org/issue5713 closed by r.david.murray #7720: Errors in tests and C implementation of raw FileIO http://bugs.python.org/issue7720 closed by r.david.murray #7898: rlcompleter add "real tab" when text is empty feature http://bugs.python.org/issue7898 closed by r.david.murray #8273: move generally useful test.support functions into the unittest http://bugs.python.org/issue8273 closed by michael.foord #8862: curses.wrapper does not restore terminal if curses.getkey() ge http://bugs.python.org/issue8862 closed by r.david.murray #8905: difflib should accept arbitrary line iterators http://bugs.python.org/issue8905 closed by terry.reedy #9506: sqlite3 mogrify - return query string http://bugs.python.org/issue9506 closed by r.david.murray #10050: urllib.request still has old 2.x urllib primitives http://bugs.python.org/issue10050 closed by orsenthil #10296: ctypes catches BreakPoint error on windows 32 http://bugs.python.org/issue10296 closed by kristjan.jonsson #11420: Make testsuite pass with -B/DONTWRITEBYTECODE set. http://bugs.python.org/issue11420 closed by ezio.melotti #15005: corrupted contents of stdout result from subprocess call under http://bugs.python.org/issue15005 closed by amaury.forgeotdarc #15038: Optimize python Locks on Windows http://bugs.python.org/issue15038 closed by kristjan.jonsson #15235: allow newer berkeley db versions http://bugs.python.org/issue15235 closed by doko #15927: csv.reader() does not support escaped newline when quoting=csv http://bugs.python.org/issue15927 closed by r.david.murray #16057: Subclasses of JSONEncoder should not be insturcted to call JSO http://bugs.python.org/issue16057 closed by r.david.murray #16709: unittest discover order is filesystem specific - hard to repro http://bugs.python.org/issue16709 closed by python-dev #16795: Patch: some changes to AST to make it more useful for static l http://bugs.python.org/issue16795 closed by python-dev #16997: subtests http://bugs.python.org/issue16997 closed by pitrou #17136: ctypes tests fail with clang on non-OS X http://bugs.python.org/issue17136 closed by benjamin.peterson #17192: libffi-3.0.13 import http://bugs.python.org/issue17192 closed by gregory.p.smith #17209: get_wch() doesn't handle KeyboardInterrupt http://bugs.python.org/issue17209 closed by haypo #17245: ctypes libffi needs to align the x86 stack to 16 bytes http://bugs.python.org/issue17245 closed by gregory.p.smith #17285: subprocess.check_output incorrectly state that output is alway http://bugs.python.org/issue17285 closed by gregory.p.smith #17398: document url argument of RobotFileParser http://bugs.python.org/issue17398 closed by terry.reedy #17415: Clarify docs of os.path.normpath() http://bugs.python.org/issue17415 closed by terry.reedy #17416: Clarify docs of os.walk() http://bugs.python.org/issue17416 closed by terry.reedy #17417: Documentation Modification Suggestion: os.walk, fwalk http://bugs.python.org/issue17417 closed by terry.reedy #17423: libffi on 32bit is broken on linux http://bugs.python.org/issue17423 closed by 
gregory.p.smith #17428: replace readdir to readdir_r in function posix_listdir http://bugs.python.org/issue17428 closed by Rock #17431: email.parser module has no attribute BytesFeedParser http://bugs.python.org/issue17431 closed by r.david.murray #17434: str literals, which are not docstrings, should not be allowed http://bugs.python.org/issue17434 closed by benjamin.peterson #17439: insufficient error message for failed unicode conversion http://bugs.python.org/issue17439 closed by r.david.murray #17440: Some IO related problems on x86 windows http://bugs.python.org/issue17440 closed by amaury.forgeotdarc #17443: imaplib.IMAP4_stream subprocess is opened unbuffered but ignor http://bugs.python.org/issue17443 closed by r.david.murray #17448: test_sax should skip when no xml parsers are found http://bugs.python.org/issue17448 closed by r.david.murray #17450: Failed to build _sqlite3 http://bugs.python.org/issue17450 closed by ezio.melotti #17451: Test to splitdoc in pydoc.py http://bugs.python.org/issue17451 closed by Matt.Bachmann #17452: ftplib raises exception if ssl module is not available http://bugs.python.org/issue17452 closed by giampaolo.rodola #17455: ImportError (xml.dom.minidom) in /usr/lib/python2.7/dist-packa http://bugs.python.org/issue17455 closed by christian.heimes #17456: os.py (unexpected character) http://bugs.python.org/issue17456 closed by christian.heimes #17458: Automatic type conversion from set to frozenset http://bugs.python.org/issue17458 closed by ezio.melotti #17459: unittest.assertItemsEqual reports wrong order http://bugs.python.org/issue17459 closed by Zr40 #17460: Remove the strict and related params completely removing the 0 http://bugs.python.org/issue17460 closed by orsenthil #17461: Carole should be Carol in PEP 396 http://bugs.python.org/issue17461 closed by benjamin.peterson #17463: Fix test discovery for test_pdb.py http://bugs.python.org/issue17463 closed by asvetlov #17464: Improve Test Coverage Of Pydoc http://bugs.python.org/issue17464 closed by r.david.murray #17465: Gut devinabox http://bugs.python.org/issue17465 closed by brett.cannon #17466: I can't make assignments to a list. http://bugs.python.org/issue17466 closed by mark.dickinson #17467: Enhancement: give mock_open readline() and readlines() methods http://bugs.python.org/issue17467 closed by python-dev #17470: random.choice should accept a set as input http://bugs.python.org/issue17470 closed by rhettinger #17471: Patch for Additional Test Coverage for urllib.error http://bugs.python.org/issue17471 closed by orsenthil #17472: Patch for Additional Test Coverage in urllib.parse http://bugs.python.org/issue17472 closed by r.david.murray #17474: Remove the deprecated methods of Request class http://bugs.python.org/issue17474 closed by orsenthil #17476: Pydoc allmethods does not return all methods http://bugs.python.org/issue17476 closed by r.david.murray #17485: Deleting Request data does not update Content-length header. 
http://bugs.python.org/issue17485 closed by r.david.murray #17494: References to stack bottom are confusing http://bugs.python.org/issue17494 closed by georg.brandl #17495: email.quoprimime.body_encode can't handle characters that enco http://bugs.python.org/issue17495 closed by rpatterson #17497: Unicode support for HTTP headers in http.client http://bugs.python.org/issue17497 closed by r.david.murray #17499: inspect.Signature and inspect.Parameter objects are mutable http://bugs.python.org/issue17499 closed by r.david.murray #17501: cannot create a raw string ending in backslash http://bugs.python.org/issue17501 closed by Thomas Fenzl #17503: replace mode is always on in console http://bugs.python.org/issue17503 closed by techtonik #17508: logging.config.ConvertingDict issue with MemoryHandler http://bugs.python.org/issue17508 closed by python-dev #17509: Incorrect package version predicate parsing by distutils http://bugs.python.org/issue17509 closed by ilja_o #17513: astrike(*) in argv http://bugs.python.org/issue17513 closed by r.david.murray #17517: StringIO() does not behave like cStringIO() when given an arra http://bugs.python.org/issue17517 closed by pitrou #17520: Except(ValueError) on Integer returns the input value as 1 http://bugs.python.org/issue17520 closed by amaury.forgeotdarc #1003195: segfault when running smtplib example http://bugs.python.org/issue1003195 closed by terry.reedy #17493: Unskip SysModuleTest.test_recursionlimit_fatalerror on Windows http://bugs.python.org/issue17493 closed by ezio.melotti #1691387: Call sys.except_hook if exception occurs in __del__ http://bugs.python.org/issue1691387 closed by r.david.murray From solipsis at pitrou.net Fri Mar 22 19:16:39 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 22 Mar 2013 19:16:39 +0100 Subject: [Python-Dev] cpython (2.7): Issue #17508: Handled out-of-order handler configuration correctly. References: <3ZXTKH6dCxzSJs@mail.python.org> Message-ID: <20130322191639.7e4b2665@pitrou.net> Hello, On Fri, 22 Mar 2013 16:28:19 +0100 (CET) vinay.sajip wrote: > http://hg.python.org/cpython/rev/8ae1c28445f8 > changeset: 82881:8ae1c28445f8 > branch: 2.7 > user: Vinay Sajip > date: Fri Mar 22 15:19:24 2013 +0000 > summary: > Issue #17508: Handled out-of-order handler configuration correctly. Could you explain what "out-of-order handler configuration" means? Also, could you add a Misc/NEWS entry for the change / bugfix? Thank you Antoine. From francismb at email.de Fri Mar 22 19:51:34 2013 From: francismb at email.de (francis) Date: Fri, 22 Mar 2013 19:51:34 +0100 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: References: Message-ID: <514CA836.8040009@email.de> > You can use idle from the command line almost as easily as the CP > interpreter: 'python -m idlelib' instead of just 'python' (I just > tried it to verify). Unlike bare 'python', IDLE includes a grep. Right > click on any 'hit' and it opens the file at the specified line. Unlike > bare 'python', you can run tests and collect the all the output, from > as many tests as you want, in a dynamically right-sized buffer. I'm just getting: ~$ python2.7 -m idlelib /usr/bin/python2.7: No module named idlelib.__main__; 'idlelib' is a package and cannot be directly executed Same with python3... ...but thank you for talking about IDLE (the possibility of reediting and using history it so easy that I'm going to give it a try at least when I'm on windows...) 
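A note on the "No module named idlelib.__main__" failure above: "python -m package" only works when the package provides a __main__ submodule, which idlelib did not have at that point, so "python -m idlelib.idle" is the spelling that works across versions. A minimal sketch of what an idlelib/__main__.py could look like follows; the body is an illustrative assumption, not the exact file that later CPython versions ship:

    # idlelib/__main__.py -- hypothetical sketch: lets "python -m idlelib"
    # start IDLE by delegating to the existing PyShell entry point.
    import idlelib.PyShell

    idlelib.PyShell.main()

With such a file in place, "python -m idlelib" and "python -m idlelib.idle" would behave the same.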
From dreamingforward at gmail.com Fri Mar 22 21:22:38 2013 From: dreamingforward at gmail.com (Mark Janssen) Date: Fri, 22 Mar 2013 13:22:38 -0700 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130322104846.28b4eac9@pitrou.net> References: <20130320180942.20A6A2500B3@webabinitio.net> <20130320100907.22a24586@anarchist> <20130320123833.14886a86@anarchist> <8632CEBE-50A9-414C-8E43-B53F427DB8D6@gmail.com> <20130321191300.4ffe831f@pitrou.net> <20130322104846.28b4eac9@pitrou.net> Message-ID: On Fri, Mar 22, 2013 at 2:48 AM, Antoine Pitrou wrote: > Le Thu, 21 Mar 2013 21:38:41 +0100, > Georg Brandl a ?crit : > > > Am 21.03.2013 19:13, schrieb Antoine Pitrou: > > > On Wed, 20 Mar 2013 19:57:54 -0700 > > > Raymond Hettinger wrote: > > >> > > >> On Mar 20, 2013, at 12:38 PM, Barry Warsaw > > >> wrote: > > >> > > >> > Right. Ultimately, I think IDLE should be a separate project > > >> > entirely, but I guess there's push back against that too. > > >> > > >> The most important feature of IDLE is that it ships with the > > >> standard library. Everyone who clicks on the Windows MSI on the > > >> python.org webpage automatically has IDLE. That is why I > > >> frequently teach Python with IDLE. > > >> > > >> If this thread results in IDLE being ripped out of the standard > > >> distribution, then I would likely never use it again. > > > > > > Which says a lot about its usefulness, if the only reason you use > > > it is that it's bundled with the standard distribution. > > > > Just like a lot of the stdlib, it *gets* a lot of usefulness from > > being a battery. But just because there are better/more > > comprehensive/prettier replacements out there is not reason enough to > > remove standard libraries. > > That's a good point. I guess it's difficult for me to think of IDLE as > an actual library. > > It's not a library. It's an application that is bundled in the standard distribution. Mark Tacoma, Washington. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rz1991 at foxmail.com Fri Mar 22 22:34:44 2013 From: rz1991 at foxmail.com (=?gb18030?B?yO7vow==?=) Date: Sat, 23 Mar 2013 05:34:44 +0800 Subject: [Python-Dev] Interested in GSoC for biopython Message-ID: Hi, I'm Zheng from the University of Georgia. I heard about the GSoC several weeks before and found biopython also involved in the peoject. I plan to apply for the GSoC 2013, hoping to make some contributions this summer. I browsed the proposals in biopython wiki sites and find two of them are all interesting topics, especially the codon alignment functionality. I know this has been implemented by pal2nal, and pal2nal is a good program as it accounts for the mismatches between protein and DNA sequences. However, it may raise an error when the protein sequence contains * indicating a stop codon, which is typical when the sequence is translated from genomic DNA. Maybe I could write a python implementation that relax this requirement. Many interesting statistical tests based on codon alignment can also be implemented. As I am new to this group, can anyone give me some suggestions about what I could do while preparing my proposal? Do I need to read the souce code of some major classes in BioPython to better understand how it works as well as the programming style? Thanks. Best, Zheng Ruan Institute of Bioinformatics The University of Georgia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oscar.j.benjamin at gmail.com Fri Mar 22 22:43:35 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Fri, 22 Mar 2013 21:43:35 +0000 Subject: [Python-Dev] Interested in GSoC for biopython In-Reply-To: References: Message-ID: On 22 March 2013 21:34, ?? wrote: > Hi, > > I'm Zheng from the University of Georgia. I heard about the GSoC several > weeks before and found biopython also involved in the peoject. I plan to > apply for the GSoC 2013, hoping to make some contributions this summer. [SNIP] This mailing list is for development of Python, not biopython. Try asking on one of the biopython mailing lists: http://biopython.org/wiki/Mailing_lists Oscar From kbk at shore.net Sat Mar 23 00:59:18 2013 From: kbk at shore.net (Kurt B. Kaiser) Date: Fri, 22 Mar 2013 19:59:18 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <20130321103750.311cb6f2@pitrou.net> References: <1363819695.3885.140661207062389.589C7405@webmail.messagingengine.com> <20130321103750.311cb6f2@pitrou.net> Message-ID: <1363996758.25015.140661208001405.3E2993BA@webmail.messagingengine.com> On Thu, Mar 21, 2013, at 05:37 AM, Antoine Pitrou wrote: > Le Wed, 20 Mar 2013 18:48:15 -0400, "Kurt B. Kaiser" > a ?crit : > > > > IDLE has a single keystroke round trip - it's an IDE, not just an > > editor like Sublime Text or Notepad. In the 21st century, people > > expect some sort of IDE. Or, they should! > > I don't think I've used an IDE in years (not seriously anyway). If you haven't used IDLE lately, you might want to try it. > I also don't think beginners "expect some sort of IDE", since they > don't know what it is. They probably don't even expect a text editor > at first. Well, they will feel the need in less than a day, IMHO. These days, beginning users are accustomed to a GUI that "does something", not a command line, it seems. Right, they don't know what they need, at first. We should provide an interface that, in our experience, meets a beginner's needs. > > > I'd also like to make a plea to keep IDLE's interface clean and > > basic. There are lots of complex IDEs available for those who want > > them. It's natural for developers to add features, that's what they > > do :-), but you don't hand a novice a Ferrari (or emacs) and expect > > good results. > > What is the point of an IDE without features? None. But IDLE has plenty of features - and minimum clutter. It also works very well on small screens. > Also, this is touching another issue: IDLE needs active maintainers, > who will obviously be experienced Python developers. But if they are > experienced Python developers, they will certainly want the additional > features, otherwise's they'll stop using and maintaining IDLE. > > In other words, if IDLE were actually usable *and* pleasant for > experienced developers, I'm sure more developers would be motivated to > improve and maintain it. That's not the target audience for IDLE. There are many great IDEs for "experienced" developers. It's not my objective to turn IDLE into PyCharm, just to keep some developers motivated. Good design satisfies the target audience - IMHO, we should be working towards the best possible beginner Python interface on Windows, Mac, and Raspberry Pi. To get this done, we need IDLE developers who are interested in supporting beginners. Not so much developers who are interested in adding complex features for their more advanced usage. The complex IDE space is packed - it doesn't need another entry. OTOH, there are few simple IDEs like IDLE. It's a good niche to be in. 
And, yes, getting the IDLE developers to use IDLE is important - I do so most of the time (emacs for the rest :). That helps to discover IDLE and tkinter bugs, and occasionally exposes the need for a missing feature. > > > It's sometimes said that IDLE is "ugly" or "broken". These terms > > are subjective! > > Subjective statements are not baseless and idiotic. Please don't put words in my mouth. I only said those terms are subjective, which they are. > They come from the experience of people actually wanting to like a > piece of software, you shouldn't discard them at face value. Please don't allege actions I haven't taken! I agree entirely with you - one has to dig deeper to extract some constructive criticism, if it is available. Often, you can't address one person's idea of ugly or broken without raising those issues with another person. -- KBK From vinay_sajip at yahoo.co.uk Sat Mar 23 02:08:26 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 23 Mar 2013 01:08:26 +0000 (UTC) Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows References: Message-ID: Paul Moore gmail.com> writes: > I don't understand what this is saying - can someone clarify the > reason behind this statement? What is different about a > "non-system-wide installation" that causes this issue (I assume > "non-system-wide" means "not All Users")? The reason I ask is that > virtualenv doesn't do this, and I'm not clear if this is because of a > potential bug lurking in virtualenv (in which case, I'd like to find > out how to reproduce it) or because virtualenv takes a different > approach which avoids this issue somehow. One example of a non-system-wide installation is a source build of Python. PEP 405 venvs created from a source build should work in the same way as venvs created using an installed Python. Regards, Vinay Sajip From fijall at gmail.com Sat Mar 23 05:35:37 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 22 Mar 2013 21:35:37 -0700 Subject: [Python-Dev] Increase the code coverage of "OS" module In-Reply-To: References: Message-ID: On Fri, Mar 22, 2013 at 2:28 AM, rakesh karanth wrote: > Hi python-dev, > > I'm interested in increasing the code coverage of the Python stdlib library > "OS" > Can some one who is already working on this or on a similar issue enlighten > me on this? > > Thanks in advance. Hey You can check out pypy os tests, we cover a bit more than CPython. (it's py.test based-though you might need to adapt it) From vinay_sajip at yahoo.co.uk Sat Mar 23 10:38:44 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 23 Mar 2013 09:38:44 +0000 (UTC) Subject: [Python-Dev] cpython (2.7): Issue #17508: Handled out-of-order handler configuration correctly. References: <3ZXTKH6dCxzSJs@mail.python.org> <20130322191639.7e4b2665@pitrou.net> Message-ID: Antoine Pitrou pitrou.net> writes: >> Issue #17508: Handled out-of-order handler configuration correctly. > Could you explain what "out-of-order handler configuration" means? In logging, a MemoryHandler buffers records and has a reference to another handler - the "target" - which does the actual output of the buffered records. It can happen when using dictConfig() that the MemoryHandler is configured before the target handler, and in this case the reference to the target wasn't set up correctly. The commit rectifies this. > Also, could you add a Misc/NEWS entry for the change / bugfix? Ok, will do. 
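A minimal sketch of the out-of-order case just described, assuming a dictConfig() that resolves the MemoryHandler's 'target' by handler name (which is what the fix makes order-independent); the handler names, capacity and level below are illustrative assumptions, not taken from the issue:

    import logging
    import logging.config

    CONFIG = {
        'version': 1,
        'handlers': {
            # The memory handler refers to its target by name; nothing
            # guarantees that 'console' is configured before 'memory',
            # which is the ordering problem being discussed.
            'memory': {
                'class': 'logging.handlers.MemoryHandler',
                'capacity': 100,
                'target': 'console',
            },
            'console': {
                'class': 'logging.StreamHandler',
            },
        },
        'root': {'level': 'DEBUG', 'handlers': ['memory']},
    }

    logging.config.dictConfig(CONFIG)
    logging.getLogger('example').debug('buffered, then flushed to the console handler')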
Regards, Vinay Sajip From p.f.moore at gmail.com Sat Mar 23 11:06:46 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 23 Mar 2013 10:06:46 +0000 Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows In-Reply-To: References: Message-ID: On 23 March 2013 01:08, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> I don't understand what this is saying - can someone clarify the >> reason behind this statement? What is different about a >> "non-system-wide installation" that causes this issue (I assume >> "non-system-wide" means "not All Users")? The reason I ask is that >> virtualenv doesn't do this, and I'm not clear if this is because of a >> potential bug lurking in virtualenv (in which case, I'd like to find >> out how to reproduce it) or because virtualenv takes a different >> approach which avoids this issue somehow. > > One example of a non-system-wide installation is a source build of Python. > PEP 405 venvs created from a source build should work in the same way as venvs > created using an installed Python. Thanks. I hadn't thought of that case. However, I'm still not entirely clear *why* the DLLs need to be copied. I'll set up a source build and test virtualenv against it to see if it fails. Assuming it does, I should be able to work out what the issue is from that. Paul From stefan.bucur at gmail.com Sat Mar 23 12:05:13 2013 From: stefan.bucur at gmail.com (Stefan Bucur) Date: Sat, 23 Mar 2013 12:05:13 +0100 Subject: [Python-Dev] Are undocumented exceptions considered bugs? Message-ID: Hi, I'm not sure this is the right place to ask this question, but I thought I'd give it a shot since it also concerns the Python standard library. I'm writing an automated test case generation tool for Python programs that explores all possible execution paths through a program. When applying this tool on Python's 2.7.3 urllib package, it discovered input strings for which the urllib.urlopen(url) call would raise a TypeError. For instance: urllib.urlopen('\x00\x00\x00') [...] File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 86, in urlopen return opener.open(url) File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 207, in open return getattr(self, name)(url) File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 462, in open_file return self.open_local_file(url) File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 474, in open_local_file stats = os.stat(localname) TypeError: must be encoded string without NULL bytes, not str In the urllib documentation it is only mentioned that the IOError is raised when the connection cannot be established. Since the input passed is a string (and not some other type), is the TypeError considered a bug (either in the documentation, or in the implementation)? Thanks a lot, Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From shibturn at gmail.com Sat Mar 23 13:57:02 2013 From: shibturn at gmail.com (Richard Oudkerk) Date: Sat, 23 Mar 2013 12:57:02 +0000 Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows In-Reply-To: References: Message-ID: On 23/03/2013 10:06am, Paul Moore wrote: >> One example of a non-system-wide installation is a source build of Python. >> PEP 405 venvs created from a source build should work in the same way as venvs >> created using an installed Python. > > Thanks. I hadn't thought of that case. However, I'm still not entirely > clear *why* the DLLs need to be copied. 
I'll set up a source build and > test virtualenv against it to see if it fails. Assuming it does, I > should be able to work out what the issue is from that. Also, couldn't hard links be used instead of copying? (This will fail if not on the same NTFS partition, but then one can copy as a fallback.) -- Richard From solipsis at pitrou.net Sat Mar 23 13:55:41 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 23 Mar 2013 13:55:41 +0100 Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows References: Message-ID: <20130323135541.75c3cda6@pitrou.net> On Sat, 23 Mar 2013 12:57:02 +0000 Richard Oudkerk wrote: > On 23/03/2013 10:06am, Paul Moore wrote: > >> One example of a non-system-wide installation is a source build of Python. > >> PEP 405 venvs created from a source build should work in the same way as venvs > >> created using an installed Python. > > > > Thanks. I hadn't thought of that case. However, I'm still not entirely > > clear *why* the DLLs need to be copied. I'll set up a source build and > > test virtualenv against it to see if it fails. Assuming it does, I > > should be able to work out what the issue is from that. > > Also, couldn't hard links be used instead of copying? (This will fail > if not on the same NTFS partition, but then one can copy as a fallback.) Hard links are generally hard to discover and debug (at least under Unix, but I suppose the same applies under Windows). Regards Antoine. From p.f.moore at gmail.com Sat Mar 23 14:51:41 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 23 Mar 2013 13:51:41 +0000 Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows In-Reply-To: <20130323135541.75c3cda6@pitrou.net> References: <20130323135541.75c3cda6@pitrou.net> Message-ID: On 23 March 2013 12:55, Antoine Pitrou wrote: > On Sat, 23 Mar 2013 12:57:02 +0000 > Richard Oudkerk wrote: > >> On 23/03/2013 10:06am, Paul Moore wrote: >> >> One example of a non-system-wide installation is a source build of Python. >> >> PEP 405 venvs created from a source build should work in the same way as venvs >> >> created using an installed Python. >> > >> > Thanks. I hadn't thought of that case. However, I'm still not entirely >> > clear *why* the DLLs need to be copied. I'll set up a source build and >> > test virtualenv against it to see if it fails. Assuming it does, I >> > should be able to work out what the issue is from that. >> >> Also, couldn't hard links be used instead of copying? (This will fail >> if not on the same NTFS partition, but then one can copy as a fallback.) > > Hard links are generally hard to discover and debug (at least under > Unix, but I suppose the same applies under Windows). Yes. And links in general are less common, and so more of a surprise, on Windows as well. Paul From ncoghlan at gmail.com Sat Mar 23 16:21:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 23 Mar 2013 08:21:53 -0700 Subject: [Python-Dev] Are undocumented exceptions considered bugs? In-Reply-To: References: Message-ID: On Sat, Mar 23, 2013 at 4:05 AM, Stefan Bucur wrote: > Hi, > > I'm not sure this is the right place to ask this question, but I thought I'd > give it a shot since it also concerns the Python standard library. It's the right place to ask :) > I'm writing an automated test case generation tool for Python programs that > explores all possible execution paths through a program. 
When applying this > tool on Python's 2.7.3 urllib package, it discovered input strings for which > the urllib.urlopen(url) call would raise a TypeError. That sounds like a really interesting tool. > For instance: > > urllib.urlopen('\x00\x00\x00') > > [...] > File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 86, in > urlopen > return opener.open(url) > File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 207, in > open > return getattr(self, name)(url) > File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 462, in > open_file > return self.open_local_file(url) > File "/home/bucur/onion/python-bin/lib/python2.7/urllib.py", line 474, in > open_local_file > stats = os.stat(localname) > TypeError: must be encoded string without NULL bytes, not str > > In the urllib documentation it is only mentioned that the IOError is raised > when the connection cannot be established. Since the input passed is a > string (and not some other type), is the TypeError considered a bug (either > in the documentation, or in the implementation)? The general answer is that there are certain exceptions that usually aren't documented because almost all code can trigger them if you pass the right kind of invalid argument. For example, almost any API can emit TypeError or AttributeError if you pass an instance of the wrong type, and many can emit ValueError, IndexError or KeyError if you pass an incorrect value. Other errors like SyntaxError, ImportError, NameError and UnboundLocalError usually indicate bugs or environmental configuration issues, so are also typically omitted when documenting the possible exceptions for particular APIs. In this specific case, the error message is confusing-but-not-really-wrong, due to the "two-types-in-one" nature of Python 2.x strings - 8-bit strings are used as both text sequences (generally not containing NUL characters) and also as arbitrary binary data, including encoded text (quite likely to contain NUL bytes). I think a bug report for this would be appropriate, with the aim of making that error message less confusing (it's a fairly obscure case, though). Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From luca.sbardella at gmail.com Sat Mar 23 20:05:32 2013 From: luca.sbardella at gmail.com (Luca Sbardella) Date: Sat, 23 Mar 2013 19:05:32 +0000 Subject: [Python-Dev] wsgi validator with asynchronous handlers/servers Message-ID: Hi, I have an asynchronous wsgi application handler which yields empty bytes before it is ready to yield the response body and, importantly, to call start_response. Something like this: def wsgi_handler(environ, start_response): body = generate_body(environ) body = maybe_async(body) while is_async(body): yield b'' start_response(...) ... I started using wsgiref.validator recently, nice little gem in the standard lib, and I discovered that the above handler does not validate! Disaster. Reading pep 3333 "the application *must* invoke the start_response() callable before the iterable yields its first body bytestring, so that the server can send the headers before any body content. However, this invocation *may* be performed by the iterable's first iteration, so servers *must not* assume that start_response() has been called before they begin iterating over the iterable." The pseudocode above does yields bytes before start_response, but they are not *body* bytes, they are empty bytes so that the asynchronous wsgi server releases the eventloop and call back at the next eventloop iteration. 
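A concrete sketch of the handler shape under discussion: PEP 3333, as quoted, wants start_response() invoked before the first bytestring is yielded, and (as confirmed later in the thread) the validator counts even an empty bytestring, so the only form it accepts calls start_response() up front. The helpers below are hypothetical stand-ins for the generate_body(), maybe_async() and is_async() names from the pseudocode, and the fixed status and headers are an assumption that only holds when they are known before the body is ready:

    from wsgiref.validate import validator

    def generate_body(environ):   # hypothetical stand-in for the real body generator
        return [b'Hello, world\n']

    def maybe_async(body):        # hypothetical stand-in: returns the body unchanged
        return body

    def is_async(body):           # hypothetical stand-in: pretend the body is ready
        return False

    @validator
    def wsgi_handler(environ, start_response):
        # Headers are sent before the iterable yields anything, even b''.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        body = maybe_async(generate_body(environ))
        while is_async(body):
            yield b''             # release the event loop; headers already sent
        for chunk in body:
            yield chunk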
Am I misinterpreting the PEP, or should the wsgi validator be fixed accordingly? Regards, Luca -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Sat Mar 23 20:19:40 2013 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 23 Mar 2013 14:19:40 -0500 Subject: [Python-Dev] wsgi validator with asynchronous handlers/servers In-Reply-To: References: Message-ID: Hi, The people who best understand WSGI are to be found on the Web-SIG: http://mail.python.org/mailman/listinfo/web-sig 2013/3/23 Luca Sbardella : > Hi, > > I have an asynchronous wsgi application handler which yields empty bytes > before it is ready to yield the response body and, importantly, to call > start_response. > > Something like this: > > def wsgi_handler(environ, start_response): > body = generate_body(environ) > body = maybe_async(body) > while is_async(body): > yield b'' > start_response(...) > ... > > I started using wsgiref.validator recently, nice little gem in the standard > lib, and I discovered that the above handler does not validate! Disaster. > Reading pep 3333 > > "the application must invoke the start_response() callable before the > iterable yields its first body bytestring, so that the server can send the > headers before any body content. However, this invocation may be performed > by the iterable's first iteration, so servers must not assume that > start_response() has been called before they begin iterating over the > iterable." > > The pseudocode above does yields bytes before start_response, but they are > not *body* bytes, they are empty bytes so that the asynchronous wsgi server > releases the eventloop and call back at the next eventloop iteration. > > Am I misinterpreting the PEP, or should the wsgi validator be fixed > accordingly? > > Regards, > Luca > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/benjamin%40python.org > -- Regards, Benjamin From tim.delaney at aptare.com Sat Mar 23 20:54:40 2013 From: tim.delaney at aptare.com (Tim Delaney) Date: Sun, 24 Mar 2013 06:54:40 +1100 Subject: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows In-Reply-To: <20130323135541.75c3cda6@pitrou.net> References: <20130323135541.75c3cda6@pitrou.net> Message-ID: On 23 March 2013 23:55, Antoine Pitrou wrote: > On Sat, 23 Mar 2013 12:57:02 +0000 > Richard Oudkerk wrote: > > > Also, couldn't hard links be used instead of copying? (This will fail > > if not on the same NTFS partition, but then one can copy as a fallback.) > > Hard links are generally hard to discover and debug (at least under > Unix, but I suppose the same applies under Windows). > (Slightly OT, but I think useful in this case.) That's what the Link Shell Extension < http://schinagl.priv.at/nt/hardlinkshellext/hardlinkshellext.html> is for. Makes it very easy to work with Hardlinks, Symbolic links, Junctions and Volume Mountpoints. It gives different overlays for each to icons in Explorer (and Save/Open dialogs) and adds a tab to the properties of any link which gives details e.g. for hardlinks it displays the reference count and all the hardlinks to the same file. There's also a command-line version - ln < http://schinagl.priv.at/nt/ln/ln.html>. Highly recommended. Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pje at telecommunity.com Sat Mar 23 21:27:31 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 23 Mar 2013 16:27:31 -0400 Subject: [Python-Dev] wsgi validator with asynchronous handlers/servers In-Reply-To: References: Message-ID: On Sat, Mar 23, 2013 at 3:05 PM, Luca Sbardella wrote: > The pseudocode above does yields bytes before start_response, but they are > not *body* bytes, they are empty bytes so that the asynchronous wsgi server > releases the eventloop and call back at the next eventloop iteration. > > I'm I misinterpreting the pep, or the wsgi validator should be fixed > accordingly? The validator is correct for the spec. You *must* call start_response() before yielding any strings at all. From jeanpierreda at gmail.com Sun Mar 24 02:09:50 2013 From: jeanpierreda at gmail.com (Devin Jeanpierre) Date: Sat, 23 Mar 2013 21:09:50 -0400 Subject: [Python-Dev] Are undocumented exceptions considered bugs? In-Reply-To: References: Message-ID: On Sat, Mar 23, 2013 at 11:21 AM, Nick Coghlan wrote: > In this specific case, the error message is > confusing-but-not-really-wrong, due to the "two-types-in-one" nature > of Python 2.x strings - 8-bit strings are used as both text sequences > (generally not containing NUL characters) and also as arbitrary binary > data, including encoded text (quite likely to contain NUL bytes). With your terminology, three types: char, non-NUL-text, encoded-text (e.g. what happens with ord('ab')) That's pretty silly, considering that these are all one Python type, and TypeError is raised into Python code. Obviously it can't change, because of historical reasons, but documenting it would be straightforward and helpful. These are not errors you can just infer will happen, you need to see it via documentation, reading the source, or experimentation (and re experimentation, then you have to establish whether or not this was an accident or deliberate). -- Devin From benjamin at python.org Mon Mar 25 01:30:06 2013 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 24 Mar 2013 20:30:06 -0400 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 Message-ID: I'm happy to announce the first release candidate of 2.7.4. 2.7.4 will be the latest maintenance release in the Python 2.7 series. It includes hundreds of bugfixes to the core language and standard library. There has recently been a lot of discussion about XML-based denial of service attacks. Specifically, certain XML files can cause XML parsers, including ones in the Python stdlib, to consume gigabytes of RAM and swamp the CPU. 2.7.4 does not include any changes in Python XML code to address these issues. Interested parties should examine the defusedxml package on PyPI: https://pypi.python.org/pypi/defusedxml 2.7.4 release candidate 1 is a testing release. Deploying it in production is not recommended. However, please download it and test with your libraries and applications, reporting any bugs you may find. Assuming no horrible bugs rear their heads, a final release of 2.7.4 will occur in 2 weeks. 
Downloads are at http://python.org/download/releases/2.7.4/ As always, please report bugs to http://bugs.python.org/ Enjoy, Benjamin Peterson 2.7 Release Manager From rovitotv at gmail.com Mon Mar 25 20:30:59 2013 From: rovitotv at gmail.com (Todd Rovito) Date: Mon, 25 Mar 2013 15:30:59 -0400 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 Message-ID: Python 2.7.4 release candidate was just announced and is ready for testing here: http://python.org/download/releases/2.7.4/ Now the clock is ticking and we have two weeks to get IDLE issues pushed into CPython before the final release of 2.7.4. Below is my list of low-risk issues that would be great to get pushed into CPython. I hope this email will encourage CPython Core Developers to make the commits or tell us what we can do, like more testing or better documentation, to get these issues cleaned up and committed before 2.7.4 ships. PEP-434 is still being discussed, I asked PEP authors to post latest version on 3/24, but I think most folks agree with the principle. IDLE might have more issues that could be fixed easily so feel free to add to the list. Thanks for everybody's hard work to make IDLE better!!!!! http://bugs.python.org/issue7136 Idle File Menu Option Improvement http://bugs.python.org/issue17390 display python version on idle title bar http://bugs.python.org/issue17511 Idle find function closes after each find operation http://bugs.python.org/issue6699 IDLE: Warn user about overwriting a file that has a newer version on filesystem http://bugs.python.org/issue10747 Include version info in Windows shortcuts; the issue is for 3.x but perhaps simple to implement for 2.7? The issue below has been talked about in the recent past on idle-dev (http://mail.python.org/pipermail/idle-dev/2013-March/003228.html); it will take some work to get committed because of the deletion of the Option menu on the Mac. But even if it gets committed and then disabled on the Mac with a simple sys.platform check, it would be a huge victory for IDLE in my mind. http://bugs.python.org/issue2704 IDLE: Patch to make PyShell behave more like a Terminal interface From benjamin at python.org Mon Mar 25 21:40:33 2013 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 25 Mar 2013 16:40:33 -0400 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: References: Message-ID: 2013/3/25 Todd Rovito : > Python 2.7.4 release candidate was just announced and is ready for testing here: > http://python.org/download/releases/2.7.4/ > > Now the clock is ticking and we have two weeks to get IDLE issues > pushed into CPython before the final release of 2.7.4. I'm afraid the ship has sailed on that. After a bugfix release candidate, I only take patches that fix regressions from earlier releases of the same series. (The goal is to not make any changes between the release candidate and final release.) Assuming PEP 343 becomes policy, IDLE changes can land for 2.7.5, which will be in approximately 6 months. -- Regards, Benjamin From victor.stinner at gmail.com Mon Mar 25 22:16:19 2013 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 25 Mar 2013 22:16:19 +0100 Subject: [Python-Dev] Can we triple quoted string as a comment? Message-ID: Hi, I just realized that the Python peephole optimizer removes useless instructions like numbers and strings between other instructions, without raising an error nor emitting an error.
Example: $ python -Wd -c 'print "Hello"; "World"' Hello As part of my astoptimizer project, I wrote a function to detect such useless instructions which emit a warning. I opened the following issue to report what I found: http://bugs.python.org/issue17516 Different modules use long strings as comments. What is the "official" policy about such strings? Should we use strings or comments? (IMO a comment should be used instead.) Victor From greg at krypto.org Mon Mar 25 22:22:39 2013 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 25 Mar 2013 14:22:39 -0700 Subject: [Python-Dev] Can we triple quoted string as a comment? In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 2:16 PM, Victor Stinner wrote: > Hi, > > I just realized that the Python peephole optimizer removes useless > instructions like numbers and strings between other instructions, > without raising an error nor emiting an error. Example: > > $ python -Wd -c 'print "Hello"; "World"' > Hello > > As part of my astoptimizer project, I wrote a function to detect such > useless instructions which emit a warning. I opened the following > issue to report what I found: > http://bugs.python.org/issue17516 > > Different modules use long strings as comments. What is the "official" > policy about such strings? Should we use strings or comments? > Comments. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at acm.org Mon Mar 25 22:31:16 2013 From: nad at acm.org (Ned Deily) Date: Mon, 25 Mar 2013 14:31:16 -0700 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 References: Message-ID: In article , Todd Rovito wrote: > Now the clock is ticking and we have two weeks to get IDLE issues > pushed into CPython before the final release of 2.7.4. Below is my > list of low risk issues that would be great to get pushed into > CPython. I hope this email will encourage CPython Core Developers to > make the commits or tell us what we can do, like more testing or > better documentation, to get these issues cleaned up and committed > before 2.7.4 ships. Todd, Thanks for your suggestions and work to improve IDLE and Python, in general. There has been a lot of discussions recently, including your PEP proposal. I have not commented on the discussions yet, though I plan to in the next few days. Unfortunately, it comes at one of the busiest times for those of us working on releases. Not only is the 2.7.4 release in preparation but 3.2.4rc1 and 3.3.0.rc1 are about to be announced, all on similar schedules. So just a few comments from my release team perspective. No doubt, others may have other opinions. Bugfix releases, like these three, follow an abbreviated version of the full feature release process, outlined in the developer's guide: http://docs.python.org/devguide/devcycle.html#stages For bugfix releases, we typically skip alphas and betas and go right to the release candidate stage, under the assumption that the criteria used for commits added to bugfix releases are designed to avoid incompatible changes and new features, unless explicitly approved for good reasons. That means that a release candidate is meant to be just that: the final set of bits that will be released. All of us involved in software development have our own war stories of how some little last-minute change caused some unexpected breakage. 
So the normal expectation is that, if any change is accepted and cherry-picked after a release candidate has been published, a new RC cycle will need to happen unless the release manager decides the change is trivial enough that the risk is truly minimal, e.g. something like an obvious typo or a doc change. Certainly the changes proposed here would not normally fit those criteria. Also, before the changes could be considered to be cherry-picked for a release, they need to be applied to the branches first and given some amount of testing, preferably on all of the major platforms where we support IDLE: X11, Windows, OS X. So that's what needs to happen next. There are various developers who have been applying IDLE fixes and now Roger is able to do so as well (yay!). Once they are in, then the question of release becomes relevant. There are a couple of possible scenarios I can see. 1. It's possible that problems will show up in one or more of the current RCs, necessitating another RC cycle, at which time the release manager(s) *might* be amenable to cherrypicking one or more fixes from the current branches. 2. It's also possible (probable, I hope) that 2.7.5 and/or 3.3.2 will follow relatively quickly after 2.7.4 and 3.3.1. (3.2.4 is expected to be the final 3.2.x bugfix release before it enters security-only fix mode.) The period since the last maint releases for 2.7.x and 3.2.x was unusually long, for various good reasons, so there are about a year's worth of fixes going out for them this time, thereby raising the likelihood that new problems will be found requiring a fix in a new bugfix release. Plus there are some security issues that need a final resolution in a release. So, I'm hopeful that we won't have to wait nearly as long to see 2.7.5 after 2.7.4. There's not as long a gap since 3.3.0 but still somewhat long for a first bugfix release. BTW, there is a fair amount of activity that goes on somewhat behind the scenes with getting releases out-the-door. There a number of release artifacts that need to be produced and tested, webpages that need to be updated, announcements sent, etc. For example, for OS X, we normally release two variants of installers for each beta, rc, and final release. Between the two variants, we support 13 different architecture/os-release combinations (only 11 for 3.3.x). That means, at the moment, we have 37 different combinations we could test (including those for 2.7.4rc1, 3.2.4rc1, and 3.3.1rc1). I don't personally test every one of them but I do run the Python tests on a representative sample (various OS levels vs. ppc/Intel-32/Intel-64) of configurations, including at least very minimal manual tests of IDLE to cover things like different versions of Tcl/Tk we support on OS X and current fixes like the recent infamous preferences panel crash. Then there's the time required to investigate and writeup test failures, and decide on a fix strategy (e.g. is it a release blocker?). Similar things happen for the WIndows installers and for the main source packages. All of these things are part of what goes in to having a good batteries-included experience for our users, including IDLE users. We all work on Python releases because we want to do it and we like to do it. That doesn't mean the process is cast-in-concrete or can't be improved. But there is a cost involved in producing releases that isn't always readily apparent. Specially for IDLE, I think there are a number of things we can do to improve and speed its development process. 
As noted, I'll have some concrete suggestions soon but, right now, I should get back to the release candidates at hand. In the meantime, let's see what we can do to get those patches checked in, documented, and tested. -- Ned Deily, nad at acm.org From glyph at twistedmatrix.com Mon Mar 25 23:57:53 2013 From: glyph at twistedmatrix.com (Glyph) Date: Mon, 25 Mar 2013 15:57:53 -0700 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: References: Message-ID: <4D28596D-C465-47E7-B568-CD90B1ADFA5D@twistedmatrix.com> On Mar 25, 2013, at 1:40 PM, Benjamin Peterson wrote: > ... Assuming PEP 343 becomes policy ... Are you sure you got this PEP number right? The 'with' statement? -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Tue Mar 26 00:04:51 2013 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 25 Mar 2013 19:04:51 -0400 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: <4D28596D-C465-47E7-B568-CD90B1ADFA5D@twistedmatrix.com> References: <4D28596D-C465-47E7-B568-CD90B1ADFA5D@twistedmatrix.com> Message-ID: 2013/3/25 Glyph : > > On Mar 25, 2013, at 1:40 PM, Benjamin Peterson wrote: > > ... Assuming PEP 343 becomes policy ... > > > Are you sure you got this PEP number right? The 'with' statement? Sorry, I meant PEP 434. -- Regards, Benjamin From fijall at gmail.com Tue Mar 26 01:17:17 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 25 Mar 2013 17:17:17 -0700 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: <4D28596D-C465-47E7-B568-CD90B1ADFA5D@twistedmatrix.com> References: <4D28596D-C465-47E7-B568-CD90B1ADFA5D@twistedmatrix.com> Message-ID: On Mon, Mar 25, 2013 at 3:57 PM, Glyph wrote: > > On Mar 25, 2013, at 1:40 PM, Benjamin Peterson wrote: > > ... Assuming PEP 343 becomes policy ... > > > Are you sure you got this PEP number right? The 'with' statement? > > > Don't worry even google confuses the two :) From rovitotv at gmail.com Tue Mar 26 01:54:41 2013 From: rovitotv at gmail.com (Todd Rovito) Date: Mon, 25 Mar 2013 20:54:41 -0400 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 4:40 PM, Benjamin Peterson wrote: > I'm afraid the ship has sailed on that. After a bugfix release > candidate, I only take patches that fix regressions from earlier > releases of the same series. (The goal is to not make any changes > between the release candidate and final release.) Assuming PEP 343 > becomes policy, IDLE changes can land for 2.7.5, which will be in > approximately 6 months. Benjamin, No problem we can wait 6 months. I should of read the dev guide section about Python releases before I sent the email. From raymond.hettinger at gmail.com Tue Mar 26 02:16:47 2013 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 25 Mar 2013 18:16:47 -0700 Subject: [Python-Dev] Can we triple quoted string as a comment? In-Reply-To: References: Message-ID: <7B40444C-FA79-4776-A356-7FC76A5CADC7@gmail.com> On Mar 25, 2013, at 2:16 PM, Victor Stinner wrote: > Hi, > > I just realized that the Python peephole optimizer removes useless > instructions like numbers and strings between other instructions, > without raising an error nor emiting an error. 
Example: > > $ python -Wd -c 'print "Hello"; "World"' > Hello IIRC, this happens upstream from the peephole optimizer and has been a part of Python for a long time. You can also "comment-out" code with "if 0:" >>> def f(x): if 0: print x return x+1 >>> from dis import dis >>> dis(f) 4 0 LOAD_FAST 0 (x) 3 LOAD_CONST 1 (1) 6 BINARY_ADD 7 RETURN_VALUE > > As part of my astoptimizer project, I wrote a function to detect such > useless instructions which emit a warning. I opened the following > issue to report what I found: > http://bugs.python.org/issue17516 Make sure it is a warning you can turn-off. I've seen code in many organizations that use multi-line strings to "turn-off" a section of code but not actually remove the code from the source. > > Different modules use long strings as comments. What is the "official" > policy about such strings? Should we use strings or comments? > > (IMO a comment should be used instead.) The module authors typically make their own decisions with respect to readability and ease of commenting. If you're editing with Emacs, it is really easy to reflow paragraphs and to insert or remove multiline comments each prefixed with #. But with other editors, it can be a PITA and a multiline string is the easiest to maintain and works well when cutting-and-pasting the comments from somewhere else. I worry that because you just discovered this feature, the initial reaction is that is a horribly wrong thing to do and should be "fixed" everywhere. Instead, it would be better to live-and-let live. No need for wholesale code changes or imposition "you must do it the way I do it" policies. my-two-cents, Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From rovitotv at gmail.com Tue Mar 26 03:18:41 2013 From: rovitotv at gmail.com (Todd Rovito) Date: Mon, 25 Mar 2013 22:18:41 -0400 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: References: Message-ID: On Mon, Mar 25, 2013 at 5:31 PM, Ned Deily wrote: > Todd, > > Thanks for your suggestions and work to improve IDLE and Python, in > general. There has been a lot of discussions recently, including your > PEP proposal. I have not commented on the discussions yet, though I > plan to in the next few days. No problem I enjoy working on Python!!!! I look forward to reading your comments. > Unfortunately, it comes at one of the > busiest times for those of us working on releases. Not only is the > 2.7.4 release in preparation but 3.2.4rc1 and 3.3.0.rc1 are about to be > announced, all on similar schedules. So just a few comments from my > release team perspective. No doubt, others may have other opinions. > > Bugfix releases, like these three, follow an abbreviated version of the > full feature release process, outlined in the developer's guide: > > http://docs.python.org/devguide/devcycle.html#stages Thanks for the reference I will read it again this time more carefully. Before I sent the email I should have reviewed it so perhaps I jumped the gun a little. > For bugfix releases, we typically skip alphas and betas and go right to > the release candidate stage, under the assumption that the criteria used > for commits added to bugfix releases are designed to avoid incompatible > changes and new features, unless explicitly approved for good reasons. > That means that a release candidate is meant to be just that: the final > set of bits that will be released. 
All of us involved in software > development have our own war stories of how some little last-minute > change caused some unexpected breakage. So the normal expectation is > that, if any change is accepted and cherry-picked after a release > candidate has been published, a new RC cycle will need to happen unless > the release manager decides the change is trivial enough that the risk > is truly minimal, e.g. something like an obvious typo or a doc change. > Certainly the changes proposed here would not normally fit those > criteria. Thanks for taking the time to explain that to me....for sure I don't mean to rush the process. I do know that the Python team releases a high quality product and why change a good thing. > Also, before the changes could be considered to be cherry-picked for a > release, they need to be applied to the branches first and given some > amount of testing, preferably on all of the major platforms where we > support IDLE: X11, Windows, OS X. So that's what needs to happen next. I agree 100% so we just need to get these simple issues committed and done. It seems like now is a bad time so I can wait patiently, I don't want to be a pest but it drives me nuts that some of these issues have been fixed and they go uncommitted. More than anything I want the Python Core Developers to know that work has happened on IDLE and many times the patches simply don't get committed. I admit there are various reasons some of these IDLE issues don't get resolved but these first four are a slam dunk. Please Python Core Developers don't take offense I know everybody is busy moving the language forward which is super important, I want to gently tickle your belly buttons as a reminder only. > There are various developers who have been applying IDLE fixes and now > Roger is able to do so as well (yay!). Once they are in, then the > question of release becomes relevant. Giving Roger commit rights will speed things up no doubt about that but he is learning the ropes and I was hoping to get these four simple issues committed and done with before 2.7.4 comes out. Knowing now that we have 6 months (instead of two weeks) relaxes me a little. > There are a couple of possible > scenarios I can see. 1. It's possible that problems will show up in > one or more of the current RCs, necessitating another RC cycle, at which > time the release manager(s) *might* be amenable to cherrypicking one or > more fixes from the current branches. 2. It's also possible (probable, > I hope) that 2.7.5 and/or 3.3.2 will follow relatively quickly after > 2.7.4 and 3.3.1. (3.2.4 is expected to be the final 3.2.x bugfix > release before it enters security-only fix mode.) The period since the > last maint releases for 2.7.x and 3.2.x was unusually long, for various > good reasons, so there are about a year's worth of fixes going out for > them this time, thereby raising the likelihood that new problems will be > found requiring a fix in a new bugfix release. Plus there are some > security issues that need a final resolution in a release. So, I'm > hopeful that we won't have to wait nearly as long to see 2.7.5 after > 2.7.4. There's not as long a gap since 3.3.0 but still somewhat long > for a first bugfix release. All this is good information and makes me feel even better about waiting a few months to get these four issues and possibly more IDLE issues fixed before the next release. > BTW, there is a fair amount of activity that goes on somewhat behind the > scenes with getting releases out-the-door. 
There a number of release > artifacts that need to be produced and tested, webpages that need to be > updated, announcements sent, etc. For example, for OS X, we normally > release two variants of installers for each beta, rc, and final release. > Between the two variants, we support 13 different > architecture/os-release combinations (only 11 for 3.3.x). That means, > at the moment, we have 37 different combinations we could test > (including those for 2.7.4rc1, 3.2.4rc1, and 3.3.1rc1). I don't > personally test every one of them but I do run the Python tests on a > representative sample (various OS levels vs. ppc/Intel-32/Intel-64) of > configurations, including at least very minimal manual tests of IDLE to > cover things like different versions of Tcl/Tk we support on OS X and > current fixes like the recent infamous preferences panel crash. Then > there's the time required to investigate and writeup test failures, and > decide on a fix strategy (e.g. is it a release blocker?). Similar > things happen for the WIndows installers and for the main source > packages. All of these things are part of what goes in to having a good > batteries-included experience for our users, including IDLE users. We > all work on Python releases because we want to do it and we like to do > it. That doesn't mean the process is cast-in-concrete or can't be > improved. But there is a cost involved in producing releases that isn't > always readily apparent. Again I really appreciate the information now I have a much better idea what goes into a release. > Specially for IDLE, I think there are a number of things we can do to > improve and speed its development process. As noted, I'll have some > concrete suggestions soon but, right now, I should get back to the > release candidates at hand. In the meantime, let's see what we can do > to get those patches checked in, documented, and tested. It would be great to hear those ideas when you get time. Several people and I have been saying IDLE development is not dead these four patches represent some fairly trivial yet important fixes. I am willing to do almost anything to improve the state of IDLE (bribery is not out of the question). Just know that I am more persistent than most people and I will not go away or give up, if the issues I outlined have problems please indicate that with the issue and we will get them fixed. As people have stated on this list over the last few days far better than I, Python users do care about IDLE. It would be great if we could all work together and make sure these issues listed are all committed way before the next major release cycle. Thanks for your time and effort! From stephen at xemacs.org Tue Mar 26 04:00:25 2013 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 26 Mar 2013 12:00:25 +0900 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: References: Message-ID: <87d2umdfw6.fsf@uwakimon.sk.tsukuba.ac.jp> Todd Rovito writes: > All this is good information and makes me feel even better about > waiting a few months to get these four issues and possibly more IDLE > issues fixed before the next release. +1 for more issues fixed in the next release! 
:-) From tjreedy at udel.edu Tue Mar 26 07:01:58 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 26 Mar 2013 02:01:58 -0400 Subject: [Python-Dev] IDLE in the stdlib In-Reply-To: <514CA836.8040009@email.de> References: <514CA836.8040009@email.de> Message-ID: On 3/22/2013 2:51 PM, francis wrote: > ~$ python2.7 -m idlelib > /usr/bin/python2.7: No module named idlelib.__main__; 'idlelib' is a > package and cannot be directly executed > > Same with python3... C:\Programs>python33\python.exe -m idlelib brings up IDLE on Windows. 2.7 and 3.2 do not work as above but require 'idlelib.idle' instead of just 'idlelib'. C:\Programs>python32\python.exe -m idlelib.idle C:\Programs>python27\python.exe -m idlelib.idle I have no idea if the change is to '-m' processing or to idlelib. If the latter, it is an example of a patch that might have been harmlessly backported with PEP434 accepted. -- Terry Jan Reedy From tjreedy at udel.edu Tue Mar 26 07:40:22 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 26 Mar 2013 02:40:22 -0400 Subject: [Python-Dev] Simple IDLE issues to commit before Python 2.7.4 release in two weeks on 4/6/2013 In-Reply-To: References: Message-ID: On 3/25/2013 3:30 PM, Todd Rovito wrote: > http://bugs.python.org/issue7136 Idle File Menu Option Improvement > http://bugs.python.org/issue17390 display python version on idle title bar > http://bugs.python.org/issue17511 Idle find function closes after each > find operation > http://bugs.python.org/issue6699 IDLE: Warn user about overwriting a > file that has a newer version on filesystem > http://bugs.python.org/issue10747 Include version info in Windows > shortcuts the issue is for 3.x but perhaps simple to implement for > 2.7? > http://bugs.python.org/issue2704 IDLE: Patch to make PyShell behave > more like a Terminal interface I am waiting for 434 or an alternative to be accepted before pushing any IDLE patch. I have no interest in bikeshedding a somewhat meaningless, for IDLE, distinction between behavior and enhancement, nor in being asked to revert a patch that I tested on multiple versions. However, I might use your list to select patches for testing on Windows. -- Terry Jan Reedy From georg at python.org Tue Mar 26 07:47:38 2013 From: georg at python.org (Georg Brandl) Date: Tue, 26 Mar 2013 07:47:38 +0100 Subject: [Python-Dev] [RELEASED] Python 3.2.4 rc 1 and Python 3.3.1 rc 1 Message-ID: <5151448A.2010103@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On behalf of the Python development team, I am pleased to announce the first release candidates of Python 3.2.4 and 3.3.1. Python 3.2.4 will be the last regular maintenance release for the Python 3.2 series, while Python 3.3.1 is the first maintenance release for the 3.3 series. Both releases include hundreds of bugfixes. There has recently been a lot of discussion about XML-based denial of service attacks. Specifically, certain XML files can cause XML parsers, including ones in the Python stdlib, to consume gigabytes of RAM and swamp the CPU. These releases do not include any changes in Python XML code to address these issues. Interested parties should examine the defusedxml package on PyPI: https://pypi.python.org/pypi/defusedxml These are testing releases: Please consider trying them with your code and reporting any bugs you may notice to: http://bugs.python.org/ To download Python 3.2.4 or Python 3.3.1, visit: http://www.python.org/download/releases/3.2.4/ or http://www.python.org/download/releases/3.3.1/ respectively. Enjoy! 
- -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and all contributors) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (GNU/Linux) iEYEARECAAYFAlFRRIoACgkQN9GcIYhpnLD6jACgnzYdYRKZ4kwkKeN3zSLSZ3Zr M/IAn17vlpxI3a3xk+i/ODOrCkMnRZro =B5sA -----END PGP SIGNATURE----- From doko at ubuntu.com Tue Mar 26 11:54:26 2013 From: doko at ubuntu.com (Matthias Klose) Date: Tue, 26 Mar 2013 11:54:26 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: Message-ID: <51517E62.2080301@ubuntu.com> Am 25.03.2013 01:30, schrieb Benjamin Peterson: > 2.7.4 will be the latest maintenance release in the Python 2.7 series. I hope it's not (and in the IDLE thread you say so otherwise too). Matthias From regebro at gmail.com Tue Mar 26 12:05:05 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 26 Mar 2013 12:05:05 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: <51517E62.2080301@ubuntu.com> References: <51517E62.2080301@ubuntu.com> Message-ID: On Tue, Mar 26, 2013 at 11:54 AM, Matthias Klose wrote: > Am 25.03.2013 01:30, schrieb Benjamin Peterson: >> 2.7.4 will be the latest maintenance release in the Python 2.7 series. > > I hope it's not (and in the IDLE thread you say so otherwise too). It most certainly will be the latest once it's released. But hopefully not the last. :-) //Lennart From victor.stinner at gmail.com Tue Mar 26 12:34:34 2013 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 26 Mar 2013 12:34:34 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> Message-ID: 2013/3/26 Lennart Regebro : > On Tue, Mar 26, 2013 at 11:54 AM, Matthias Klose wrote: >> Am 25.03.2013 01:30, schrieb Benjamin Peterson: >>> 2.7.4 will be the latest maintenance release in the Python 2.7 series. >> >> I hope it's not (and in the IDLE thread you say so otherwise too). > > It most certainly will be the latest once it's released. But hopefully > not the last. :-) I also read "the last" by mistake! Anyway, you should trust Brett Canon: "Python 3.3: Trust Me, It's Better Than Python 2.7". https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-its-better-than-python-2-dot-7-by-dr-brett-cannon Victor From a.cavallo at cavallinux.eu Tue Mar 26 12:54:11 2013 From: a.cavallo at cavallinux.eu (a.cavallo at cavallinux.eu) Date: Tue, 26 Mar 2013 12:54:11 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> Message-ID: <365d993ec4df283ee38b27cd82ac6872@cavallinux.eu> It's already hard to sell 2.7 in most companies. Regards, Antonio > Anyway, you should trust Brett Canon: "Python 3.3: Trust Me, It's > Better Than Python 2.7". > > > https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-its-better-than-python-2-dot-7-by-dr-brett-cannon > > Victor From benjamin at python.org Tue Mar 26 13:13:31 2013 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 26 Mar 2013 08:13:31 -0400 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: <51517E62.2080301@ubuntu.com> References: <51517E62.2080301@ubuntu.com> Message-ID: 2013/3/26 Matthias Klose : > Am 25.03.2013 01:30, schrieb Benjamin Peterson: >> 2.7.4 will be the latest maintenance release in the Python 2.7 series. > > I hope it's not (and in the IDLE thread you say so otherwise too). 
"latest" is different from "last" :) -- Regards, Benjamin From rdmurray at bitdance.com Tue Mar 26 14:28:51 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 26 Mar 2013 09:28:51 -0400 Subject: [Python-Dev] Can we triple quoted string as a comment? In-Reply-To: <7B40444C-FA79-4776-A356-7FC76A5CADC7@gmail.com> References: <7B40444C-FA79-4776-A356-7FC76A5CADC7@gmail.com> Message-ID: <20130326132851.8D4AB250069@webabinitio.net> On Mon, 25 Mar 2013 18:16:47 -0700, Raymond Hettinger wrote: > If you're editing with Emacs, it is really easy to reflow paragraphs > and to insert or remove multiline comments each prefixed with #. > But with other editors, it can be a PITA and a multiline string is > the easiest to maintain and works well when cutting-and-pasting > the comments from somewhere else. Just FYI it is also very easy in vim: gq plus whatever movement prefix suits the situation. --David From solipsis at pitrou.net Tue Mar 26 14:54:24 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 26 Mar 2013 14:54:24 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 References: <51517E62.2080301@ubuntu.com> Message-ID: <20130326145424.33c9326b@pitrou.net> Le Tue, 26 Mar 2013 12:34:34 +0100, Victor Stinner a ?crit : > 2013/3/26 Lennart Regebro : > > On Tue, Mar 26, 2013 at 11:54 AM, Matthias Klose > > wrote: > >> Am 25.03.2013 01:30, schrieb Benjamin Peterson: > >>> 2.7.4 will be the latest maintenance release in the Python 2.7 > >>> series. > >> > >> I hope it's not (and in the IDLE thread you say so otherwise too). > > > > It most certainly will be the latest once it's released. But > > hopefully not the last. :-) > > I also read "the last" by mistake! > > Anyway, you should trust Brett Canon: "Python 3.3: Trust Me, It's > Better Than Python 2.7". > > https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-its-better-than-python-2-dot-7-by-dr-brett-cannon You can always trust Brett Cannon! cheers Antoine. From solipsis at pitrou.net Tue Mar 26 14:55:05 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 26 Mar 2013 14:55:05 +0100 Subject: [Python-Dev] Can we triple quoted string as a comment? References: <7B40444C-FA79-4776-A356-7FC76A5CADC7@gmail.com> <20130326132851.8D4AB250069@webabinitio.net> Message-ID: <20130326145505.393ae8e5@pitrou.net> Le Tue, 26 Mar 2013 09:28:51 -0400, "R. David Murray" a ?crit : > On Mon, 25 Mar 2013 18:16:47 -0700, Raymond Hettinger > wrote: > > If you're editing with Emacs, it is really easy to reflow paragraphs > > and to insert or remove multiline comments each prefixed with #. > > But with other editors, it can be a PITA and a multiline string is > > the easiest to maintain and works well when cutting-and-pasting > > the comments from somewhere else. > > Just FYI it is also very easy in vim: gq plus whatever movement prefix > suits the situation. And on a user-friendly editor such as Kate, you can press Ctrl+D. Regards Antoine. From ethan at stoneleaf.us Tue Mar 26 16:46:23 2013 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 26 Mar 2013 08:46:23 -0700 Subject: [Python-Dev] Can we triple quoted string as a comment? In-Reply-To: References: Message-ID: <5151C2CF.8070809@stoneleaf.us> On 03/25/2013 02:16 PM, Victor Stinner wrote: > Hi, > > I just realized that the Python peephole optimizer removes useless > instructions like numbers and strings between other instructions, > without raising an error nor emiting an error. 
Example: > > $ python -Wd -c 'print "Hello"; "World"' > Hello > > As part of my astoptimizer project, I wrote a function to detect such > useless instructions which emit a warning. I opened the following > issue to report what I found: > http://bugs.python.org/issue17516 > > Different modules use long strings as comments. What is the "official" > policy about such strings? Should we use strings or comments? > > (IMO a comment should be used instead.) Someone will correct me if I'm wrong, I'm sure, but I believe Guido himself has said that a neat feature of triple-quoted strings is their ability to be used as comments. -- ~Ethan~ From guido at python.org Tue Mar 26 17:11:12 2013 From: guido at python.org (Guido van Rossum) Date: Tue, 26 Mar 2013 09:11:12 -0700 Subject: [Python-Dev] Can we triple quoted string as a comment? In-Reply-To: <5151C2CF.8070809@stoneleaf.us> References: <5151C2CF.8070809@stoneleaf.us> Message-ID: And I still think it's neat. :-) On Tue, Mar 26, 2013 at 8:46 AM, Ethan Furman wrote: > On 03/25/2013 02:16 PM, Victor Stinner wrote: > >> Hi, >> >> I just realized that the Python peephole optimizer removes useless >> instructions like numbers and strings between other instructions, >> without raising an error nor emiting an error. Example: >> >> $ python -Wd -c 'print "Hello"; "World"' >> Hello >> >> As part of my astoptimizer project, I wrote a function to detect such >> useless instructions which emit a warning. I opened the following >> issue to report what I found: >> http://bugs.python.org/**issue17516 >> >> Different modules use long strings as comments. What is the "official" >> policy about such strings? Should we use strings or comments? >> >> (IMO a comment should be used instead.) >> > > Someone will correct me if I'm wrong, I'm sure, but I believe Guido > himself has said that a neat feature of triple-quoted strings is their > ability to be used as comments. > > -- > ~Ethan~ > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Mar 26 19:41:46 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 26 Mar 2013 19:41:46 +0100 Subject: [Python-Dev] cpython (2.7): Issue 17538: Document XML vulnerabilties References: <3ZZz1v2N95zQBT@mail.python.org> Message-ID: <20130326194146.68662198@pitrou.net> On Tue, 26 Mar 2013 17:53:39 +0100 (CET) christian.heimes wrote: > + > +The XML processing modules are not secure against maliciously constructed data. > +An attacker can abuse vulnerabilities for e.g. denial of service attacks, to > +access local files, to generate network connections to other machines, or > +to or circumvent firewalls. Really? Where does the "access local files, generate network connections, circumvent firewalls" allegation come from? Regards Antoine. 
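For concreteness, a minimal sketch of the denial-of-service case under discussion (the classic "billion laughs" built from nested internal entities). This is illustrative only, not code from the thread; the local-file and network cases rely on external entities instead, as the defusedxml examples below show.

    # Editorial illustration: nested internal entities are expanded by the
    # stdlib parsers, so a tiny document balloons into a huge string.
    # Run only in a throwaway interpreter.
    import xml.etree.ElementTree as ET

    payload = """<?xml version="1.0"?>
    <!DOCTYPE bomb [
      <!ENTITY a "lol">
      <!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;&a;&a;">
      <!ENTITY c "&b;&b;&b;&b;&b;&b;&b;&b;&b;&b;">
    ]>
    <bomb>&c;</bomb>"""

    root = ET.fromstring(payload)
    print(len(root.text))   # 3000 characters from a few hundred bytes of
                            # input; the real attack nests ~10 levels and
                            # exhausts memory instead
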
From donald at stufft.io Tue Mar 26 19:57:37 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 26 Mar 2013 14:57:37 -0400 Subject: [Python-Dev] cpython (2.7): Issue 17538: Document XML vulnerabilties In-Reply-To: <20130326194146.68662198@pitrou.net> References: <3ZZz1v2N95zQBT@mail.python.org> <20130326194146.68662198@pitrou.net> Message-ID: On Mar 26, 2013, at 2:41 PM, Antoine Pitrou wrote: > On Tue, 26 Mar 2013 17:53:39 +0100 (CET) > christian.heimes wrote: >> + >> +The XML processing modules are not secure against maliciously constructed data. >> +An attacker can abuse vulnerabilities for e.g. denial of service attacks, to >> +access local files, to generate network connections to other machines, or >> +to or circumvent firewalls. > > Really? Where does the "access local files, generate network > connections, circumvent firewalls" allegation come from? https://pypi.python.org/pypi/defusedxml#external-entity-expansion-remote https://pypi.python.org/pypi/defusedxml#external-entity-expansion-local-file https://pypi.python.org/pypi/defusedxml#dtd-retrieval > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/donald%40stufft.io ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From christian at python.org Tue Mar 26 20:04:11 2013 From: christian at python.org (Christian Heimes) Date: Tue, 26 Mar 2013 20:04:11 +0100 Subject: [Python-Dev] cpython (2.7): Issue 17538: Document XML vulnerabilties In-Reply-To: <20130326194146.68662198@pitrou.net> References: <3ZZz1v2N95zQBT@mail.python.org> <20130326194146.68662198@pitrou.net> Message-ID: <5151F12B.2070509@python.org> Am 26.03.2013 19:41, schrieb Antoine Pitrou: > On Tue, 26 Mar 2013 17:53:39 +0100 (CET) > christian.heimes wrote: >> + >> +The XML processing modules are not secure against maliciously constructed data. >> +An attacker can abuse vulnerabilities for e.g. denial of service attacks, to >> +access local files, to generate network connections to other machines, or >> +to or circumvent firewalls. > > Really? Where does the "access local files, generate network > connections, circumvent firewalls" allegation come from? Really! https://bitbucket.org/PSF/defusedxml/src/e76248a8c2b3102c565bd3451656130cb29f04f8/other/python_external.py?at=default REQUEST: -------- Aachen RESPONSE: --------- The weather in Aachen is terrible. 
]> &passwd; RESPONSE: --------- Unknown city root:x:0:0:root:/root:/bin/bash daemon:x:1:1:daemon:/usr/sbin:/bin/sh bin:x:2:2:bin:/bin:/bin/sh sys:x:3:3:sys:/dev:/bin/sh sync:x:4:65534:sync:/bin:/bin/sync games:x:5:60:games:/usr/games:/bin/sh man:x:6:12:man:/var/cache/man:/bin/sh lp:x:7:7:lp:/var/spool/lpd:/bin/sh mail:x:8:8:mail:/var/mail:/bin/sh news:x:9:9:news:/var/spool/news:/bin/sh uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh proxy:x:13:13:proxy:/bin:/bin/sh www-data:x:33:33:www-data:/var/www:/bin/sh backup:x:34:34:backup:/var/backups:/bi REQUEST: -------- ]> &url; RESPONSE: --------- Unknown city -----BEGIN DH PARAMETERS----- MEYCQQD1Kv884bEpQBgRjXyEpwpy1obEAxnIByl6ypUM2Zafq9AKUJsCRtMIPWak XUGfnHy9iUsiGSa6q6Jew1XpKgVfAgEC -----END DH PARAMETERS----- These are the 512 bit DH parameters from "Assigned Number for SKIP Protocols" (http://www.skip-vpn.org/spec/numbers.html). See there for how they were generated. Note that g is not a generator, but this is not a problem since p is a safe prime. Q.E.D. Christian From solipsis at pitrou.net Tue Mar 26 20:04:58 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 26 Mar 2013 20:04:58 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 References: <51517E62.2080301@ubuntu.com> <365d993ec4df283ee38b27cd82ac6872@cavallinux.eu> Message-ID: <20130326200458.67ce6780@pitrou.net> On Tue, 26 Mar 2013 12:54:11 +0100 a.cavallo at cavallinux.eu wrote: > It's already hard to sell 2.7 in most companies. Sure, it's often hard to sell free software! Regards Antoine. From brian at python.org Tue Mar 26 20:51:29 2013 From: brian at python.org (Brian Curtin) Date: Tue, 26 Mar 2013 14:51:29 -0500 Subject: [Python-Dev] Google Summer of Code - Organization Deadline Approaching - March 29 Message-ID: Just an FYI that there are under 3 days to apply to Google Summer of Code for mentoring organizations: http://www.google-melange.com/gsoc/homepage/google/gsoc2013. The student application deadline is later on in May. If you run a project that is interested in applying under the Python umbrella organization, contact Terri Oda at terri at zone12.com Is anyone here interested in leading CPython through GSOC? Anyone have potential students to get involved, or interested in being a mentor? From g.brandl at gmx.net Tue Mar 26 21:19:42 2013 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 26 Mar 2013 21:19:42 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> Message-ID: Am 26.03.2013 13:13, schrieb Benjamin Peterson: > 2013/3/26 Matthias Klose : >> Am 25.03.2013 01:30, schrieb Benjamin Peterson: >>> 2.7.4 will be the latest maintenance release in the Python 2.7 series. >> >> I hope it's not (and in the IDLE thread you say so otherwise too). > > "latest" is different from "last" :) As opposed to 3.2.4, which will really be the last (the last regular one, that is -- if/when the XML stuff gets released there will be a source-only release). Georg From victor.stinner at gmail.com Tue Mar 26 22:56:53 2013 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 26 Mar 2013 22:56:53 +0100 Subject: [Python-Dev] astoptimizer: static optimizer working on the AST Message-ID: Hi, I made progress since last August on my astoptimizer project (read the Changelog). Previous email thread: http://mail.python.org/pipermail/python-dev/2012-August/121286.html The astoptimizer project is an optimizer rewriting Python AST. It executes as much code as possible during the compilation. 
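To make the idea concrete, here is a small sketch (not astoptimizer's own code) of the kind of rewrite it performs, using only the stdlib ast module: folding len(<string literal>) into the literal's length. As noted above, this is only valid if the builtin len is not shadowed, which is exactly the kind of assumption the opt-in "builtin_funcs" feature asks you to accept.

    import ast

    class FoldLen(ast.NodeTransformer):
        # Replace len("...") with the literal's length.  Only safe when
        # the builtin len() has not been shadowed, hence opt-in.
        def visit_Call(self, node):
            self.generic_visit(node)
            if (isinstance(node.func, ast.Name) and node.func.id == "len"
                    and len(node.args) == 1 and not node.keywords
                    and isinstance(node.args[0], ast.Str)):
                return ast.copy_location(ast.Num(len(node.args[0].s)), node)
            return node

    tree = ast.parse('x = len("abc")')
    tree = ast.fix_missing_locations(FoldLen().visit(tree))
    ns = {}
    exec(compile(tree, "<folded>", "exec"), ns)
    print(ns["x"])   # 3, computed at compile time rather than at run time
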
The optimizer itself is not designed to be fast, but to emit faster code. https://bitbucket.org/haypo/astoptimizer/ Some optimizations are not "pythonic": don't respect the Python language by making some assumptions (ex: on the namespace), which may be wrong in some cases. astoptimizer is written for adults which know which optimizations can be enabled, and which ones must be disabled in their application (application, not module). I'm trying to write a safe and "pythonic" default configuration. For example, len("abc") is not replaced with 3 by default. You have to enable explicitly the "builtin_funcs" configuration feature. astoptimizer can be used as a pythonic preprocessor: it replaces os.name, sys.platform and also your own constants by their value. It removes dead code and so may be used to remove completly the overhead of checks on the Python version or an the platform (if python3: ... else: ...). I would like to improve the integration of astoptimizer in Python 3.4. Brett Canon asked me to use an hook in importlib, I proposed to add a generic AST hook which would also be called by eval(), compile(), etc. I would like to allow anyone to use its own AST modifier, and simplify the usage of astoptimizer. => http://bugs.python.org/issue17515 Open issues: * what should be the name of "pyc" files? * how to handle different configuration of astoptimizer: generate different "pyc" files? * no sys.getasthook(): it would permit to chain different AST hooks -- There are many open issues proposing to write a better Python optimizer. Some of them are stuck because implementing them using the bytecode optimizer is hard. * http://bugs.python.org/issue1346238: A constant folding optimization pass for the AST * http://bugs.python.org/issue2181: optimize out local variables at end of function * http://bugs.python.org/issue2499: Fold unary + and not on constants * http://bugs.python.org/issue4264: Patch: optimize code to use LIST_APPEND instead of calling list.append * http://bugs.python.org/issue7682: Optimisation of if with constant expression * http://bugs.python.org/issue10399: AST Optimization: inlining of function calls * http://bugs.python.org/issue11549: Build-out an AST optimizer, moving some functionality out of the peephole optimizer * http://bugs.python.org/issue17068: peephole optimization for constant strings * http://bugs.python.org/issue17430: missed peephole optimization astoptimizer implements many optimizations listed in these issues. Read the README file to the list of optimizations already implemented, and the TODO file for ideas of new optimizations. -- The project is still experimental. I ran Python 2.7 and 3.4 test suites. Some tests are failing because the AST or the bytecode is different (which is expected). More tests are failing if you enable more agressive optimizations (especially if you enable the agressive mode to remove dead code). At least, Python does not crash :-) (It occurs sometimes when astoptimizer generates invalid AST!) Victor From ether.joe at gmail.com Tue Mar 26 23:05:04 2013 From: ether.joe at gmail.com (Sean Felipe Wolfe) Date: Tue, 26 Mar 2013 15:05:04 -0700 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> Message-ID: On Tue, Mar 26, 2013 at 4:34 AM, Victor Stinner wrote: > Anyway, you should trust Brett Canon: "Python 3.3: Trust Me, It's > Better Than Python 2.7". 
> > https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-its-better-than-python-2-dot-7-by-dr-brett-cannon Was there supposed to be audio with this, or is it slides only? I got no audio :P From hs at ox.cx Tue Mar 26 23:26:17 2013 From: hs at ox.cx (Hynek Schlawack) Date: Tue, 26 Mar 2013 23:26:17 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> Message-ID: <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx> Am 26.03.2013 um 23:05 schrieb Sean Felipe Wolfe : >> Anyway, you should trust Brett Canon: "Python 3.3: Trust Me, It's >> Better Than Python 2.7". >> >> https://speakerdeck.com/pyconslides/python-3-dot-3-trust-me-its-better-than-python-2-dot-7-by-dr-brett-cannon > Was there supposed to be audio with this, or is it slides only? I got > no audio :P Speakerdeck is slides only. The video is here: http://pyvideo.org/video/1730/python-33-trust-me-its-better-than-27 From ether.joe at gmail.com Wed Mar 27 00:18:34 2013 From: ether.joe at gmail.com (Sean Felipe Wolfe) Date: Tue, 26 Mar 2013 16:18:34 -0700 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx> References: <51517E62.2080301@ubuntu.com> <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx> Message-ID: On Tue, Mar 26, 2013 at 3:26 PM, Hynek Schlawack wrote: > Speakerdeck is slides only. The video is here: http://pyvideo.org/video/1730/python-33-trust-me-its-better-than-27 Sweet thanks! From ether.joe at gmail.com Wed Mar 27 00:49:32 2013 From: ether.joe at gmail.com (Sean Felipe Wolfe) Date: Tue, 26 Mar 2013 16:49:32 -0700 Subject: [Python-Dev] noob contributions to unit tests Message-ID: Hey everybody how are you all :) I am an intermediate-level python coder looking to get help out. I've been reading over the dev guide about helping increase test coverage --> http://docs.python.org/devguide/coverage.html And also the third-party code coverage referenced in the devguide page: http://coverage.livinglogic.de/ I'm seeing that according to the coverage tool, two of my favorite libraries, urllib/urllib2, have no unit tests? Is that correct or am I reading it wrong? If that's correct it seems like a great place perhaps for me to cut my teeth and I would be excited to learn and help out here. And of course any thoughts or advice for an aspiring Python contributor would be appreciated. Of course the dev guide gives me plenty of good info. Thanks! -- A musician must make music, an artist must paint, a poet must write, if he is to be ultimately at peace with himself. - Abraham Maslow From fijall at gmail.com Wed Mar 27 00:59:06 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 26 Mar 2013 16:59:06 -0700 Subject: [Python-Dev] noob contributions to unit tests In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 4:49 PM, Sean Felipe Wolfe wrote: > Hey everybody how are you all :) > > I am an intermediate-level python coder looking to get help out. I've > been reading over the dev guide about helping increase test coverage > --> > http://docs.python.org/devguide/coverage.html > > And also the third-party code coverage referenced in the devguide page: > http://coverage.livinglogic.de/ > > I'm seeing that according to the coverage tool, two of my favorite > libraries, urllib/urllib2, have no unit tests? Is that correct or am I > reading it wrong? > > If that's correct it seems like a great place perhaps for me to cut my > teeth and I would be excited to learn and help out here. 
> > And of course any thoughts or advice for an aspiring Python > contributor would be appreciated. Of course the dev guide gives me > plenty of good info. > > Thanks! That looks like an error in the coverage report, there are certainly urllib and urllib2 tests in test/test_urllib* From rocky at gnu.org Wed Mar 27 03:11:14 2013 From: rocky at gnu.org (Rocky Bernstein) Date: Tue, 26 Mar 2013 22:11:14 -0400 Subject: [Python-Dev] Can I introspect/reflect to get arguments exec()? Message-ID: [asked on comp.lang.python but no takers. So I'm bumping it up a notch.] I have ported my Python debugger pydbgr to Python3. See [1] or [2]. Inside the debugger, when there is an exec() somewhere in the call stack, I'd like to be able to retrieve the string parameter. With this, the debugger can show part of the string in a call stack. Or it can show the text when the frame is set to that exec() frame. Going further, the debugger could write the exec string out to a temporary file. And when reporting locations, it could report not just something like " line 4", but also give that temporary file name which a front-end could use as well. So consider this code using inspect.getargvalues() and inspect.currentframe(): import inspect def my_exec(string): show_args(inspect.currentframe()) # simulate exec(string) def show_args(frame): print(inspect.getargvalues(frame)) my_exec("show_args(inspect.currentframe())") exec("show_args(inspect.currentframe())") When run this is the output: python3 exec-args.py ArgInfo(args=['string'], varargs=None, keywords=None, locals={'string': 'show_args(inspect.currentframe())'}) ArgInfo(args=[], varargs=None, keywords=None, locals={'my_exec': ,, ... In a different setting, CPython byte-code assembly that gets generated for running exec() is: 25 88 LOAD_GLOBAL 10 (exec) 91 LOAD_CONST 4 ('show_args(inspect.currentframe())') --> 94 CALL_FUNCTION 1 97 POP_TOP What's going on? Also, I have the same question for CPython 2.6 or Python 2.7. Thanks! [1] http://code.google.com/p/pydbgr/ [2] http://code.google.com/p/python3-trepan/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Wed Mar 27 03:18:11 2013 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 26 Mar 2013 22:18:11 -0400 Subject: [Python-Dev] Can I introspect/reflect to get arguments exec()? In-Reply-To: References: Message-ID: 2013/3/26 Rocky Bernstein : > [asked on comp.lang.python but no takers. So I'm bumping it up a notch.] > > I have ported my Python debugger pydbgr to Python3. See [1] or [2]. > > Inside the debugger, when there is an exec() somewhere in the call stack, > I'd like to be able to retrieve the string parameter. With this, the > debugger can show part of the string in a call stack. Or it can show the > text when the frame is set to that exec() frame. > > Going further, the debugger could write the exec string out to a temporary > file. And when reporting locations, it could report not just something like > " line 4", but also give that temporary file name which a front-end > could use as well. 
> > So consider this code using inspect.getargvalues() and > inspect.currentframe(): > > import inspect > def my_exec(string): > show_args(inspect.currentframe()) # simulate exec(string) > > def show_args(frame): > print(inspect.getargvalues(frame)) > > my_exec("show_args(inspect.currentframe())") > exec("show_args(inspect.currentframe())") > > > When run this is the output: > > python3 exec-args.py > ArgInfo(args=['string'], varargs=None, keywords=None, locals={'string': > 'show_args(inspect.currentframe())'}) > ArgInfo(args=[], varargs=None, keywords=None, locals={'my_exec': > ,, ... > > > In a different setting, CPython byte-code assembly that gets generated for > running exec() is: > > 25 88 LOAD_GLOBAL 10 (exec) > 91 LOAD_CONST 4 > ('show_args(inspect.currentframe())') > --> 94 CALL_FUNCTION 1 > 97 POP_TOP > > What's going on? execing something is not the same as calling it, so there are no arguments. -- Regards, Benjamin From rdmurray at bitdance.com Wed Mar 27 03:24:22 2013 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 26 Mar 2013 22:24:22 -0400 Subject: [Python-Dev] noob contributions to unit tests In-Reply-To: References: Message-ID: <20130327022422.D7ACA250BCA@webabinitio.net> On Tue, 26 Mar 2013 16:59:06 -0700, Maciej Fijalkowski wrote: > On Tue, Mar 26, 2013 at 4:49 PM, Sean Felipe Wolfe wrote: > > Hey everybody how are you all :) > > > > I am an intermediate-level python coder looking to get help out. I've > > been reading over the dev guide about helping increase test coverage > > --> > > http://docs.python.org/devguide/coverage.html > > > > And also the third-party code coverage referenced in the devguide page: > > http://coverage.livinglogic.de/ > > > > I'm seeing that according to the coverage tool, two of my favorite > > libraries, urllib/urllib2, have no unit tests? Is that correct or am I > > reading it wrong? > > > > If that's correct it seems like a great place perhaps for me to cut my > > teeth and I would be excited to learn and help out here. > > > > And of course any thoughts or advice for an aspiring Python > > contributor would be appreciated. Of course the dev guide gives me > > plenty of good info. > > > > Thanks! > > That looks like an error in the coverage report, there are certainly > urllib and urllib2 tests in test/test_urllib* The devguide contains instructions for running coverage yourself, and if I recall correctly the 'fullcoverage' recipe does a better job than what runs at coverage.livinglogic.de. On the other hand, I'm fairly certain that even if the coverage were at 100% code-and-branch coverage, there'd still be tests worth adding, if you are as familiar with the modules as your intro suggests :) However, if you are writing new tests, please write them against the default branch, which means urllib in Python3 (the test files are still named like they are in Python2, though). --David PS: If you aren't aware of the core-mentorship mailing list, you might want to check that out as well. From rocky at gnu.org Wed Mar 27 04:00:55 2013 From: rocky at gnu.org (Rocky Bernstein) Date: Tue, 26 Mar 2013 23:00:55 -0400 Subject: [Python-Dev] Can I introspect/reflect to get arguments exec()? In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 10:18 PM, Benjamin Peterson wrote: > 2013/3/26 Rocky Bernstein : > > [asked on comp.lang.python but no takers. So I'm bumping it up a notch.] > > > > I have ported my Python debugger pydbgr to Python3. See [1] or [2]. 
> > > > Inside the debugger, when there is an exec() somewhere in the call stack, > > I'd like to be able to retrieve the string parameter. With this, the > > debugger can show part of the string in a call stack. Or it can show the > > text when the frame is set to that exec() frame. > > > > Going further, the debugger could write the exec string out to a > temporary > > file. And when reporting locations, it could report not just something > like > > " line 4", but also give that temporary file name which a > front-end > > could use as well. > > > > So consider this code using inspect.getargvalues() and > > inspect.currentframe(): > > > > import inspect > > def my_exec(string): > > show_args(inspect.currentframe()) # simulate exec(string) > > > > def show_args(frame): > > print(inspect.getargvalues(frame)) > > > > my_exec("show_args(inspect.currentframe())") > > exec("show_args(inspect.currentframe())") > > > > > > When run this is the output: > > > > python3 exec-args.py > > ArgInfo(args=['string'], varargs=None, keywords=None, > locals={'string': > > 'show_args(inspect.currentframe())'}) > > ArgInfo(args=[], varargs=None, keywords=None, locals={'my_exec': > > ,, ... > > > > > > In a different setting, CPython byte-code assembly that gets generated > for > > running exec() is: > > > > 25 88 LOAD_GLOBAL 10 (exec) > > 91 LOAD_CONST 4 > > ('show_args(inspect.currentframe())') > > --> 94 CALL_FUNCTION 1 > > 97 POP_TOP > > > > What's going on? > > execing something is not the same as calling it, so there are no arguments. > Okay. But is the string is still somewhere in the CPython VM stack? (The result of LOAD_CONST 4 above). Is there a way to pick it up from there? At the point that we are stopped the exec action hasn't taken place yet. > > > -- > Regards, > Benjamin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andriy.kornatskyy at live.com Wed Mar 27 08:40:29 2013 From: andriy.kornatskyy at live.com (Andriy Kornatskyy) Date: Wed, 27 Mar 2013 10:40:29 +0300 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: , <51517E62.2080301@ubuntu.com>, , , , <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx>, Message-ID: Any plans backport decimal C implementation from 3.3? Thanks. Andriy Kornatskyy ---------------------------------------- > Date: Tue, 26 Mar 2013 16:18:34 -0700 > From: ether.joe at gmail.com > To: hs at ox.cx > CC: python-dev at python.org > Subject: Re: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 > > On Tue, Mar 26, 2013 at 3:26 PM, Hynek Schlawack wrote: > > Speakerdeck is slides only. The video is here: http://pyvideo.org/video/1730/python-33-trust-me-its-better-than-27 > > Sweet thanks! > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andriy.kornatskyy%40live.com From andrew.svetlov at gmail.com Wed Mar 27 09:24:47 2013 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Wed, 27 Mar 2013 10:24:47 +0200 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx> Message-ID: No. _decimal is new functionality that will never be backported. On Wed, Mar 27, 2013 at 9:40 AM, Andriy Kornatskyy wrote: > Any plans backport decimal C implementation from 3.3? > > Thanks. 
> > Andriy Kornatskyy > > > ---------------------------------------- >> Date: Tue, 26 Mar 2013 16:18:34 -0700 >> From: ether.joe at gmail.com >> To: hs at ox.cx >> CC: python-dev at python.org >> Subject: Re: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 >> >> On Tue, Mar 26, 2013 at 3:26 PM, Hynek Schlawack wrote: >> > Speakerdeck is slides only. The video is here: http://pyvideo.org/video/1730/python-33-trust-me-its-better-than-27 >> >> Sweet thanks! >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/andriy.kornatskyy%40live.com > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From andriy.kornatskyy at live.com Wed Mar 27 10:52:54 2013 From: andriy.kornatskyy at live.com (Andriy Kornatskyy) Date: Wed, 27 Mar 2013 12:52:54 +0300 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: , <51517E62.2080301@ubuntu.com>, , , , <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx>, , , Message-ID: Andrew, Thank you for the prompt response back. > will never be backported. Who knows? Is python 3.3 _decimal interface 100% compatible with one pure python implementation in 2.7? I suppose code written using decimals in python 2.7 should work in python 3 with no changes, or there should be taken caution for something? According to presentation slides, 30x performance boost is something definitely valuable for 2.7 crowd. Thanks. Andriy ---------------------------------------- > Date: Wed, 27 Mar 2013 10:24:47 +0200 > Subject: Re: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 > From: andrew.svetlov at gmail.com > To: andriy.kornatskyy at live.com > CC: python-dev at python.org > > No. _decimal is new functionality that will never be backported. > > On Wed, Mar 27, 2013 at 9:40 AM, Andriy Kornatskyy > wrote: > > Any plans backport decimal C implementation from 3.3? > > > > Thanks. > > > > Andriy Kornatskyy > > > > > > ---------------------------------------- > >> Date: Tue, 26 Mar 2013 16:18:34 -0700 > >> From: ether.joe at gmail.com > >> To: hs at ox.cx > >> CC: python-dev at python.org > >> Subject: Re: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 > >> > >> On Tue, Mar 26, 2013 at 3:26 PM, Hynek Schlawack wrote: > >> > Speakerdeck is slides only. The video is here: http://pyvideo.org/video/1730/python-33-trust-me-its-better-than-27 > >> > >> Sweet thanks! 
> >> _______________________________________________ > >> Python-Dev mailing list > >> Python-Dev at python.org > >> http://mail.python.org/mailman/listinfo/python-dev > >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/andriy.kornatskyy%40live.com > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com > > > > -- > Thanks, > Andrew Svetlov From amauryfa at gmail.com Wed Mar 27 10:54:37 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 27 Mar 2013 10:54:37 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx> Message-ID: 2013/3/27 Andrew Svetlov > No. _decimal is new functionality that will never be backported. > > On Wed, Mar 27, 2013 at 9:40 AM, Andriy Kornatskyy > wrote: > > Any plans backport decimal C implementation from 3.3? > No. 2.7 does not accept new features anymore, but you can install this backport from PyPI: https://pypi.python.org/pypi/cdecimal/2.3 -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From andriy.kornatskyy at live.com Wed Mar 27 11:08:28 2013 From: andriy.kornatskyy at live.com (Andriy Kornatskyy) Date: Wed, 27 Mar 2013 13:08:28 +0300 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: , <51517E62.2080301@ubuntu.com>, , , , <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx>, , , , Message-ID: > No. 2.7 does not accept new features anymore,? It is clear now. > but you can install this backport from PyPI: https://pypi.python.org/pypi/cdecimal/2.3 This is what I was looking for. Thanks. Andriy ________________________________ > Date: Wed, 27 Mar 2013 10:54:37 +0100 > Subject: Re: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 > From: amauryfa at gmail.com > To: andrew.svetlov at gmail.com > CC: andriy.kornatskyy at live.com; python-dev at python.org > > 2013/3/27 Andrew Svetlov > > > No. _decimal is new functionality that will never be backported. > > On Wed, Mar 27, 2013 at 9:40 AM, Andriy Kornatskyy > > wrote: > > Any plans backport decimal C implementation from 3.3? > > No. 2.7 does not accept new features anymore, but you can install > this backport from PyPI: https://pypi.python.org/pypi/cdecimal/2.3 > > -- > Amaury Forgeot d'Arc From stefan at bytereef.org Wed Mar 27 12:34:35 2013 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 27 Mar 2013 12:34:35 +0100 Subject: [Python-Dev] [RELEASE] Python 2.7.4 release candidate 1 In-Reply-To: References: <51517E62.2080301@ubuntu.com> <8ADF93F8-51CA-4943-8C9A-BD21D2D474D7@ox.cx> Message-ID: <20130327113435.GA32709@sleipnir.bytereef.org> Andriy Kornatskyy wrote: > > but you can install this backport from PyPI: https://pypi.python.org/pypi/cdecimal/2.3 > > This is what I was looking for. Note that for numerical work _decimal from Python3.3 is vastly faster than cdecimal. I've added two major speedups (credit for one of them goes to Antoine), so these are the numbers for the pi benchmark: Python3.3 (_decimal): time: 0.15s Python3.3 (decimal.py): time: 17.22s ===================================== Python2.7 (cdecimal): time: 0.29s Python2.7 (decimal.py): time: 17.74s ===================================== For database work and such the numbers should be about the same. 
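For anyone who wants to sanity-check numbers like these locally, a rough sketch (not the pi benchmark itself) is simply to time the same Decimal workload against both modules; it assumes the cdecimal backport from PyPI is installed when run on Python 2.7, while on 3.3 plain "import decimal" already gives the C implementation.

    import timeit

    stmt = "D.Decimal(1) / D.Decimal(7)"
    for setup in ("import decimal as D", "import cdecimal as D"):
        try:
            t = min(timeit.repeat(stmt, setup, number=100000))
        except ImportError:
            t = None   # cdecimal not installed on this interpreter
        print(setup, "->", t)
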
Stefan Krah From vinay_sajip at yahoo.co.uk Wed Mar 27 20:38:03 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 19:38:03 +0000 (UTC) Subject: [Python-Dev] Safely importing zip files with C extensions Message-ID: > This quote is here to stop GMane complaining that I'm top-posting. Ignore. I've already posted this to distutils-sig, but thought that it might be of interest to readers here as it relates to importing C extensions ... zipimport is great, but there can be issues importing software that contains C extensions. But the new wheel format (PEP 427) may give us a better way of importing zip files containing C extensions. Since wheels are .zip files, they can sometimes be used to provide functionality without needing to be installed. But whereas .zip files contain no convention for indicating compatibility with a particular Python, wheels do contain this compatibility information. Thus, it is possible to check if a wheel can be directly imported from, and the wheel support in distlib allows you to take advantage of this using the mount() and unmount() methods. When you mount a wheel, its absolute path name is added to sys.path, allowing the Python code in it to be imported. (A DistlibException is raised if the wheel isn't compatible with the Python which calls the mount() method.) You don't need mount() just to add the wheel's name to sys.path, or to import pure-Python wheels, of course. But the mount() method goes further than just enabling Python imports - any C extensions in the wheel are also made available for import. For this to be possible, the wheel has to be built with additional metadata about extensions - a JSON file called EXTENSIONS which serialises an extension mapping dictionary. This maps extension module names to the names in the wheel of the shared libraries which implement those modules. Running unmount() on the wheel removes its absolute pathname from sys.path and makes its C extensions, if any, also unavailable for import. Wheels built with the new "distil" tool contain the EXTENSIONS metadata, so can be mounted complete with C extensions: $ distil download -d /tmp simplejson Downloading simplejson-3.1.2.tar.gz to /tmp/simplejson-3.1.2 63KB @ 73 KB/s 100 % Done: 00:00:00 Unpacking ... done. $ distil package --fo=wh -d /tmp /tmp/simplejson-3.1.2/ The following packages were built: /tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl $ python Python 2.7.2+ (default, Jul 20 2012, 22:15:08) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from distlib.wheel import Wheel >>> w = Wheel('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') >>> w.mount() >>> import simplejson._speedups >>> dir(simplejson._speedups) ['__doc__', '__file__', '__loader__', '__name__', '__package__', 'encode_basestring_ascii', 'make_encoder', 'make_scanner', 'scanstring'] >>> simplejson._speedups.__file__ '/home/vinay/.distlib/dylib-cache/simplejson/_speedups.so' >>> Does anyone see any problems with this approach to importing C extensions from zip files? Regards, Vinay Sajip From amauryfa at gmail.com Wed Mar 27 21:13:27 2013 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 27 Mar 2013 21:13:27 +0100 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: 2013/3/27 Vinay Sajip > When you mount a wheel, its absolute path name is added to > sys.path, allowing the Python code in it to be imported. 
> Better: just put the wheel path to sys.path sys.path.append('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') and let a sys.path_hook entry do the job. Such a WheelImporter could even inherit from zipimporter, plus the magic required for C extensions. It avoids the mount/nomount methods, only the usual sys.path operations are necessary from the user. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Wed Mar 27 21:23:36 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 27 Mar 2013 21:23:36 +0100 Subject: [Python-Dev] astoptimizer: static optimizer working on the AST In-Reply-To: References: Message-ID: Victor Stinner, 26.03.2013 22:56: > * what should be the name of "pyc" files? > * how to handle different configuration of astoptimizer: generate > different "pyc" files? You could use (a part of) a crypto hash of the serialised options as part of the filename. One drawback of a .pyc duplication is that when you ship code to other environments that run with different optimiser flags, those would not pick up the same .pyc files. Although, would it really be a problem if different modules that were compiled with different optimisation settings get mixed on the same machine? Are there any non semantics preserving AST modifications for a module that have an impact on other modules? I.e., is this just a development time concern when trying out different optimisations, or is this a problem for deployed code as well? Stefan From stefan_ml at behnel.de Wed Mar 27 21:34:19 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 27 Mar 2013 21:34:19 +0100 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: Vinay Sajip, 27.03.2013 20:38: > >>> w = Wheel('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') > >>> w.mount() > >>> import simplejson._speedups > >>> dir(simplejson._speedups) > ['__doc__', '__file__', '__loader__', '__name__', '__package__', > 'encode_basestring_ascii', 'make_encoder', 'make_scanner', 'scanstring'] > >>> simplejson._speedups.__file__ > '/home/vinay/.distlib/dylib-cache/simplejson/_speedups.so' I've always hated this setuptools misfeature of copying C extensions from an installed archive into a user directory, one for each user. At least during normal installation, they should be properly unpacked into normal shared library files in the file system. Whether it then makes sense to special case one-shot trial imports like the above without installation is a bit of a different question, but I don't see a compelling reason for adding complexity here. It's not really an important use case. Stefan From vinay_sajip at yahoo.co.uk Wed Mar 27 21:41:05 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 20:41:05 +0000 (UTC) Subject: [Python-Dev] Safely importing zip files with C extensions References: Message-ID: Amaury Forgeot d'Arc gmail.com> writes: > Better: just put the wheel path to sys.path? ? sys.path.append('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') > and let a sys.path_hook entry do the job. That's what the mount() actually does - adds the wheel to a registry that an import hook uses. You also need a place to check that the wheel being mounted is compatible with the Python doing the mounting - I'm not sure whether what the import hook should do if e.g. there is a compatibility problem with the wheel (e.g. is it clear that it should always raise an ImportError? Or ignore the wheel - seems wrong? 
Or do something else?) Regards, Vinay Sajip From dholth at gmail.com Wed Mar 27 21:49:59 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 27 Mar 2013 16:49:59 -0400 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: Jim Fulton is right that weird failures are a characteristic of zipped eggs, so one of the #1 requests for setuptools is how to prohibit zipping from ever happening. This is an important reason why wheel is billed as an installation format -- fewer users with pitchforks. It's very cool that it works though. Debugging is slightly easier than it was in the old days because pdb can now read the source code from the zip. An unzipped wheel as a directory with the same name as the wheel would be a more reliable solution that might be interesting to work with. It would work the same as egg unless you had important files in the .data/ (currently mostly used for console scripts and include files) directory. However it was always confusing to have more than one kind (zipped, unzipped) of egg. On Wed, Mar 27, 2013 at 4:41 PM, Vinay Sajip wrote: > Amaury Forgeot d'Arc gmail.com> writes: > > >> Better: just put the wheel path to sys.path > sys.path.append('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') >> and let a sys.path_hook entry do the job. > > That's what the mount() actually does - adds the wheel to a registry that an > import hook uses. You also need a place to check that the wheel being mounted > is compatible with the Python doing the mounting - I'm not sure whether what > the import hook should do if e.g. there is a compatibility problem with the > wheel (e.g. is it clear that it should always raise an ImportError? Or ignore > the wheel - seems wrong? Or do something else?) > > Regards, > > Vinay Sajip > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/dholth%40gmail.com From vinay_sajip at yahoo.co.uk Wed Mar 27 21:59:11 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 20:59:11 +0000 (UTC) Subject: [Python-Dev] Safely importing zip files with C extensions References: Message-ID: Stefan Behnel behnel.de> writes: > I've always hated this setuptools misfeature of copying C extensions from > an installed archive into a user directory, one for each user. At least > during normal installation, they should be properly unpacked into normal > shared library files in the file system. The user directory location is not a key part of the functionality, it could just as well be a shared location across all users. And this is an option for specific scenarios, not a general substitute for installing the wheel (which unpacks everything into FHS-style locations). A lot of people use virtual envs, which are per-user anyway. I'm not suggesting this is a good idea for system-wide deployments of software. > Whether it then makes sense to special case one-shot trial imports like the > above without installation is a bit of a different question, but I don't > see a compelling reason for adding complexity here. It's not really an > important use case. Well, my post was to elicit some comment about the usefulness of the feature, so fair enough. It doesn't seem especially complex though, unless I've missed something. 
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Mar 27 22:04:40 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 27 Mar 2013 21:04:40 +0000 (UTC) Subject: [Python-Dev] Safely importing zip files with C extensions References: Message-ID: Daniel Holth gmail.com> writes: > zipping from ever happening. This is an important reason why wheel is > billed as an installation format -- fewer users with pitchforks. It's > very cool that it works though. Debugging is slightly easier than it > was in the old days because pdb can now read the source code from the > zip. Well, it's just an experiment, and I was soliciting comments because I'm not as familiar with the issues as some others are. Distlib is still only at version 0.1.1, and the mount()/unmount() functionality is not set in stone :-) Regards, Vinay Sajip From brad.froehle at gmail.com Wed Mar 27 22:19:39 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Wed, 27 Mar 2013 14:19:39 -0700 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 1:13 PM, Amaury Forgeot d'Arc wrote: > 2013/3/27 Vinay Sajip > >> When you mount a wheel, its absolute path name is added to >> sys.path, allowing the Python code in it to be imported. >> > > Better: just put the wheel path to sys.path > sys.path.append('/tmp/simplejson-3.1.2-cp27-none-linux_x86_64.whl') > and let a sys.path_hook entry do the job. > > Such a WheelImporter could even inherit from zipimporter, plus the magic > required for C extensions. > I implemented just such a path hook ---- zipimporter plus the magic required for C extensions --- as a challenge to myself to learn more about the Python import mechanisms. See https://github.com/bfroehle/pydzipimport. Cheers, Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Mar 27 23:59:55 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 27 Mar 2013 22:59:55 +0000 Subject: [Python-Dev] Writing importers and path hooks Message-ID: On 27 March 2013 21:19, Bradley M. Froehle wrote: > I implemented just such a path hook ---- zipimporter plus the magic required > for C extensions --- as a challenge to myself to learn more about the Python > import mechanisms. > > See https://github.com/bfroehle/pydzipimport. Apologies for hijacking the thread, but it's interesting that you implemented your hook like this. I notice that you didn't use any of the importlib functionality in doing so. Was there a particular reason? I ask because a few days ago, I was writing a very similar importer, as I wanted to try a proof of concept importer based on the new importlib stuff (which is intended to make writing custom importers easier), and I really struggled to get something working. It seems to me that the importlib documentation doesn't help much for people trying to import path hooks. But it might be just me. Does anyone have an example of a simple importlib-based finder/loader? That would be a huge help for me. 
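For reference, a deliberately small sketch of the moving parts involved: a meta path finder/loader pair (the 3.3-era find_module/load_module protocol rather than a path hook, but the loader half is the same) that builds modules from an in-memory dict of source strings. The names SOURCES, DictImporter and "virtualmod" are invented for the example.

    import sys
    import types
    import importlib.abc

    SOURCES = {"virtualmod": "x = 42\n"}

    class DictImporter(importlib.abc.MetaPathFinder, importlib.abc.Loader):
        def find_module(self, fullname, path=None):
            # Finder half: claim only the modules we actually know about.
            return self if fullname in SOURCES else None

        def load_module(self, fullname):
            # Loader half: create, register, then populate the module.
            if fullname in sys.modules:
                return sys.modules[fullname]
            mod = types.ModuleType(fullname)
            mod.__file__ = "<dict loader>"
            mod.__loader__ = self
            sys.modules[fullname] = mod        # register before executing
            try:
                exec(SOURCES[fullname], mod.__dict__)
            except BaseException:
                del sys.modules[fullname]      # don't cache a broken module
                raise
            return mod

    sys.meta_path.insert(0, DictImporter())
    import virtualmod
    print(virtualmod.x)                        # -> 42
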
In return for any pointers, I'll look at putting together a doc patch to clarify how to use importlib to build your own path hooks :-) Thanks, Paul From terri at zone12.com Thu Mar 28 00:25:29 2013 From: terri at zone12.com (Terri Oda) Date: Wed, 27 Mar 2013 17:25:29 -0600 Subject: [Python-Dev] Google Summer of Code - Organization Deadline Approaching - March 29 In-Reply-To: References: Message-ID: <51537FE9.4000402@zone12.com> On 03/26/2013 01:51 PM, Brian Curtin wrote: > Just an FYI that there are under 3 days to apply to Google Summer of > Code for mentoring organizations: > http://www.google-melange.com/gsoc/homepage/google/gsoc2013. The > student application deadline is later on in May. > > If you run a project that is interested in applying under the Python > umbrella organization, contact Terri Oda at terri at zone12.com I would obviously love it if you got your ideas up before the March 29th deadline so our application will totally shine and we'll be accepted immediately in to the program. But I should note that while the Google deadline is March 29th, I'm happy to accept applications for sub-organizations and new project ideas after that deadline. The students will be descending on April 9th, so if you want to attract the very best, try to get your project ideas up near then if at all possible! Presuming we get accepted, though, I can take extra ideas almost up until the student applications start to come in, but I'd prefer them before April 15th (again, so you can attract the best students). > Is anyone here interested in leading CPython through GSOC? Anyone have > potential students to get involved, or interested in being a mentor? For those on the fence about mentoring or worried about doing a good job, I have a few experienced GSoC mentors who are currently without projects who have volunteered to help us out this year. I'm happy to pair anyone who's interested with someone who loves working with GSoC students but maybe won't be familiar with the project, so you can learn the mentoring ropes and do code reviews but have a backup mentor there to help both you and the student through the GSoC process. Please let me know ASAP if you'd like to do this so I can introduce you to the person and give them time to learn as much as they can about the projects you're hoping to run before students start arriving. Terri From pje at telecommunity.com Thu Mar 28 05:10:32 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 00:10:32 -0400 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 5:19 PM, Bradley M. Froehle wrote: > I implemented just such a path hook ---- zipimporter plus the magic required > for C extensions --- as a challenge to myself to learn more about the Python > import mechanisms. > > See https://github.com/bfroehle/pydzipimport. FYI, there appears to be a bug for Windows with packages: you're using '/__init__' in a couple places that should actually be os.sep+'__init__'. This does seem like a good way to address the issue, for those rare situations where this would be a good idea. The zipped .egg approach was originally intended for user-managed plugin directories for certain types of extensible platforms, where "download a file and stick it in the plugins directory" is a low-effort way to install plugins, without having to build a lot of specialized install capability. As Jim has pointed out, though, this doesn't generalize well to a full-blown packaging system. 
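The plugin-directory pattern described above looks roughly like the following with pkg_resources (a hedged sketch only; the "plugins" directory name is made up here).

    import pkg_resources

    # Scan a directory of dropped-in .egg files and activate what resolves.
    plugin_env = pkg_resources.Environment(["plugins"])
    distributions, errors = pkg_resources.working_set.find_plugins(plugin_env)
    for dist in distributions:
        pkg_resources.working_set.add(dist)    # plugin is now importable
    if errors:
        print("skipped (unresolvable plugins):", errors)
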
Technically, you can blame Bob Ippolito for this, since he's the one who talked me into using eggs to install Python libraries in general, not just as a plugin packaging mechanism. ;-) That being said, *unpacked* egg, er, wheels, are still a great way to meet all of the "different apps needing different versions" use cases (without needing one venv per app), and nowadays the existence of automated installer tools means that using one to install a plugin for a low-tech plugin system is not a big deal, as long as that tool supports the simple unpacked wheel scenario. So I wholeheartedly support some kind of mount/unmount or "require"-type mechanism for finding plugins. pkg_resources even has an API for handling simple dynamic plugin dependency resolution scenarios: http://peak.telecommunity.com/DevCenter/PkgResources#locating-plugins It'd be a good idea if distlib provides a similar feature, or at least the APIs upon which apps or frameworks can implement such features. (Historical note for those who weren't around back then: easy_install wasn't even an *idea* until well after eggs were created; the original idea was just that people would build plugins and libraries as eggs and manually drop them in directories, where a plugin support library would discover them and add them to sys.path as needed. And Bob and I also considered a sort of "update site" mechanism ala Eclipse, with a library to let apps fetch plugins. But as soon as eggs existed and PyPI allowed uploads, it was kind of an obvious follow-up to make an installation tool as a kind of "technology demonstration".... which promptly became a monster. The full story with all its twists and turns can also be found here: http://mail.python.org/pipermail/python-dev/2006-April/064145.html ) From pje at telecommunity.com Thu Mar 28 06:45:18 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 01:45:18 -0400 Subject: [Python-Dev] Can I introspect/reflect to get arguments exec()? In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 11:00 PM, Rocky Bernstein wrote: > Okay. But is the string is still somewhere in the CPython VM stack? (The > result of LOAD_CONST 4 above). Is there a way to pick it up from there? Maybe using C you could peek into the frame's value stack, but that's not exposed to any Python API I know of. But that still doesn't help you, because the value will be removed from the stack before exec() is actually called, which means if you go looking for it in code called from the exec (e.g. the call event itself), you aren't going to see the data. > At the point that we are stopped the exec action hasn't taken place yet. That doesn't help if you're using line-level tracing events. At the beginning of the line, the data's not on the call stack yet, and by the time you enter the frame of the code being exec'd, it'll be off the stack again. Basically, there is no way to do what you're asking for, short of replacing the built-in exec function with your own version. And it still won't help you with stepping through the source of functions that are *compiled* using an exec() or compile(), or any other way of ending up with dynamically-generated code you want to debug. (Unless you use something like DecoratorTools to generate it, that is -- DecoratorTools has some facilities for caching dynamically-generated code so that it works properly with debuggers. But that has to be done by the code doing the generation, not the debugger. 
If the code generator uses DecoratorTools' caching support, then any debugger that uses the linecache module will Just Work. It might be nice for the stdlib should have something like this, but you could also potentially fake it by replacing the builtin eval, exec, compile, etc. functions w/versions that cache the source.) From trent at snakebite.org Thu Mar 28 07:26:51 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 28 Mar 2013 02:26:51 -0400 Subject: [Python-Dev] Post-PyCon updates to PyParallel Message-ID: <20130328062651.GA64080@snakebite.org> [ python-dev: I've set up a new list for pyparallel discussions: https://lists.snakebite.net/mailman/listinfo/pyparallel. This e-mail will be the last I'll send to python-dev@ regarding the on-going pyparallel work; please drop python-dev@ from the CC and just send to pyparallel at lists.snakebite.net -- I'll stay on top of the posts-from-unsubscribed-users moderation for those that want to reply to this e-mail but not subscribe. ] Hi folks, Wanted to give a quick update on the parallel work both during and after PyCon. During the language summit when I presented the slides I uploaded to speakerdeck.com, the majority of questions from other developers revolved around the big issues like data integrity and what happens when parallel objects interact with main-thread objects and vice-versa. So, during the sprints, I explored putting guards in place to throw an exception if we detect that a user has assigned a parallel object to a non-protected main-thread object. (I describe the concept of 'protection' in my follow up posts to python-dev last week: http://mail.python.org/pipermail/python-dev/2013-March/124690.html. Basically, protecting a main-thread object allows code like this to work without crashing: d = async.dict() def foo(): # async.rdtsc() is a helper method # that basically wraps the result of # the assembly RDTSC (read time- # stamp counter) instruction into a # PyLong object. So, it's handy when # I need to test the very functionality # being demonstrated here (creating # an object within a parallel context # and persisting it elsewhere). d['foo'] = async.rdtsc() def bar(): d['bar'] = async.rdtsc() async.submit_work(foo) async.submit_work(bar) ) It was actually pretty easy, far easier than I expected. It was achieved via Px_CHECK_PROTECTION(): https://bitbucket.org/tpn/pyparallel/commits/f3fe082668c6f3f699db990f046291ff66b1b467#LInclude/object.hT1072 Various new tests related to the protection functionality: https://bitbucket.org/tpn/pyparallel/commits/f3fe082668c6f3f699db990f046291ff66b1b467#LLib/async/test/test_primitives.pyT58 The type of changes I had to make to other parts of CPython to perform the protection checks: https://bitbucket.org/tpn/pyparallel/commits/f3fe082668c6f3f699db990f046291ff66b1b467#LObjects/abstract.cT170 That was all working fine... until I started looking at adding support for lists (i.e. appending a parallel thread object to a protected, main-thread list). The problem is that appending to a list will often involve a list resize, which is done via PyMem_REALLOC() and some custom fiddling. That would mean if a parallel thread attempts to append to a list and it needs resizing, all the newly realloc'd memory would be allocated from the parallel context's heap. Now, this heap would stick around as long as the parallel objects have a refcount > 0. 
However, as soon as the last parallel object's refcount hits 0, the entire context will be scheduled for the cleanup/release/free dance, which will eventually blow away the entire heap and all the memory allocated against that heap... which means all the **ob_item stuff that was reallocated as part of the list resize. Not particularly desirable :-) As I was playing around with ways to potentially pre-allocate lists, it occurred to me that dicts would be affected in the exact same way; I just hadn't run into it yet because my unit tests only ever assigned a few (<5) objects to the protected dicts. Once the threshold gets reached (10?), a "dict resize" would take place, which would involve lots of PyMem_REALLOCs, and we get into the exact same situation mentioned above. So, at that point, I concluded that whole async protection stuff was not a viable long term solution. (In fact, the reason I first added it was simply to have an easy way to test things in unit tests.) The new solution I came up with: new thread-safe, interlocked data types that are *specifically* designed for this exact use case; transferring results from computation in a parallel thread back to a main thread 'container' object. First up is a new list type: xlist() (PyXListObject/PyXList_Type). I've just committed the work-in-progress stuff I've been able to hack out whilst traveling the past few days: https://bitbucket.org/tpn/pyparallel/commits/5b662eba4efe83e94d31bd9db4520a779aea612a It's not finished, and I'm pretty sure it doesn't even compile yet, but the idea is something like this: results = xlist() def worker1(input): # do work result = useful_work1() results.push(result) def worker2(input): # do work result = useful_work2() results.push(result) data = data_to_process() async.submit_work(worker1, data[:len(data)]) async.submit_work(worker2, data[len(data):]) async.run() for result in results: print(result) The big change is what happens during xlist.push(): https://bitbucket.org/tpn/pyparallel/commits/5b662eba4efe83e94d31bd9db4520a779aea612a#LPython/pyparallel.cT3844 +PyObject * +xlist_push(PyObject *obj, PyObject *src) +{ + PyXListObject *xlist = (PyXListObject *)obj; + assert(src); + + if (!Py_PXCTX) + PxList_PushObject(xlist->head, src); + else { + PyObject *dst; + _PyParallel_SetHeapOverride(xlist->heap_handle); + dst = PyObject_Clone(src, "objects of type %s cannot " + "be pushed to xlists"); + _PyParallel_RemoveHeapOverride(); + if (!dst) + return NULL; + PxList_PushObject(xlist->head, dst); + } + + /* + if (Px_CV_WAITERS(xlist)) + ConditionVariableWakeOne(&(xlist->cv)); + */ + + Py_RETURN_NONE; } Note the heap override and PyObject_Clone(), which currently looks like this: +PyObject * +PyObject_Clone(PyObject *src, const char *errmsg) +{ + int valid_type; + PyObject *dst; + PyTypeObject *tp; + + tp = Py_TYPE(src); + + valid_type = ( + PyBytes_CheckExact(src) || + PyByteArray_CheckExact(src) || + PyUnicode_CheckExact(src) || + PyLong_CheckExact(src) || + PyFloat_CheckExact(src) + ); + + if (!valid_type) { + PyErr_Format(PyExc_ValueError, errmsg, tp->tp_name); + return NULL; + } + + if (PyLong_CheckExact(src)) { + + } else if (PyFloat_CheckExact(src)) { + + } else if (PyUnicode_CheckExact(src)) { + + } else { + assert(0); + } + + +} Initially, I just want to get support working for simple types that are easy to clone. Any sort of GC/container types will obviously take a lot more work as they need to be deep-copied. 
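(If it helps to picture those semantics without any of the PyParallel machinery: the snippet below is not the actual API, just a plain-CPython analogy of the "clone on push" idea, using a lock and copy.deepcopy where PyParallel uses a per-container heap and PyObject_Clone(). The point is only that the container keeps its own private copy, so whatever happens to the pusher's object afterwards can't invalidate what a consumer reads later.)

import copy
import threading

class CloningList:
    # Toy stand-in for xlist(): every push stores a private deep copy.
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def push(self, obj):
        with self._lock:
            self._items.append(copy.deepcopy(obj))

    def snapshot(self):
        with self._lock:
            return list(self._items)

results = CloningList()
results.push({'worker': 1, 'value': 42})   # the dict is copied, not shared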
You might also note the Px_CV_WAITERS() bit; these interlocked lists could quite easily function as producer/consumer queues, so, maybe you could do something like this: queue = xlist() def consumer(input): # do work ... def producer(): for i in xrange(100): queue.push(i) async.submit_queue(queue, consumer) async.submit_work(producer) Oh, forgot to mention the heap-override specifics: each xlist() gets its own heap handle -- when the "pushing" is done and the parallel object needs to be copied, the new memory is allocated against the xlist's heap. That heap will stick around until the xlist's refcnt hits 0, then everything will be blown away in one fell swoop. (Which means I'll need to tweak the memory/refcnt intercepts to handle this new concept -- like I had to do to support the notion of persisted contexts. Not a big deal.) I really like this approach; much more so than the persisted context stuff and the even-more-convoluted promotion stuff (yet to be written). Encapsulating all the memory associated with parallel to main-thread object transitions in the very object that is used to effect the transition just feels right. So, that means there are three main "memory alloc override"-type modes currently: - Normal. (Main-thread stuff, ref counting, PyMalloc stuff.) - Purely parallel. (Context-specific heap stuff, very fast.) - Parallel->main-thread transitions. (The stuff above.) (Or rather, there will be, once I finish this xlist stuff. That'll allow me to deprecate the TLS override stuff and the Context persistence stuff, both of which were nice experiments but fizzled out in practice.) ....the Px_CHECK_PROTECTION() work was definitely useful though and will need to be expanded to all objects. This will allow us to raise an exception if someone attempts to assign a parallel object to a normal main thread object (instead of one of the approved interlocked/parallel objects (like xlist)). Regards, Trent. From trent at snakebite.org Thu Mar 28 07:32:56 2013 From: trent at snakebite.org (Trent Nelson) Date: Thu, 28 Mar 2013 02:32:56 -0400 Subject: [Python-Dev] Post-PyCon updates to PyParallel In-Reply-To: <20130328062651.GA64080@snakebite.org> References: <20130328062651.GA64080@snakebite.org> Message-ID: <20130328063256.GD64080@snakebite.org> On Wed, Mar 27, 2013 at 11:26:51PM -0700, Trent Nelson wrote: > [ python-dev: I've set up a new list for pyparallel discussions: > https://lists.snakebite.net/mailman/listinfo/pyparallel. This > e-mail will be the last I'll send to python-dev@ regarding the > on-going pyparallel work; please drop python-dev@ from the CC > and just send to pyparallel at lists.snakebite.net -- I'll stay on > top of the posts-from-unsubscribed-users moderation for those that > want to reply to this e-mail but not subscribe. ] Gah, wrong e-mail address, it's pyparallel at snakebite.net. Trent. From rocky at gnu.org Thu Mar 28 11:43:10 2013 From: rocky at gnu.org (Rocky Bernstein) Date: Thu, 28 Mar 2013 06:43:10 -0400 Subject: [Python-Dev] Can I introspect/reflect to get arguments exec()? In-Reply-To: References: Message-ID: Thank you for your very thoughtful and detailed explanation of what is going on and for your considerations as to how to get this done (as opposed to why it *can't *be done). It looks like DecoratorTools or more likely a customized version of it is the way to go! (The important info is above. The rest are just some geeky details) You see, there are some places where additional care may be needed in my setting. 
The debuggers I write sometimes don't use just the getline module but also use my own pyficache module. I want to also cache things like file stat() info, provide a SHA1 for the text of the file, and provide colorized syntax-highlighted versions of the text when desired. But since I control pyficache, I can mirror the changes made to getline. Of course the debugger uses sys.settrace too, so the evil-ness of that is definitely not a concern. But possibly I need to make sure that since the DecoratorTools and the debugger both hook into trace hooks they play nice together and fire in the right order. And for that I created another module called tracer() which takes into account that one might want to specify priorities in the chain hook order, and that one might want to filter out (i.e. ignore) certain calls to the hook function for specific hooks. It may be a while before I seriously get to this, but again, it is good to have in mind an approach to take. So thanks again. On Thu, Mar 28, 2013 at 1:45 AM, PJ Eby wrote: > On Tue, Mar 26, 2013 at 11:00 PM, Rocky Bernstein wrote: > > Okay. But is the string is still somewhere in the CPython VM stack? (The > > result of LOAD_CONST 4 above). Is there a way to pick it up from there? > > Maybe using C you could peek into the frame's value stack, but that's > not exposed to any Python API I know of. But that still doesn't help > you, because the value will be removed from the stack before exec() is > actually called, which means if you go looking for it in code called > from the exec (e.g. the call event itself), you aren't going to see > the data. > > > At the point that we are stopped the exec action hasn't taken place yet. > > That doesn't help if you're using line-level tracing events. At the > beginning of the line, the data's not on the call stack yet, and by > the time you enter the frame of the code being exec'd, it'll be off > the stack again. > > Basically, there is no way to do what you're asking for, short of > replacing the built-in exec function with your own version. And it > still won't help you with stepping through the source of functions > that are *compiled* using an exec() or compile(), or any other way of > ending up with dynamically-generated code you want to debug. > > (Unless you use something like DecoratorTools to generate it, that is > -- DecoratorTools has some facilities for caching > dynamically-generated code so that it works properly with debuggers. > But that has to be done by the code doing the generation, not the > debugger. If the code generator uses DecoratorTools' caching support, > then any debugger that uses the linecache module will Just Work. It > might be nice for the stdlib should have something like this, but you > could also potentially fake it by replacing the builtin eval, exec, > compile, etc. functions w/versions that cache the source.) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xdegaye at gmail.com Thu Mar 28 12:58:17 2013 From: xdegaye at gmail.com (Xavier de Gaye) Date: Thu, 28 Mar 2013 12:58:17 +0100 Subject: [Python-Dev] Can we triple quoted string as a comment? In-Reply-To: <20130326132851.8D4AB250069@webabinitio.net> References: <7B40444C-FA79-4776-A356-7FC76A5CADC7@gmail.com> <20130326132851.8D4AB250069@webabinitio.net> Message-ID: On Tue, Mar 26, 2013 at 2:28 PM, R.
David Murray wrote: > On Mon, 25 Mar 2013 18:16:47 -0700, Raymond Hettinger wrote: >> If you're editing with Emacs, it is really easy to reflow paragraphs >> and to insert or remove multiline comments each prefixed with #. >> But with other editors, it can be a PITA and a multiline string is >> the easiest to maintain and works well when cutting-and-pasting >> the comments from somewhere else. > > Just FYI it is also very easy in vim: gq plus whatever movement prefix > suits the situation. And to comment out multiple lines in vim, each prefixed with #, see ":help v_b_I" and ":help v_b_I_example". Xavier From brett at python.org Thu Mar 28 14:42:08 2013 From: brett at python.org (Brett Cannon) Date: Thu, 28 Mar 2013 09:42:08 -0400 Subject: [Python-Dev] Writing importers and path hooks In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 6:59 PM, Paul Moore wrote: > On 27 March 2013 21:19, Bradley M. Froehle wrote: > > I implemented just such a path hook ---- zipimporter plus the magic > required > > for C extensions --- as a challenge to myself to learn more about the > Python > > import mechanisms. > > > > See https://github.com/bfroehle/pydzipimport. > > Apologies for hijacking the thread, but it's interesting that you > implemented your hook like this. I notice that you didn't use any of > the importlib functionality in doing so. Was there a particular > reason? I ask because a few days ago, I was writing a very similar > importer, as I wanted to try a proof of concept importer based on the > new importlib stuff (which is intended to make writing custom > importers easier), and I really struggled to get something working. > Struggling how? With the finder? The loader? What exactly were you trying to accomplish and how were you deviating from the standard import system? > > It seems to me that the importlib documentation doesn't help much for > people trying to import path hooks. There is a bug to clarify the docs to have more geared towards writing new importers instead of just documenting what's available: http://bugs.python.org/issue15867 > But it might be just me. Does > anyone have an example of a simple importlib-based finder/loader? Define simple. =) I would argue importlib itself is easy enough to read. > That > would be a huge help for me. In return for any pointers, I'll look at > putting together a doc patch to clarify how to use importlib to build > your own path hooks :-) > Do you specifically mean the path hook aspect or the whole package of hook, finder, and loader? -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Thu Mar 28 15:44:08 2013 From: theller at ctypes.org (Thomas Heller) Date: Thu, 28 Mar 2013 15:44:08 +0100 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: Am 27.03.2013 20:38, schrieb Vinay Sajip: >> This quote is here to stop GMane complaining that I'm top-posting. Ignore. > > I've already posted this to distutils-sig, but thought that it might be of > interest to readers here as it relates to importing C extensions ... > > zipimport is great, but there can be issues importing software that contains > C extensions. But the new wheel format (PEP 427) may give us a better way of > importing zip files containing C extensions. Since wheels are .zip files, they > can sometimes be used to provide functionality without needing to be installed. 
> But whereas .zip files contain no convention for indicating compatibility with > a particular Python, wheels do contain this compatibility information. Thus, it > is possible to check if a wheel can be directly imported from, and the wheel > support in distlib allows you to take advantage of this using the mount() and > unmount() methods. When you mount a wheel, its absolute path name is added to > sys.path, allowing the Python code in it to be imported. (A DistlibException is > raised if the wheel isn't compatible with the Python which calls the mount() > method.) The zip-file itself could support importing compiled extensions when it contains a python-wrapper module that unpacks the .so/.dll file somewhere, and finally calls imp.load_dynamic() to import it and replace itself. Thomas From p.f.moore at gmail.com Thu Mar 28 16:38:19 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 15:38:19 +0000 Subject: [Python-Dev] Writing importers and path hooks In-Reply-To: References: Message-ID: On 28 March 2013 13:42, Brett Cannon wrote: >> importer, as I wanted to try a proof of concept importer based on the >> new importlib stuff (which is intended to make writing custom >> importers easier), and I really struggled to get something working. > > Struggling how? With the finder? The loader? What exactly were you trying to > accomplish and how were you deviating from the standard import system? What I was trying to do was to write a path hook that would allow me to add a string to sys.path which contained a base-64 encoded zipfile (plus some sort of marker so it could be distinguished from a normal path entry) and as a result the contents of that embedded zip file would be available as if I'd added an actual zip file with that content to sys.path. I got in a complete mess as I tried to strip out the (essentially non-interesting) zipfile handling to get to a dummy "do nothing, everything is valid" type of example. But I don't think I would have fared much better if I'd stuck to the original full requirement. >> It seems to me that the importlib documentation doesn't help much for >> people trying to import path hooks. > > There is a bug to clarify the docs to have more geared towards writing new > importers instead of just documenting what's available: > http://bugs.python.org/issue15867 Thanks. I'll keep an eye on that. >> But it might be just me. Does >> anyone have an example of a simple importlib-based finder/loader? > > Define simple. =) I would argue importlib itself is easy enough to read. :-) Fair point. I guess importlib is not *that* hard to read, but the only case that implements packages in the filesystem one, and that also deals with C extensions and other complexities that I don't have a need for. I'll try again to have a deeper look at it, but I didn't find it easy to extract the essentials when I looked before. >> That >> would be a huge help for me. In return for any pointers, I'll look at >> putting together a doc patch to clarify how to use importlib to build >> your own path hooks :-) > > Do you specifically mean the path hook aspect or the whole package of hook, > finder, and loader? OK, after some more digging, it looks like I misunderstood the process somewhat. Writing a loader that inherits from *both* FileLoader and SourceLoader, and then implementing get_data (and module_repr - why do I need that, couldn't the ABC offer a default implementation?) does the job for that. But the finder confuses me. 
I assume I want a PathEntryFinder and hence I should implement find_loader(). The documentation on what I need to return from there is very sparse... In the end I worked out that for a package, I need to return (MyLoader(modulename, 'foo/__init__.py'), ['foo']) (here, "foo" is my dummy marker for my example). In essence, PathEntryFinder really has to implement some form of virtual filesystem mount point, and preserve the standard filesystem semantics of modules having a filename of .../__init__.py. So I managed to work out what was needed in the end, but it was a lot harder than I'd expected. On reflection, getting the finder semantics right (and in particular the path entry finder semantics) was the hard bit. I'm now 100% sure that some cookbook examples would help a lot. I'll see what I can do. Thanks, Paul From brett at python.org Thu Mar 28 17:08:29 2013 From: brett at python.org (Brett Cannon) Date: Thu, 28 Mar 2013 12:08:29 -0400 Subject: [Python-Dev] Writing importers and path hooks In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 11:38 AM, Paul Moore wrote: > On 28 March 2013 13:42, Brett Cannon wrote: > >> importer, as I wanted to try a proof of concept importer based on the > >> new importlib stuff (which is intended to make writing custom > >> importers easier), and I really struggled to get something working. > > > > Struggling how? With the finder? The loader? What exactly were you > trying to > > accomplish and how were you deviating from the standard import system? > > What I was trying to do was to write a path hook that would allow me > to add a string to sys.path which contained a base-64 encoded zipfile > (plus some sort of marker so it could be distinguished from a normal > path entry) and as a result the contents of that embedded zip file > would be available as if I'd added an actual zip file with that > content to sys.path. > > I got in a complete mess as I tried to strip out the (essentially > non-interesting) zipfile handling to get to a dummy "do nothing, > everything is valid" type of example. But I don't think I would have > fared much better if I'd stuck to the original full requirement. > > >> It seems to me that the importlib documentation doesn't help much for > >> people trying to import path hooks. > > > > There is a bug to clarify the docs to have more geared towards writing > new > > importers instead of just documenting what's available: > > http://bugs.python.org/issue15867 > > Thanks. I'll keep an eye on that. > > >> But it might be just me. Does > >> anyone have an example of a simple importlib-based finder/loader? > > > > Define simple. =) I would argue importlib itself is easy enough to read. > > :-) Fair point. I guess importlib is not *that* hard to read, but the > only case that implements packages in the filesystem one, and that > also deals with C extensions and other complexities that I don't have > a need for. I'll try again to have a deeper look at it, but I didn't > find it easy to extract the essentials when I looked before. > > >> That > >> would be a huge help for me. In return for any pointers, I'll look at > >> putting together a doc patch to clarify how to use importlib to build > >> your own path hooks :-) > > > > Do you specifically mean the path hook aspect or the whole package of > hook, > > finder, and loader? > > OK, after some more digging, it looks like I misunderstood the process > somewhat. 
Writing a loader that inherits from *both* FileLoader and > SourceLoader, You only need SourceLoader since you are dealing with Python source. You don't need FileLoader since you are not reading from disk but an in-memory zipfile. > and then implementing get_data (and module_repr - why do > I need that, couldn't the ABC offer a default implementation?) http://bugs.python.org/issue17093 and http://bugs.python.org/issue17566 > does > the job for that. > You should be implementing get_data, get_filename, and path_stats for SourceLoader. > > But the finder confuses me. I assume I want a PathEntryFinder and > hence I should implement find_loader(). Yes since you are pulling from sys.path. > The documentation on what I > need to return from there is very sparse... In the end I worked out > that for a package, I need to return (MyLoader(modulename, > 'foo/__init__.py'), ['foo']) (here, "foo" is my dummy marker for my > example). The second argument should just be None: "An empty list can be used for portion to signify the loader is not part of a [namespace] package". Unfortunately a key word is missing in that sentence. http://bugs.python.org/issue17567 > In essence, PathEntryFinder really has to implement some > form of virtual filesystem mount point, and preserve the standard > filesystem semantics of modules having a filename of .../__init__.py. > Well, if your zip file decided to create itself with a different file extension then it wouldn't be required, but then other people's code might break if they don't respect module abstractions (i.e. looking at __package__/__name__ or __path__ to see if something is a package). > > So I managed to work out what was needed in the end, but it was a lot > harder than I'd expected. On reflection, getting the finder semantics > right (and in particular the path entry finder semantics) was the hard > bit. > Yep, that bit has had the least API tweaks as most people don't muck with finders but with loaders. > > I'm now 100% sure that some cookbook examples would help a lot. I'll > see what I can do. > I plan on writing a pure Python zip importer for Python 3.4 which should be fairly minimal and work out as a good example chunk of code. And no one need bother writing it as I'm going to do it myself regardless to make sure I plug any missing holes in the API. If you really want something to try for fun go for a sqlite3-backed setup (don't see it going in the stdlib but it would be a project to have). -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Mar 28 17:09:26 2013 From: brett at python.org (Brett Cannon) Date: Thu, 28 Mar 2013 12:09:26 -0400 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 10:44 AM, Thomas Heller wrote: > Am 27.03.2013 20:38, schrieb Vinay Sajip: > > This quote is here to stop GMane complaining that I'm top-posting. Ignore. >>> >> >> I've already posted this to distutils-sig, but thought that it might be of >> interest to readers here as it relates to importing C extensions ... >> >> zipimport is great, but there can be issues importing software that >> contains >> C extensions. But the new wheel format (PEP 427) may give us a better way >> of >> importing zip files containing C extensions. Since wheels are .zip files, >> they >> can sometimes be used to provide functionality without needing to be >> installed. 
>> But whereas .zip files contain no convention for indicating compatibility >> with >> a particular Python, wheels do contain this compatibility information. >> Thus, it >> is possible to check if a wheel can be directly imported from, and the >> wheel >> support in distlib allows you to take advantage of this using the mount() >> and >> unmount() methods. When you mount a wheel, its absolute path name is >> added to >> sys.path, allowing the Python code in it to be imported. (A >> DistlibException is >> raised if the wheel isn't compatible with the Python which calls the >> mount() >> method.) >> > > The zip-file itself could support importing compiled extensions when it > contains a python-wrapper module that unpacks the .so/.dll file somewhere, > and finally calls imp.load_dynamic() to import it and replace itself. Which must be done carefully to prevent a security issue. It shouldn't be unzipped anywhere but into a directory only writable by the process. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Mar 28 17:33:23 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 28 Mar 2013 16:33:23 +0000 Subject: [Python-Dev] Writing importers and path hooks In-Reply-To: References: Message-ID: On 28 March 2013 16:08, Brett Cannon wrote: > You only need SourceLoader since you are dealing with Python source. You > don't need FileLoader since you are not reading from disk but an in-memory > zipfile. > > You should be implementing get_data, get_filename, and path_stats for > SourceLoader. OK, cool. That helps a lot. The biggest gap here is that I don't think that anywhere has a good explanation of the required semantics of get_filename - particularly where we're not actually dealing with real filenames. My initial stab at this would be: A module name is a dot-separated list of parts. A filename is an arbitrary token that can be used with get_data to get the module content. However, the following rules should be followed: - Filenames should be made up of parts separated by the OS path separator. - For packages, the final section of the filename *must* be __init__.py if the standard package detection is being used. - The initial part of the filename needs to match your path entry if submodule lookups are going to work sanely In practice, you need to implement filenames as if your finder is managing a virtual filesystem mounted at your sys.path entry, with module->filename semantics being the usual subdirectory layout. And packages have a basename of __init__.py. I'd like to know how to implement packages without the artificial __init__.py (something like a sqlite database can attach content and an "is_package" flag to the same entry). But that's advanced usage, and I can probably hack around until I work out how to do that now. >> The documentation on what I >> need to return from there is very sparse... In the end I worked out >> that for a package, I need to return (MyLoader(modulename, >> 'foo/__init__.py'), ['foo']) (here, "foo" is my dummy marker for my >> example). > > The second argument should just be None: "An empty list can be used for > portion to signify the loader is not part of a [namespace] package". > Unfortunately a key word is missing in that sentence. > http://bugs.python.org/issue17567 Ha. Yes, that makes a lot of difference :-) Did you mean None or [], by the way? 
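Pulling the pieces above together into one toy example may make the contract easier to see. Everything here is made up purely for illustration (the SOURCES dict, the '<in-memory>' marker, the class names), it assumes Python 3.3's importlib.abc, and it uses the empty-list form for the second element of find_loader()'s return value, which is what the thread settles on a bit further down:

import sys
import importlib.abc

# Toy "storage": module name -> (is_package, source text).
SOURCES = {
    'demo':     (True,  ''),           # a package with an empty __init__
    'demo.mod': (False, 'X = 42\n'),
}
PATH_MARKER = '<in-memory>'            # the fake sys.path entry

def _virtual_filename(fullname):
    # Follow the '__init__.py' naming convention described above so that
    # SourceLoader's default is_package()/__path__ handling just works.
    is_pkg, _ = SOURCES[fullname]
    tail = '/__init__.py' if is_pkg else '.py'
    return PATH_MARKER + '/' + fullname.replace('.', '/') + tail

class InMemoryLoader(importlib.abc.SourceLoader):
    # get_filename() and get_data() are the required pieces; path_stats()
    # is added per the advice above, with a dummy mtime since there is no
    # real file behind the source.
    def get_filename(self, fullname):
        return _virtual_filename(fullname)

    def get_data(self, path):
        for name, (_, source) in SOURCES.items():
            if _virtual_filename(name) == path:
                return source.encode('utf-8')
        # Also covers the probe for a cached .pyc: raising IOError makes
        # the base class fall back to compiling the source.
        raise IOError('no such virtual file: %r' % path)

    def path_stats(self, path):
        return {'mtime': 0}

class InMemoryFinder(importlib.abc.PathEntryFinder):
    def find_loader(self, fullname):
        if fullname in SOURCES:
            # The second element lists namespace-package "portions"; an
            # empty list means there are none and the loader handles it.
            return InMemoryLoader(), []
        return None, []

def hook(entry):
    # A package's __path__ entry looks like '<in-memory>/demo', so accept
    # anything under the marker and raise ImportError for everything else.
    if not entry.startswith(PATH_MARKER):
        raise ImportError('not an in-memory path entry')
    return InMemoryFinder()

sys.path_hooks.append(hook)
sys.path.append(PATH_MARKER)

import demo.mod
print(demo.mod.X)   # 42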
>> In essence, PathEntryFinder really has to implement some >> form of virtual filesystem mount point, and preserve the standard >> filesystem semantics of modules having a filename of .../__init__.py. > > Well, if your zip file decided to create itself with a different file > extension then it wouldn't be required, but then other people's code might > break if they don't respect module abstractions (i.e. looking at > __package__/__name__ or __path__ to see if something is a package). I'm not quite sure what you mean by this, but I take your point about making sure to break people's expectations as little as possible... >> So I managed to work out what was needed in the end, but it was a lot >> harder than I'd expected. On reflection, getting the finder semantics >> right (and in particular the path entry finder semantics) was the hard >> bit. > > Yep, that bit has had the least API tweaks as most people don't muck with > finders but with loaders. Hmm. I'm not sure how you can ever write a loader without needing to write an associated finder. The existing finders wouldn't return your loader, surely? >> I'm now 100% sure that some cookbook examples would help a lot. I'll >> see what I can do. > > I plan on writing a pure Python zip importer for Python 3.4 which should be > fairly minimal and work out as a good example chunk of code. And no one > need bother writing it as I'm going to do it myself regardless to make sure > I plug any missing holes in the API. If you really want something to try for > fun go for a sqlite3-backed setup (don't see it going in the stdlib but it > would be a project to have). I'm pretty sure I'll write a zip importer first - it feels like one of those essential but largely useless exercises that people have to start with - a bit like scales on the piano :-) But I'd be interested in trying a sqlite importer as well. I might well see how I go with that. Thanks for the help with this. Paul From christian at python.org Thu Mar 28 18:12:05 2013 From: christian at python.org (Christian Heimes) Date: Thu, 28 Mar 2013 18:12:05 +0100 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: <515479E5.1090805@python.org> Am 28.03.2013 17:09, schrieb Brett Cannon: > Which must be done carefully to prevent a security issue. It shouldn't > be unzipped anywhere but into a directory only writable by the process. Cleanup is going to be tricky or even impossible. Windows locks loaded DLLs and therefore prevents their removal. It's possible to unload DLLs but I don't know the implications. From dan at woz.io Thu Mar 28 18:16:47 2013 From: dan at woz.io (Daniel Wozniak) Date: Thu, 28 Mar 2013 10:16:47 -0700 Subject: [Python-Dev] noob contributions to unit tests In-Reply-To: <20130327022422.D7ACA250BCA@webabinitio.net> References: <20130327022422.D7ACA250BCA@webabinitio.net> Message-ID: <51547AFF.3070906@woz.io> Sean, During the PyCon sprints I was helping work on unittests in urllib. I think as it stands right now urllib/error.py and urllib/parse.py are at 100% line coverage. I have some additions to urllib/request.py which I have yet to submit a patch for (anything above line 700 is covered thus far) and I noticed there is a large chunk of code there which has line coverage when running tests on OSX. There is a note to refactor that to run on all platforms since there is nothing OSX specific in those tests. 
I believe if those tests were running on all platforms it would drastically increase line coverage for request.py, assuming your not on OSX of coarse. I have not looked at response.py or robotparser.py yet. Just wanted to give you a little brain dump in case it can save you some time. ~Daniel From brett at python.org Thu Mar 28 18:39:03 2013 From: brett at python.org (Brett Cannon) Date: Thu, 28 Mar 2013 13:39:03 -0400 Subject: [Python-Dev] Writing importers and path hooks In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 12:33 PM, Paul Moore wrote: > On 28 March 2013 16:08, Brett Cannon wrote: > > You only need SourceLoader since you are dealing with Python source. You > > don't need FileLoader since you are not reading from disk but an > in-memory > > zipfile. > > > > You should be implementing get_data, get_filename, and path_stats for > > SourceLoader. > > OK, cool. That helps a lot. > > The biggest gap here is that I don't think that anywhere has a good > explanation of the required semantics of get_filename - particularly > where we're not actually dealing with real filenames. It's because there aren't any. =) This is the first time alternative storage mechanisms are really easily viable without massive amounts of work, so no one has figured this out. The real question is how code out in the wild would react if you did something like /path/to/sqlite3:pkg.mod which is very much not a file path. > My initial stab > at this would be: > > A module name is a dot-separated list of parts. > A filename is an arbitrary token that can be used with get_data to get > the module content. However, the following rules should be followed: > - Filenames should be made up of parts separated by the OS path separator. > And why is that? A database doesn't need those separators as the module name would just be the primary key. > - For packages, the final section of the filename *must* be > __init__.py if the standard package detection is being used. > Once again, why? A column in a database that is nothing more than a package flag would solve this as well, negating the need for this. The whole point of is_package() on loaders is to get away from this reliance on __file__ having any meaning beyond "this is the string that represents where this module's code was loaded from". > - The initial part of the filename needs to match your path entry if > submodule lookups are going to work sanely > When applicable that's fine. > > In practice, you need to implement filenames as if your finder is > managing a virtual filesystem mounted at your sys.path entry, with > module->filename semantics being the usual subdirectory layout. And > packages have a basename of __init__.py. > That's one way of doing it, but it does very much tie imports to files and it doesn't generalize the concept to places where file paths simply do not need to apply. > > I'd like to know how to implement packages without the artificial > __init__.py (something like a sqlite database can attach content and > an "is_package" flag to the same entry). But that's advanced usage, > and I can probably hack around until I work out how to do that now. > Define is_package(). I personally want to change the API somehow so you ask for what __path__ should be set to. Unfortunately without going down the "False means not a package, everything else means it is and what is returned should be set on __path__" is a bit hairy and not backwards-compatible unless you require a list that always evaluates to True for packages. 
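To make the storage-flag idea concrete: relative to the in-memory sketch earlier in the thread, only the loader has to change, and nothing in it ever has to spell '__init__.py'. (DB is a stand-in for whatever backend is in use, e.g. a sqlite3 table with an is_package column; the finder/path-hook side looks just like the earlier sketch, only pointed at DB instead of SOURCES and with the marker string adjusted.)

import importlib.abc

DB = {
    'demo':     {'is_package': True,  'source': ''},
    'demo.mod': {'is_package': False, 'source': 'X = 42\n'},
}

class FlaggedLoader(importlib.abc.SourceLoader):
    def is_package(self, fullname):
        # The package flag comes straight from the storage layer.
        return DB[fullname]['is_package']

    def get_filename(self, fullname):
        # Any unique token works; it only has to round-trip through
        # get_data() and give packages a sensible __path__ prefix.
        return '<db>/' + fullname

    def get_data(self, path):
        try:
            return DB[path[len('<db>/'):]]['source'].encode('utf-8')
        except KeyError:
            raise IOError('no such row: %r' % path)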
> > >> The documentation on what I > >> need to return from there is very sparse... In the end I worked out > >> that for a package, I need to return (MyLoader(modulename, > >> 'foo/__init__.py'), ['foo']) (here, "foo" is my dummy marker for my > >> example). > > > > The second argument should just be None: "An empty list can be used for > > portion to signify the loader is not part of a [namespace] package". > > Unfortunately a key word is missing in that sentence. > > http://bugs.python.org/issue17567 > > Ha. Yes, that makes a lot of difference :-) Did you mean None or [], by > the way? > Empty list. You can check the code to see if it would work with None, but a list is expected to be used so an empty list is more consistent and still false. > > >> In essence, PathEntryFinder really has to implement some > >> form of virtual filesystem mount point, and preserve the standard > >> filesystem semantics of modules having a filename of .../__init__.py. > > > > Well, if your zip file decided to create itself with a different file > > extension then it wouldn't be required, but then other people's code > might > > break if they don't respect module abstractions (i.e. looking at > > __package__/__name__ or __path__ to see if something is a package). > > I'm not quite sure what you mean by this, but I take your point about > making sure to break people's expectations as little as possible... > To tell if a module is a package, you should do either ``if mod.__name__ == mod.__package__`` or ``if hasattr(mod, '__path__')``. > > >> So I managed to work out what was needed in the end, but it was a lot > >> harder than I'd expected. On reflection, getting the finder semantics > >> right (and in particular the path entry finder semantics) was the hard > >> bit. > > > > Yep, that bit has had the least API tweaks as most people don't muck with > > finders but with loaders. > > Hmm. I'm not sure how you can ever write a loader without needing to > write an associated finder. The existing finders wouldn't return your > loader, surely? > If you are not changing the storage mechanism you don't need a new finder; what importlib provides works fine. So if you are, for instance, only providing a loader which does an AST optimization pass you only need a new loader. Or if you use a DSL that you compile into Python code then you only need a new loader. > > >> I'm now 100% sure that some cookbook examples would help a lot. I'll > >> see what I can do. > > > > I plan on writing a pure Python zip importer for Python 3.4 which should > be > > fairly minimal and work out as a good example chunk of code. And no one > > need bother writing it as I'm going to do it myself regardless to make > sure > > I plug any missing holes in the API. If you really want something to try > for > > fun go for a sqlite3-backed setup (don't see it going in the stdlib but > it > > would be a project to have). > > I'm pretty sure I'll write a zip importer first - it feels like one of > those essential but largely useless exercises that people have to > start with - a bit like scales on the piano :-) But I'd be interested > in trying a sqlite importer as well. I might well see how I go with > that. > The sqlite3 one is interesting as it does not whatsoever require file paths to operate; you can easily define a schema specific to source code and bytecode and really go db-specific and have the loader work from that (would also make finder lookups dead-simple). 
Otherwise you will end up writing a schema for a virtual filesystem which would also work but would show that people are not respecting abstractions on modules (or that the API has gaps which need filling in). -------------- next part -------------- An HTML attachment was scrubbed... URL: From walter at livinglogic.de Thu Mar 28 19:36:05 2013 From: walter at livinglogic.de (=?windows-1252?Q?Walter_D=F6rwald?=) Date: Thu, 28 Mar 2013 19:36:05 +0100 Subject: [Python-Dev] noob contributions to unit tests In-Reply-To: <20130327022422.D7ACA250BCA@webabinitio.net> References: <20130327022422.D7ACA250BCA@webabinitio.net> Message-ID: Am 27.03.2013 um 03:24 schrieb R. David Murray : > On Tue, 26 Mar 2013 16:59:06 -0700, Maciej Fijalkowski wrote: >> On Tue, Mar 26, 2013 at 4:49 PM, Sean Felipe Wolfe wrote: >>> Hey everybody how are you all :) >>> >>> I am an intermediate-level python coder looking to get help out. I've >>> been reading over the dev guide about helping increase test coverage >>> --> >>> http://docs.python.org/devguide/coverage.html >>> >>> And also the third-party code coverage referenced in the devguide page: >>> http://coverage.livinglogic.de/ >>> >>> I'm seeing that according to the coverage tool, two of my favorite >>> libraries, urllib/urllib2, have no unit tests? Is that correct or am I >>> reading it wrong? >>> >>> If that's correct it seems like a great place perhaps for me to cut my >>> teeth and I would be excited to learn and help out here. >>> >>> And of course any thoughts or advice for an aspiring Python >>> contributor would be appreciated. Of course the dev guide gives me >>> plenty of good info. >>> >>> Thanks! >> >> That looks like an error in the coverage report, there are certainly >> urllib and urllib2 tests in test/test_urllib* > > The devguide contains instructions for running coverage yourself, > and if I recall correctly the 'fullcoverage' recipe does a better > job than what runs at coverage.livinglogic.de. The job that produces that output has been broken for some time now, and I haven't found the time to look into it. If someone wants to try, here's the code: https://pypi.python.org/pypi/pycoco/0.7.2 > [?] Servus, Walter From pje at telecommunity.com Thu Mar 28 20:44:10 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 28 Mar 2013 15:44:10 -0400 Subject: [Python-Dev] Can I introspect/reflect to get arguments exec()? In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 6:43 AM, Rocky Bernstein wrote: > Of course the debugger uses sys.settrace too, so the evil-ness of that is > definitely not a concern. But possibly I need to make sure that since the > DecoratorTools and the debugger both hook into trace hooks they play nice > together and fire in the right order. DecoratorTools' trace hooking is unrelated to its linecache functionality. All you need from it is the cache_source() function; you can pretty much ignore everything else for your purposes. You'll just need to give it a phony filename to work with, and the associated string. 
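The same trick is easy to approximate by hand if you don't want the dependency. This is not DecoratorTools' actual implementation, just the general idea of parking generated source under a phony filename so that linecache-aware tools (pdb, traceback, etc.) can display it; the helper name and the filename format are made up:

import linecache

def exec_with_source(source, namespace, filename='<generated-0001>'):
    # linecache.cache maps filename -> (size, mtime, lines, fullname);
    # an mtime of None keeps checkcache() from evicting the entry.
    linecache.cache[filename] = (len(source), None,
                                 source.splitlines(True), filename)
    code = compile(source, filename, 'exec')
    exec(code, namespace)

ns = {}
exec_with_source("def f():\n    return 42\n", ns)
print(ns['f']())   # 42 -- and pdb/traceback can now show f's source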
From ezio.melotti at gmail.com Thu Mar 28 23:56:46 2013 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Fri, 29 Mar 2013 00:56:46 +0200 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Issue 17538: Document XML vulnerabilties In-Reply-To: <3ZZz1v2N95zQBT@mail.python.org> References: <3ZZz1v2N95zQBT@mail.python.org> Message-ID: Hi, On Tue, Mar 26, 2013 at 6:53 PM, christian.heimes wrote: > http://hg.python.org/cpython/rev/e87364449954 > changeset: 82973:e87364449954 > branch: 2.7 > parent: 82963:d321885ff8f3 > user: Christian Heimes > date: Tue Mar 26 17:53:05 2013 +0100 > summary: > Issue 17538: Document XML vulnerabilties > > [...] > > diff --git a/Doc/library/xml.rst b/Doc/library/xml.rst > new file mode 100644 > --- /dev/null > +++ b/Doc/library/xml.rst > @@ -0,0 +1,131 @@ > +.. _xml: > + > +XML Processing Modules > +====================== > + > +.. module:: xml > + :synopsis: Package containing XML processing modules > +.. sectionauthor:: Christian Heimes > +.. sectionauthor:: Georg Brandl > + > + > +Python's interfaces for processing XML are grouped in the ``xml`` package. > + > +.. warning:: > + > + The XML modules are not secure against erroneous or maliciously > + constructed data. If you need to parse untrusted or unauthenticated data see > + :ref:`xml-vulnerabilities`. > + > +It is important to note that modules in the :mod:`xml` package require that > +there be at least one SAX-compliant XML parser available. The Expat parser is > +included with Python, so the :mod:`xml.parsers.expat` module will always be > +available. > + > +The documentation for the :mod:`xml.dom` and :mod:`xml.sax` packages are the > +definition of the Python bindings for the DOM and SAX interfaces. > + > +The XML handling submodules are: > + > +* :mod:`xml.etree.ElementTree`: the ElementTree API, a simple and lightweight Something is missing here ^ > + > +.. > + > +* :mod:`xml.dom`: the DOM API definition > +* :mod:`xml.dom.minidom`: a lightweight DOM implementation > +* :mod:`xml.dom.pulldom`: support for building partial DOM trees > + > +.. > + > +* :mod:`xml.sax`: SAX2 base classes and convenience functions > +* :mod:`xml.parsers.expat`: the Expat parser binding > + > + > +.. _xml-vulnerabilities: > + > [...] > + > +defused packages > +---------------- > + > +`defusedxml`_ is a pure Python package with modified subclasses of all stdlib > +XML parsers that prevent any potentially malicious operation. The courses of > +action are recommended for any server code that parses untrusted XML data. This last sentence doesn't make much sense to me. Is it even correct? > The > +package also ships with example exploits and an extended documentation on more > +XML exploits like xpath injection. > + > +`defusedexpat`_ provides a modified libexpat and patched replacment s/replacment/replacement/ > +:mod:`pyexpat` extension module with countermeasures against entity expansion > +DoS attacks. Defusedexpat still allows a sane and configurable amount of entity > +expansions. The modifications will be merged into future releases of Python. > + > +The workarounds and modifications are not included in patch releases as they > +break backward compatibility. After all inline DTD and entity expansion are > +well-definied XML features. s/definied/defined/ > + > + > +.. _defusedxml: > +.. _defusedexpat: > +.. _Billion Laughs: http://en.wikipedia.org/wiki/Billion_laughs > +.. _ZIP bomb: http://en.wikipedia.org/wiki/Zip_bomb > +.. _DTD: http://en.wikipedia.org/wiki/Document_Type_Definition > [...] 
Best Regards, Ezio Melotti From greg at krypto.org Fri Mar 29 02:06:35 2013 From: greg at krypto.org (Gregory P. Smith) Date: Thu, 28 Mar 2013 18:06:35 -0700 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 9:09 AM, Brett Cannon wrote: > > > > On Thu, Mar 28, 2013 at 10:44 AM, Thomas Heller wrote: > >> Am 27.03.2013 20:38, schrieb Vinay Sajip: >> >> This quote is here to stop GMane complaining that I'm top-posting. >>>> Ignore. >>>> >>> >>> I've already posted this to distutils-sig, but thought that it might be >>> of >>> interest to readers here as it relates to importing C extensions ... >>> >>> zipimport is great, but there can be issues importing software that >>> contains >>> C extensions. But the new wheel format (PEP 427) may give us a better >>> way of >>> importing zip files containing C extensions. Since wheels are .zip >>> files, they >>> can sometimes be used to provide functionality without needing to be >>> installed. >>> But whereas .zip files contain no convention for indicating >>> compatibility with >>> a particular Python, wheels do contain this compatibility information. >>> Thus, it >>> is possible to check if a wheel can be directly imported from, and the >>> wheel >>> support in distlib allows you to take advantage of this using the >>> mount() and >>> unmount() methods. When you mount a wheel, its absolute path name is >>> added to >>> sys.path, allowing the Python code in it to be imported. (A >>> DistlibException is >>> raised if the wheel isn't compatible with the Python which calls the >>> mount() >>> method.) >>> >> >> The zip-file itself could support importing compiled extensions when it >> contains a python-wrapper module that unpacks the .so/.dll file somewhere, >> and finally calls imp.load_dynamic() to import it and replace itself. > > > Which must be done carefully to prevent a security issue. It shouldn't be > unzipped anywhere but into a directory only writable by the process. > > Once http://sourceware.org/bugzilla/show_bug.cgi?id=11767 is implemented and available in libc, no extraction of .so's should be needed (they will likely need to be stored uncompressed in the .zip file for that though). -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Fri Mar 29 13:00:29 2013 From: theller at ctypes.org (Thomas Heller) Date: Fri, 29 Mar 2013 13:00:29 +0100 Subject: [Python-Dev] Safely importing zip files with C extensions In-Reply-To: References: Message-ID: Am 29.03.2013 02:06, schrieb Gregory P. Smith: > > On Thu, Mar 28, 2013 at 9:09 AM, Brett Cannon > wrote: > > On Thu, Mar 28, 2013 at 10:44 AM, Thomas Heller > wrote: > > The zip-file itself could support importing compiled extensions > when it contains a python-wrapper module that unpacks the > .so/.dll file somewhere, and finally calls imp.load_dynamic() to > import it and replace itself. > > > Which must be done carefully to prevent a security issue. It > shouldn't be unzipped anywhere but into a directory only writable by > the process. > > > Once http://sourceware.org/bugzilla/show_bug.cgi?id=11767 is implemented > and available in libc, no extraction of .so's should be needed (they > will likely need to be stored uncompressed in the .zip file for that > though). For windows there is already code that does it: http://www.py2exe.org/index.cgi/Hacks/ZipExtImporter This page is not up-to-date, but it describes the idea and the implementation. 
The code currently is 32-bit only and for Python 2 but that probably can be fixed. It is based on Joachim Bauch's MemoryModule: https://github.com/fancycode/MemoryModule Thomas From status at bugs.python.org Fri Mar 29 18:07:31 2013 From: status at bugs.python.org (Python tracker) Date: Fri, 29 Mar 2013 18:07:31 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20130329170731.7600156915@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2013-03-22 - 2013-03-29) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3887 (-21) closed 25461 (+72) total 29348 (+51) Open issues with patches: 1724 Issues opened (46) ================== #6671: webbrowser doesn't respect xfce default browser http://bugs.python.org/issue6671 reopened by eric.araujo #17295: __slots__ on PyVarObject subclass http://bugs.python.org/issue17295 reopened by haypo #17523: Additional tests for the os module. http://bugs.python.org/issue17523 opened by willweaver #17525: os.getcwd() fails on cifs share http://bugs.python.org/issue17525 opened by dcuddihy #17526: inspect.findsource raises undocumented error for code objects http://bugs.python.org/issue17526 opened by Nils.Bruin #17527: PATCH as valid request method in wsgiref.validator http://bugs.python.org/issue17527 opened by lsbardel #17528: Implement dumps/loads for lru_cache http://bugs.python.org/issue17528 opened by frafra #17529: fix os.sendfile() documentation regarding the type of file des http://bugs.python.org/issue17529 opened by neologix #17530: pprint could use line continuation for long bytes literals http://bugs.python.org/issue17530 opened by pitrou #17532: IDLE: Always include "Options" menu on MacOSX http://bugs.python.org/issue17532 opened by roger.serwy #17533: test_xpickle fails with "cannot import name precisionbigmemtes http://bugs.python.org/issue17533 opened by ned.deily #17534: unittest keeps references to test cases alive http://bugs.python.org/issue17534 opened by ezio.melotti #17535: IDLE: Add an option to show line numbers along the left side o http://bugs.python.org/issue17535 opened by Todd.Rovito #17536: update browser list with additional browser names http://bugs.python.org/issue17536 opened by doko #17537: csv.DictReader should fail if >1 column has the same name http://bugs.python.org/issue17537 opened by doko #17538: Document XML Vulnerabilties http://bugs.python.org/issue17538 opened by dstufft #17539: Use the builtins module in the unittest.mock.patch example http://bugs.python.org/issue17539 opened by berker.peksag #17540: logging formatter support 'style' key in dictionary config http://bugs.python.org/issue17540 opened by monson #17544: regex code re-raises exceptions on success http://bugs.python.org/issue17544 opened by Zden??k.Pavlas #17545: os.listdir and os.path.join inconsistent on empty path http://bugs.python.org/issue17545 opened by babou #17546: Document the circumstances where the locals() dict gets update http://bugs.python.org/issue17546 opened by techtonik #17547: "checking whether gcc supports ParseTuple __format__... " erro http://bugs.python.org/issue17547 opened by dmalcolm #17548: unittest.mock: test_create_autospec_unbound_methods is skipped http://bugs.python.org/issue17548 opened by haypo #17549: Some exceptions not highlighted in exceptions documentation. 
http://bugs.python.org/issue17549 opened by Ramchandra Apte #17551: Windows - accessing drive with nothing mounted forces user int http://bugs.python.org/issue17551 opened by bobjalex #17552: socket.sendfile() http://bugs.python.org/issue17552 opened by giampaolo.rodola #17553: Note that distutils??? bdist_rpm command is not used to build http://bugs.python.org/issue17553 opened by Sean.Carolan #17554: Compact output for regrtest http://bugs.python.org/issue17554 opened by ezio.melotti #17555: Creating new processes after importing multiprocessing.manager http://bugs.python.org/issue17555 opened by Marc.Br??nink #17557: test_getgroups of test_posix can fail on OS X 10.8 if more tha http://bugs.python.org/issue17557 opened by ned.deily #17558: gdb debugging python frames in optimised interpreters http://bugs.python.org/issue17558 opened by mcobden #17560: problem using multiprocessing with really big objects? http://bugs.python.org/issue17560 opened by mrjbq7 #17561: Add socket.create_server_sock() convenience function http://bugs.python.org/issue17561 opened by giampaolo.rodola #17563: Excessive resizing of dicts when used as a cache http://bugs.python.org/issue17563 opened by Mark.Shannon #17564: test_urllib2_localnet fails http://bugs.python.org/issue17564 opened by Mark.Shannon #17565: segfaults during serialization http://bugs.python.org/issue17565 opened by eddiewrc #17566: Document that importlib.abc.Loader.module_repr is abstract and http://bugs.python.org/issue17566 opened by brett.cannon #17567: Clarify importlib.abc.PathEntryFinder.find_loader() docs http://bugs.python.org/issue17567 opened by brett.cannon #17568: re: Infinite loop with repeated empty alternative http://bugs.python.org/issue17568 opened by ericp #17569: urllib2 urlopen truncates https pages after 32768 characters http://bugs.python.org/issue17569 opened by jhp7e #17570: Improve devguide Windows instructions http://bugs.python.org/issue17570 opened by ezio.melotti #17571: broken links on Lib/datetime.py docstring http://bugs.python.org/issue17571 opened by tshepang #17572: strptime exception context http://bugs.python.org/issue17572 opened by Claudiu.Popa #17573: add ElementTree XML processing benchmark to benchmark suite http://bugs.python.org/issue17573 opened by scoder #17574: pysetup failing with "OSError: [Errno 18] Invalid cross-device http://bugs.python.org/issue17574 opened by vaab #17575: HTTPConnection.send http://bugs.python.org/issue17575 opened by dspublic at freemail.hu Most recent 15 issues with no replies (15) ========================================== #17575: HTTPConnection.send http://bugs.python.org/issue17575 #17573: add ElementTree XML processing benchmark to benchmark suite http://bugs.python.org/issue17573 #17572: strptime exception context http://bugs.python.org/issue17572 #17571: broken links on Lib/datetime.py docstring http://bugs.python.org/issue17571 #17570: Improve devguide Windows instructions http://bugs.python.org/issue17570 #17568: re: Infinite loop with repeated empty alternative http://bugs.python.org/issue17568 #17567: Clarify importlib.abc.PathEntryFinder.find_loader() docs http://bugs.python.org/issue17567 #17566: Document that importlib.abc.Loader.module_repr is abstract and http://bugs.python.org/issue17566 #17563: Excessive resizing of dicts when used as a cache http://bugs.python.org/issue17563 #17558: gdb debugging python frames in optimised interpreters http://bugs.python.org/issue17558 #17557: test_getgroups of test_posix can fail on OS X 10.8 if more tha 
http://bugs.python.org/issue17557 #17547: "checking whether gcc supports ParseTuple __format__... " erro http://bugs.python.org/issue17547 #17540: logging formatter support 'style' key in dictionary config http://bugs.python.org/issue17540 #17539: Use the builtins module in the unittest.mock.patch example http://bugs.python.org/issue17539 #17532: IDLE: Always include "Options" menu on MacOSX http://bugs.python.org/issue17532 Most recent 15 issues waiting for review (15) ============================================= #17573: add ElementTree XML processing benchmark to benchmark suite http://bugs.python.org/issue17573 #17572: strptime exception context http://bugs.python.org/issue17572 #17563: Excessive resizing of dicts when used as a cache http://bugs.python.org/issue17563 #17561: Add socket.create_server_sock() convenience function http://bugs.python.org/issue17561 #17555: Creating new processes after importing multiprocessing.manager http://bugs.python.org/issue17555 #17554: Compact output for regrtest http://bugs.python.org/issue17554 #17552: socket.sendfile() http://bugs.python.org/issue17552 #17547: "checking whether gcc supports ParseTuple __format__... " erro http://bugs.python.org/issue17547 #17540: logging formatter support 'style' key in dictionary config http://bugs.python.org/issue17540 #17539: Use the builtins module in the unittest.mock.patch example http://bugs.python.org/issue17539 #17538: Document XML Vulnerabilties http://bugs.python.org/issue17538 #17536: update browser list with additional browser names http://bugs.python.org/issue17536 #17535: IDLE: Add an option to show line numbers along the left side o http://bugs.python.org/issue17535 #17529: fix os.sendfile() documentation regarding the type of file des http://bugs.python.org/issue17529 #17527: PATCH as valid request method in wsgiref.validator http://bugs.python.org/issue17527 Top 10 most discussed issues (10) ================================= #17546: Document the circumstances where the locals() dict gets update http://bugs.python.org/issue17546 24 msgs #17561: Add socket.create_server_sock() convenience function http://bugs.python.org/issue17561 14 msgs #17538: Document XML Vulnerabilties http://bugs.python.org/issue17538 13 msgs #17560: problem using multiprocessing with really big objects? 
http://bugs.python.org/issue17560 13 msgs #17554: Compact output for regrtest http://bugs.python.org/issue17554 9 msgs #17536: update browser list with additional browser names http://bugs.python.org/issue17536 8 msgs #17522: Add api PyGILState_Check http://bugs.python.org/issue17522 7 msgs #17564: test_urllib2_localnet fails http://bugs.python.org/issue17564 7 msgs #2704: IDLE: Patch to make PyShell behave more like a Terminal interf http://bugs.python.org/issue2704 6 msgs #7511: msvc9compiler.py: ValueError when trying to compile with VC Ex http://bugs.python.org/issue7511 6 msgs Issues closed (56) ================== #4022: 2.6 dependent on c:\python26\ on windows http://bugs.python.org/issue4022 closed by georg.brandl #4159: Table about Standard Encodings is cut off at the bottom - 35 e http://bugs.python.org/issue4159 closed by python-dev #4653: Patch to fix typos in C code http://bugs.python.org/issue4653 closed by gregory.p.smith #5135: Expose simplegeneric function in functools module http://bugs.python.org/issue5135 closed by terry.reedy #5445: codecs.StreamWriter.writelines problem when passed generator http://bugs.python.org/issue5445 closed by benjamin.peterson #5970: sys.exc_info leaks into a generator http://bugs.python.org/issue5970 closed by georg.brandl #6310: Windows "App Paths" key is not checked when installed for curr http://bugs.python.org/issue6310 closed by georg.brandl #6661: Transient test_multiprocessing failure (test_active_children) http://bugs.python.org/issue6661 closed by georg.brandl #7300: Unicode arguments in str.format() http://bugs.python.org/issue7300 closed by georg.brandl #7565: Increasing resource.RLIMIT_NOFILE has no effect http://bugs.python.org/issue7565 closed by georg.brandl #8552: msilib can't create large CAB files http://bugs.python.org/issue8552 closed by r.david.murray #8906: Document TestCase attributes in class docstring http://bugs.python.org/issue8906 closed by ezio.melotti #8911: regrtest.main should have a test skipping argument http://bugs.python.org/issue8911 closed by ezio.melotti #9672: test_xpickle fails on Windows: invokes pythonx.y instead of py http://bugs.python.org/issue9672 closed by georg.brandl #9986: PDF files of python docs have text missing http://bugs.python.org/issue9986 closed by georg.brandl #10211: BufferObject doesn't support new buffer interface http://bugs.python.org/issue10211 closed by kristjan.jonsson #10359: ISO C cleanup http://bugs.python.org/issue10359 closed by gregory.p.smith #11087: Speeding up the interpreter with a few lines of code http://bugs.python.org/issue11087 closed by pitrou #12207: Document ast.PyCF_ONLY_AST http://bugs.python.org/issue12207 closed by eric.araujo #14468: Update cloning guidelines in devguide http://bugs.python.org/issue14468 closed by ezio.melotti #15052: Outdated comments in build_ssl.py http://bugs.python.org/issue15052 closed by loewis #15611: devguide: add "core mentors" area to Experts Index http://bugs.python.org/issue15611 closed by brett.cannon #16204: PyBuffer_FillInfo returns 'B' buffer, whose behavior has chang http://bugs.python.org/issue16204 closed by skrah #16676: Segfault under Python 3.3 after PyType_GenericNew http://bugs.python.org/issue16676 closed by benjamin.peterson #16692: Support TLS 1.1 and TLS 1.2 http://bugs.python.org/issue16692 closed by pitrou #16880: Importing "imp" will fail if dynamic loading not supported http://bugs.python.org/issue16880 closed by brett.cannon #17025: reduce multiprocessing.Queue contention 
http://bugs.python.org/issue17025 closed by neologix #17100: rotating an ordereddict http://bugs.python.org/issue17100 closed by rhettinger #17150: pprint could use line continuation for long string literals http://bugs.python.org/issue17150 closed by pitrou #17316: Add Django 1.5 to benchmarks http://bugs.python.org/issue17316 closed by brett.cannon #17317: Benchmark driver should calculate actual benchmark count in -h http://bugs.python.org/issue17317 closed by brett.cannon #17323: Disable [X refs, Y blocks] ouput in debug builds http://bugs.python.org/issue17323 closed by ezio.melotti #17329: Document unittest.SkipTest http://bugs.python.org/issue17329 closed by ezio.melotti #17389: Optimize Event.wait() http://bugs.python.org/issue17389 closed by pitrou #17425: Update OpenSSL versions in Windows builds http://bugs.python.org/issue17425 closed by loewis #17433: stdlib generator-like iterators don't forward send/throw http://bugs.python.org/issue17433 closed by terry.reedy #17438: json.load docs should mention that it always return unicode http://bugs.python.org/issue17438 closed by ezio.melotti #17447: str.identifier shouldn't accept Python keywords http://bugs.python.org/issue17447 closed by rhettinger #17479: Fix test discovery for test_io.py http://bugs.python.org/issue17479 closed by ezio.melotti #17488: subprocess.Popen bufsize=0 parameter behaves differently in Py http://bugs.python.org/issue17488 closed by gregory.p.smith #17489: random.Random implements __getstate__() and __reduce__() http://bugs.python.org/issue17489 closed by rhettinger #17504: Dropping duplicated docstring explanation of what Mocks' side_ http://bugs.python.org/issue17504 closed by ezio.melotti #17510: assertEquals deprecated in test_program.py (unittest) http://bugs.python.org/issue17510 closed by ezio.melotti #17516: Dead code should be removed http://bugs.python.org/issue17516 closed by haypo #17519: unittest should not try to run abstract classes http://bugs.python.org/issue17519 closed by michael.foord #17521: fileConfig() disables any previously-used "named" loggers, eve http://bugs.python.org/issue17521 closed by python-dev #17524: Problem to run a method http://bugs.python.org/issue17524 closed by brett.cannon #17531: test_grp and test_pwd fail with 32-bit builds on OS X systems http://bugs.python.org/issue17531 closed by ned.deily #17541: Importing `webbrowser` module gives NameError in Python 2.7.4r http://bugs.python.org/issue17541 closed by ezio.melotti #17550: --enable-profiling does nothing (shell syntax bug in configure http://bugs.python.org/issue17550 closed by python-dev #17556: os.path.join() converts None to '' by default http://bugs.python.org/issue17556 closed by georg.brandl #17559: str.is* implementation seem suboptimal for single character st http://bugs.python.org/issue17559 closed by georg.brandl #17562: spam http://bugs.python.org/issue17562 closed by ezio.melotti #444582: Finding programs in PATH, adding shutil.which http://bugs.python.org/issue444582 closed by eric.araujo #616013: cPickle documentation incomplete http://bugs.python.org/issue616013 closed by georg.brandl #886488: popen2 on Windows does not check _fdopen return value http://bugs.python.org/issue886488 closed by terry.reedy From ncoghlan at gmail.com Fri Mar 29 21:41:19 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 06:41:19 +1000 Subject: [Python-Dev] Writing importers and path hooks In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 3:39 AM, Brett Cannon wrote: > To tell if a 
module is a package, you should do either ``if mod.__name__ == > mod.__package__`` or ``if hasattr(mod, '__path__')``. The second of those is actually a bit more reliable. As with many import quirks, the answer to "But why?" is "Because __main__" :P Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Mar 30 02:33:59 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 11:33:59 +1000 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception Message-ID: I am accepting Todd Rovito's and Terry Reedy's PEP 434, officially declaring IDLE to be an application bundled with Python, with the contents of "Lib/idlelib" exempt from the usual "no new features in maintenance releases" rule. As stated in the PEP, this isn't carte blanche to do major rewrites in maintenance releases, merely acknowledgement that, when in doubt, we better serve our users by treating IDLE as a bundled application and making it behave consistently across all supported versions than we do by treating it as a library first and an application second. Hopefully this clarification, and the stated goal of supporting IDLE as a high quality cross-platform default starting point for new Python users that aren't already accustomed to the command line and editing text files directly, will make it easier for the IDLE developers to focus on making IDLE excel at that task. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Sat Mar 30 03:40:14 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 29 Mar 2013 19:40:14 -0700 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 6:33 PM, Nick Coghlan wrote: > I am accepting Todd Rovito's and Terry Reedy's PEP 434, officially > declaring IDLE to be an application bundled with Python, with the > contents of "Lib/idlelib" exempt from the usual "no new features in > maintenance releases" rule. > > As stated in the PEP, this isn't carte blanche to do major rewrites in > maintenance releases, merely acknowledgement that, when in doubt, we > better serve our users by treating IDLE as a bundled application and > making it behave consistently across all supported versions than we do > by treating it as a library first and an application second. > > Hopefully this clarification, and the stated goal of supporting IDLE > as a high quality cross-platform default starting point for new Python > users that aren't already accustomed to the command line and editing > text files directly, will make it easier for the IDLE developers to > focus on making IDLE excel at that task. > > Regards, > Nick. Does that mean that mainstream idle development should move out of the python tree? From rovitotv at gmail.com Sat Mar 30 04:01:09 2013 From: rovitotv at gmail.com (Todd Rovito) Date: Fri, 29 Mar 2013 23:01:09 -0400 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 10:40 PM, Maciej Fijalkowski wrote: > Does that mean that mainstream idle development should move out of the > python tree? No the acceptance of PEP-434 does not mean IDLE development should move out of the python tree. The acceptance of PEP-434 means that the restriction on applying enhancements be relaxed for IDLE code residing in ../Lib/idlelib. In other words Python Core Developers can apply enhancements (but not major rewrites) even to the 2.7 branch. 
The relaxation was requested in the hope that we can apply many of the already existing patches quickly and allow IDLE to become a high quality cross-platform default starting point for new Python users that aren't already accustomed to the command line and editing text files directly. PEP-434 doesn't suggest moving the IDLE code outside of the Python tree. Please let me know if you have additional questions, feel free to help us with IDLE development! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Mar 30 07:21:35 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 30 Mar 2013 16:21:35 +1000 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 12:40 PM, Maciej Fijalkowski wrote: > On Fri, Mar 29, 2013 at 6:33 PM, Nick Coghlan wrote: >> I am accepting Todd Rovito's and Terry Reedy's PEP 434, officially >> declaring IDLE to be an application bundled with Python, with the >> contents of "Lib/idlelib" exempt from the usual "no new features in >> maintenance releases" rule. >> >> As stated in the PEP, this isn't carte blanche to do major rewrites in >> maintenance releases, merely acknowledgement that, when in doubt, we >> better serve our users by treating IDLE as a bundled application and >> making it behave consistently across all supported versions than we do >> by treating it as a library first and an application second. >> >> Hopefully this clarification, and the stated goal of supporting IDLE >> as a high quality cross-platform default starting point for new Python >> users that aren't already accustomed to the command line and editing >> text files directly, will make it easier for the IDLE developers to >> focus on making IDLE excel at that task. >> >> Regards, >> Nick. > > Does that mean that mainstream idle development should move out of the > python tree? That will ultimately be up to the IDLE developers. However, I don't expect it to happen any time soon, as remaining in the CPython repo allows them to easily re-use the existing buildbot fleet as they try to build out a decent test suite, and also means they don't have to spend their time working out a completely new development workflow rather than working on IDLE as it exists now. Past experience also suggests that maintaining things in the CPython repo and cutting periodic external releases (if the IDLE developers ever choose to do that) works a *lot* better than trying to periodically reintegrate an externally maintained tool. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From hodgestar+pythondev at gmail.com Sat Mar 30 07:33:38 2013 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 30 Mar 2013 08:33:38 +0200 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: References: Message-ID: Having a standalone version of IDLE might be really useful to alternative Python implementations. From solipsis at pitrou.net Sat Mar 30 13:26:08 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 30 Mar 2013 13:26:08 +0100 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception References: Message-ID: <20130330132608.6ec898bd@pitrou.net> On Sat, 30 Mar 2013 08:33:38 +0200 Simon Cross wrote: > Having a standalone version of IDLE might be really useful to > alternative Python implementations. Why? 
From solipsis at pitrou.net Sat Mar 30 13:28:00 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 30 Mar 2013 13:28:00 +0100 Subject: [Python-Dev] cpython (2.7): Fix typos and clear up one very odd bit of wording as pointed out by References: <3ZdCsJ4BKJzPyZ@mail.python.org> Message-ID: <20130330132800.5b3b9454@pitrou.net> On Sat, 30 Mar 2013 09:39:00 +0100 (CET) gregory.p.smith wrote: > > The workarounds and modifications are not included in patch releases as they > break backward compatibility. This is not entirely true. It was perfectly possible to give the *options* to change behaviour, but the patch wasn't ready. Regards Antoine. From fijall at gmail.com Sat Mar 30 13:35:44 2013 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 30 Mar 2013 05:35:44 -0700 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: <20130330132608.6ec898bd@pitrou.net> References: <20130330132608.6ec898bd@pitrou.net> Message-ID: On Sat, Mar 30, 2013 at 5:26 AM, Antoine Pitrou wrote: > On Sat, 30 Mar 2013 08:33:38 +0200 > Simon Cross wrote: > >> Having a standalone version of IDLE might be really useful to >> alternative Python implementations. > > Why? I don't think it's worth discussing - tkinter does not work on any other implementation than CPython and it seems it won't work. It's a bit pity, but I guess if I felt really bad about it, I should just make it work. PS. are there idle projects in SoC? Maybe we should put a more pypy-friendly one there too? Cheers, fijal From dholth at gmail.com Sat Mar 30 20:26:51 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 30 Mar 2013 15:26:51 -0400 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: References: <20130330132608.6ec898bd@pitrou.net> Message-ID: Yes, it would probably make more sense to split the editor and shell processes as many Python IDEs do, with IDLE running in CPython and the user's computation running in the chosen interpreter. On Sat, Mar 30, 2013 at 8:35 AM, Maciej Fijalkowski wrote: > On Sat, Mar 30, 2013 at 5:26 AM, Antoine Pitrou wrote: >> On Sat, 30 Mar 2013 08:33:38 +0200 >> Simon Cross wrote: >> >>> Having a standalone version of IDLE might be really useful to >>> alternative Python implementations. >> >> Why? > > I don't think it's worth discussing - tkinter does not work on any > other implementation than CPython and it seems it won't work. It's a > bit pity, but I guess if I felt really bad about it, I should just > make it work. > > PS. are there idle projects in SoC? Maybe we should put a more > pypy-friendly one there too? > > Cheers, > fijal > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/dholth%40gmail.com From fwierzbicki at gmail.com Sat Mar 30 21:36:39 2013 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Sat, 30 Mar 2013 13:36:39 -0700 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 11:33 PM, Simon Cross wrote: > Having a standalone version of IDLE might be really useful to > alternative Python implementations. I suspect it's too hard. I remember seeing some work on "anygui.py" that looked like an attempt to make these sorts of things work across various windowing platforms, but I don't think it made it very far. 
-Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Mar 30 21:38:08 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 30 Mar 2013 21:38:08 +0100 Subject: [Python-Dev] Accepting PEP 434, Idle Enhancement Exception References: Message-ID: <20130330213808.375ee9ee@pitrou.net> On Sat, 30 Mar 2013 13:36:39 -0700 "fwierzbicki at gmail.com" wrote: > On Fri, Mar 29, 2013 at 11:33 PM, Simon Cross > wrote: > > > Having a standalone version of IDLE might be really useful to > > alternative Python implementations. > > I suspect it's too hard. I remember seeing some work on "anygui.py" that > looked like an attempt to make these sorts of things work across various > windowing platforms, but I don't think it made it very far. Instead of "anygui.py", one could probably start with Qt or wxWidgets. But that would add a dependency to a very large 3rd party library. Regards Antoine. From mynameisfiber at gmail.com Sat Mar 30 22:31:26 2013 From: mynameisfiber at gmail.com (Micha Gorelick) Date: Sat, 30 Mar 2013 17:31:26 -0400 Subject: [Python-Dev] py2.7: dictobject not properly resizing Message-ID: I was taking a look at dictobject.c and realized that the logic controlling whether a resizedict will occur in dict_set_item_by_hash_or_entry disallows for the shrinking of a dictionary. This is contrary to what the comments directly above say: (http://hg.python.org/cpython/file/f3032825f637/Objects/dictobject.c#l771) 771 /* If we added a key, we can safely resize. Otherwise just return! 772 * If fill >= 2/3 size, adjust size. Normally, this doubles or 773 * quaduples the size, but it's also possible for the dict to shrink 774 * (if ma_fill is much larger than ma_used, meaning a lot of dict 775 * keys have been * deleted). The "bug" occures in the following conditional since we exit out of the function without checking the relative magnitudes of ma_filled to ma_used. Instead, we only check if we still have a correct loading factor (and the "don't resize on modification" bit). This can be fixed by changing the following conditional on line 785 to: if (mp->ma_used <= n_used || (mp->ma_fill*3 < (mp->ma_mask+1)*2 && mp->ma_used*5 > mp->ma_fill)) The factor of 5 was chosen arbitrarily... I'm sure with some benchmarking we could tune it to an optimal value for the internal use of dictionaries. However, before I put this effort in I was wondering if this is in fact desired behavior or if it is indeed a bug. At the very least, the comments should be updated to reflect the actual resizing dynamics of the dictionary. Micha ----------------------------- http://micha.gd/ http://github.com/mynameisfiber/ From solipsis at pitrou.net Sat Mar 30 22:37:01 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 30 Mar 2013 22:37:01 +0100 Subject: [Python-Dev] py2.7: dictobject not properly resizing References: Message-ID: <20130330223701.6ac267d8@pitrou.net> On Sat, 30 Mar 2013 17:31:26 -0400 Micha Gorelick wrote: > I was taking a look at dictobject.c and realized that the logic > controlling whether a resizedict will occur in > dict_set_item_by_hash_or_entry disallows for the shrinking of a > dictionary. This is contrary to what the comments directly above say: Also in 3.4: >>> d = {i: i for i in range(1000)} >>> sys.getsizeof(d) 49264 >>> for i in range(900): del d[i] ... >>> sys.getsizeof(d) 49264 >>> for i in range(900, 1000): del d[i] ... >>> sys.getsizeof(d) 49264 >>> len(d) 0 >>> d.clear() >>> sys.getsizeof(d) 88 Regards Antoine. 
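In the meantime, the practical way to reclaim the space after bulk deletions is to rebuild the dict from its surviving items. A quick illustration (exact sizes vary by version and build):

    import sys

    d = {i: i for i in range(1000)}
    for i in range(990):
        del d[i]
    print(len(d), sys.getsizeof(d))   # 10 items, but still a 1000-entry table

    d = dict(d)                       # rebuild from the surviving items
    print(len(d), sys.getsizeof(d))   # 10 items in a freshly sized table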
From arigo at tunes.org Sat Mar 30 22:56:10 2013 From: arigo at tunes.org (Armin Rigo) Date: Sat, 30 Mar 2013 22:56:10 +0100 Subject: [Python-Dev] py2.7: dictobject not properly resizing In-Reply-To: <20130330223701.6ac267d8@pitrou.net> References: <20130330223701.6ac267d8@pitrou.net> Message-ID: Hi Antoine, On Sat, Mar 30, 2013 at 10:37 PM, Antoine Pitrou wrote: > On Sat, 30 Mar 2013 17:31:26 -0400 > Micha Gorelick wrote: >> I was taking a look at dictobject.c and realized that the logic >> controlling whether a resizedict will occur in >> dict_set_item_by_hash_or_entry disallows for the shrinking of a >> dictionary. It doesn't disallow shrinking. If you take a dictionary of size 1000, remove of its elements, and continue to use it (write and delete more items) for long enough, then eventually it is shrinked. It just takes a while because it needs to fill 2/3 of the slots of the big table with "deleted" markers before it happens. Python 3.3.0b1 (default:07ddf5ecaafa, Aug 12 2012, 17:47:28) [GCC 3.4.6 (Gentoo 3.4.6-r2, ssp-3.4.6-1.0, pie-8.7.10)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> d = {i:i for i in range(1000)} >>> sys.getsizeof(d) 24624 >>> for i in range(1000): del d[i] ... >>> sys.getsizeof(d) 24624 >>> for j in range(1001, 2000): ... d[j] = 0; del d[j] ... >>> sys.getsizeof(d) 144 >>> A bient?t, Armin. From mynameisfiber at gmail.com Sat Mar 30 23:30:38 2013 From: mynameisfiber at gmail.com (Micha Gorelick) Date: Sat, 30 Mar 2013 18:30:38 -0400 Subject: [Python-Dev] py2.7: dictobject not properly resizing Message-ID: True, but your example mechanism of getting a shrink event is purely based on ma_fill. This is happening because your last loop is increasing ma_fill to the point where it thinks it needs to resize because of the load factor and then it calculates the new size based on ma_used. The comment that I pointed out from the code seems to imply that simply having ma_fill >> ma_used will trigger a resize. The two conditions for a resize are definitely not equivalent! Micha ----------------------------- http://micha.gd/ http://github.com/mynameisfiber/ From tjreedy at udel.edu Sun Mar 31 04:34:32 2013 From: tjreedy at udel.edu (Terry Jan Reedy) Date: Sat, 30 Mar 2013 22:34:32 -0400 Subject: [Python-Dev] Idle, site.py, and the release candidates Message-ID: While trying to test the patch for http://bugs.python.org/issue5492 on Windows, I discovered that quit() and exit() in the Idle Shell are now disabled, it seems, for all versions on all systems rather than just sometimes on Linux. The problem is a change in idlelib that invalidated an assumption made in site.py. Revs 81718-81721 for http://bugs.python.org/issue9290 changed idlelib.PyShell.PseudoFile (line 1277 in 3.3) to subclass io.TextIOBase, which subclasses IOBase. This gave PseudoFile and its subclasses a .fileno instance method attribute that raises io.UnsupportedOperation: fileno. This is not a bug since the doc for io.IOBase.fileno says: "Return the underlying file descriptor (an integer) of the stream if it exists. An OSError is raised if the IO object does not use a file descriptor." (the particular error raised is not an issue here). This is the code for Quitter.__call__ in site.py (line 368 in 3.3): def __call__(self, code=None): # Shells like IDLE catch the SystemExit, but listen when # stdin wrapper is closed. 
try: fd = -1 if hasattr(sys.stdin, "fileno"): fd = sys.stdin.fileno() if fd != 0: # Don't close stdin if it wraps fd 0 sys.stdin.close() except: pass raise SystemExit(code) The incorrect assumption is that if sys.stdin.fileno exits but raises, the call did not come from a shell that needs .close called. I do not know enough about other circumstances in which stdin.fileno would do something other than return 0 to be sure of what the proper fix would be. (I increasingly dislike bare excepts as they hide the thinking and knowledge of the original programmer. What exception was expected that should be passed away?) Given that the callable constants exit and quit and are optionally suppressed on startup and are not standard builtins http://docs.python.org/3/library/constants.html#constants-added-by-the-site-module it would be alright with me to ignore this regression and release as scheduled. But I though people should be aware of it. -- Terry Jan Reedy From ncoghlan at gmail.com Sun Mar 31 08:39:49 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 31 Mar 2013 16:39:49 +1000 Subject: [Python-Dev] Idle, site.py, and the release candidates In-Reply-To: References: Message-ID: On Sun, Mar 31, 2013 at 12:34 PM, Terry Jan Reedy wrote: > While trying to test the patch for > http://bugs.python.org/**issue5492 > on Windows, I discovered that quit() and exit() in the Idle Shell are now > disabled, it seems, for all versions on all systems rather than just > sometimes on Linux. > > The problem is a change in idlelib that invalidated an assumption made in > site.py. Revs 81718-81721 for > http://bugs.python.org/**issue9290 > changed idlelib.PyShell.PseudoFile (line 1277 in 3.3) to subclass > io.TextIOBase, which subclasses IOBase. This gave PseudoFile and its > subclasses a .fileno instance method attribute that raises > io.UnsupportedOperation: fileno. > > This is not a bug since the doc for io.IOBase.fileno says: > "Return the underlying file descriptor (an integer) of the stream if it > exists. An OSError is raised if the IO object does not use a file > descriptor." > (the particular error raised is not an issue here). > > This is the code for Quitter.__call__ in site.py (line 368 in 3.3): > > def __call__(self, code=None): > # Shells like IDLE catch the SystemExit, but listen when > # stdin wrapper is closed. > try: > fd = -1 > if hasattr(sys.stdin, "fileno"): > fd = sys.stdin.fileno() > if fd != 0: > # Don't close stdin if it wraps fd 0 > sys.stdin.close() > except: > pass > raise SystemExit(code) > > The incorrect assumption is that if sys.stdin.fileno exits but raises, the > call did not come from a shell that needs .close called. > > I do not know enough about other circumstances in which stdin.fileno would > do something other than return 0 to be sure of what the proper fix would > be. (I increasingly dislike bare excepts as they hide the thinking and > knowledge of the original programmer. What exception was expected that > should be passed away?) > The other problem is that making *two* function calls inside a broad try/except is almost always a terrible idea. It seems to me that the intended logic is more like this: try: # Close stdin if it wraps any fd other than 0 close_stdin = (sys.stdin.fileno() != 0) except (AttributeError, OSError, io.UnsupportedOperation): # Also close stdin if it doesn't expose a file descriptor close_stdin = True if close_stdin: try: sys.stdin.close() except Exception: pass raise SystemExit(code) Cheers, Nick. 
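The failure mode is easy to reproduce outside IDLE with a tiny stand-in for PyShell's PseudoInputFile -- the class below is made up, only the io.TextIOBase base matters:

    import io

    class FakePseudoStdin(io.TextIOBase):
        pass

    s = FakePseudoStdin()
    print(hasattr(s, "fileno"))   # True -- fileno() is inherited from io.IOBase
    s.fileno()                    # raises io.UnsupportedOperation: fileno

So the hasattr() guard in site.Quitter passes, the fileno() call raises, and the bare except swallows the exception before sys.stdin.close() is ever reached.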
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sun Mar 31 09:52:26 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 31 Mar 2013 03:52:26 -0400 Subject: [Python-Dev] Idle, site.py, and the release candidates In-Reply-To: References: Message-ID: <5157EB3A.4050904@udel.edu> On 3/31/2013 2:39 AM, Nick Coghlan wrote: > On Sun, Mar 31, 2013 at 12:34 PM, Terry Jan Reedy > > > I do not know enough about other circumstances in which > stdin.fileno would do something other than return 0 to be sure of > what the proper fix would be. (I increasingly dislike bare > excepts as they hide the thinking and knowledge of the original > programmer. What exception was expected that should be passed away?) > > > The other problem is that making *two* function calls inside a broad > try/except is almost always a terrible idea. That too. I could not decide which function an exception was expected from. > It seems to me that the intended logic is more like this: > > try: > # Close stdin if it wraps any fd other than 0 > close_stdin = (sys.stdin.fileno() != 0) > except (AttributeError, OSError, io.UnsupportedOperation): > # Also close stdin if it doesn't expose a file descriptor > close_stdin = True > if close_stdin: > try: > sys.stdin.close() > except Exception: > pass > raise SystemExit(code) > There are 4 possible situations for sys.stdin: 1. No .fileno 2. .fileno raises 3. .fileno() != 0 4. lfileno() == 0 I believe the current code calls .close for 1 and raises SystemExit for 2-4. Your code calls for 1-3 and raises for 4. I suspect that is correct. For an rc patch, the safest temporary patch would be to start .__call__ with if sys.stdin.__name__ == 'PseudoInputFile': sys.stdin.close() I would have to check that the name is correct as seen in the user process (cannot at moment). The deeper problem, I think, is that none of sys.version, sys.hexversion, sys._version, sys.platform tell a program that it is running under Idle. It usually does not matter but there are a few situations in which it does. -- Terry Jan Reedy -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Mar 31 12:01:50 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 31 Mar 2013 12:01:50 +0200 Subject: [Python-Dev] Idle, site.py, and the release candidates References: Message-ID: <20130331120150.4ac38f40@pitrou.net> On Sat, 30 Mar 2013 22:34:32 -0400 Terry Jan Reedy wrote: > > I do not know enough about other circumstances in which stdin.fileno > would do something other than return 0 to be sure of what the proper fix > would be. > (I increasingly dislike bare excepts as they hide the > thinking and knowledge of the original programmer. You should learn to use the power of version control: http://docs.python.org/devguide/faq.html#how-do-i-find-out-who-edited-or-what-revision-changed-a-line-last $ hg ann Lib/site.py will tell you that the lines you mention come from the following changeset: $ hg log -r 86358b43c8bb -vp changeset: 44390:86358b43c8bb parent: 44385:5670104acd39 user: Christian Heimes date: Mon Dec 31 03:07:24 2007 +0000 files: Lib/site.py description: Don't close sys.stdin with quit() if sys.stdin wraps fd 0. 
Otherwise it will raise a warning: Lib/io.py:1221: RuntimeWarning: Trying to close unclosable fd diff --git a/Lib/site.py b/Lib/site.py --- a/Lib/site.py +++ b/Lib/site.py @@ -247,7 +247,12 @@ # Shells like IDLE catch the SystemExit, but listen when their # stdin wrapper is closed. try: - sys.stdin.close() + fd = -1 + if hasattr(sys.stdin, "fileno"): + fd = sys.stdin.fileno() + if fd != 0: + # Don't close stdin if it wraps fd 0 + sys.stdin.close() except: pass raise SystemExit(code) In this case the original motivation seems obsolete: >>> import sys >>> sys.stdin.fileno() 0 >>> sys.stdin.close() >>> That said, if IDLE users expect those global functions, perhaps IDLE should define its own ones rather than rely on site.py. Regards Antoine. From mark at hotpy.org Sun Mar 31 15:29:58 2013 From: mark at hotpy.org (Mark Shannon) Date: Sun, 31 Mar 2013 14:29:58 +0100 Subject: [Python-Dev] Semantics of __int__(), __index__() Message-ID: <51583A56.8070404@hotpy.org> Hi all, I was looking into http://bugs.python.org/issue17576 and I found that the semantics of __int__() and __index__() are not precisely defined in the documentation and that the implementation (CPython 3.4a) has some odd behaviour. Defining two classes: class Int1(int): def __init__(self, val=0): print("new %s" % self.__class__) class Int2(Int1): def __int__(self): return self and two instances i1 = Int1() i2 = Int2() we get the following behaviour: >>> type(int(i1)) I would have expected 'Int1' >>> type(int(i2)) new Why is a new Int2 being created? operator.index does similar things. So, 1. Should type(int(x)) be exactly int, or is any subclass OK? 2. Should type(index(x)) be exactly int, or is any subclass OK? 3. Should int(x) be defined as int_check(x.__int__())? 4. Should operator.index(x) be defined as index_check(x.__index__())? where: def int_check(x): if is_int(x): return x else: raise TypeError(...) def index_check(x): if is_index(x): return x else: raise TypeError(...) The definition of is_int(x) and is_index(x) follow from the answers to 1 and 2. I had previously assumed (and would expect) that the answers were: 1. Any subclass is OK 2. Ditto 3. Yes 4. Yes Which means that def is_int(x): return int in type(x).__mro__ is_index = is_int Cheers, Mark. From dickinsm at gmail.com Sun Mar 31 16:28:56 2013 From: dickinsm at gmail.com (Mark Dickinson) Date: Sun, 31 Mar 2013 15:28:56 +0100 Subject: [Python-Dev] Semantics of __int__(), __index__() In-Reply-To: <51583A56.8070404@hotpy.org> References: <51583A56.8070404@hotpy.org> Message-ID: On Sun, Mar 31, 2013 at 2:29 PM, Mark Shannon wrote: > class Int1(int): > def __init__(self, val=0): > print("new %s" % self.__class__) > > class Int2(Int1): > def __int__(self): > return self > > and two instances > i1 = Int1() > i2 = Int2() > > we get the following behaviour: > > >>> type(int(i1)) > > > I would have expected 'Int1' > Wouldn't that remove the one obvious way to get an 'int' from an 'Int1'? > 1. Should type(int(x)) be exactly int, or is any subclass OK? > 2. Should type(index(x)) be exactly int, or is any subclass OK? > 3. Should int(x) be defined as int_check(x.__int__())? > 4. Should operator.index(x) be defined as index_check(x.__index__())? > For (1), I'd say yes, it should be exactly an int, so my answer to (3) is no. As written, int_check would do the wrong thing for bools, too: I definitely want int(True) to be 1, not True. For (2) and (4), it's not so clear. Are there use-cases for an __index__ return value that's not directly of type int? I can't think of any offhand. 
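For anyone following along who has not implemented __index__ directly: it is the hook behind operator.index(), sequence subscription, hex() and friends -- the places that need a lossless integer. A small illustration (the class is hypothetical):

    import operator

    class Fd:
        """Toy wrapper meant to be usable anywhere an index is expected."""
        def __init__(self, value):
            self._value = value
        def __index__(self):
            return self._value

    fd = Fd(2)
    print(operator.index(fd))    # 2
    print(['a', 'b', 'c'][fd])   # 'c' -- subscription goes through __index__
    print(hex(fd))               # '0x2'

Whether hooks like this may return an int subclass rather than exactly int is the crux of the questions above.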
Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sun Mar 31 17:23:53 2013 From: tjreedy at udel.edu (Terry Jan Reedy) Date: Sun, 31 Mar 2013 11:23:53 -0400 Subject: [Python-Dev] Idle, site.py, and the release candidates In-Reply-To: <5157EB3A.4050904@udel.edu> References: <5157EB3A.4050904@udel.edu> Message-ID: On 3/31/2013 3:52 AM, Terry Reedy wrote: > For an rc patch, the safest temporary patch would be to start > .__call__ with > > if sys.stdin.__name__ == 'PseudoInputFile': sys.stdin.close() > > I would have to check that the name is correct as seen in the user > process (cannot at moment). In addition, idlelib.PyShell.PseudoInputFile needs a .close method + def close(self): + self.shell.close() + http://bugs.python.org/issue17585 -- Terry Jan Reedy From tjreedy at udel.edu Sun Mar 31 17:38:07 2013 From: tjreedy at udel.edu (Terry Jan Reedy) Date: Sun, 31 Mar 2013 11:38:07 -0400 Subject: [Python-Dev] Idle, site.py, and the release candidates In-Reply-To: <20130331120150.4ac38f40@pitrou.net> References: <20130331120150.4ac38f40@pitrou.net> Message-ID: On 3/31/2013 6:01 AM, Antoine Pitrou wrote: > That said, if IDLE users expect those global functions, perhaps IDLE > should define its own ones rather than rely on site.py. I thought of that. Idle would have to check the beginning of every statement before sending it to the user process, which would do the same thing. I personally think exit/quit should eventually go, as ^d/^z or [x] button are as easy. From solipsis at pitrou.net Sun Mar 31 17:43:56 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 31 Mar 2013 17:43:56 +0200 Subject: [Python-Dev] Idle, site.py, and the release candidates References: <20130331120150.4ac38f40@pitrou.net> Message-ID: <20130331174356.51457754@pitrou.net> On Sun, 31 Mar 2013 11:38:07 -0400 Terry Jan Reedy wrote: > On 3/31/2013 6:01 AM, Antoine Pitrou wrote: > > > That said, if IDLE users expect those global functions, perhaps IDLE > > should define its own ones rather than rely on site.py. > > I thought of that. Idle would have to check the beginning of every > statement before sending it to the user process, which would do the same > thing. I personally think exit/quit should eventually go, as ^d/^z or > [x] button are as easy. I never use them myself, but they are more discoverable than keyboard shortcuts, and hence can be useful for beginners or casual users. Regards Antoine. From mynameisfiber at gmail.com Sun Mar 31 18:51:26 2013 From: mynameisfiber at gmail.com (Micha Gorelick) Date: Sun, 31 Mar 2013 12:51:26 -0400 Subject: [Python-Dev] py2.7: dictobject not properly resizing In-Reply-To: References: Message-ID: So I did a bit of benchmarking and attached is the code I used. With downsizing happening when ma_used * 2 <= ma_filled, or the following for the condition in question: if (mp->ma_used <= n_used || (mp->ma_fill*3 < (mp->ma_mask+1)*2 && mp->ma_used*2 > mp->ma_fill)) I see marginally faster performance with downsizing. I chose a factor of 2x because it will ensure downsizings before the ma_fill load factor comes into play and will also not cause a resize on the next insert. Using the vanilla python2.7.3 code, there are never any resize events where the new size is smaller than the old size. Cheers, Micha ----------------------------- http://micha.gd/ http://github.com/mynameisfiber/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: test.py Type: application/octet-stream Size: 1060 bytes Desc: not available URL: From francismb at email.de Sun Mar 31 21:13:59 2013 From: francismb at email.de (francis) Date: Sun, 31 Mar 2013 21:13:59 +0200 Subject: [Python-Dev] Semantics of __int__(), __index__() In-Reply-To: <51583A56.8070404@hotpy.org> References: <51583A56.8070404@hotpy.org> Message-ID: <51588AF7.6000801@email.de> > and two instances > i1 = Int1() > i2 = Int2() > > we get the following behaviour: > > >>> type(int(i1)) > > > I would have expected 'Int1' >>> type(float(i1)) >>> type(float(i2)) >>> isinstance(int(i1), int) True >>> isinstance(int(i2), int) new True >>> isinstance(float(i1), float) True >>> isinstance(float(i2), float) True why is printing new ? From brett at yvrsfo.ca Sun Mar 31 23:11:35 2013 From: brett at yvrsfo.ca (Brett Cannon) Date: Sun, 31 Mar 2013 17:11:35 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Add an itertools recipe showing how to use t.__copy__(). In-Reply-To: <3Zdn7J53MYzPt5@mail.python.org> References: <3Zdn7J53MYzPt5@mail.python.org> Message-ID: "Upcomping" -> "upcoming" On Mar 31, 2013 2:38 AM, "raymond.hettinger" wrote: > http://hg.python.org/cpython/rev/1026b1d47f30 > changeset: 83037:1026b1d47f30 > branch: 2.7 > parent: 83034:e044d22d2f61 > user: Raymond Hettinger > date: Sat Mar 30 23:37:57 2013 -0700 > summary: > Add an itertools recipe showing how to use t.__copy__(). > > files: > Doc/library/itertools.rst | 12 ++++++++++++ > 1 files changed, 12 insertions(+), 0 deletions(-) > > > diff --git a/Doc/library/itertools.rst b/Doc/library/itertools.rst > --- a/Doc/library/itertools.rst > +++ b/Doc/library/itertools.rst > @@ -828,6 +828,18 @@ > indices = sorted(random.randrange(n) for i in xrange(r)) > return tuple(pool[i] for i in indices) > > + def tee_lookahead(t, i): > + """Inspect the i-th upcomping value from a tee object > + while leaving the tee object at its current position. > + > + Raise an IndexError if the underlying iterator doesn't > + have enough values. > + > + """ > + for value in islice(t.__copy__(), i, None): > + return value > + raise IndexError(i) > + > Note, many of the above recipes can be optimized by replacing global > lookups > with local variables defined as default values. For example, the > *dotproduct* recipe can be written as:: > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bugtrack at roumenpetrov.info Sun Mar 31 23:26:16 2013 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Mon, 01 Apr 2013 00:26:16 +0300 Subject: [Python-Dev] mingw32 port In-Reply-To: <5109B192.9050504@ubuntu.com> References: <5109B192.9050504@ubuntu.com> Message-ID: <5158A9F8.5000102@roumenpetrov.info> Matthias Klose wrote: > [No, I'm not interested in the port myself] > > patches for a mingw32 port are floating around on the web and the python bug > tracker, although most of them as a set of patches in one issue addressing > several things, and which maybe outdated for the trunk. at least for me > re-reading a big patch in a new version is more work than having the patches in > different issues. So proposing to break down the patches in independent ones, > dealing with: > > - mingw32 support (excluding any cross-build support). tested with > a native build with srcdir == builddir. 
The changes for distutils > mingw32 support should be a separate patch. Who could review these? > Asked Martin, but didn't get a reply yet. > > - patches to cross-build for mingw32. > > - patches to deal with a srcdir != builddir configuration, where the > srcdir is read-only (please don't mix with the cross-build support). > > All work should be done on the tip/trunk. > > So ok to close issue16526, issue3871, issue3754 and suggest in the reports to > start over with more granular changes? I post first part of "split of issue3871" . It is related only to build of interpreter core. Second part consist of 23 patches related to build of standard extensions . 3 of them are reported as separate issue by other users. As prerequisite is modernization of cygwincompiler.py - ref issue12641. I will post after 2-3 weeks remaining 20 granular updates. Third part is now with only tree updates related to installation (new). Unlike issue3871 will be supported posix installation scheme as all users refuse to use windows scheme. > Matthias > Roumen