From barry at python.org Tue May 1 00:53:44 2007 From: barry at python.org (Barry Warsaw) Date: Mon, 30 Apr 2007 18:53:44 -0400 Subject: [Python-Dev] Call for junior PEP editors Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 David Goodger and I have been the PEP editors for ages. Well, mostly David lately as I've been way too busy to be of much use. David is also pretty busy, and he lamented that he doesn't have much time for editing when he put out his call for PEPs earlier this month. We've now, or will soon have three more experienced Pythonistas helping out as PEP editors, Georg Brandl, Brett Cannon, and Anthony Baxter. As long as they've been hacking Python, you'd have thought they'd have learned their lesson by now, but we'll gladly consume more of their time and souls. David and I would like to see some junior Pythonistas join the PEP editor team, as a great way to gain experience and become more involved with the community. As David says, PEP editing is something a tech writer can do; it doesn't require intimate knowledge of Python's C code base for example. PEP editors don't judge the worthiness of a PEP -- that's for the Python community to do, but they do the important work of ensuring that the PEPs are up to the high quality and consistent standards that have been established over the years. A PEP editor is sometimes also involved in the meta process of developing and maintaining the PEPs. A good editor's eye, excellent written communication skills, and the inhuman amount of spare time that only the young have are your most important qualifications. If you're a budding Pythonista and are interested in becoming a PEP editor, please send an email to peps at python.org. Let us know about your writing and/or editing experience, how long you've been using Python, how long you've been programming in general, and how much cash you'll be sending our way. Kidding about that last bit. python- dev lurkers are encouraged to apply! Again, this call is for junior Pythonistas only. I think we have enough experienced people now to cover our bases and to help mentor new editors. We're really eager to get some new blood involved in the Python community. We may not accept all applicants; we're aiming for two or three additional editors, but that number isn't set in stone. Cheers, - -Barry (and David) -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (Darwin) iQCVAwUBRjZzeXEjvBPtnXfVAQIzFAQAhj1VIaa+QZ6+7O2BlmDGgCzXiJt1Yfb+ uz8HxVfL+5wjXeQELAPM0hxp07qKtq8ys131gZI19BtGe8F+imzEIkyZJvHrJYNw vOboLs9cSJuDlH0QCKT5p9HNf9H75tm5gOFiCTnDDKJ4/BRzsUG62EHDd4Tz9brq euEKecOMiwk= =vNz1 -----END PGP SIGNATURE----- From martin at v.loewis.de Tue May 1 00:56:39 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 01 May 2007 00:56:39 +0200 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: References: <4633F38A.4040605@v.loewis.de> <46345127.1020504@v.loewis.de> <76fd5acf0704290522u674118eai97c4a8fe778243b4@mail.gmail.com> <463494AC.1080608@v.loewis.de> <76fd5acf0704290837t1c57ede8wdb6e45ab6c6f3b95@mail.gmail.com> <4634C1FC.7090408@v.loewis.de> <76fd5acf0704290953i6dcc0bffk98d49619ef922e1b@mail.gmail.com> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> Message-ID: <46367427.2070208@v.loewis.de> > After doing some research I found that it seems to be impossible to > use CreateFile for a file that doesn't have SHARE_READ. 
I played with > different combinations and with FLAG_BACKUP_SEMANTICS and nothing > helped. However on Windows there's still a possibility to read > attributes: use FindFirstFile. _WIN32_FIND_DATA structure seems to > have all the necessary fields (attributes, file times, size and > full/short filename), and FindFirstFile doesn't care about sharing at > all... So what about GetFileAttributesEx? What are the conditions under which I can successfully invoke it? Regards, Martin From pje at telecommunity.com Tue May 1 02:29:17 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 30 Apr 2007 20:29:17 -0400 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module Message-ID: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> I wanted to get this in before the Py3K PEP deadline, since this is a Python 2.6 PEP that would presumably impact 3.x as well. Feedback welcome. PEP: 365 Title: Adding the pkg_resources module Version: $Revision: 55032 $ Last-Modified: $Date: 2007-04-30 20:24:48 -0400 (Mon, 30 Apr 2007) $ Author: Phillip J. Eby Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 30-Apr-2007 Post-History: 30-Apr-2007 Abstract ======== This PEP proposes adding an enhanced version of the ``pkg_resources`` module to the standard library. ``pkg_resources`` is a module used to find and manage Python package/version dependencies and access bundled files and resources, including those inside of zipped ``.egg`` files. Currently, ``pkg_resources`` is only available through installing the entire ``setuptools`` distribution, but it does not depend on any other part of setuptools; in effect, it comprises the entire runtime support library for Python Eggs, and is independently useful. In addition, with one feature addition, this module could support easy bootstrap installation of several Python package management tools, including ``setuptools``, ``workingenv``, and ``zc.buildout``. Proposal ======== Rather than proposing to include ``setuptools`` in the standard library, this PEP proposes only that ``pkg_resources`` be added to the standard library for Python 2.6 and 3.0. ``pkg_resources`` is considerably more stable than the rest of setuptools, with virtually no new features being added in the last 12 months. However, this PEP also proposes that a new feature be added to ``pkg_resources``, before being added to the stdlib. Specifically, it should be possible to do something like:: python -m pkg_resources SomePackage==1.2 to request downloading and installation of ``SomePackage`` from PyPI. This feature would *not* be a replacement for ``easy_install``; instead, it would rely on ``SomePackage`` having pure-Python ``.egg`` files listed for download via the PyPI XML-RPC API, and the eggs would be placed in the ``$PYTHONEGGS`` cache, where they would **not** be importable by default. (And no scripts would be installed) However, if the download egg contains installation bootstrap code, it will be given a chance to run. These restrictions would allow the code to be extremely simple, yet still powerful enough to support users downloading package management tools such as ``setuptools``, ``workingenv`` and ``zc.buildout``, simply by supplying the tool's name on the command line. Rationale ========= Many users have requested that ``setuptools`` be included in the standard library, to save users needing to go through the awkward process of bootstrapping it. 
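For reference, the runtime facilities that ``pkg_resources`` already provides look roughly like this (a sketch against the setuptools 0.6 API; the project and resource names are invented for illustration)::

    import pkg_resources

    # Locate and activate a distribution (plus its dependencies) by requirement.
    pkg_resources.require("SomePackage>=1.2")        # hypothetical project name

    # Query a distribution that is already installed on sys.path.
    dist = pkg_resources.get_distribution("setuptools")
    print dist.project_name, dist.version

    # Read a data file bundled with a package, even from inside a zipped .egg.
    data = pkg_resources.resource_string("SomePackage", "templates/default.txt")

None of this needs the build-time parts of setuptools, which is what makes the module independently useful as a runtime library.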
However, most of the bootstrapping complexity comes from the fact that setuptools-installed code cannot use the ``pkg_resources`` runtime module unless setuptools is already installed. Thus, installing setuptools requires (in a sense) that setuptools already be installed. Other Python package management tools, such as ``workingenv`` and ``zc.buildout``, have similar bootstrapping issues, since they both make use of setuptools, but also want to provide users with something approaching a "one-step install". The complexity of creating bootstrap utilities for these and any other such tools that arise in future, is greatly reduced if ``pkg_resources`` is already present, and is also able to download pre-packaged eggs from PyPI. (It would also mean that setuptools would not need to be installed in order to simply *use* eggs, as opposed to building them.) Finally, in addition to providing access to eggs built via setuptools or other packaging tools, it should be noted that since Python 2.5, the distutils install package metadata (aka ``PKG-INFO``) files that can be read by ``pkg_resources`` to identify what distributions are already on ``sys.path``. In environments where Python packages are installed using system package tools (like RPM), the ``pkg_resources`` module provides an API for detecting what versions of what packages are installed, even if those packages were installed via the distutils instead of setuptools. Implementation and Documentation ================================ The ``pkg_resources`` implementation is maintained in the Python SVN repository under ``/sandbox/trunk/setuptools/``; see ``pkg_resources.py`` and ``pkg_resources.txt``. Documentation for the egg format(s) supported by ``pkg_resources`` can be found in ``doc/formats.txt``. HTML versions of these documents are available at: * http://peak.telecommunity.com/DevCenter/PkgResources and * http://peak.telecommunity.com/DevCenter/EggFormats (These HTML versions are for setuptools 0.6; they may not reflect all of the changes found in the Subversion trunk's ``.txt`` versions.) Copyright ========= This document has been placed in the public domain. From andrew-pythondev at puzzling.org Tue May 1 02:50:04 2007 From: andrew-pythondev at puzzling.org (Andrew Bennetts) Date: Tue, 1 May 2007 10:50:04 +1000 Subject: [Python-Dev] os.rename on windows In-Reply-To: <2c51ecee0704300749m63620406j4c8d9d6bff6aee6a@mail.gmail.com> References: <2c51ecee0704300749m63620406j4c8d9d6bff6aee6a@mail.gmail.com> Message-ID: <20070501005004.GD15094@steerpike.home.puzzling.org> Raghuram Devarakonda wrote: > Hi, > > I have submitted a patch (http://www.python.org/sf/1704547) that > allows os.rename to replace the destination file if it exists, on > windows. As part of discussion in the tracker, Martin suggested that > python-dev should discuss the change. Does MOVEFILE_REPLACE_EXISTING mean the rename over an existing file is actually atomic? I cannot find any MSDN docs that say so (and I've seen some that suggest to me that it probably isn't). If it's not atomic, then this doesn't offer any advantage over shutil.move that I can see (and in fact would damage the usefulness of os.rename, which is currently atomic on all platforms AFAIK, even though it cannot succeed all the time). If MOVEFILE_REPLACE_EXISTING miraculously turns out to be atomic, then my opinion is: * this feature would be very useful, and I would like a cross-platform way to do this in Python. 
* this feature should not be called "os.rename", which for years has done something else on some platforms, and so changing it will invite unnecessary breakage. A new function would be better, or perhaps an optional flag. I propose "os.atomic_rename". Also, I assume this cannot replace files that are in use? > Currently, os.rename() on windows uses the API MoveFile() which fails > if the destination file exists. The patch replaces this API with > MoveFileEx() and uses the flag MOVEFILE_REPLACE_EXISTING which causes > the destination file to be replaced if it exists. However, this change > is subtle and if there is any existing code that depends on current > os.rename behaviour on windows, their code is silently broken with > (any) destination file being overwritten. But the functionality of > replacing is important and I would like to know the best of way of > supporting it. If it is deemed that this change is not good to go in > as-is, how about having an optional parameter to os.rename (say, > win_replace) that can be used by callers to explicitly request > replacing? I'd be ok with a flag, but it should have a cross-platform name. "require_atomic" or "replace" or something like that. I think a new function makes more sense, though. > I must also point out that the patch uses another flag > MOVEFILE_COPY_ALLOWED effectively allowing renamed files to be on > separate file systems. The renaming in this case is not atomic and I > used this flag only to support current functionality. It is not a bad > idea to disallow such renames which brings it in line with the > behaviour on many unix flavors. This also has the potential to break > code but not silently. I don't quite follow what you're saying here, but I'd be against an operation called "rename" that sometimes was atomic and sometimes wasn't. > Lastly, I found an old discussion about the same topic by this list. > > http://mail.python.org/pipermail/python-dev/2001-May/014957.html > > Even though Guido indicated that he doesn't support API change in this > thread, I am posting again as I did not see any one mention > MoveFileEx() in that thread. Does MoveFileEx solve the atomicity problem that Guido raised in that thread? I don't think it does, so I think the situation is still the same. -Andrew. From greg.ewing at canterbury.ac.nz Tue May 1 04:23:12 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 01 May 2007 14:23:12 +1200 Subject: [Python-Dev] (no subject) In-Reply-To: References: Message-ID: <4636A490.3020108@canterbury.ac.nz> JOSHUA ABRAHAM wrote: > I was hoping you guys would consider creating function in os.path or > otherwise that would find the full path of a file when given only it's base > name and nothing else.I have been made to understand that this is not > currently possible. Does os.path.abspath() do what you want? If not, what exactly *do* you want? -- Greg From draghuram at gmail.com Tue May 1 06:14:47 2007 From: draghuram at gmail.com (Raghuram Devarakonda) Date: Tue, 1 May 2007 00:14:47 -0400 Subject: [Python-Dev] os.rename on windows In-Reply-To: <20070501005004.GD15094@steerpike.home.puzzling.org> References: <2c51ecee0704300749m63620406j4c8d9d6bff6aee6a@mail.gmail.com> <20070501005004.GD15094@steerpike.home.puzzling.org> Message-ID: <2c51ecee0704302114v70c070b8oab798568ccb67688@mail.gmail.com> On 4/30/07, Andrew Bennetts wrote: > Does MOVEFILE_REPLACE_EXISTING mean the rename over an existing file is actually > atomic? 
I cannot find any MSDN docs that say so (and I've seen some that > suggest to me that it probably isn't). Even though MSDN docs do not say it explicitly, I found some discussions claiming that MOVEFILE_REPLACE_EXISTING is atomic. However, after seeing your comment, I did a more thorough search and I too found some references claiming otherwise. As a last resort, I checked cygwin documentation which claims that it's rename() is POSIX.1 compliant. If I am not mistaken, POSIX.1 does require atomicity so I am curious how rename() is implemented there. I checked out the sources and I will try to find more about their implementation. I completely agree that without positive proof of atomicity, there is no point in making this code change. > Also, I assume this cannot replace files that are in use? A simple test shows that it can indeed replace files that are open. Thanks, Raghu From mike.klaas at gmail.com Tue May 1 06:49:48 2007 From: mike.klaas at gmail.com (Mike Klaas) Date: Mon, 30 Apr 2007 21:49:48 -0700 Subject: [Python-Dev] (no subject) In-Reply-To: <4636A490.3020108@canterbury.ac.nz> References: <4636A490.3020108@canterbury.ac.nz> Message-ID: <3d2ce8cb0704302149ia9e1fe5h53b6098860d8fdac@mail.gmail.com> On 4/30/07, Greg Ewing wrote: > JOSHUA ABRAHAM wrote: > > I was hoping you guys would consider creating function in os.path or > > otherwise that would find the full path of a file when given only it's base > > name and nothing else.I have been made to understand that this is not > > currently possible. > > Does os.path.abspath() do what you want? > > If not, what exactly *do* you want? probably: def find_in_path(filename): for path in os.environ['PATH'].split(os.pathsep): if os.path.exists(filename): return os.path.abspath(os.path.join(path, filename)) -Mike From nnorwitz at gmail.com Tue May 1 08:38:42 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 30 Apr 2007 23:38:42 -0700 Subject: [Python-Dev] head crashing (was: Fwd: [Python-checkins] buildbot warnings in x86 mvlgcc trunk) Message-ID: This is the third time I've seen a crash on 2 different machines. This is the first time I noticed this unexplained crash: http://python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/1983/step-test/0 That was at r54982. I tried to reproduce this: with a non-debug build, with a debug build, with valgrind with both types of build. I could never reproduce it. Valgrind did not report any errors either. Here is the third failure: http://python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/1986/step-test/0 The failure below prints: python: Objects/obmalloc.c:746: PyObject_Malloc: Assertion `bp != ((void *)0)' failed. which probably doesn't really help since the corruption has already occurred. See http://python.org/dev/buildbot/all/x86%20mvlgcc%20trunk/builds/497/step-test/0 Anyone have ideas what might have caused this? n -- ---------- Forwarded message ---------- From: buildbot at python.org Date: Apr 30, 2007 11:17 PM Subject: [Python-checkins] buildbot warnings in x86 mvlgcc trunk To: python-checkins at python.org The Buildbot has detected a new failure of x86 mvlgcc trunk. 
Full details are available at: http://www.python.org/dev/buildbot/all/x86%2520mvlgcc%2520trunk/builds/497 Buildbot URL: http://www.python.org/dev/buildbot/all/ Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl Build had warnings: warnings test Excerpt from the test logfile: make: *** [buildbottest] Aborted (core dumped) sincerely, -The Buildbot _______________________________________________ Python-checkins mailing list Python-checkins at python.org http://mail.python.org/mailman/listinfo/python-checkins From nnorwitz at gmail.com Tue May 1 09:27:30 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 1 May 2007 00:27:30 -0700 Subject: [Python-Dev] head crashing (was: Fwd: [Python-checkins] buildbot warnings in x86 mvlgcc trunk) In-Reply-To: References: Message-ID: In rev 54982 (the first time this crash was seen), I see something which might create a problem. In python/trunk/Modules/posixmodule.c (near line 6300): + PyMem_FREE(mode); Py_END_ALLOW_THREADS Can you call PyMem_FREE() without the GIL held? I couldn't find it documented either way. Of the 3 failures I know of, below is the intersection of the tests that were run prior to crashing: set(['test_threadedtempfile', 'test_cgi', 'test_dircache', 'test_set', 'test_binascii', 'test_imp', 'test_multibytecodec', 'test_weakref', 'test_ftplib', 'test_posixpath', 'test_xmlrpc', 'test_urllibnet', 'test_old_mailbox', 'test_distutils', 'test_site', 'test_runpy', 'test_fork1', 'test_traceback']) n -- On 4/30/07, Neal Norwitz wrote: > This is the third time I've seen a crash on 2 different machines. > This is the first time I noticed this unexplained crash: > > http://python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/1983/step-test/0 > > That was at r54982. > > I tried to reproduce this: with a non-debug build, with a debug build, > with valgrind with both types of build. I could never reproduce it. > Valgrind did not report any errors either. > > Here is the third failure: > > http://python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/1986/step-test/0 > > The failure below prints: > python: Objects/obmalloc.c:746: PyObject_Malloc: Assertion `bp != > ((void *)0)' failed. > > which probably doesn't really help since the corruption has already > occurred. See http://python.org/dev/buildbot/all/x86%20mvlgcc%20trunk/builds/497/step-test/0 > > Anyone have ideas what might have caused this? > > n > -- > > ---------- Forwarded message ---------- > From: buildbot at python.org > Date: Apr 30, 2007 11:17 PM > Subject: [Python-checkins] buildbot warnings in x86 mvlgcc trunk > To: python-checkins at python.org > > > The Buildbot has detected a new failure of x86 mvlgcc trunk. 
> Full details are available at: > http://www.python.org/dev/buildbot/all/x86%2520mvlgcc%2520trunk/builds/497 > > Buildbot URL: http://www.python.org/dev/buildbot/all/ > > Build Reason: > Build Source Stamp: [branch trunk] HEAD > Blamelist: georg.brandl > > Build had warnings: warnings test > > Excerpt from the test logfile: > make: *** [buildbottest] Aborted (core dumped) > > sincerely, > -The Buildbot > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From snaury at gmail.com Tue May 1 09:36:47 2007 From: snaury at gmail.com (Alexey Borzenkov) Date: Tue, 1 May 2007 11:36:47 +0400 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: <46367427.2070208@v.loewis.de> References: <4633F38A.4040605@v.loewis.de> <463494AC.1080608@v.loewis.de> <76fd5acf0704290837t1c57ede8wdb6e45ab6c6f3b95@mail.gmail.com> <4634C1FC.7090408@v.loewis.de> <76fd5acf0704290953i6dcc0bffk98d49619ef922e1b@mail.gmail.com> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> Message-ID: On 5/1/07, "Martin v. L?wis" wrote: > > After doing some research I found that it seems to be impossible to > > use CreateFile for a file that doesn't have SHARE_READ. I played with > > different combinations and with FLAG_BACKUP_SEMANTICS and nothing > > helped. However on Windows there's still a possibility to read > > attributes: use FindFirstFile. _WIN32_FIND_DATA structure seems to > > have all the necessary fields (attributes, file times, size and > > full/short filename), and FindFirstFile doesn't care about sharing at > > all... > So what about GetFileAttributesEx? What are the conditions under which > I can successfully invoke it? Seems to have the same problems as with CreateFile(...): // 1.cc #include #include int main(int argc, char** argv) { WIN32_FILE_ATTRIBUTE_DATA data; if(!GetFileAttributesEx("pagefile.sys", GetFileExInfoStandard, (LPVOID)&data)) { char buffer[1024]; FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, NULL, GetLastError(), 0, buffer, 1024, NULL); printf("Error %d: %s\n", GetLastError(), buffer); return 1; } printf("Success\n"); return 0; } // 2.cc #include #include int main(int argc, char** argv) { WIN32_FIND_DATA data; if(INVALID_HANDLE_VALUE == FindFirstFile("pagefile.sys", &data)) { char buffer[1024]; FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, NULL, GetLastError(), 0, buffer, 1024, NULL); printf("Error %d: %s\n", GetLastError(), buffer); return 1; } printf("Success\n"); return 0; } C:\>C:\3\1.exe Error 32: The process cannot access the file because it is being used by another process. C:\>C:\3\2.exe Success From scott+python-dev at scottdial.com Tue May 1 09:33:01 2007 From: scott+python-dev at scottdial.com (Scott Dial) Date: Tue, 01 May 2007 03:33:01 -0400 Subject: [Python-Dev] os.rename on windows In-Reply-To: <2c51ecee0704302114v70c070b8oab798568ccb67688@mail.gmail.com> References: <2c51ecee0704300749m63620406j4c8d9d6bff6aee6a@mail.gmail.com> <20070501005004.GD15094@steerpike.home.puzzling.org> <2c51ecee0704302114v70c070b8oab798568ccb67688@mail.gmail.com> Message-ID: <4636ED2D.5000402@scottdial.com> Raghuram Devarakonda wrote: > As a last resort, I > checked cygwin documentation which claims that it's rename() is > POSIX.1 compliant. If I am not mistaken, POSIX.1 does require > atomicity so I am curious how rename() is implemented there. 
The cygwin implementation of rename goes like this: 1) Try to use MoveFile 2) Try to use MoveFileEx(..., MOVEFILE_REPLACE_EXISTING) 3) Try to unlink destination, then try to use MoveFile And as you say, Cygwin claims it meets POSIX.1. And, POSIX.1 says, "If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing." Clearly, unliking and then calling MoveFile is not atomic. So, cygwin is not being honest here because in these less frequent cases, the rename will not be atomic. Also note, MVCRT only tries step 1 of cygwin's version. Which I believe also suggests that it's the only version that is atomic. -Scott -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From martin at v.loewis.de Tue May 1 10:13:54 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 01 May 2007 10:13:54 +0200 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: References: <4633F38A.4040605@v.loewis.de> <463494AC.1080608@v.loewis.de> <76fd5acf0704290837t1c57ede8wdb6e45ab6c6f3b95@mail.gmail.com> <4634C1FC.7090408@v.loewis.de> <76fd5acf0704290953i6dcc0bffk98d49619ef922e1b@mail.gmail.com> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> Message-ID: <4636F6C2.9090302@v.loewis.de> Alexey Borzenkov schrieb: > On 5/1/07, "Martin v. L?wis" wrote: >> > After doing some research I found that it seems to be impossible to >> > use CreateFile for a file that doesn't have SHARE_READ. >> So what about GetFileAttributesEx? What are the conditions under which >> I can successfully invoke it? > > Seems to have the same problems as with CreateFile(...): That code only tests it for pagefile.sys. My question was about open handles in general. Both Calvin Spealman and I found that you cannot reproduce the problem when you, in Python 2.5.0, open a file, and then try to os.stat() it - even though, in Python 2.5.0, os.stat() will perform GetFileAttributesEx. So even though we opened the file with not passing any sharing flags, we could still do GetFileAttributesEx on it. I now studied the CRT sources, and it seems that if you use a regular open() call, the CRT will pass FILE_SHARE_READ | FILE_SHARE_WRITE to CreateFile. You would have to use _sopen in the CRT to create any kind of sharing conflict, and that isn't exposed in Python. So I guess we need continue using pagefile.sys as a test case. Regards, Martin From ncoghlan at gmail.com Tue May 1 13:23:01 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 01 May 2007 21:23:01 +1000 Subject: [Python-Dev] PEP 366: Main module explicit relative imports Message-ID: <46372315.5050906@gmail.com> Brett's PEP 3122 prompted me to finally PEP'ify my proposed solution for the current incompatibility between PEP 328 (absolute imports) and PEP 338 (executing modules as scripts). The only user visible change (other than bug 1510172 going away) would be the presence of a new module level attribute in the main module. Regards, Nick. 
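To make the incompatibility concrete before the PEP text below, here is a minimal sketch (the layout matches the PEP's example; the exact error wording may vary between versions)::

    # pkg/test/test_A.py
    from .. import moduleA      # explicit relative import (PEP 328)

    # Imported as part of the package, this works fine:
    #     >>> import pkg.test.test_A
    # Run as a script (PEP 338), it currently fails, because __name__ is
    # '__main__' rather than 'pkg.test.test_A':
    #     $ python -m pkg.test.test_A
    #     ValueError: Attempted relative import in non-package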
PEP: 366 Title: Main module explicit relative imports Version: $Revision: 55046 $ Last-Modified: $Date: 2007-05-01 21:13:47 +1000 (Tue, 01 May 2007) $ Author: Nick Coghlan Status: Final Type: Standards Track Content-Type: text/x-rst Created: 1-May-2007 Python-Version: 2.6 Post-History: 1-May-2007 Abstract ======== This PEP proposes a backwards compatible mechanism that permits the use of explicit relative imports from executable modules within packages. Such imports currently fail due to an awkward interaction between PEP 328 and PEP 338 - this behaviour is the subject of at least one open SF bug report (#1510172)[1]. With the proposed mechanism, relative imports will work automatically if the module is executed using the ``-m`` switch. A small amount of boilerplate will be needed in the module itself to allow the relative imports to work when the file is executed by name. Import Statements and the Main Module ===================================== (This section is taken from the final revision of PEP 338) The release of 2.5b1 showed a surprising (although obvious in retrospect) interaction between PEP 338 and PEP 328 - explicit relative imports don't work from a main module. This is due to the fact that relative imports rely on ``__name__`` to determine the current module's position in the package hierarchy. In a main module, the value of ``__name__`` is always ``'__main__'``, so explicit relative imports will always fail (as they only work for a module inside a package). Investigation into why implicit relative imports *appear* to work when a main module is executed directly but fail when executed using ``-m`` showed that such imports are actually always treated as absolute imports. Because of the way direct execution works, the package containing the executed module is added to sys.path, so its sibling modules are actually imported as top level modules. This can easily lead to multiple copies of the sibling modules in the application if implicit relative imports are used in modules that may be directly executed (e.g. test modules or utility scripts). For the 2.5 release, the recommendation is to always use absolute imports in any module that is intended to be used as a main module. The ``-m`` switch already provides a benefit here, as it inserts the current directory into ``sys.path``, instead of the directory contain the main module. This means that it is possible to run a module from inside a package using ``-m`` so long as the current directory contains the top level directory for the package. Absolute imports will work correctly even if the package isn't installed anywhere else on sys.path. If the module is executed directly and uses absolute imports to retrieve its sibling modules, then the top level package directory needs to be installed somewhere on sys.path (since the current directory won't be added automatically). Here's an example file layout:: devel/ pkg/ __init__.py moduleA.py moduleB.py test/ __init__.py test_A.py test_B.py So long as the current directory is ``devel``, or ``devel`` is already on ``sys.path`` and the test modules use absolute imports (such as ``import pkg.moduleA`` to retrieve the module under test, PEP 338 allows the tests to be run as:: python -m pkg.test.test_A python -m pkg.test.test_B Rationale for Change ==================== In rejecting PEP 3122 (which proposed a higher impact solution to this problem), Guido has indicated that he still isn't particularly keen on the idea of executing modules inside packages as scripts [2]. 
Despite these misgivings he has previously approved the addition of the ``-m`` switch in Python 2.4, and the ``runpy`` module based enhancements described in PEP 338 for Python 2.5. The philosophy that motivated those previous additions (i.e. access to utility or testing scripts without needing to worry about name clashes in either the OS executable namespace or the top level Python namespace) is also the motivation behind fixing what I see as a bug in the current implementation. This PEP is intended to provide a solution which permits explicit relative imports from main modules, without incurring any significant costs during interpreter startup or normal module import. Proposed Solution ================= The heart of the proposed solution is a new module attribute ``__package_name__``. This attribute will be defined only in the main module (i.e. modules where ``__name__ == "__main__"``). For a directly executed main module, this attribute will be set to the empty string. For a module executed using ``runpy.run_module()`` with the ``run_name`` parameter set to ``"__main__"``, the attribute will be set to ``mod_name.rpartition('.')[0]`` (i.e., everything up to but not including the last period). In the import machinery there is an error handling path which deals with the case where an explicit relative reference attempts to go higher than the top level in the package hierarchy. This error path would be changed to fall back on the ``__package_name__`` attribute for explicit relative imports when the importing module is called ``"__main__"``. With this change, explicit relative imports will work automatically from a script executed with the ``-m`` switch. To allow direct execution of the module, the following boilerplate would be needed at the top of the script:: if __name__ == "__main__" and not __package_name__: __package_name__ = "" Note that this boilerplate has the same disadvantage as the use of absolute imports of sibling modules - if the script is moved to a different package or subpackage, the boilerplate will need to be updated manually. With this feature in place, the test scripts in the package above would be able to change their import lines to something along the lines of ``import ..moduleA``. The scripts could then be executed unmodified even if the name of the package was changed. (Rev 47142 in SVN implemented an early variant of this proposal which stored the main module's real module name in the '__module_name__' attribute. It was reverted due to the fact that 2.5 was already in beta by that time.) Alternative Proposals ===================== PEP 3122 proposed addressing this problem by changing the way the main module is identified. That's a huge compatibility cost to incur to fix something that is a pretty minor bug in the overall scheme of things. The advantage of the proposal in this PEP is that its only impact on normal code is the tiny amount of time needed at startup to set the extra attribute in the main module. The changes to the import machinery are all in an existing error handling path, so normal imports don't incur any performance penalty at all. References ========== .. [1] Absolute/relative import not working? (http://www.python.org/sf/1510172) .. [2] Guido's rejection of PEP 3122 (http://mail.python.org/pipermail/python-3000/2007-April/006793.html) Copyright ========= This document has been placed in the public domain. 
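For concreteness, one of the test modules from the example layout might look like this under the proposal (a sketch only: the concrete package name in the boilerplate, and the use of ``from .. import`` rather than ``import ..moduleA``, are assumptions based on the text above)::

    # pkg/test/test_A.py
    if __name__ == "__main__" and not __package_name__:
        __package_name__ = "pkg.test"   # updated by hand if the module moves

    from .. import moduleA              # resolved via __package_name__ when
                                        # this file is run as the main module

    # Either invocation would then work (for direct execution, the directory
    # containing ``pkg`` still needs to be on sys.path):
    #     python -m pkg.test.test_A
    #     python pkg/test/test_A.py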
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From snaury at gmail.com Tue May 1 15:21:17 2007 From: snaury at gmail.com (Alexey Borzenkov) Date: Tue, 1 May 2007 17:21:17 +0400 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: <4636F6C2.9090302@v.loewis.de> References: <4633F38A.4040605@v.loewis.de> <4634C1FC.7090408@v.loewis.de> <76fd5acf0704290953i6dcc0bffk98d49619ef922e1b@mail.gmail.com> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> <4636F6C2.9090302@v.loewis.de> Message-ID: On 5/1/07, "Martin v. L?wis" wrote: > That code only tests it for pagefile.sys. My question was about open > handles in general. Both Calvin Spealman and I found that you cannot > reproduce the problem when you, in Python 2.5.0, open a file, and then > try to os.stat() it - even though, in Python 2.5.0, os.stat() will > perform GetFileAttributesEx. So even though we opened the file with > not passing any sharing flags, we could still do GetFileAttributesEx > on it. > > I now studied the CRT sources, and it seems that if you use a regular > open() call, the CRT will pass FILE_SHARE_READ | FILE_SHARE_WRITE to > CreateFile. You would have to use _sopen in the CRT to create any > kind of sharing conflict, and that isn't exposed in Python. Wow, I'm very sorry, I didn't realize how much special pagefile.sys and hiberfil.sys are. As it turns out, even if you create a file with no sharing allowed, you can still open it with backup semantics in other processes, and thus can use GetFileAttributesEx, GetFileTime, etc. The file pagefile.sys seems almost magical then, I don't understand how it's opened to behave like that. The difference is also immediately visible if you try to open Properties of pagefile.sys, you won't even see Security tab there (even when I create file something.txt and then remove all ACLs, including SYSTEM, I can't access the file, but I can see Security tab and can grant myself permissions back), it looks like all kinds of opening that file are denied. Maybe this is a special security feature, so that no process could access swapped pages (otherwise it could be possible with backup semantics). Thus you can't access the file itself, you can only access containing directory. > So I guess we need continue using pagefile.sys as a test case. Seems to be true, it's just maybe it shouldn't be hardcoded to C:\ There's REG_MULTI_SZ PagingFiles in "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management", btw. The format seems to be "filename minmbsize maxmbsize" for every line. Best regards, Alexey. From kristjan at ccpgames.com Tue May 1 15:50:30 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 1 May 2007 13:50:30 +0000 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: References: <4633F38A.4040605@v.loewis.de> <4634C1FC.7090408@v.loewis.de> <76fd5acf0704290953i6dcc0bffk98d49619ef922e1b@mail.gmail.com> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> <4636F6C2.9090302@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57D7EA@exchis.ccp.ad.local> Hm, I fail to see the importance of a special regression test for that peculiar file then with its special magical OS properties. 
Why not focus our attention on real, user generated files?. -----Original Message----- Wow, I'm very sorry, I didn't realize how much special pagefile.sys and hiberfil.sys are. As it turns out, even if you create a file with no sharing allowed, you can still open it with backup semantics in other processes, and thus can use GetFileAttributesEx, GetFileTime, etc. The file pagefile.sys seems almost magical then, I don't understand how it's opened to behave like that. The difference is also immediately visible if you try to open Properties of pagefile.sys, you won't even see Security tab there (even when I create file something.txt and then remove all ACLs, including SYSTEM, I can't access the file, but I can see Security tab and can grant myself permissions back), it looks like all kinds of opening that file are denied. Maybe this is a special security feature, so that no process could access swapped pages (otherwise it could be possible with backup semantics). Thus you can't access the file itself, you can only access containing directory. > So I guess we need continue using pagefile.sys as a test case. Seems to be true, it's just maybe it shouldn't be hardcoded to C:\ There's REG_MULTI_SZ PagingFiles in "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management", btw. The format seems to be "filename minmbsize maxmbsize" for every line. From martin at v.loewis.de Tue May 1 16:23:56 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 01 May 2007 16:23:56 +0200 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57D7EA@exchis.ccp.ad.local> References: <4633F38A.4040605@v.loewis.de> <4634C1FC.7090408@v.loewis.de> <76fd5acf0704290953i6dcc0bffk98d49619ef922e1b@mail.gmail.com> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> <4636F6C2.9090302@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508CD57D7EA@exchis.ccp.ad.local> Message-ID: <46374D7C.500@v.loewis.de> Kristj?n Valur J?nsson schrieb: > Hm, I fail to see the importance of a special regression test for that > peculiar file then with its special magical OS properties. Why not focus > our attention on real, user generated files?. Because real users really had real problems with this very real file, or else they had not reported that as a real bug, really. Are you proposing to unfix the bug? Regards, Martin From draghuram at gmail.com Tue May 1 16:40:19 2007 From: draghuram at gmail.com (Raghuram Devarakonda) Date: Tue, 1 May 2007 10:40:19 -0400 Subject: [Python-Dev] os.rename on windows In-Reply-To: <4636ED2D.5000402@scottdial.com> References: <2c51ecee0704300749m63620406j4c8d9d6bff6aee6a@mail.gmail.com> <20070501005004.GD15094@steerpike.home.puzzling.org> <2c51ecee0704302114v70c070b8oab798568ccb67688@mail.gmail.com> <4636ED2D.5000402@scottdial.com> Message-ID: <2c51ecee0705010740v384fb269ice3df80ec04994fe@mail.gmail.com> On 5/1/07, Scott Dial wrote: > The cygwin implementation of rename goes like this: > > 1) Try to use MoveFile > 2) Try to use MoveFileEx(..., MOVEFILE_REPLACE_EXISTING) > 3) Try to unlink destination, then try to use MoveFile > > And as you say, Cygwin claims it meets POSIX.1. And, POSIX.1 says, "If > newpath already exists it will be atomically replaced (subject to > a few conditions; see ERRORS below), so that there is no point at which > another process attempting to access newpath will find it missing." 
> Clearly, unliking and then calling MoveFile is not atomic. So, cygwin is > not being honest here because in these less frequent cases, the rename > will not be atomic. You are right. I just checked cygwin's rename() code and it is convincing enough for me to withdraw the patch. Thanks for all the comments. Raghu From ironfroggy at gmail.com Tue May 1 17:54:17 2007 From: ironfroggy at gmail.com (Calvin Spealman) Date: Tue, 1 May 2007 11:54:17 -0400 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57D7EA@exchis.ccp.ad.local> References: <4633F38A.4040605@v.loewis.de> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> <4636F6C2.9090302@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508CD57D7EA@exchis.ccp.ad.local> Message-ID: <76fd5acf0705010854h5b6e8388v896ab23c2fbfe776@mail.gmail.com> On 5/1/07, Kristj?n Valur J?nsson wrote: > Hm, I fail to see the importance of a special regression test for that > peculiar file then with its special magical OS properties. Why not focus > our attention on real, user generated files?. (Try to stick to the posting conventions and reply under the actual segments you respond to.) Not all the user generated files are directly from python. Consider all the extension libraries that can do anything they want opening files on lower levels. For example, database files are likely to have different sharing flags than the default. I'm not sure if sqlite, for example, may or may not have such problems. Martin: Would tests that use ctypes do do the open directly be acceptable ways of solving this? > -----Original Message----- > > Wow, I'm very sorry, I didn't realize how much special pagefile.sys > and hiberfil.sys are. As it turns out, even if you create a file with > no sharing allowed, you can still open it with backup semantics in > other processes, and thus can use GetFileAttributesEx, GetFileTime, > etc. The file pagefile.sys seems almost magical then, I don't > understand how it's opened to behave like that. The difference is also > immediately visible if you try to open Properties of pagefile.sys, you > won't even see Security tab there (even when I create file > something.txt and then remove all ACLs, including SYSTEM, I can't > access the file, but I can see Security tab and can grant myself > permissions back), it looks like all kinds of opening that file are > denied. Maybe this is a special security feature, so that no process > could access swapped pages (otherwise it could be possible with backup > semantics). Thus you can't access the file itself, you can only access > containing directory. > > > So I guess we need continue using pagefile.sys as a test case. > > Seems to be true, it's just maybe it shouldn't be hardcoded to C:\ > There's REG_MULTI_SZ PagingFiles in > "HKLM\System\CurrentControlSet\Control\Session Manager\Memory > Management", btw. The format seems to be "filename minmbsize > maxmbsize" for every line. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! 
http://ironfroggy-code.blogspot.com/ From martin at v.loewis.de Tue May 1 18:15:43 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 01 May 2007 18:15:43 +0200 Subject: [Python-Dev] Python 2.5.1 In-Reply-To: <76fd5acf0705010854h5b6e8388v896ab23c2fbfe776@mail.gmail.com> References: <4633F38A.4040605@v.loewis.de> <4634DB32.1020009@v.loewis.de> <76fd5acf0704291057g590b909emc9ef3dfa6343450d@mail.gmail.com> <4634E26C.5030208@v.loewis.de> <46367427.2070208@v.loewis.de> <4636F6C2.9090302@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508CD57D7EA@exchis.ccp.ad.local> <76fd5acf0705010854h5b6e8388v896ab23c2fbfe776@mail.gmail.com> Message-ID: <463767AF.2050400@v.loewis.de> > Would tests that use ctypes do do the open directly be acceptable ways > of solving this? If it solves it - sure. Regards, Martin From ronaldoussoren at mac.com Tue May 1 19:00:25 2007 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 1 May 2007 19:00:25 +0200 Subject: [Python-Dev] New operations in Decimal In-Reply-To: References: Message-ID: On 27 Apr, 2007, at 20:39, Facundo Batista wrote: > > - and (and), or (or), xor (xor) [CD]: Takes two logical operands, the > result is the logical operation applied between each digit. "and" and "or" are keywords, you can't have methods with those names: >>> def and(l, r): pass File "", line 1 def and(l, r): pass ^ SyntaxError: invalid syntax >>> Ronald -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3562 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070501/f17f623f/attachment.bin From guido at python.org Tue May 1 19:17:25 2007 From: guido at python.org (Guido van Rossum) Date: Tue, 1 May 2007 10:17:25 -0700 Subject: [Python-Dev] PEP index out of date, and work-around Message-ID: There seems to be an issue with the PEP index: http://python.org/dev/peps/ lists PEP 3122 as the last PEP (not counting PEP 3141 which is deliberately out of sequence). As a work-around, an up to date index is here: http://python.org/dev/peps/pep-0000/ PEPs 3123-3128 are alive and well and reachable via this index. One of the webmasters will look into this tonight. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From nmm1 at cus.cam.ac.uk Tue May 1 19:17:17 2007 From: nmm1 at cus.cam.ac.uk (Nick Maclaren) Date: Tue, 01 May 2007 18:17:17 +0100 Subject: [Python-Dev] New operations in Decimal Message-ID: Ronald Oussoren wrote: > On 27 Apr, 2007, at 20:39, Facundo Batista wrote: > > > - and (and), or (or), xor (xor) [CD]: Takes two logical operands, the > > result is the logical operation applied between each digit. > > "and" and "or" are keywords, you can't have methods with those names: Am I losing my marbles, or is this a proposal to add the logical operations to FLOATING-POINT? I may have missed a previous posting, in which case I apologise, but this is truly mind-boggling. Regards, Nick Maclaren, University of Cambridge Computing Service, New Museums Site, Pembroke Street, Cambridge CB2 3QH, England. 
Email: nmm1 at cam.ac.uk Tel.: +44 1223 334761 Fax: +44 1223 334679 From brett at python.org Tue May 1 20:17:53 2007 From: brett at python.org (Brett Cannon) Date: Tue, 1 May 2007 11:17:53 -0700 Subject: [Python-Dev] head crashing (was: Fwd: [Python-checkins] buildbot warnings in x86 mvlgcc trunk) In-Reply-To: References: Message-ID: On 5/1/07, Neal Norwitz wrote: > > In rev 54982 (the first time this crash was seen), I see something > which might create a problem. In python/trunk/Modules/posixmodule.c > (near line 6300): > > + PyMem_FREE(mode); > Py_END_ALLOW_THREADS The PyMem_MALLOC call that creates 'mode' is also called without explicitly holding the GIL. Can you call PyMem_FREE() without the GIL held? I couldn't find it > documented either way. I believe the GIL does not need to be held, but obviously Tim or someone with more memory experience should step in to say definitively. If you look at Include/pymem.h, PyMem_FREE gets defined as PyObject_FREE in a debug build. PyObject_Free is defined at _PyObject_DebugFree. That function checks that the memory has not been written with the debug bit pattern and then calls PyObject_Free. That call just sticks the memory back into pymalloc's memory pool which is implemented without using any Python objects. In other words no Python objects are used in pymalloc (to my knowledge) and thus is safe to use without the GIL. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070501/c62dbbf6/attachment.htm From martin at v.loewis.de Tue May 1 21:07:26 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 01 May 2007 21:07:26 +0200 Subject: [Python-Dev] head crashing In-Reply-To: References: Message-ID: <46378FEE.8090207@v.loewis.de> > I believe the GIL does not need to be held, but obviously Tim or someone > with more memory experience should step in to say definitively. > > If you look at Include/pymem.h, PyMem_FREE gets defined as PyObject_FREE > in a debug build. PyObject_Free is defined at _PyObject_DebugFree. > That function checks that the memory has not been written with the debug > bit pattern and then calls PyObject_Free. That call just sticks the > memory back into pymalloc's memory pool which is implemented without > using any Python objects. > > In other words no Python objects are used in pymalloc (to my knowledge) This is also what I found. > and thus is safe to use without the GIL. but I got to a different conclusion. If it really goes through the pymalloc pool (obmalloc), then it must hold the GIL while doing so. obmalloc itself is not thread-safe, and relies on the GIL for thread-safety. In release mode, PyMEM_FREE goes directly to free, which is thread-safe. Regards, Martin From micahel at gmail.com Tue May 1 21:08:56 2007 From: micahel at gmail.com (Michael Hudson) Date: Tue, 1 May 2007 19:08:56 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?head_crashing_=28was=3A_Fwd=3A_=5BPython-c?= =?utf-8?q?heckins=5D=09buildbot_warnings_in_x86_mvlgcc_trunk=29?= References: Message-ID: Neal Norwitz gmail.com> writes: > > Can you call PyMem_FREE() without the GIL held? I couldn't find it > documented either way. Nope. See comments at the top of Python/pystate.c. 
Cheers, mwh From kristjan at ccpgames.com Tue May 1 21:37:01 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 1 May 2007 19:37:01 +0000 Subject: [Python-Dev] head crashing In-Reply-To: <46378FEE.8090207@v.loewis.de> References: <46378FEE.8090207@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57D847@exchis.ccp.ad.local> > but I got to a different conclusion. If it really goes through the > pymalloc pool (obmalloc), then it must hold the GIL while doing so. > obmalloc itself is not thread-safe, and relies on the GIL for > thread-safety. > > In release mode, PyMEM_FREE goes directly to free, which is thread- > safe. Yes. It is quite unfortunate how PyMem_* gets redirected to the PyObject_* functions in debug builds. Even worse is how PyObject_Malloc gets #defined to PyObject_DebugMalloc for debug builds, changing linkage of modules. But that is a different matter. One thing I'd like to point out however, is that it is quite unnecessary for the PyObject_DebugMalloc() functions to lie on top of PyObject_Malloc() They can just call malloc() etc. directly, since in debug builds the performance benefit of the block allocator is moot. I'd suggest to keep the debug functions as a thin layer on top of malloc to do basic testing. I'd even suggest that we reverse things, and move the debug library to pymem.c. This would keep the debug functionalty threadsafe on top of regular malloc, rather than wrapping it in there with the non-threadsafe object allocator. We would then have void *PyMem_DebugMalloc() /* layers malloc /* void *PyMem_Malloc() /* calls PyMem_MALLOC */ #ifndef _DEBUG #define PyMem_MALLOC malloc #else #define PyMem_MALLOC PyMem_DebugMalloc #endif PyObject_Malloc() would then just call PyMem_DebugMalloc in DEBUG builds. The reason I have opinions on this is that at CCP we have spent considerable effort on squeezing our own veneer functions into the memory allocators, both for the PyMem ones and PyObject. And the structure of the macros and their interconnectivity really doesn't make it easy. We ended up creating a set of macros like PyMem_MALLOC_INNER() and ease our functions between the MALLOC and the INNER. I'll try to show you the patch one day which is a reasonable attempt at a slight reform in the structure of these memory APIs. Perhaps something for Py3K. Kristjan From facundo at taniquetil.com.ar Tue May 1 22:06:50 2007 From: facundo at taniquetil.com.ar (Facundo Batista) Date: Tue, 1 May 2007 20:06:50 +0000 (UTC) Subject: [Python-Dev] New operations in Decimal References: Message-ID: Ronald Oussoren wrote: >> - and (and), or (or), xor (xor) [CD]: Takes two logical operands, the >> result is the logical operation applied between each digit. > > "and" and "or" are keywords, you can't have methods with those names: You're right. I'll name them logical_and, logical_or, and logical_xor. Regards, -- . Facundo . Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From facundo at taniquetil.com.ar Tue May 1 22:15:33 2007 From: facundo at taniquetil.com.ar (Facundo Batista) Date: Tue, 1 May 2007 20:15:33 +0000 (UTC) Subject: [Python-Dev] New operations in Decimal References: Message-ID: Nick Maclaren wrote: > Am I losing my marbles, or is this a proposal to add the logical > operations to FLOATING-POINT? Sort of. This is a proposal to keep compliant with the General Decimal Arithmetic Specification, as we promised. http://www2.hursley.ibm.com/decimal/ Regards, -- . Facundo . 
Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From nmm1 at cus.cam.ac.uk Tue May 1 22:32:50 2007 From: nmm1 at cus.cam.ac.uk (Nick Maclaren) Date: Tue, 01 May 2007 21:32:50 +0100 Subject: [Python-Dev] New operations in Decimal Message-ID: Facundo Batista wrote: > > > Am I losing my marbles, or is this a proposal to add the logical > > operations to FLOATING-POINT? > > Sort of. This is a proposal to keep compliant with the General Decimal > Arithmetic Specification, as we promised. > > http://www2.hursley.ibm.com/decimal/ Or, more precisely: http://www2.hursley.ibm.com/decimal/damisc.html All right. Neither you nor I have lost our marbles, but the authors of that assuredly did. It's totally insane. And implementing it for a software emulation of that specification built on top of a twos complement binary integer model is insanity squared. But promises are promises and mere insanity is not in itself an obstacle to political success .... I shall attempt to forget that I ever asked the question :-) Regards, Nick Maclaren, University of Cambridge Computing Service, New Museums Site, Pembroke Street, Cambridge CB2 3QH, England. Email: nmm1 at cam.ac.uk Tel.: +44 1223 334761 Fax: +44 1223 334679 From skip at pobox.com Tue May 1 22:45:44 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 1 May 2007 15:45:44 -0500 Subject: [Python-Dev] head crashing In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57D847@exchis.ccp.ad.local> References: <46378FEE.8090207@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508CD57D847@exchis.ccp.ad.local> Message-ID: <17975.42744.596923.90759@montanaro.dyndns.org> Kristj?n> I'd suggest to keep the debug functions as a thin layer on top Kristj?n> of malloc to do basic testing. But then you would substantially change the memory access behavior of the program in a debug build, that is, more than it is already changed by the fact that you have changed the memory layout of Python objects. Skip From tim.peters at gmail.com Tue May 1 23:29:00 2007 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 1 May 2007 17:29:00 -0400 Subject: [Python-Dev] head crashing (was: Fwd: [Python-checkins] buildbot warnings in x86 mvlgcc trunk) In-Reply-To: References: Message-ID: <1f7befae0705011428l400a19ambb9e883b59a2ab5@mail.gmail.com> [Neal Norwitz] >> In rev 54982 (the first time this crash was seen), I see something >> which might create a problem. In python/trunk/Modules/posixmodule.c >> (near line 6300): >> >> + PyMem_FREE(mode); >> Py_END_ALLOW_THREADS Shouldn't do that. [Brett Cannon] > The PyMem_MALLOC call that creates 'mode' is also called without explicitly > holding the GIL. Or that ;-) >> Can you call PyMem_FREE() without the GIL held? I couldn't find it >> documented either way. > I believe the GIL does not need to be held, but obviously Tim or someone > with more memory experience should step in to say definitively. The GIL should be held. The relevant docs are in the Python/C API manual, section "8.1 Thread State and the Global Interpreter Lock": Therefore, the rule exists that only the thread that has acquired the global interpreter lock may operate on Python objects or call Python/C API functions. PyMem_XYZ is certainly a "Python/C API function". There are functions you can call without holding the GIL, and section 8.1 intends to give an exhaustive list of those. These are functions that can't rely on the GIL, like PyEval_InitThreads() (which /creates/ the GIL), and various functions that create and destroy thread and interpreter state. 
> If you look at Include/pymem.h, PyMem_FREE gets defined as PyObject_FREE in > a debug build. PyObject_Free is defined at _PyObject_DebugFree. That > function checks that the memory has not been written with the debug bit > pattern and then calls PyObject_Free. That call just sticks the memory back > into pymalloc's memory pool which is implemented without using any Python > objects. But pymalloc's pools have a complex internal structure of their own, and cannot be mucked with safely by multiple threads simultaneously. > In other words no Python objects are used in pymalloc (to my knowledge) and > thus is safe to use without the GIL. Nope. For example, if two threads simultaneously try to free objects in the same obmalloc size class, there are a number of potential thread-race disasters in linking the objects into the same size-class chain. In a release build this doesn't matter, since PyMem_XYZ map directly to the platform malloc/realloc/free, and so inherit the thread safety (or lack thereof) of the platform C implementations. If it's necessary to do malloc/free kinds of things without holding the GIL, then the platform malloc/free must be called directly. Perhaps that's what posixmodule.c wants to do in this case. From brett at python.org Tue May 1 23:36:04 2007 From: brett at python.org (Brett Cannon) Date: Tue, 1 May 2007 14:36:04 -0700 Subject: [Python-Dev] head crashing (was: Fwd: [Python-checkins] buildbot warnings in x86 mvlgcc trunk) In-Reply-To: <1f7befae0705011428l400a19ambb9e883b59a2ab5@mail.gmail.com> References: <1f7befae0705011428l400a19ambb9e883b59a2ab5@mail.gmail.com> Message-ID: On 5/1/07, Tim Peters wrote: > > [Neal Norwitz] > >> In rev 54982 (the first time this crash was seen), I see something > >> which might create a problem. In python/trunk/Modules/posixmodule.c > >> (near line 6300): > >> > >> + PyMem_FREE(mode); > >> Py_END_ALLOW_THREADS > > Shouldn't do that. > > [Brett Cannon] > > The PyMem_MALLOC call that creates 'mode' is also called without > explicitly > > holding the GIL. > > Or that ;-) Luckily I misread the code so it doesn't do that boo-boo. >> Can you call PyMem_FREE() without the GIL held? I couldn't find it > >> documented either way. > > > I believe the GIL does not need to be held, but obviously Tim or someone > > with more memory experience should step in to say definitively. > > The GIL should be held. The relevant docs are in the Python/C API > manual, section "8.1 Thread State and the Global Interpreter Lock": > > Therefore, the rule exists that only the thread that has acquired the > global > interpreter lock may operate on Python objects or call Python/C > API functions. > > PyMem_XYZ is certainly a "Python/C API function". There are functions > you can call without holding the GIL, and section 8.1 intends to give > an exhaustive list of those. These are functions that can't rely on > the GIL, like PyEval_InitThreads() (which /creates/ the GIL), and > various functions that create and destroy thread and interpreter > state. > > > If you look at Include/pymem.h, PyMem_FREE gets defined as PyObject_FREE > in > > a debug build. PyObject_Free is defined at _PyObject_DebugFree. That > > function checks that the memory has not been written with the debug bit > > pattern and then calls PyObject_Free. That call just sticks the memory > back > > into pymalloc's memory pool which is implemented without using any > Python > > objects. 
> > But pymalloc's pools have a complex internal structure of their own, > and cannot be mucked with safely by multiple threads simultaneously. Ah, OK. That makes sense. Glad I pointed out my ignorance then. =) -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070501/3c900428/attachment.htm From guido at python.org Wed May 2 00:07:00 2007 From: guido at python.org (Guido van Rossum) Date: Tue, 1 May 2007 15:07:00 -0700 Subject: [Python-Dev] New Super PEP In-Reply-To: <76fd5acf0704291019y3b30e3ebn2fff564bb71bc462@mail.gmail.com> References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> <43aa6ff70704290906p43b59ccdkaa2292ae615174bd@mail.gmail.com> <76fd5acf0704290947o79cb9722k66c8ba37fa0b6826@mail.gmail.com> <76fd5acf0704291019y3b30e3ebn2fff564bb71bc462@mail.gmail.com> Message-ID: On 4/29/07, Calvin Spealman wrote: > Draft Attempt Number Duo: > > PEP: XXX > Title: New Super Checked in as PEP 3133. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ironfroggy at gmail.com Wed May 2 00:22:05 2007 From: ironfroggy at gmail.com (Calvin Spealman) Date: Tue, 1 May 2007 18:22:05 -0400 Subject: [Python-Dev] New Super PEP In-Reply-To: References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> <43aa6ff70704290906p43b59ccdkaa2292ae615174bd@mail.gmail.com> <76fd5acf0704290947o79cb9722k66c8ba37fa0b6826@mail.gmail.com> <76fd5acf0704291019y3b30e3ebn2fff564bb71bc462@mail.gmail.com> Message-ID: <76fd5acf0705011522ncb002f6ice5f21e254349934@mail.gmail.com> Georg Brandl has just checked this PEP in as 367. I had submitted it to the peps at python.org address, per the policy documentation. Sorry if I subverted some policy order, or was non-vocal about it. I didn't realize anyone else would check it in. On 5/1/07, Guido van Rossum wrote: > On 4/29/07, Calvin Spealman wrote: > > Draft Attempt Number Duo: > > > > PEP: XXX > > Title: New Super > > Checked in as PEP 3133. > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://ironfroggy-code.blogspot.com/ From guido at python.org Wed May 2 02:14:52 2007 From: guido at python.org (Guido van Rossum) Date: Tue, 1 May 2007 17:14:52 -0700 Subject: [Python-Dev] New Super PEP In-Reply-To: <76fd5acf0705011522ncb002f6ice5f21e254349934@mail.gmail.com> References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> <43aa6ff70704290906p43b59ccdkaa2292ae615174bd@mail.gmail.com> <76fd5acf0704290947o79cb9722k66c8ba37fa0b6826@mail.gmail.com> <76fd5acf0704291019y3b30e3ebn2fff564bb71bc462@mail.gmail.com> <76fd5acf0705011522ncb002f6ice5f21e254349934@mail.gmail.com> Message-ID: Totally my screwup. I'll discard 3133. On 5/1/07, Calvin Spealman wrote: > Georg Brandl has just checked this PEP in as 367. I had submitted it > to the peps at python.org address, per the policy documentation. Sorry if > I subverted some policy order, or was non-vocal about it. I didn't > realize anyone else would check it in. > > On 5/1/07, Guido van Rossum wrote: > > On 4/29/07, Calvin Spealman wrote: > > > Draft Attempt Number Duo: > > > > > > PEP: XXX > > > Title: New Super > > > > Checked in as PEP 3133. > > > > -- > > --Guido van Rossum (home page: http://www.python.org/~guido/) > > > > > -- > Read my blog! I depend on your acceptance of my opinion! I am interesting! 
> http://ironfroggy-code.blogspot.com/ > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From rasky at develer.com Wed May 2 11:36:30 2007 From: rasky at develer.com (Giovanni Bajo) Date: Wed, 02 May 2007 11:36:30 +0200 Subject: [Python-Dev] New Super PEP In-Reply-To: References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> Message-ID: <46385B9E.7040803@develer.com> On 29/04/2007 17.04, Guido van Rossum wrote: >> This is only a halfway fix to DRY, and it really only fixes the less >> important half. The important problem with super is that it >> encourages people to write incorrect code by requiring that you >> explicitly specify an argument list. Since calling super with any >> arguments other than the exact same arguments you have received is >> nearly always wrong, requiring that the arglist be specified is an >> attractive nuisance. > > Nearly always wrong? You must be kidding. There are tons of reasons to > call your super method with modified arguments. E.g. clipping, > transforming, ... Really? http://fuhm.net/super-harmful/ I don't believe that there are really so many. I would object to forcing super to *only* be able to pass unmodified arguments. But if it had an alternative syntax to do it (ala Dylan's next-method), I would surely use it often enough to make it worth. -- Giovanni Bajo From kristjan at ccpgames.com Wed May 2 12:00:15 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 2 May 2007 10:00:15 +0000 Subject: [Python-Dev] head crashing In-Reply-To: <17975.42744.596923.90759@montanaro.dyndns.org> References: <46378FEE.8090207@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508CD57D847@exchis.ccp.ad.local> <17975.42744.596923.90759@montanaro.dyndns.org> Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57D8A9@exchis.ccp.ad.local> > -----Original Message----- > From: skip at pobox.com [mailto:skip at pobox.com] > Sent: Tuesday, May 01, 2007 20:46 > But then you would substantially change the memory access behavior of > the > program in a debug build, that is, more than it is already changed by > the > fact that you have changed the memory layout of Python objects. Well, as we say in Iceland, that is a piece difference, not the whole sheep. In fact, most of the memory is already managed by the Object allocator, so there is only slight additional change. Further, at least one platform (windows) already employs a different memory allocator implementation for malloc in debug builds, namely a debug allocator. In addition, many python structures grow extra members in debug builds, most notably the PyObject _head. So you probably never have the exactly same blocksize pattern anyway. At any rate, the debug memory only changes memory access patterns by growing every block by a fixed amount. And why do we want to keep the same memory pattern? Isn't the memory allocator supposed to be a black box? The only reason I can see for maintaining the exact same pattern in a debug build is to reproduce some sort of memory access error, but that is precisely what the debug routines are for. Admittedly, I have never used the debug routines much. generally disable the object allocator for debug builds, and rely on the windows debug malloc implementation to spot errors for me, or failing that, I use Rational Purify, which costs money. 
From tanzer at swing.co.at Wed May 2 12:00:16 2007 From: tanzer at swing.co.at (Christian Tanzer) Date: Wed, 02 May 2007 12:00:16 +0200 Subject: [Python-Dev] New Super PEP In-Reply-To: Your message of "Wed, 02 May 2007 11:36:30 +0200." <46385B9E.7040803@develer.com> Message-ID: Giovanni Bajo wrote: > On 29/04/2007 17.04, Guido van Rossum wrote: > > Nearly always wrong? You must be kidding. There are tons of reasons to > > call your super method with modified arguments. E.g. clipping, > > transforming, ... > > Really? > http://fuhm.net/super-harmful/ Hmmm. I've just counted more than 1600 usages of `super` in my sandbox. And all my tests pass. How does that square with the title of the rant you quote: Python's Super is nifty, but you can't use it ? Although the rest of `super-harmful` is slightly better than the title, the premise of James Knight is utterly wrong: Note that the __init__ method is not special -- the same thing happens with any method -- Christian Tanzer http://www.c-tanzer.at/ From rasky at develer.com Wed May 2 12:45:01 2007 From: rasky at develer.com (Giovanni Bajo) Date: Wed, 02 May 2007 12:45:01 +0200 Subject: [Python-Dev] New Super PEP In-Reply-To: References: <46385B9E.7040803@develer.com> Message-ID: On 02/05/2007 12.00, Christian Tanzer wrote: >>> Nearly always wrong? You must be kidding. There are tons of reasons to >>> call your super method with modified arguments. E.g. clipping, >>> transforming, ... >> Really? >> http://fuhm.net/super-harmful/ > > Hmmm. > > I've just counted more than 1600 usages of `super` in my > sandbox. And all my tests pass. And you don't follow any of the guidelines reported in that article? And you never met any of those problems? I find it hard to believe. The fact that your code *works* is of little importance, since the article is more about maintenance of existing code using super (and the suggestions he proposes are specifically for making code using super less fragile to refactorings). -- Giovanni Bajo From fuzzyman at voidspace.org.uk Wed May 2 17:42:09 2007 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 02 May 2007 16:42:09 +0100 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: Message-ID: <4638B151.6020901@voidspace.org.uk> Jim Jewett wrote: > PEP: 30xz > Title: Simplified Parsing > Version: $Revision$ > Last-Modified: $Date$ > Author: Jim J. Jewett > Status: Draft > Type: Standards Track > Content-Type: text/plain > Created: 29-Apr-2007 > Post-History: 29-Apr-2007 > > > Abstract > > Python initially inherited its parsing from C. While this has > been generally useful, there are some remnants which have been > less useful for python, and should be eliminated. > > + Implicit String concatenation > > + Line continuation with "\" > > + 034 as an octal number (== decimal 28). Note that this is > listed only for completeness; the decision to raise an > Exception for leading zeros has already been made in the > context of PEP XXX, about adding a binary literal. > > > Rationale for Removing Implicit String Concatenation > > Implicit String concatentation can lead to confusing, or even > silent, errors. [1] > > def f(arg1, arg2=None): pass > > f("abc" "def") # forgot the comma, no warning ... > # silently becomes f("abcdef", None) > > Implicit string concatenation is massively useful for creating long strings in a readable way though: call_something("first part\n" "second line\n" "third line\n") I find it an elegant way of building strings and would be sad to see it go. 
Adding trailing '+' signs is ugly. Michael Foord > or, using the scons build framework, > > sourceFiles = [ > 'foo.c', > 'bar.c', > #...many lines omitted... > 'q1000x.c'] > > It's a common mistake to leave off a comma, and then scons complains > that it can't find 'foo.cbar.c'. This is pretty bewildering behavior > even if you *are* a Python programmer, and not everyone here is. > > Note that in C, the implicit concatenation is more justified; there > is no other way to join strings without (at least) a function call. > > In Python, strings are objects which support the __add__ operator; > it is possible to write: > > "abc" + "def" > > Because these are literals, this addition can still be optimized > away by the compiler. > > Guido indicated [2] that this change should be handled by PEP, because > there were a few edge cases with other string operators, such as the %. > The resolution is to treat them the same as today. > > ("abc %s def" + "ghi" % var) # fails like today. > # raises TypeError because of > # precedence. (% before +) > > ("abc" + "def %s ghi" % var) # works like today; precedence makes > # the optimization more difficult to > # recognize, but does not change the > # semantics. > > ("abc %s def" + "ghi") % var # works like today, because of > # precedence: () before % > # CPython compiler can already > # add the literals at compile-time. > > > Rationale for Removing Explicit Line Continuation > > A terminal "\" indicates that the logical line is continued on the > following physical line (after whitespace). > > Note that a non-terminal "\" does not have this meaning, even if the > only additional characters are invisible whitespace. (Python depends > heavily on *visible* whitespace at the beginning of a line; it does > not otherwise depend on *invisible* terminal whitespace.) Adding > whitespace after a "\" will typically cause a syntax error rather > than a silent bug, but it still isn't desirable. > > The reason to keep "\" is that occasionally code looks better with > a "\" than with a () pair. > > assert True, ( > "This Paren is goofy") > > But realistically, that paren is no worse than a "\". The only > advantage of "\" is that it is slightly more familiar to users of > C-based languages. These same languages all also support line > continuation with (), so reading code will not be a problem, and > there will be one less rule to learn for people entirely new to > programming. > > > Rationale for Removing Implicit Octal Literals > > This decision should be covered by PEP ???, on numeric literals. > It is mentioned here only for completeness. > > C treats integers beginning with "0" as octal, rather than decimal. > Historically, Python has inherited this usage. This has caused > quite a few annoying bugs for people who forgot the rule, and > tried to line up their constants. > > a = 123 > b = 024 # really only 20, because octal > c = 245 > > In Python 3.0, the second line will instead raise a SyntaxError, > because of the ambiguity. Instead, the line should be written > as in one of the following ways: > > b = 24 # PEP 8 > b = 24 # columns line up, for quick scanning > b = 0t24 # really did want an Octal! > > > References > > [1] Implicit String Concatenation, Jewett, Orendorff > http://mail.python.org/pipermail/python-ideas/2007-April/000397.html > > [2] PEP 12, Sample reStructuredText PEP Template, Goodger, Warsaw > http://www.python.org/peps/pep-0012 > > [3] http://www.opencontent.org/openpub/ > > > > Copyright > > This document has been placed in the public domain. 
>
>
>
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> sentence-end-double-space: t
> fill-column: 70
> coding: utf-8
> End:
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>
>

From kristjan at ccpgames.com Wed May 2 18:25:23 2007
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Wed, 2 May 2007 16:25:23 +0000
Subject: [Python-Dev] 64 bit warnings
Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57D996@exchis.ccp.ad.local>

There are a considerable number of warnings present for 64 bit builds on
Windows. You can see them using Visual Studio 2005 even if you don't have
the x64 compilers installed, by turning on "Detect 64 bit portability
issues" in the general tab for pythoncore.
Now, some of those just need straightforward upgrades of loop counters and
so on to Py_ssize_t. Others probably require more judgement. E.g., do we
want to change the signature of PyEval_EvalCodeEx() to accept Py_ssize_t
counters rather than int? And if not, should we then use
Py_SAFE_DOWNCAST() or just a regular (int) typecast?
Note that on x64 there is rarely any performance cost associated with
using 64 bit variables for function calls, since most of the time
arguments are passed in registers. I.e. it is mostly structs that we want
to keep unchanged, imo.
Any thoughts?
Kristján
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/python-dev/attachments/20070502/963702d6/attachment.htm

From steven.bethard at gmail.com Wed May 2 19:00:01 2007
From: steven.bethard at gmail.com (Steven Bethard)
Date: Wed, 2 May 2007 11:00:01 -0600
Subject: [Python-Dev] PEP 30XZ: Simplified Parsing
In-Reply-To: <4638B151.6020901@voidspace.org.uk>
References: <4638B151.6020901@voidspace.org.uk>
Message-ID:

On 5/2/07, Michael Foord wrote:
> Implicit string concatenation is massively useful for creating long
> strings in a readable way though:
>
> call_something("first part\n"
>                "second line\n"
>                "third line\n")
>
> I find it an elegant way of building strings and would be sad to see it
> go. Adding trailing '+' signs is ugly.

You'll still have textwrap.dedent::

    call_something(dedent('''\
        first part
        second line
        third line
        '''))

And using textwrap.dedent, you don't have to remember to add the \n at
the end of every line.

STeVe

--
I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a
tiny blip on the distant coast of sanity.
        --- Bucky Katt, Get Fuzzy

From trentm at activestate.com Wed May 2 19:34:15 2007
From: trentm at activestate.com (Trent Mick)
Date: Wed, 02 May 2007 10:34:15 -0700
Subject: [Python-Dev] PEP 30XZ: Simplified Parsing
In-Reply-To:
References: <4638B151.6020901@voidspace.org.uk>
Message-ID: <4638CB97.1040503@activestate.com>

Steven Bethard wrote:
> On 5/2/07, Michael Foord wrote:
>> Implicit string concatenation is massively useful for creating long
>> strings in a readable way though:
>>
>> call_something("first part\n"
>>                "second line\n"
>>                "third line\n")
>>
>> I find it an elegant way of building strings and would be sad to see it
>> go. Adding trailing '+' signs is ugly.
>
> You'll still have textwrap.dedent::
>
>     call_something(dedent('''\
>         first part
>         second line
>         third line
>         '''))
>
> And using textwrap.dedent, you don't have to remember to add the \n at
> the end of every line.
But if you don't want the EOLs? Example from some code of mine: raise MakeError("extracting '%s' in '%s' did not create the " "directory that the Python build will expect: " "'%s'" % (src_pkg, dst_dir, dst)) I use this kind of thing frequently. Don't know if others consider it bad style. Trent -- Trent Mick trentm at activestate.com From amk at amk.ca Wed May 2 19:53:01 2007 From: amk at amk.ca (A.M. Kuchling) Date: Wed, 2 May 2007 13:53:01 -0400 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <4638B151.6020901@voidspace.org.uk> References: <4638B151.6020901@voidspace.org.uk> Message-ID: <20070502175301.GA24510@localhost.localdomain> On Wed, May 02, 2007 at 04:42:09PM +0100, Michael Foord wrote: > Implicit string concatenation is massively useful for creating long > strings in a readable way though: This PEP doesn't seem very well-argued: "It's a common mistake to leave off a comma, and then scons complains that it can't find 'foo.cbar.c'." Yes, and then you say "oh, right!" and add the missing comma; problem fixed! The whole cycle takes about a minute. Is this really an issue worth fixing? --amk From guido at python.org Wed May 2 20:10:13 2007 From: guido at python.org (Guido van Rossum) Date: Wed, 2 May 2007 11:10:13 -0700 Subject: [Python-Dev] New Super PEP In-Reply-To: References: <46385B9E.7040803@develer.com> Message-ID: Please stop arguing about an opinionated piece of anti-super PR. On 5/2/07, Giovanni Bajo wrote: > On 02/05/2007 12.00, Christian Tanzer wrote: > > >>> Nearly always wrong? You must be kidding. There are tons of reasons to > >>> call your super method with modified arguments. E.g. clipping, > >>> transforming, ... > > >> Really? > >> http://fuhm.net/super-harmful/ > > > > Hmmm. > > > > I've just counted more than 1600 usages of `super` in my > > sandbox. And all my tests pass. > > And you don't follow any of the guidelines reported in that article? And you > never met any of those problems? I find it hard to believe. > > The fact that your code *works* is of little importance, since the article is > more about maintenance of existing code using super (and the suggestions he > proposes are specifically for making code using super less fragile to > refactorings). > -- > Giovanni Bajo > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ferringb at gmail.com Wed May 2 20:19:37 2007 From: ferringb at gmail.com (Brian Harring) Date: Wed, 2 May 2007 11:19:37 -0700 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <20070502175301.GA24510@localhost.localdomain> References: <4638B151.6020901@voidspace.org.uk> <20070502175301.GA24510@localhost.localdomain> Message-ID: <20070502181937.GF19189@seldon> On Wed, May 02, 2007 at 01:53:01PM -0400, A.M. Kuchling wrote: > On Wed, May 02, 2007 at 04:42:09PM +0100, Michael Foord wrote: > > Implicit string concatenation is massively useful for creating long > > strings in a readable way though: > > This PEP doesn't seem very well-argued: "It's a common mistake to > leave off a comma, and then scons complains that it can't find > 'foo.cbar.c'." Yes, and then you say "oh, right!" and add the missing > comma; problem fixed! The whole cycle takes about a minute. Is this > really an issue worth fixing? 
The 'cycle' can also generally be avoided via a few good habits-

sourceFiles = [
    'foo.c',
    'bar.c',
    #...many lines omitted...
    'q1000x.c']

That's the original example provided; each file is on a separate line so
it's a bit easier to tell what changed if you're reviewing the delta.
That said, doing

sourceFiles = [
    'foo.c',
    'bar.c',
    #...many lines omitted...
    'q1000x.c',
]

is (in my experience) a fair bit better; you *can* have the trailing comma
without any ill effects, plus shifting the ']' to a separate line is less
noisy delta-wise for the usual "add another string to the end of the list".

Personally, I'm -1 on nuking implicit string concatenation; the examples
provided for the 'why' aren't that strong in my experience, and the forced
shift to concatenation is rather annoying when you're dealing with code
limits (80 char limit for example)-

dprint("depends level cycle: %s: "
    "dropping cycle for %s from %s" %
    (cur_frame.atom, datom, cur_frame.current_pkg),
    "cycle")

Converting that over isn't hard, but it's a great way to inadvertently bite
yourself in the butt- triple quote isn't usually much of an option in such
a case either, since you don't want the newlines coming through.
~harring
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 189 bytes
Desc: not available
Url : http://mail.python.org/pipermail/python-dev/attachments/20070502/d0f7a90c/attachment.pgp

From pje at telecommunity.com Wed May 2 20:51:06 2007
From: pje at telecommunity.com (Phillip J. Eby)
Date: Wed, 02 May 2007 14:51:06 -0400
Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing
In-Reply-To: <4638CB97.1040503@activestate.com>
References: <4638B151.6020901@voidspace.org.uk>
Message-ID: <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com>

At 10:34 AM 5/2/2007 -0700, Trent Mick wrote:
>But if you don't want the EOLs? Example from some code of mine:
>
>     raise MakeError("extracting '%s' in '%s' did not create the "
>                     "directory that the Python build will expect: "
>                     "'%s'" % (src_pkg, dst_dir, dst))
>
>I use this kind of thing frequently. Don't know if others consider it
>bad style.

Well, I do it a lot too; don't know if that makes it good or bad, though. :)

I personally don't see a lot of benefit to changing the lexical rules for
Py3K, however. The hard part of lexing Python is INDENT/DEDENT (and the
associated unbalanced parens rule), and none of these proposals suggest
removing *that*.

Overall, this whole thing seems like a bikeshed to me.

From fdrake at acm.org Wed May 2 20:57:38 2007
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Wed, 2 May 2007 14:57:38 -0400
Subject: [Python-Dev] PEP 30XZ: Simplified Parsing
In-Reply-To: <4638CB97.1040503@activestate.com>
References: <4638CB97.1040503@activestate.com>
Message-ID: <200705021457.38344.fdrake@acm.org>

On Wednesday 02 May 2007, Trent Mick wrote:
> raise MakeError("extracting '%s' in '%s' did not create the "
>                 "directory that the Python build will expect: "
>                 "'%s'" % (src_pkg, dst_dir, dst))
>
> I use this kind of thing frequently. Don't know if others consider it
> bad style.

I do this too; this is a good way to have a simple human-readable message
without doing weird things about extraneous newlines or strange indentation.

-1 on removing implicit string catenation.

 -Fred

--
Fred L. Drake, Jr.
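For comparison, here is roughly what the same message looks like if it is
respelled with a triple-quoted string instead of implicit concatenation: the
newlines and the indentation of the continuation lines become part of the
value unless you strip them out afterwards. (The variable values below are
made up just so the snippet is self-contained and runnable.)

    # hypothetical placeholder values, only so the example runs
    src_pkg, dst_dir, dst = 'Python-2.5.1.tgz', '/tmp/build', '/tmp/build/Python-2.5.1'

    # implicit concatenation: the pieces join into one single-line message
    msg = ("extracting '%s' in '%s' did not create the "
           "directory that the Python build will expect: "
           "'%s'" % (src_pkg, dst_dir, dst))

    # triple-quoted spelling of the same text: embedded newlines and the
    # leading spaces of the continuation lines survive in the value
    msg2 = ("""extracting '%s' in '%s' did not create the
        directory that the Python build will expect:
        '%s'""" % (src_pkg, dst_dir, dst))

    assert '\n' not in msg
    assert '\n' in msg2 and '    ' in msg2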
From snaury at gmail.com Wed May 2 21:23:47 2007 From: snaury at gmail.com (Alexey Borzenkov) Date: Wed, 2 May 2007 23:23:47 +0400 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: Message-ID: On 4/30/07, Jim Jewett wrote: > Python initially inherited its parsing from C. While this has > been generally useful, there are some remnants which have been > less useful for python, and should be eliminated. > > + Implicit String concatenation > > + Line continuation with "\" I don't know if I can vote, but if I could I'd be -1 on this. Can't say I'm using continuation often, but there's one case when I'm using it and I'd like to continue using it: #!/usr/bin/env python """\ Usage: some-tool.py [arguments...] Does this and that based on its arguments""" if condition: print __doc__ sys.exit(1) This way usage immediately stands out much better, without any unnecessary new lines. Best regards, Alexey. From barry at python.org Wed May 2 21:40:33 2007 From: barry at python.org (Barry Warsaw) Date: Wed, 2 May 2007 15:40:33 -0400 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com> References: <4638B151.6020901@voidspace.org.uk> <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com> Message-ID: <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 2, 2007, at 2:51 PM, Phillip J. Eby wrote: > At 10:34 AM 5/2/2007 -0700, Trent Mick wrote: >> But if you don't want the EOLs? Example from some code of mine: >> >> raise MakeError("extracting '%s' in '%s' did not create the " >> "directory that the Python build will expect: " >> "'%s'" % (src_pkg, dst_dir, dst)) >> >> I use this kind of thing frequently. Don't know if others consider it >> bad style. > > Well, I do it a lot too; don't know if that makes it good or bad, > though. :) I just realized that changing these lexical rules might have an adverse affect on internationalization. Or it might force more lines to go over the 79 character limit. The problem is that _("some string" " and more of it") is not the same as _("some string" + " and more of it") because the latter won't be extracted by tools like pygettext (I'm not sure about standard gettext). You would either have to teach pygettext and maybe gettext about this construct, or you'd have to use something different. Triple quoted strings are probably not so good because you'd have to still backslash the trailing newlines. You can't split the strings up into sentence fragments because that makes some translations impossible. Someone ease my worries here. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (Darwin) iQCVAwUBRjjpOHEjvBPtnXfVAQJ/xwP7BNMGvrmuxKmb7QiIawYjORKt9Pxmz7XJ kFVHl47UusOGzgmtwm6Qi2DeSDsG0JOu0XwlZbX3YPE8omTzTP8WLdavJ1e+i2nP V8GwXVyFgyFHx3V1jb0o9eiUGFEwkXInCGcOFqdWOEF49TtRNHGY6ne+eumwkqxK qOyTGkcreG4= =J6I/ -----END PGP SIGNATURE----- From barry at python.org Wed May 2 21:41:38 2007 From: barry at python.org (Barry Warsaw) Date: Wed, 2 May 2007 15:41:38 -0400 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 2, 2007, at 3:23 PM, Alexey Borzenkov wrote: > On 4/30/07, Jim Jewett wrote: >> Python initially inherited its parsing from C. While this has >> been generally useful, there are some remnants which have been >> less useful for python, and should be eliminated. 
>> >> + Implicit String concatenation >> >> + Line continuation with "\" > > I don't know if I can vote, but if I could I'd be -1 on this. Can't > say I'm using continuation often, but there's one case when I'm using > it and I'd like to continue using it: > > #!/usr/bin/env python > """\ > Usage: some-tool.py [arguments...] > > Does this and that based on its arguments""" > > if condition: > print __doc__ > sys.exit(1) > > This way usage immediately stands out much better, without any > unnecessary new lines. Me too, all the time. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (Darwin) iQCVAwUBRjjpcnEjvBPtnXfVAQL0ngP9FwE7swQSdPiH4wAMQRe1CAzWXBLCXKok d08GHhyp5GWHs1UzDZbnxnLRVZt+ra/3iSJT8g32X2gX9gWkFUJfqZFN9wLVjzDZ qlX4m2cJs4nlskRDsycPMY9MLGUwQ8bt7mn92Oh3vXAvtXm42Dxu66NvTlyYdIFQ 9M2HrMbBn1M= =3kNg -----END PGP SIGNATURE----- From skip at pobox.com Wed May 2 23:23:00 2007 From: skip at pobox.com (skip at pobox.com) Date: Wed, 2 May 2007 16:23:00 -0500 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <4638CB97.1040503@activestate.com> References: <4638B151.6020901@voidspace.org.uk> <4638CB97.1040503@activestate.com> Message-ID: <17977.308.192435.48545@montanaro.dyndns.org> Trent> But if you don't want the EOLs? Example from some code of mine: Trent> raise MakeError("extracting '%s' in '%s' did not create the " Trent> "directory that the Python build will expect: " Trent> "'%s'" % (src_pkg, dst_dir, dst)) Trent> I use this kind of thing frequently. Don't know if others Trent> consider it bad style. I use it all the time. For example, to build up (what I consider to be) readable SQL queries: rows = self.executesql("select cities.city, state, country" " from cities, venues, events, addresses" " where cities.city like %s" " and events.active = 1" " and venues.address = addresses.id" " and addresses.city = cities.id" " and events.venue = venues.id", (city,)) I would be disappointed it string literal concatention went away. Skip From martin at v.loewis.de Thu May 3 00:12:10 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 03 May 2007 00:12:10 +0200 Subject: [Python-Dev] 64 bit warnings In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57D996@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CD57D996@exchis.ccp.ad.local> Message-ID: <46390CBA.8080904@v.loewis.de> > Any thoughts? These should be fixed on a case-by-case basis. Please submit patches to SF, and assign them to me. Changes should only go into 2.6. As a principle, values that could exceed 2Gi in a hand-crafted Python program should be Py_ssize_t. Values that can never exceed the int range (because of other constraints, such as limitations of the byte code) should be safe-downcast to int (or smaller). In the particular case of PyEval_EvalCodeEx, I think most values can't grow beyond 2**31 because the byte code format wouldn't allow such indexes. There should be documentation on what the valid ranges are for argcount, kwcount, locals, and what the rationale for these limitations are, and then they should get a consistent datatype. Regards, Martin From mhammond at skippinet.com.au Thu May 3 01:59:35 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Thu, 3 May 2007 09:59:35 +1000 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: Message-ID: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> Please add my -1 to the chorus here, for the same reasons already expressed. 
Cheers, Mark > -----Original Message----- > From: python-dev-bounces+mhammond=keypoint.com.au at python.org > [mailto:python-dev-bounces+mhammond=keypoint.com.au at python.org > ]On Behalf > Of Jim Jewett > Sent: Monday, 30 April 2007 1:29 PM > To: Python 3000; Python Dev > Subject: [Python-Dev] PEP 30XZ: Simplified Parsing > > > PEP: 30xz > Title: Simplified Parsing > Version: $Revision$ > Last-Modified: $Date$ > Author: Jim J. Jewett > Status: Draft > Type: Standards Track > Content-Type: text/plain > Created: 29-Apr-2007 > Post-History: 29-Apr-2007 > > > Abstract > > Python initially inherited its parsing from C. While this has > been generally useful, there are some remnants which have been > less useful for python, and should be eliminated. > > + Implicit String concatenation > > + Line continuation with "\" > > + 034 as an octal number (== decimal 28). Note that this is > listed only for completeness; the decision to raise an > Exception for leading zeros has already been made in the > context of PEP XXX, about adding a binary literal. > > > Rationale for Removing Implicit String Concatenation > > Implicit String concatentation can lead to confusing, or even > silent, errors. [1] > > def f(arg1, arg2=None): pass > > f("abc" "def") # forgot the comma, no warning ... > # silently becomes f("abcdef", None) > > or, using the scons build framework, > > sourceFiles = [ > 'foo.c', > 'bar.c', > #...many lines omitted... > 'q1000x.c'] > > It's a common mistake to leave off a comma, and then > scons complains > that it can't find 'foo.cbar.c'. This is pretty > bewildering behavior > even if you *are* a Python programmer, and not everyone here is. > > Note that in C, the implicit concatenation is more > justified; there > is no other way to join strings without (at least) a > function call. > > In Python, strings are objects which support the __add__ operator; > it is possible to write: > > "abc" + "def" > > Because these are literals, this addition can still be optimized > away by the compiler. > > Guido indicated [2] that this change should be handled by > PEP, because > there were a few edge cases with other string operators, > such as the %. > The resolution is to treat them the same as today. > > ("abc %s def" + "ghi" % var) # fails like today. > # raises TypeError because of > # precedence. (% before +) > > ("abc" + "def %s ghi" % var) # works like today; > precedence makes > # the optimization more > difficult to > # recognize, but does > not change the > # semantics. > > ("abc %s def" + "ghi") % var # works like today, because of > # precedence: () before % > # CPython compiler can already > # add the literals at > compile-time. > > > Rationale for Removing Explicit Line Continuation > > A terminal "\" indicates that the logical line is continued on the > following physical line (after whitespace). > > Note that a non-terminal "\" does not have this meaning, > even if the > only additional characters are invisible whitespace. > (Python depends > heavily on *visible* whitespace at the beginning of a > line; it does > not otherwise depend on *invisible* terminal whitespace.) Adding > whitespace after a "\" will typically cause a syntax error rather > than a silent bug, but it still isn't desirable. > > The reason to keep "\" is that occasionally code looks better with > a "\" than with a () pair. > > assert True, ( > "This Paren is goofy") > > But realistically, that paren is no worse than a "\". 
The only > advantage of "\" is that it is slightly more familiar to users of > C-based languages. These same languages all also support line > continuation with (), so reading code will not be a problem, and > there will be one less rule to learn for people entirely new to > programming. > > > Rationale for Removing Implicit Octal Literals > > This decision should be covered by PEP ???, on numeric literals. > It is mentioned here only for completeness. > > C treats integers beginning with "0" as octal, rather > than decimal. > Historically, Python has inherited this usage. This has caused > quite a few annoying bugs for people who forgot the rule, and > tried to line up their constants. > > a = 123 > b = 024 # really only 20, because octal > c = 245 > > In Python 3.0, the second line will instead raise a SyntaxError, > because of the ambiguity. Instead, the line should be written > as in one of the following ways: > > b = 24 # PEP 8 > b = 24 # columns line up, for quick scanning > b = 0t24 # really did want an Octal! > > > References > > [1] Implicit String Concatenation, Jewett, Orendorff > http://mail.python.org/pipermail/python-ideas/2007-April/000397.html [2] PEP 12, Sample reStructuredText PEP Template, Goodger, Warsaw http://www.python.org/peps/pep-0012 [3] http://www.opencontent.org/openpub/ Copyright This document has been placed in the public domain. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/mhammond%40keypoint.com.au From python-dev at zesty.ca Thu May 3 02:26:36 2007 From: python-dev at zesty.ca (Ka-Ping Yee) Date: Wed, 2 May 2007 19:26:36 -0500 (CDT) Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: Message-ID: I fully support the removal of implicit string concatenation (explicit is better than implicit; there's only one way to do it). I also fully support the removal of backslashes for line continuation of statements (same reasons). (I mean this as distinct from line continuation within a string; that's a separate issue.) -- ?!ng From python at rcn.com Thu May 3 03:03:39 2007 From: python at rcn.com (Raymond Hettinger) Date: Wed, 2 May 2007 21:03:39 -0400 (EDT) Subject: [Python-Dev] PEP 30XZ: Simplified Parsing Message-ID: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> [Skip] > I use it all the time. For example, to build up (what I consider to be) >readable SQL queries: > > rows = self.executesql("select cities.city, state, country" > " from cities, venues, events, addresses" > " where cities.city like %s" > " and events.active = 1" > " and venues.address = addresses.id" > " and addresses.city = cities.id" > " and events.venue = venues.id", > (city,)) I find that style hard to maintain. What is the advantage over multi-line strings? 
rows = self.executesql(''' select cities.city, state, country from cities, venues, events, addresses where cities.city like %s and events.active = 1 and venues.address = addresses.id and addresses.city = cities.id and events.venue = venues.id ''', (city,)) Raymond From skip at pobox.com Thu May 3 03:45:30 2007 From: skip at pobox.com (skip at pobox.com) Date: Wed, 2 May 2007 20:45:30 -0500 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> Message-ID: <17977.16058.847429.905398@montanaro.dyndns.org> Raymond> [Skip] >> I use it all the time. For example, to build up (what I consider to be) >> readable SQL queries: >> >> rows = self.executesql("select cities.city, state, country" >> " from cities, venues, events, addresses" >> " where cities.city like %s" >> " and events.active = 1" >> " and venues.address = addresses.id" >> " and addresses.city = cities.id" >> " and events.venue = venues.id", >> (city,)) Raymond> I find that style hard to maintain. What is the advantage over Raymond> multi-line strings? Raymond> rows = self.executesql(''' Raymond> select cities.city, state, country Raymond> from cities, venues, events, addresses Raymond> where cities.city like %s Raymond> and events.active = 1 Raymond> and venues.address = addresses.id Raymond> and addresses.city = cities.id Raymond> and events.venue = venues.id Raymond> ''', Raymond> (city,)) Maybe it's just a quirk of how python-mode in Emacs treats multiline strings that caused me to start doing things this way (I've been doing my embedded SQL statements this way for several years now), but when I hit LF in an open multiline string a newline is inserted and the cursor is lined up under the "r" of "rows", not under the opening quote of the multiline string, and not where you chose to indent your example. When I use individual strings the parameters line up where I want them to (the way I lined things up in my example). At any rate, it's what I'm used to now. Skip From g.brandl at gmx.net Thu May 3 07:08:51 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 03 May 2007 07:08:51 +0200 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> Message-ID: FWIW, I'm -1 on both proposals too. I like implicit string literal concatenation and I really can't see what we gain from backslash continuation removal. Georg Mark Hammond schrieb: > Please add my -1 to the chorus here, for the same reasons already expressed. > > Cheers, > > Mark > >> -----Original Message----- >> From: python-dev-bounces+mhammond=keypoint.com.au at python.org >> [mailto:python-dev-bounces+mhammond=keypoint.com.au at python.org >> ]On Behalf >> Of Jim Jewett >> Sent: Monday, 30 April 2007 1:29 PM >> To: Python 3000; Python Dev >> Subject: [Python-Dev] PEP 30XZ: Simplified Parsing >> >> >> PEP: 30xz >> Title: Simplified Parsing >> Version: $Revision$ >> Last-Modified: $Date$ >> Author: Jim J. Jewett >> Status: Draft >> Type: Standards Track >> Content-Type: text/plain >> Created: 29-Apr-2007 >> Post-History: 29-Apr-2007 >> >> >> Abstract >> >> Python initially inherited its parsing from C. While this has >> been generally useful, there are some remnants which have been >> less useful for python, and should be eliminated. 
>> >> + Implicit String concatenation >> >> + Line continuation with "\" >> >> + 034 as an octal number (== decimal 28). Note that this is >> listed only for completeness; the decision to raise an >> Exception for leading zeros has already been made in the >> context of PEP XXX, about adding a binary literal. >> >> >> Rationale for Removing Implicit String Concatenation >> >> Implicit String concatentation can lead to confusing, or even >> silent, errors. [1] >> >> def f(arg1, arg2=None): pass >> >> f("abc" "def") # forgot the comma, no warning ... >> # silently becomes f("abcdef", None) >> >> or, using the scons build framework, >> >> sourceFiles = [ >> 'foo.c', >> 'bar.c', >> #...many lines omitted... >> 'q1000x.c'] >> >> It's a common mistake to leave off a comma, and then >> scons complains >> that it can't find 'foo.cbar.c'. This is pretty >> bewildering behavior >> even if you *are* a Python programmer, and not everyone here is. >> >> Note that in C, the implicit concatenation is more >> justified; there >> is no other way to join strings without (at least) a >> function call. >> >> In Python, strings are objects which support the __add__ operator; >> it is possible to write: >> >> "abc" + "def" >> >> Because these are literals, this addition can still be optimized >> away by the compiler. >> >> Guido indicated [2] that this change should be handled by >> PEP, because >> there were a few edge cases with other string operators, >> such as the %. >> The resolution is to treat them the same as today. >> >> ("abc %s def" + "ghi" % var) # fails like today. >> # raises TypeError because of >> # precedence. (% before +) >> >> ("abc" + "def %s ghi" % var) # works like today; >> precedence makes >> # the optimization more >> difficult to >> # recognize, but does >> not change the >> # semantics. >> >> ("abc %s def" + "ghi") % var # works like today, because of >> # precedence: () before % >> # CPython compiler can already >> # add the literals at >> compile-time. >> >> >> Rationale for Removing Explicit Line Continuation >> >> A terminal "\" indicates that the logical line is continued on the >> following physical line (after whitespace). >> >> Note that a non-terminal "\" does not have this meaning, >> even if the >> only additional characters are invisible whitespace. >> (Python depends >> heavily on *visible* whitespace at the beginning of a >> line; it does >> not otherwise depend on *invisible* terminal whitespace.) Adding >> whitespace after a "\" will typically cause a syntax error rather >> than a silent bug, but it still isn't desirable. >> >> The reason to keep "\" is that occasionally code looks better with >> a "\" than with a () pair. >> >> assert True, ( >> "This Paren is goofy") >> >> But realistically, that paren is no worse than a "\". The only >> advantage of "\" is that it is slightly more familiar to users of >> C-based languages. These same languages all also support line >> continuation with (), so reading code will not be a problem, and >> there will be one less rule to learn for people entirely new to >> programming. >> >> >> Rationale for Removing Implicit Octal Literals >> >> This decision should be covered by PEP ???, on numeric literals. >> It is mentioned here only for completeness. >> >> C treats integers beginning with "0" as octal, rather >> than decimal. >> Historically, Python has inherited this usage. This has caused >> quite a few annoying bugs for people who forgot the rule, and >> tried to line up their constants. 
>> >> a = 123 >> b = 024 # really only 20, because octal >> c = 245 >> >> In Python 3.0, the second line will instead raise a SyntaxError, >> because of the ambiguity. Instead, the line should be written >> as in one of the following ways: >> >> b = 24 # PEP 8 >> b = 24 # columns line up, for quick scanning >> b = 0t24 # really did want an Octal! >> >> >> References >> >> [1] Implicit String Concatenation, Jewett, Orendorff >> > http://mail.python.org/pipermail/python-ideas/2007-April/000397.html > > [2] PEP 12, Sample reStructuredText PEP Template, Goodger, Warsaw > http://www.python.org/peps/pep-0012 > > [3] http://www.opencontent.org/openpub/ > > > > Copyright > > This document has been placed in the public domain. > > > > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/mhammond%40keypoint.com.au > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/python-python-dev%40m.gmane.org > -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From rrr at ronadam.com Thu May 3 08:05:38 2007 From: rrr at ronadam.com (Ron Adam) Date: Thu, 03 May 2007 01:05:38 -0500 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> Message-ID: <46397BB2.4060404@ronadam.com> Georg Brandl wrote: > FWIW, I'm -1 on both proposals too. I like implicit string literal concatenation > and I really can't see what we gain from backslash continuation removal. > > Georg -1 on removing them also. I find they are helpful. It could be made optional in block headers that end with a ':'. It's optional, (just more white space), in parenthesized expressions, tuples, lists, and dictionary literals already. >>> [1,\ ... 2,\ ... 3] [1, 2, 3] >>> (1,\ ... 2,\ ... 3) (1, 2, 3) >>> {1:'a',\ ... 2:'b',\ ... 3:'c'} {1: 'a', 2: 'b', 3: 'c'} The rule would be any keyword that starts a block, (class, def, if, elif, with, ... etc.), until an unused (for anything else) colon, would always evaluate to be a single line weather or not it has parentheses or line continuations in it. These can never be multi-line statements as far as I know. The back slash would still be needed in console input. The following inconsistency still bothers me, but I suppose it's an edge case that doesn't cause problems. >>> print r"hello world\" File "", line 1 print r"hello world\" ^ SyntaxError: EOL while scanning single-quoted string >>> print r"hello\ ... world" hello\ world In the first case, it's treated as a continuation character even though it's not at the end of a physical line. So it gives an error. In the second case, its accepted as a continuation character, *and* a '\' character at the same time. (?) 
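Poking at the second case a little more closely shows what actually ends up
in the string: the backslash *and* the newline are both kept (this is just
probing the current behaviour at the interactive prompt):

    >>> s = r"hello\
    ... world"
    >>> list(s)
    ['h', 'e', 'l', 'l', 'o', '\\', '\n', 'w', 'o', 'r', 'l', 'd']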
Cheers,
Ron

From python at rcn.com Thu May 3 07:23:39 2007
From: python at rcn.com (Raymond Hettinger)
Date: Wed, 2 May 2007 22:23:39 -0700
Subject: [Python-Dev] Implicit String Concatenation and Octal Literals Was: PEP 30XZ: Simplified Parsing
References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> <17977.16058.847429.905398@montanaro.dyndns.org>
Message-ID: <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1>

> Raymond> I find that style hard to maintain. What is the advantage over
> Raymond> multi-line strings?
>
> Raymond> rows = self.executesql('''
> Raymond>     select cities.city, state, country
> Raymond>     from cities, venues, events, addresses
> Raymond>     where cities.city like %s
> Raymond>     and events.active = 1
> Raymond>     and venues.address = addresses.id
> Raymond>     and addresses.city = cities.id
> Raymond>     and events.venue = venues.id
> Raymond>     ''',
> Raymond>     (city,))

[Skip]
> Maybe it's just a quirk of how python-mode in Emacs treats multiline strings
> that caused me to start doing things this way (I've been doing my embedded
> SQL statements this way for several years now), but when I hit LF in an open
> multiline string a newline is inserted and the cursor is lined up under the
> "r" of "rows", not under the opening quote of the multiline string, and not
> where you chose to indent your example. When I use individual strings the
> parameters line up where I want them to (the way I lined things up in my
> example). At any rate, it's what I'm used to now.

I completely understand. Almost any simplification or feature elimination
proposal is going to bump up against "what we're used to now".

Py3k may be our last chance to simplify the language. We have so many
special little rules that even advanced users can't keep them all in their
head. Certainly, every feature has someone who uses it. But, there is some
value to reducing the number of rules, especially if those rules are
non-essential (i.e. implicit string concatenation has simple, clear
alternatives with multi-line strings or with the plus-operator).

Another way to look at it is to ask whether we would consider adding
implicit string concatenation if we didn't already have it. I think there
would be a chorus of emails against it -- arguing against language bloat
and noting that we already have triple-quoted strings, raw-strings, a
verbose flag for regexes, backslashes inside multiline strings, the
explicit plus-operator, and multi-line expressions delimited by parentheses
or brackets. Collectively, that is A LOT of ways to do it.

I'm asking this group to give up a minor habit so that we can achieve at
least a few simplifications on the way to Py3.0 -- basically, our last
chance.

Similar thoughts apply to the octal literal PEP. I'm -1 on introducing yet
another way to write the literal (and a non-standard one at that). My
proposal was simply to eliminate it. The use cases are few and far between
(translating C headers and setting Unix file permissions). In either case,
writing int('0777', 8) suffices. In the latter case, we've already provided
clear symbolic alternatives. This simplification of the language would be a
freebie (impacting very little code, simplifying the lexer, eliminating a
special rule, and eliminating a source of confusion for the young among us
who do not know about such things).
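Both of those spellings already work today; for example (0777 here is just
an arbitrary permission mask used for illustration):

    >>> int('0777', 8)      # explicit radix instead of a leading-zero literal
    511
    >>> import stat
    >>> stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO      # symbolic file-permission constants
    511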
Raymond From greg.ewing at canterbury.ac.nz Thu May 3 08:36:15 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 03 May 2007 18:36:15 +1200 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <17977.16058.847429.905398@montanaro.dyndns.org> References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> <17977.16058.847429.905398@montanaro.dyndns.org> Message-ID: <463982DF.6000700@canterbury.ac.nz> skip at pobox.com wrote: > when I hit LF in an open > multiline string a newline is inserted and the cursor is lined up under the > "r" of "rows", not under the opening quote of the multiline string, and not > where you chose to indent your example. Seems to me that Python actually benefits from an editor which doesn't try to be too clever about auto-formatting. I'm doing most of my Python editing at the moment using BBEdit Lite, which knows nothing at all about Python code -- but it works very well. -- Greg From talin at acm.org Thu May 3 09:24:30 2007 From: talin at acm.org (Talin) Date: Thu, 03 May 2007 00:24:30 -0700 Subject: [Python-Dev] [Python-3000] Implicit String Concatenation and Octal Literals Was: PEP 30XZ: Simplified Parsing In-Reply-To: <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> <17977.16058.847429.905398@montanaro.dyndns.org> <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> Message-ID: <46398E2E.1010604@acm.org> Raymond Hettinger wrote: >> Raymond> I find that style hard to maintain. What is the advantage over >> Raymond> multi-line strings? >> >> Raymond> rows = self.executesql(''' >> Raymond> select cities.city, state, country >> Raymond> from cities, venues, events, addresses >> Raymond> where cities.city like %s >> Raymond> and events.active = 1 >> Raymond> and venues.address = addresses.id >> Raymond> and addresses.city = cities.id >> Raymond> and events.venue = venues.id >> Raymond> ''', >> Raymond> (city,)) > > [Skip] >> Maybe it's just a quirk of how python-mode in Emacs treats multiline strings >> that caused me to start doing things this way (I've been doing my embedded >> SQL statements this way for several years now), but when I hit LF in an open >> multiline string a newline is inserted and the cursor is lined up under the >> "r" of "rows", not under the opening quote of the multiline string, and not >> where you chose to indent your example. When I use individual strings the >> parameters line up where I want them to (the way I lined things up in my >> example). At any rate, it's what I'm used to now. > > > I completely understand. Almost any simplification or feature elimination > proposal is going to bump-up against, "what we're used to now". > Py3k may be our last chance to simplify the language. We have so many > special little rules that even advanced users can't keep them > all in their head. Certainly, every feature has someone who uses it. > But, there is some value to reducing the number of rules, especially > if those rules are non-essential (i.e. implicit string concatenation has > simple, clear alternatives with multi-line strings or with the plus-operator). > > Another way to look at it is to ask whether we would consider > adding implicit string concatenation if we didn't already have it. 
> I think there would be a chorus of emails against it -- arguing > against language bloat and noting that we already have triple-quoted > strings, raw-strings, a verbose flag for regexs, backslashes inside multiline > strings, the explicit plus-operator, and multi-line expressions delimited > by parentheses or brackets. Collectively, that is A LOT of ways to do it. > > I'm asking this group to give up a minor habit so that we can achieve > at least a few simplifications on the way to Py3.0 -- basically, our last chance. > > Similar thoughts apply to the octal literal PEP. I'm -1 on introducing > yet another way to write the literal (and a non-standard one at that). > My proposal was simply to eliminate it. The use cases are few and > far between (translating C headers and setting unix file permissions). > In either case, writing int('0777', 8) suffices. In the latter case, we've > already provided clear symbolic alternatives. This simplification of the > language would be a freebie (impacting very little code, simplifying the > lexer, eliminating a special rule, and eliminating a source of confusion > for the young amoung us who do not know about such things). My counter argument is that these simplifications aren't simplifying much - that is, the removals don't cascade and cause other simplifications. The grammar file, for example, won't look dramatically different if these changes are made. The simplification argument seems weak to me because the change in overall language complexity is very small, whereas the inconvenience caused, while not huge, is at least significant. That being said, line continuation is the only one I really care about. And I would happily give up backslashes in exchange for a more sane method of continuing lines. Either way avoids "spurious" grouping operators which IMHO don't make for easier-to-read code. -- Talin From skip at pobox.com Thu May 3 12:35:09 2007 From: skip at pobox.com (skip at pobox.com) Date: Thu, 3 May 2007 05:35:09 -0500 Subject: [Python-Dev] Implicit String Concatenation and Octal Literals Was: PEP 30XZ: Simplified Parsing In-Reply-To: <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> <17977.16058.847429.905398@montanaro.dyndns.org> <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> Message-ID: <17977.47837.397664.190390@montanaro.dyndns.org> Raymond> Another way to look at it is to ask whether we would consider Raymond> adding implicit string concatenation if we didn't already have Raymond> it. As I recall it was a "relatively recent" addition. Maybe 2.0 or 2.1? It certainly hasn't been there from the beginning. Skip From jon+python-dev at unequivocal.co.uk Thu May 3 12:49:05 2007 From: jon+python-dev at unequivocal.co.uk (Jon Ribbens) Date: Thu, 3 May 2007 11:49:05 +0100 Subject: [Python-Dev] Implicit String Concatenation and Octal Literals Was: PEP 30XZ: Simplified Parsing In-Reply-To: <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> <17977.16058.847429.905398@montanaro.dyndns.org> <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> Message-ID: <20070503104905.GL2921@snowy.squish.net> On Wed, May 02, 2007 at 10:23:39PM -0700, Raymond Hettinger wrote: > Another way to look at it is to ask whether we would consider > adding implicit string concatenation if we didn't already have it. > I think there would be a chorus of emails against it Personally, I would have been irritated if it wasn't there. 
I'm used to it from other languages, and it would seem like a gratuitous incompatability if it wasn't supported. I'm definitely against this proposal in its entirety. From skip at pobox.com Thu May 3 15:11:01 2007 From: skip at pobox.com (skip at pobox.com) Date: Thu, 3 May 2007 08:11:01 -0500 Subject: [Python-Dev] Implicit String Concatenation and Octal Literals Was: PEP 30XZ: Simplified Parsing In-Reply-To: <17977.47837.397664.190390@montanaro.dyndns.org> References: <20070502210339.BHU28881@ms09.lnh.mail.rcn.net> <17977.16058.847429.905398@montanaro.dyndns.org> <000401c78d4c$796bfe60$f301a8c0@RaymondLaptop1> <17977.47837.397664.190390@montanaro.dyndns.org> Message-ID: <17977.57189.849175.981712@montanaro.dyndns.org> >>>>> "skip" == skip writes: Raymond> Another way to look at it is to ask whether we would consider Raymond> adding implicit string concatenation if we didn't already have Raymond> it. skip> As I recall it was a "relatively recent" addition. Maybe 2.0 or skip> 2.1? It certainly hasn't been there from the beginning. Misc/HISTORY suggests this feature was added in 1.0.2 (May 1994). Apologies for my bad memory. Skip From benji at benjiyork.com Thu May 3 15:01:54 2007 From: benji at benjiyork.com (Benji York) Date: Thu, 03 May 2007 09:01:54 -0400 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <46397BB2.4060404@ronadam.com> References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> <46397BB2.4060404@ronadam.com> Message-ID: <4639DD42.3020307@benjiyork.com> Ron Adam wrote: > The following inconsistency still bothers me, but I suppose it's an edge > case that doesn't cause problems. > > >>> print r"hello world\" > File "", line 1 > print r"hello world\" > ^ > SyntaxError: EOL while scanning single-quoted string > In the first case, it's treated as a continuation character even though > it's not at the end of a physical line. So it gives an error. No, that is unrelated to line continuation. The \" is an escape sequence, therefore there is no double-quote to end the string literal. -- Benji York http://benjiyork.com From rrr at ronadam.com Thu May 3 15:55:13 2007 From: rrr at ronadam.com (Ron Adam) Date: Thu, 03 May 2007 08:55:13 -0500 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <4639DD42.3020307@benjiyork.com> References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> <46397BB2.4060404@ronadam.com> <4639DD42.3020307@benjiyork.com> Message-ID: <4639E9C1.4010109@ronadam.com> Benji York wrote: > Ron Adam wrote: >> The following inconsistency still bothers me, but I suppose it's an edge >> case that doesn't cause problems. >> >> >>> print r"hello world\" >> File "", line 1 >> print r"hello world\" >> ^ >> SyntaxError: EOL while scanning single-quoted string > >> In the first case, it's treated as a continuation character even though >> it's not at the end of a physical line. So it gives an error. > > No, that is unrelated to line continuation. The \" is an escape > sequence, therefore there is no double-quote to end the string literal. Are you sure? >>> print r'\"' \" It's just a '\' here. These are raw strings if you didn't notice. 
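A quick way to double-check what the tokenizer accepts here (Python 2 syntax, as above; the compile() call just reproduces the earlier SyntaxError without needing a separate file):

s = r'\"'
assert len(s) == 2 and s == '\\"'    # backslash kept, quote kept

try:
    compile(r'x = r"hello world\"', '<test>', 'exec')
except SyntaxError, e:
    print 'rejected as expected:', e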
Cheers, Ron From g.brandl at gmx.net Thu May 3 16:01:03 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 03 May 2007 16:01:03 +0200 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <4639E9C1.4010109@ronadam.com> References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> <46397BB2.4060404@ronadam.com> <4639DD42.3020307@benjiyork.com> <4639E9C1.4010109@ronadam.com> Message-ID: Ron Adam schrieb: > Benji York wrote: >> Ron Adam wrote: >>> The following inconsistency still bothers me, but I suppose it's an edge >>> case that doesn't cause problems. >>> >>> >>> print r"hello world\" >>> File "", line 1 >>> print r"hello world\" >>> ^ >>> SyntaxError: EOL while scanning single-quoted string >> >>> In the first case, it's treated as a continuation character even though >>> it's not at the end of a physical line. So it gives an error. >> >> No, that is unrelated to line continuation. The \" is an escape >> sequence, therefore there is no double-quote to end the string literal. > > Are you sure? > > > >>> print r'\"' > \" > > It's just a '\' here. > > These are raw strings if you didn't notice. It's all in the implementation. The tokenizer takes it as an escape sequence -- it doesn't specialcase raw strings -- the AST builder (parsestr() in ast.c) doesn't. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From turnbull at sk.tsukuba.ac.jp Thu May 3 16:40:03 2007 From: turnbull at sk.tsukuba.ac.jp (Stephen J. Turnbull) Date: Thu, 03 May 2007 23:40:03 +0900 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> References: <4638B151.6020901@voidspace.org.uk> <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com> <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> Message-ID: <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> Barry Warsaw writes: > The problem is that > > _("some string" > " and more of it") > > is not the same as > > _("some string" + > " and more of it") Are you worried about translators? The gettext functions themselves will just see the result of the operation. The extraction tools like xgettext do fail, however.
Translating the above to # The problem is that gettext("some string" " and more of it") # is not the same as gettext("some string" + " and more of it") and invoking "xgettext --force-po --language=Python test.py" gives # SOME DESCRIPTIVE TITLE. # Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER # This file is distributed under the same license as the PACKAGE package. # FIRST AUTHOR , YEAR. # #, fuzzy msgid "" msgstr "" "Project-Id-Version: PACKAGE VERSION\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2007-05-03 23:32+0900\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=CHARSET\n" "Content-Transfer-Encoding: 8bit\n" #: test.py:3 msgid "some string and more of it" msgstr "" #: test.py:8 msgid "some string" msgstr "" BTW, it doesn't work for the C equivalent, either. > You would either have to teach pygettext and maybe gettext about > this construct, or you'd have to use something different. Teaching Python-based extraction tools about it isn't hard, just make sure that you slurp in the whole argument, and eval it. If what you get isn't a string, throw an exception. xgettext will be harder, since apparently does not do it, nor does it even know enough to error or warn on syntax it doesn't handle within gettext()'s argument. From barry at python.org Thu May 3 17:34:58 2007 From: barry at python.org (Barry Warsaw) Date: Thu, 3 May 2007 11:34:58 -0400 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4638B151.6020901@voidspace.org.uk> <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com> <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <2CF5A0DA-509D-4A3D-96A6-30D601572E3E@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 3, 2007, at 10:40 AM, Stephen J. Turnbull wrote: > Barry Warsaw writes: > >> The problem is that >> >> _("some string" >> " and more of it") >> >> is not the same as >> >> _("some string" + >> " and more of it") > > Are you worried about translators? The gettext functions themselves > will just see the result of the operation. The extraction tools like > xgettext do fail, however. Yep, sorry, it is the extraction tools I'm worried about. > Teaching Python-based extraction tools about it isn't hard, just make > sure that you slurp in the whole argument, and eval it. If what you > get isn't a string, throw an exception. xgettext will be harder, > since apparently does not do it, nor does it even know enough to error > or warn on syntax it doesn't handle within gettext()'s argument. IMO, this is a problem. We can make the Python extraction tool work, but we should still be very careful about breaking 3rd party tools like xgettext, since other projects may be using such tools. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (Darwin) iQCVAwUBRjoBI3EjvBPtnXfVAQLg0AP/Y1ncqie1NgzRFzuZpnZapMs/+oo+5BCK 1MYqsJwucnDJnOqrUcU34Vq3SB7X7VsSDv3TuoTNnheinX6senorIFQKRAj4abKT f2Y63t6BT97mSOAITFZvVSj0YSG+zkD/HMGeDj4dOJFLj1tYxgKpVprlhMbELzG1 AIKe+wsYjcs= =+oFV -----END PGP SIGNATURE----- From kristjan at ccpgames.com Thu May 3 17:57:26 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 3 May 2007 15:57:26 +0000 Subject: [Python-Dev] PyInt_AsSsize_t on x64 Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57DB7E@exchis.ccp.ad.local> Hello there. 
I'm working on getting the 64 bit build of the trunk pass the testsuite. Here is one snag, that you could help me fix. In test_getargs2(), there is at line 190: self.failUnlessEqual(99, getargs_n(Long())) Now, the Long class has a __int__ method which returns 99L. However, we run into trouble here: intobject.c:210 if ((nb = op->ob_type->tp_as_number) == NULL || (nb->nb_int == NULL && nb->nb_long == 0)) { PyErr_SetString(PyExc_TypeError, "an integer is required"); return -1; } if (nb->nb_long != 0) { io = (PyIntObject*) (*nb->nb_long) (op); } else { io = (PyIntObject*) (*nb->nb_int) (op); } trouble here The trouble is that nb->nb_long is non zero, but when called, it returns an attribute error, since __long__ is missing. nb_long points to instance_long. Now, how to fix this? Should the code in intobject.c catch the AttributeError, maybe, and continue to the nb_int? Cheers, Kristj?n -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070503/09a8f9ae/attachment.html From turnbull at sk.tsukuba.ac.jp Thu May 3 18:41:40 2007 From: turnbull at sk.tsukuba.ac.jp (Stephen J. Turnbull) Date: Fri, 04 May 2007 01:41:40 +0900 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <2CF5A0DA-509D-4A3D-96A6-30D601572E3E@python.org> References: <4638B151.6020901@voidspace.org.uk> <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com> <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> <2CF5A0DA-509D-4A3D-96A6-30D601572E3E@python.org> Message-ID: <878xc5g8qj.fsf@uwakimon.sk.tsukuba.ac.jp> Barry Warsaw writes: > IMO, this is a problem. We can make the Python extraction tool work, > but we should still be very careful about breaking 3rd party tools > like xgettext, since other projects may be using such tools. But _("some string" + " and more of it") is already legal Python, and xgettext is already broken for it. Arguably, xgettext's implementation of -L Python should be execve ("pygettext", argv, environ); From kristjan at ccpgames.com Thu May 3 18:32:44 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 3 May 2007 16:32:44 +0000 Subject: [Python-Dev] x64 and the testsuite Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> Hello again. A lot of overflow tests fail in the testsuite, by expecting overflow using sys.maxint. for example this line, 196, in test_index.py: self.assertEqual(x[self.neg:self.pos], (-1, maxint)) At the moment, I am disabling these tests with if "64 bit" not in sys.version: So, two questions: Should we add something like sys.maxsize to keep these overflow tests valid? Also, some tests just kill the computer when given large values, that are expected to overflow. Sometimes it would be good to test for a 64 bit machine with virtually infinite ram. Is there a better way than the "64 bit" in sys.version test? Should we have something like sys.bits? Kristj?n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20070503/5df7792c/attachment.htm From ms at cerenity.org Thu May 3 18:06:58 2007 From: ms at cerenity.org (Michael Sparks) Date: Thu, 3 May 2007 17:06:58 +0100 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> References: <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <200705031706.59685.ms@cerenity.org> On Thursday 03 May 2007 15:40, Stephen J. Turnbull wrote: > Teaching Python-based extraction tools about it isn't hard, just make > sure that you slurp in the whole argument, and eval it. We generate our component documentation based on going through the AST generated by compiler.ast, finding doc strings (and other strings in other known/expected locations), and then formatting using docutils. Eval'ing the file isn't always going to work due to imports relying on libraries that may need to be installed. (This is especially the case with Kamaelia because we tend to wrap libraries for usage as components in a convenient way) We've also specifically moved away from importing the file or eval'ing things because of this issue. It makes it easier to have docs built on a random machine with not too much installed on it. You could special case "12345" + "67890" as a compile timeconstructor and jiggle things such that by the time it came out the parser that looked like "1234567890", but I don't see what that has to gain over the current form. (which doesn't look like an expression) I also think that's a rather nasty version. On the flip side if we're eval'ing an expression to get a docstring, there would be great temptation to extend that to be a doc-object - eg using dictionaries, etc as well for more specific docs. Is that wise? I don't know :) Michael. -- Kamaelia project lead http://kamaelia.sourceforge.net/Home From theller at ctypes.org Thu May 3 19:03:45 2007 From: theller at ctypes.org (Thomas Heller) Date: Thu, 03 May 2007 19:03:45 +0200 Subject: [Python-Dev] x64 and the testsuite In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> Message-ID: Kristj?n Valur J?nsson schrieb: > Hello again. > A lot of overflow tests fail in the testsuite, by expecting overflow using sys.maxint. > for example this line, 196, in test_index.py: self.assertEqual(x[self.neg:self.pos], (-1, maxint)) On my (virtual) win64-machine, which has less than 1GB, quite some tests fail with MemoryError. The tests pass on 64-bit linux machines, though. > At the moment, I am disabling these tests with > if "64 bit" not in sys.version: > > So, two questions: Should we add something like sys.maxsize to keep these overflow tests valid? > > Also, some tests just kill the computer when given large values, that are expected to overflow. Sometimes > it would be good to test for a 64 bit machine with virtually infinite ram. Is there a better way > than the "64 bit" in sys.version test? Should we have something like sys.bits? You can use 'struct.calcsize("P")' to find out the pointer size. 
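Spelled out, that check is a one-liner, and an index/length limit can be derived from it (a sketch; the derived value is what a would-be sys.maxsize would report on each platform):

import struct, sys

ptr_bits = struct.calcsize("P") * 8        # 32 on win32, 64 on an x64 build
maxsize = (1 << (ptr_bits - 1)) - 1        # candidate sys.maxsize-style limit

# On 64-bit Windows sys.maxint stays 2**31 - 1 (C long is 32 bits there),
# which is why the tests need something other than sys.maxint to key on.
print ptr_bits, sys.maxint, maxsize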
Thomas From kristjan at ccpgames.com Thu May 3 19:21:18 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 3 May 2007 17:21:18 +0000 Subject: [Python-Dev] x64 and the testsuite In-Reply-To: References: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57DBAC@exchis.ccp.ad.local> > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org > [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf > Of Thomas Heller > Sent: Thursday, May 03, 2007 17:04 > To: python-dev at python.org > Subject: Re: [Python-Dev] x64 and the testsuite > > Kristj?n Valur J?nsson schrieb: > > Hello again. > > A lot of overflow tests fail in the testsuite, by expecting overflow > using sys.maxint. > > for example this line, 196, in test_index.py: > self.assertEqual(x[self.neg:self.pos], (-1, maxint)) > > On my (virtual) win64-machine, which has less than 1GB, quite some > tests fail with MemoryError. > The tests pass on 64-bit linux machines, though. I haven't got memory error yet. Many tests allocate some 4GB, and the OS happily enters paging mode. They then succeed eventually. As for linux, many of the range overflow tests rely on sys.maxint. Presumably, this is 1<<63 on those machines? If that is the case, I have two suggestions: a) Propagate the Windows idiom of sizeof(size_t) != sizeof(long) by keeping some sys.maxsize for list length, indices, etc. b) Elevate int to 64 bits on windows too! B is probably a huge change. Not only change PyIntObject but probably create some Py_int and so on. Ok, b) is not a real suggestion, then. > > than the "64 bit" in sys.version test? Should we have something like > sys.bits? > > You can use 'struct.calcsize("P")' to find out the pointer size. Hm, I could do that I suppose. maxsize = (1L<<(struct.calcsize("P")*8-1))-1 minsize = maxsize-1 And use those in the testsuite... K From ziga.seilnacht at gmail.com Thu May 3 19:38:04 2007 From: ziga.seilnacht at gmail.com (=?windows-1252?Q?=8Eiga_Seilnacht?=) Date: Thu, 03 May 2007 19:38:04 +0200 Subject: [Python-Dev] x64 and the testsuite In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> Message-ID: <463A1DFC.6000503@gmail.com> Kristj?n Valur J?nsson wrote: > Hello again. > A lot of overflow tests fail in the testsuite, by expecting overflow using sys.maxint. > for example this line, 196, in test_index.py: self.assertEqual(x[self.neg:self.pos], (-1, maxint)) Those tests should be fixed to use test.test_support.MAX_Py_ssize_t instead of sys.maxint. Regards, Ziga From stephen at xemacs.org Thu May 3 19:54:54 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 04 May 2007 02:54:54 +0900 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <200705031706.59685.ms@cerenity.org> References: <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> <200705031706.59685.ms@cerenity.org> Message-ID: <87y7k5eqs1.fsf@uwakimon.sk.tsukuba.ac.jp> Michael Sparks writes: > We generate our component documentation based on going through the AST > generated by compiler.ast, finding doc strings (and other strings in > other known/expected locations), and then formatting using docutils. Are you talking about I18N and gettext? If so, I'm really lost .... 
> You could special case "12345" + "67890" as a compile timeconstructor and > jiggle things such that by the time it came out the parser that looked like > "1234567890", but I don't see what that has to gain over the current form. I'm not arguing it's a gain, simply that it's a case that *should* be handled by extractors of translatable strings anyway, and if it were, there would not be an I18N issue in this PEP. It *should* be handled because this is just constant folding. Any half-witted compiler does it, and programmers expect their compilers to do it. pygettext and xgettext are (very special) compilers. I don't see why that expectation should be violated just because the constants in question are translatable strings. I recognize that for xgettext implementing that in C for languages as disparate as Lisp, Python, and Perl (all of which have string concatenation operators) is hard, and to the extent that xgettext is recommended by 9 out of 10 translators, we need to worry about how long it's going to take for xgettext to get fixed (because it *is* broken in this respect, at least for Python). From barry at python.org Thu May 3 19:52:11 2007 From: barry at python.org (Barry Warsaw) Date: Thu, 3 May 2007 13:52:11 -0400 Subject: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing In-Reply-To: <878xc5g8qj.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4638B151.6020901@voidspace.org.uk> <5.1.1.6.0.20070502144742.02bc1908@sparrow.telecommunity.com> <179D5383-88F0-4246-B355-5A817B9F7EBE@python.org> <87hcquezss.fsf@uwakimon.sk.tsukuba.ac.jp> <2CF5A0DA-509D-4A3D-96A6-30D601572E3E@python.org> <878xc5g8qj.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <1C94BBE1-F569-4F59-85E0-B585B9D21D1A@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 3, 2007, at 12:41 PM, Stephen J. Turnbull wrote: > Barry Warsaw writes: > >> IMO, this is a problem. We can make the Python extraction tool work, >> but we should still be very careful about breaking 3rd party tools >> like xgettext, since other projects may be using such tools. > > But > > _("some string" + > " and more of it") > > is already legal Python, and xgettext is already broken for it. Yep, but the idiom that *gettext accepts is used far more often. If that's outlawed then the tools /have/ to be taught the alternative. > Arguably, xgettext's implementation of -L Python should be > > execve ("pygettext", argv, environ); > > Ouch. :) - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.5 (Darwin) iQCVAwUBRjohUXEjvBPtnXfVAQLHhAQAmKNyjbPpIMIlz7zObvb09wdw7jyC2bBa 2w+rDilRgxicUXWqH/L6AeHHl3HiVOO+tELU6upTxOWBMlJG8xcY70rde/32I0gb Wm0ylLlvDU/bAlSMyUscs77BVt82UQsBEqXyQ2+PRfQj7aOkpqgT8P3dwCYrtPaH L4W4JzvoK1M= =9pgu -----END PGP SIGNATURE----- From guido at python.org Thu May 3 20:35:50 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 3 May 2007 11:35:50 -0700 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> <46397BB2.4060404@ronadam.com> <4639DD42.3020307@benjiyork.com> <4639E9C1.4010109@ronadam.com> Message-ID: On 5/3/07, Georg Brandl wrote: > > These are raw strings if you didn't notice. > > It's all in the implementation. The tokenizer takes it as an escape sequence > -- it doesn't specialcase raw strings -- the AST builder (parsestr() in ast.c) > doesn't. FWIW, it wasn't designed this way so as to be easy to implement. 
It was designed this way because the overwhelming use case is regular expressions, where one needs to be able to escape single and double quotes -- the re module unescapes \" and \' when it encounters them. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From kristjan at ccpgames.com Thu May 3 22:38:06 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 3 May 2007 20:38:06 +0000 Subject: [Python-Dev] 2.6, x64 and PGO Message-ID: <4E9372E6B2234D4F859320D896059A9508CD57DBD9@exchis.ccp.ad.local> Well, I have checked in the fixes to the trunk to make an x64 build run the testsuite. It works pretty well. I did a quick test and found the x64Release to run 51000 pystones vs. 44000 for the win32Release version, a difference of some 10%. And the x64PGO version ran some 61000 pystones. Some more extensive testing is of course required, but it would seem that gains can be had from both going 64 bit and using PGO. I encourage you to take a look at the PGO build process. It is much simpler than before. I also think we ought to find out if we could start producing a proper vc8 build, setting up a buildbot, etc. I can probably convince management at CCP to arrange for one. We could also think about a way to have the 64 bit and 32 bit versions coexist on the same machine. Cheers, Kristjan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070503/91c68405/attachment.html From martin at v.loewis.de Thu May 3 23:34:01 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 03 May 2007 23:34:01 +0200 Subject: [Python-Dev] x64 and the testsuite In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57DBAC@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508CD57DBAC@exchis.ccp.ad.local> Message-ID: <463A5549.1000401@v.loewis.de> > If that is the case, I have two suggestions: > a) Propagate the Windows idiom of sizeof(size_t) != sizeof(long) by keeping > some sys.maxsize for list length, indices, etc. > b) Elevate int to 64 bits on windows too! > B is probably a huge change. Not only change PyIntObject but probably create some Py_int and so on. > Ok, b) is not a real suggestion, then. Also, in Py3k, the int type will go away, along with this entire problem. Regards, Martin From martin at v.loewis.de Thu May 3 23:38:40 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 03 May 2007 23:38:40 +0200 Subject: [Python-Dev] 2.6, x64 and PGO In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57DBD9@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CD57DBD9@exchis.ccp.ad.local> Message-ID: <463A5660.7020905@v.loewis.de> > Some more extensive testing is of course required, but it would seem > that gains can be had from both going 64 bit and using PGO. I > encourage you to take a look at the PGO build process. It is much > simpler than before. I also think we ought to find out if we could > start producing a proper vc8 build, setting up a buildbot, etc. I > can probably convince management at CCP to arrange for one. When you are ready to operate a buildbot, just let me know. > We could also think about a way to have the 64 bit and 32 bit > versions coexist on the same machine. Please, no. This will complicate things very much for everybody, for the sake of a few elitist users who think they have a use case. 
Regards, Martin From mal at egenix.com Fri May 4 15:23:35 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 04 May 2007 15:23:35 +0200 Subject: [Python-Dev] Changing string constants to byte arrays ([Python-checkins] r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py) In-Reply-To: <20070504130514.640D01E4018@bag.python.org> References: <20070504130514.640D01E4018@bag.python.org> Message-ID: <463B33D7.9060208@egenix.com> Hi Walter, if the bytes type does turn out to be a mutable type as suggested in PEP 358, then please make sure that no code (C code in particular), relies on the constantness of these byte objects. This is especially important when it comes to codecs, since the error callback logic would allow the callback to manipulate the byte object contents and length without the codec taking note of this change. I expect there to be other places in the interpreter which would break as well. Otherwise, you end up opening the door for segfaults and easy DOS attacks on Python3. Regards, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 04 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 On 2007-05-04 15:05, walter.doerwald wrote: > Author: walter.doerwald > Date: Fri May 4 15:05:09 2007 > New Revision: 55119 > > Modified: > python/branches/py3k-struni/Lib/codecs.py > python/branches/py3k-struni/Lib/test/test_codecs.py > Log: > Make the BOM constants in codecs.py bytes. > > Make the buffered input for decoders a bytes object. > From mal at egenix.com Fri May 4 16:06:54 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 04 May 2007 16:06:54 +0200 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> Message-ID: <463B3DFE.4030508@egenix.com> On 2007-05-01 02:29, Phillip J. Eby wrote: > I wanted to get this in before the Py3K PEP deadline, since this is a > Python 2.6 PEP that would presumably impact 3.x as well. Feedback welcome. Could you add a section that explains the side effects of importing pkg_resources ? The documentation of the module doesn't mention any, but the code suggests that you are installing (some form of) import hooks. Some other comments: * Wouldn't it be better to factor out all the meta-data access code that's not related to eggs into pkgutil ?! * How about then renaming the remaining module to egglib ?! * The module needs some reorganization: imports, globals and constants at the top, maybe a few comments delimiting the various sections, * The get_*_platform() should probably use the platform module which is a lot more flexible than distutils' get_platform() (which should probably use the platform module as well in the long run) Thanks, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 04 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... 
http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 > PEP: 365 > Title: Adding the pkg_resources module > Version: $Revision: 55032 $ > Last-Modified: $Date: 2007-04-30 20:24:48 -0400 (Mon, 30 Apr 2007) $ > Author: Phillip J. Eby > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 30-Apr-2007 > Post-History: 30-Apr-2007 > > > Abstract > ======== > > This PEP proposes adding an enhanced version of the ``pkg_resources`` > module to the standard library. > > ``pkg_resources`` is a module used to find and manage Python > package/version dependencies and access bundled files and resources, > including those inside of zipped ``.egg`` files. Currently, > ``pkg_resources`` is only available through installing the entire > ``setuptools`` distribution, but it does not depend on any other part > of setuptools; in effect, it comprises the entire runtime support > library for Python Eggs, and is independently useful. > > In addition, with one feature addition, this module could support > easy bootstrap installation of several Python package management > tools, including ``setuptools``, ``workingenv``, and ``zc.buildout``. > > > Proposal > ======== > > Rather than proposing to include ``setuptools`` in the standard > library, this PEP proposes only that ``pkg_resources`` be added to the > standard library for Python 2.6 and 3.0. ``pkg_resources`` is > considerably more stable than the rest of setuptools, with virtually > no new features being added in the last 12 months. > > However, this PEP also proposes that a new feature be added to > ``pkg_resources``, before being added to the stdlib. Specifically, it > should be possible to do something like:: > > python -m pkg_resources SomePackage==1.2 > > to request downloading and installation of ``SomePackage`` from PyPI. > This feature would *not* be a replacement for ``easy_install``; > instead, it would rely on ``SomePackage`` having pure-Python ``.egg`` > files listed for download via the PyPI XML-RPC API, and the eggs would > be placed in the ``$PYTHONEGGS`` cache, where they would **not** be > importable by default. (And no scripts would be installed) However, > if the download egg contains installation bootstrap code, it will be > given a chance to run. > > These restrictions would allow the code to be extremely simple, yet > still powerful enough to support users downloading package management > tools such as ``setuptools``, ``workingenv`` and ``zc.buildout``, > simply by supplying the tool's name on the command line. > > > Rationale > ========= > > Many users have requested that ``setuptools`` be included in the > standard library, to save users needing to go through the awkward > process of bootstrapping it. However, most of the bootstrapping > complexity comes from the fact that setuptools-installed code cannot > use the ``pkg_resources`` runtime module unless setuptools is already > installed. Thus, installing setuptools requires (in a sense) that > setuptools already be installed. 
> > Other Python package management tools, such as ``workingenv`` and > ``zc.buildout``, have similar bootstrapping issues, since they both > make use of setuptools, but also want to provide users with something > approaching a "one-step install". The complexity of creating bootstrap > utilities for these and any other such tools that arise in future, is > greatly reduced if ``pkg_resources`` is already present, and is also > able to download pre-packaged eggs from PyPI. > > (It would also mean that setuptools would not need to be installed > in order to simply *use* eggs, as opposed to building them.) > > Finally, in addition to providing access to eggs built via setuptools > or other packaging tools, it should be noted that since Python 2.5, > the distutils install package metadata (aka ``PKG-INFO``) files that > can be read by ``pkg_resources`` to identify what distributions are > already on ``sys.path``. In environments where Python packages are > installed using system package tools (like RPM), the ``pkg_resources`` > module provides an API for detecting what versions of what packages > are installed, even if those packages were installed via the distutils > instead of setuptools. > > > Implementation and Documentation > ================================ > > The ``pkg_resources`` implementation is maintained in the Python > SVN repository under ``/sandbox/trunk/setuptools/``; see > ``pkg_resources.py`` and ``pkg_resources.txt``. Documentation for the > egg format(s) supported by ``pkg_resources`` can be found in > ``doc/formats.txt``. HTML versions of these documents are available > at: > > * http://peak.telecommunity.com/DevCenter/PkgResources and > > * http://peak.telecommunity.com/DevCenter/EggFormats > > (These HTML versions are for setuptools 0.6; they may not reflect all > of the changes found in the Subversion trunk's ``.txt`` versions.) > > > Copyright > ========= > > This document has been placed in the public domain. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/mal%40egenix.com From walter at livinglogic.de Fri May 4 16:29:15 2007 From: walter at livinglogic.de (=?UTF-8?B?V2FsdGVyIETDtnJ3YWxk?=) Date: Fri, 04 May 2007 16:29:15 +0200 Subject: [Python-Dev] Changing string constants to byte arrays ([Python-checkins] r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py) In-Reply-To: <463B33D7.9060208@egenix.com> References: <20070504130514.640D01E4018@bag.python.org> <463B33D7.9060208@egenix.com> Message-ID: <463B433B.4030506@livinglogic.de> M.-A. Lemburg wrote: > Hi Walter, > > if the bytes type does turn out to be a mutable type as suggested > in PEP 358, it is. > then please make sure that no code (C code in > particular), relies on the constantness of these byte objects. > > This is especially important when it comes to codecs, since > the error callback logic would allow the callback to manipulate > the byte object contents and length without the codec taking > note of this change. Encoding is not a problem because the error callback never sees or returns a byte object. However decoding is a problem. After the callback returns the codec has to recalculate it's variables. > I expect there to be other places in the interpreter which would > break as well. > > Otherwise, you end up opening the door for segfaults and > easy DOS attacks on Python3. 
True, registering an even callback could crash the interpreter. Seems we have to update all decoding functions. Servus, Walter From g.brandl at gmx.net Fri May 4 18:53:52 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 04 May 2007 18:53:52 +0200 Subject: [Python-Dev] Changing string constants to byte arrays ([Python-checkins] r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py) In-Reply-To: <463B33D7.9060208@egenix.com> References: <20070504130514.640D01E4018@bag.python.org> <463B33D7.9060208@egenix.com> Message-ID: M.-A. Lemburg schrieb: > Hi Walter, > > if the bytes type does turn out to be a mutable type as suggested > in PEP 358, then please make sure that no code (C code in > particular), relies on the constantness of these byte objects. > > This is especially important when it comes to codecs, since > the error callback logic would allow the callback to manipulate > the byte object contents and length without the codec taking > note of this change. > > I expect there to be other places in the interpreter which would > break as well. > > Otherwise, you end up opening the door for segfaults and > easy DOS attacks on Python3. If the user does not need to change these bytes objects and this is needed in more places, adding an "immutable" flag for "internal" bytes objects only settable from C, or even an immutable byte base class might be an idea. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From mal at egenix.com Fri May 4 19:38:56 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 04 May 2007 19:38:56 +0200 Subject: [Python-Dev] Changing string constants to byte arrays ([Python-checkins] r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py) In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B33D7.9060208@egenix.com> Message-ID: <463B6FB0.5070604@egenix.com> On 2007-05-04 18:53, Georg Brandl wrote: > M.-A. Lemburg schrieb: >> Hi Walter, >> >> if the bytes type does turn out to be a mutable type as suggested >> in PEP 358, then please make sure that no code (C code in >> particular), relies on the constantness of these byte objects. >> >> This is especially important when it comes to codecs, since >> the error callback logic would allow the callback to manipulate >> the byte object contents and length without the codec taking >> note of this change. >> >> I expect there to be other places in the interpreter which would >> break as well. >> >> Otherwise, you end up opening the door for segfaults and >> easy DOS attacks on Python3. > > If the user does not need to change these bytes objects and this is needed > in more places, adding an "immutable" flag for "internal" bytes objects > only settable from C, or even an immutable byte base class might be an idea. +1 I also suggest making all bytes literals immutable to avoid running into any issues like the above. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 04 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From fdrake at acm.org Fri May 4 19:42:45 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 4 May 2007 13:42:45 -0400 Subject: [Python-Dev] =?iso-8859-1?q?Changing_string_constants_to_byte_arr?= =?iso-8859-1?q?ays_=28=5BPython-checkins=5D_r55119_-_in_python/branches/p?= =?iso-8859-1?q?y3k-struni/Lib=3A=09codecs=2Epy_test/test=5Fcodecs=2Epy_?= =?iso-8859-1?q?=29?= In-Reply-To: <463B6FB0.5070604@egenix.com> References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> Message-ID: <200705041342.46028.fdrake@acm.org> On Friday 04 May 2007, M.-A. Lemburg wrote: > I also suggest making all bytes literals immutable to avoid running > into any issues like the above. +1 from me. -Fred -- Fred L. Drake, Jr. From guido at python.org Fri May 4 19:51:45 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 4 May 2007 10:51:45 -0700 Subject: [Python-Dev] [Python-checkins] Changing string constants to byte arrays ( r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py ) In-Reply-To: <200705041342.46028.fdrake@acm.org> References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> Message-ID: [-python-dev] On 5/4/07, Fred L. Drake, Jr. wrote: > On Friday 04 May 2007, M.-A. Lemburg wrote: > > I also suggest making all bytes literals immutable to avoid running > > into any issues like the above. > > +1 from me. Rather than adding immutability to bytes objects (which has big implementation and type checking implications), consider using buffer(b"123") as an immutable bytes literal. You can freely concatenate and compare buffer objects with bytes objects. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From steve at holdenweb.com Fri May 4 20:34:57 2007 From: steve at holdenweb.com (Steve Holden) Date: Fri, 04 May 2007 14:34:57 -0400 Subject: [Python-Dev] [Python-3000] Pre-pre PEP for 'super' keyword In-Reply-To: <20070430033619.GA1854@mithrandi.za.net> References: <76fd5acf0704240711p22f8060k25d787c0e85b6fb8@mail.gmail.com> <002401c78778$75fb7eb0$0201a8c0@ryoko> <00b601c78a9f$38ec9390$0201a8c0@ryoko> <20070430001137.GA29084@mithrandi.za.net> <20070430033619.GA1854@mithrandi.za.net> Message-ID: Tristan Seligmann wrote: > * Guido van Rossum [2007-04-29 18:19:20 -0700]: > >>> In my mind, 'if' and 'or' are "syntax", whereas things like 'None' or >>> 'True' are "values"; even if None becomes an actual keyword, rather than >>> a builtin. >> I'm sorry, but that is such an incredibly subjective difference that I >> can't do anything with it. String literals and numeric literals are >> syntax too, even though they are values. A keyword, or reserved word, >> is simply something that looks like an identifier but is converted >> into a different token (by the lexer or by something sitting between >> the lexer and the parse) before the parser sees it. > > Let me try a less subjective description. Things like None, 2.3, 'foo', > True are values or "expressions"; I'm not certain exactly what the term > for these is in Python's grammar, but I basically mean something that > can be on the RHS of an assignment.. 
However, something like 'for' or > 'if' is part of some other grammatical construct, generally a statement > or operator of some kind, so I tend to think of those differently. > > How about "a keyword is an identifier that appears as a literal in the grammar"? regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From steve at holdenweb.com Fri May 4 20:40:26 2007 From: steve at holdenweb.com (Steve Holden) Date: Fri, 04 May 2007 14:40:26 -0400 Subject: [Python-Dev] Pre-pre PEP for 'super' keyword In-Reply-To: <00e701c78b25$be456c20$0201a8c0@ryoko> References: <2773CAC687FD5F4689F526998C7E4E5F074481@au3010avexu1.global.avaya.com> <76fd5acf0704300529r1801bc6at9a2f3d5329d1ea0d@mail.gmail.com> <00e701c78b25$be456c20$0201a8c0@ryoko> Message-ID: Tim Delaney wrote: > From: "Calvin Spealman" > >> I believe the direction my PEP took with all this is a good bit >> primitive compared to this approach, although I still find value in it >> because at least a prototype came out of it that can be used to test >> the waters, regardless of if a more direct-in-the-language approach >> would be superior. > > I've been working on improved super syntax for quite a while now - my > original approach was 'self.super' which used _getframe() and mro crawling > too. I hit on using bytecode hacking to instantiate a super object at the > start of the method to gain performance, which required storing the class in > co_consts, etc. It turns out that using a metaclass then makes this a lot > cleaner. > >> However, I seem to think that if the __this_class__ PEP goes through, >> your version can be simplified as well. No tricky stuffy things in >> cells would be needed, but we can just expand the super 'keyword' to >> __super__(__this_class__, self), which has been suggested at least >> once. It seems this would be much simpler to implement, and it also >> brings up a second point. >> >> Also, I like that the super object is created at the beginning of the >> function, which my proposal couldn't even do. It is more efficient if >> you have multiple super calls, and gets around a problem I completely >> missed: what happens if the instance name were rebound before the >> implicit lookup of the instance object at the time of the super call? > > You could expand it inline, but I think your second point is a strong > argument against it. Also, sticking the super instance into a cell means > that inner classes get access to it for free. Otherwise each inner class > would *also* need to instantiate a super instance, and __this_class__ (or > whatever it's called) would need to be in a cell for them to get access to > it instead. > > BTW, one of my test cases involves multiple super calls in the same method - > there is a *very* large performance improvement by instantiating it once. > And how does speed deteriorate for methods with no uses of super at all (which will, I suspect, be in the majority)? 
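One rough way to put numbers on that, using today's explicit super() builtin as a stand-in (the metaclass/bytecode prototype itself isn't measured here, and the class names are made up):

import timeit

setup = '''
class Base(object):
    def method(self):
        return 1

class Derived(Base):
    def plain(self):
        return 2

    def calls_super(self):
        return super(Derived, self).method()

d = Derived()
'''

for stmt in 'd.plain()', 'd.calls_super()':
    print stmt, min(timeit.Timer(stmt, setup).repeat(3, 100000))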
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From steve at holdenweb.com Fri May 4 20:51:00 2007 From: steve at holdenweb.com (Steve Holden) Date: Fri, 04 May 2007 14:51:00 -0400 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <4638B151.6020901@voidspace.org.uk> References: <4638B151.6020901@voidspace.org.uk> Message-ID: Michael Foord wrote: > Jim Jewett wrote: >> PEP: 30xz >> Title: Simplified Parsing >> Version: $Revision$ >> Last-Modified: $Date$ >> Author: Jim J. Jewett >> Status: Draft >> Type: Standards Track >> Content-Type: text/plain >> Created: 29-Apr-2007 >> Post-History: 29-Apr-2007 >> >> >> Abstract >> >> Python initially inherited its parsing from C. While this has >> been generally useful, there are some remnants which have been >> less useful for python, and should be eliminated. >> >> + Implicit String concatenation >> >> + Line continuation with "\" >> >> + 034 as an octal number (== decimal 28). Note that this is >> listed only for completeness; the decision to raise an >> Exception for leading zeros has already been made in the >> context of PEP XXX, about adding a binary literal. >> >> >> Rationale for Removing Implicit String Concatenation >> >> Implicit String concatentation can lead to confusing, or even >> silent, errors. [1] >> >> def f(arg1, arg2=None): pass >> >> f("abc" "def") # forgot the comma, no warning ... >> # silently becomes f("abcdef", None) >> >> > Implicit string concatenation is massively useful for creating long > strings in a readable way though: > > call_something("first part\n" > "second line\n" > "third line\n") > > I find it an elegant way of building strings and would be sad to see it > go. Adding trailing '+' signs is ugly. > Currently at least possible, though doubtless some people won't like the left-hand alignment, is call_something("""\ first part second part third part """) Alas if the proposal to remove the continuation backslash goes through this may not remain available to us. I realise that the arrival of Py3 means all these are up for grabs, but don't think any of them are really warty enough to require removal. I take the point that octal constants are counter-intuitive and wouldn't be too disappointed by their removal. I still think Icon had the right answer there in allowing an explicit decimal radix in constants, so 16 as a binary constant would be 10000r2, or 10r16. IIRC it still allowed 0x10 as well (though Tim may shoot me down there). regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! 
-------------- Thank You for Reading ---------------- From steve at holdenweb.com Fri May 4 21:00:38 2007 From: steve at holdenweb.com (Steve Holden) Date: Fri, 04 May 2007 15:00:38 -0400 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <17977.308.192435.48545@montanaro.dyndns.org> References: <4638B151.6020901@voidspace.org.uk> <4638CB97.1040503@activestate.com> <17977.308.192435.48545@montanaro.dyndns.org> Message-ID: skip at pobox.com wrote: > Trent> But if you don't want the EOLs? Example from some code of mine: > > Trent> raise MakeError("extracting '%s' in '%s' did not create the " > Trent> "directory that the Python build will expect: " > Trent> "'%s'" % (src_pkg, dst_dir, dst)) > > Trent> I use this kind of thing frequently. Don't know if others > Trent> consider it bad style. > > I use it all the time. For example, to build up (what I consider to be) > readable SQL queries: > > rows = self.executesql("select cities.city, state, country" > " from cities, venues, events, addresses" > " where cities.city like %s" > " and events.active = 1" > " and venues.address = addresses.id" > " and addresses.city = cities.id" > " and events.venue = venues.id", > (city,)) > > I would be disappointed it string literal concatention went away. > Tripe-quoted strings are much easier here, and SQL is insensitive to the newlines and additional spaces. Why not just use rows = self.executesql("""select cities.city, state, country from cities, venues, events, addresses where cities.city like %s and events.active = 1 and venues.address = addresses.id and addresses.city = cities.id and events.venue = venues.id""", (city,)) It also gives you better error messages from most database back-ends. I realise it makes the constants slightly longer, but if that's an issue I'd have thought people would want to indent code with tabs and not spaces. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From jimjjewett at gmail.com Fri May 4 21:09:46 2007 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 4 May 2007 15:09:46 -0400 Subject: [Python-Dev] updated PEP3125, Remove Backslash Continuation Message-ID: Major rewrite. The inside-a-string continuation is separated from the general continuation. The alternatives section is expaned to als list Andrew Koenig's improved inside-expressions variant, since that is a real contender. If anyone feels I haven't acknowledged their concerns, please tell me. -------------- PEP: 3125 Title: Remove Backslash Continuation Version: $Revision$ Last-Modified: $Date$ Author: Jim J. Jewett Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 29-Apr-2007 Post-History: 29-Apr-2007, 30-Apr-2007, 04-May-2007 Abstract ======== Python initially inherited its parsing from C. While this has been generally useful, there are some remnants which have been less useful for python, and should be eliminated. This PEP proposes elimination of terminal ``\`` as a marker for line continuation. Motivation ========== One goal for Python 3000 should be to simplify the language by removing unnecessary or duplicated features. 
There are currently several ways to indicate that a logical line is continued on the following physical line. The other continuation methods are easily explained as a logical consequence of the semantics they provide; ``\`` is simply an escape character that needs to be memorized. Existing Line Continuation Methods ================================== Parenthetical Expression - ([{}]) --------------------------------- Open a parenthetical expression. It doesn't matter whether people view the "line" as continuing; they do immediately recognize that the expression needs to be closed before the statement can end. An examples using each of (), [], and {}:: def fn(long_argname1, long_argname2): settings = {"background": "random noise" "volume": "barely audible"} restrictions = ["Warrantee void if used", "Notice must be recieved by yesterday" "Not responsible for sales pitch"] Note that it is always possible to parenthesize an expression, but it can seem odd to parenthesize an expression that needs them only for the line break:: assert val>4, ( "val is too small") Triple-Quoted Strings --------------------- Open a triple-quoted string; again, people recognize that the string needs to finish before the next statement starts. banner_message = """ Satisfaction Guaranteed, or DOUBLE YOUR MONEY BACK!!! some minor restrictions apply""" Terminal ``\`` in the general case ---------------------------------- A terminal ``\`` indicates that the logical line is continued on the following physical line (after whitespace). There are no particular semantics associated with this. This form is never required, although it may look better (particularly for people with a C language background) in some cases:: >>> assert val>4, \ "val is too small" Also note that the ``\`` must be the final character in the line. If your editor navigation can add whitespace to the end of a line, that invisible change will alter the semantics of the program. Fortunately, the typical result is only a syntax error, rather than a runtime bug:: >>> assert val>4, \ "val is too small" SyntaxError: unexpected character after line continuation character This PEP proposes to eliminate this redundant and potentially confusing alternative. Terminal ``\`` within a string ------------------------------ A terminal ``\`` within a single-quoted string, at the end of the line. This is arguably a special case of the terminal ``\``, but it is a special case that may be worth keeping. >>> "abd\ def" 'abd def' + Many of the objections to removing ``\`` termination were really just objections to removing it within literal strings; several people clarified that they want to keep this literal-string usage, but don't mind losing the general case. + The use of ``\`` for an escape character within strings is well known. - But note that this particular usage is odd, because the escaped character (the newline) is invisible, and the special treatment is to delete the character. That said, the ``\`` of ``\(newline)`` is still an escape which changes the meaning of the following character. Alternate Proposals =================== Several people have suggested alternative ways of marking the line end. Most of these were rejected for not actually simplifying things. The one exception was to let any unfished expression signify a line continuation, possibly in conjunction with increased indentation. This is attractive because it is a generalization of the rule for parentheses. 
The initial objections to this were: - The amount of whitespace may be contentious; expression continuation should not be confused with opening a new suite. - The "expression continuation" markers are not as clearly marked in Python as the grouping punctuation "(), [], {}" marks are:: # Plus needs another operand, so the line continues "abc" + "def" # String ends an expression, so the line does not # not continue. The next line is a syntax error because # unary plus does not apply to strings. "abc" + "def" - Guido objected for technical reasons. [#dedent]_ The most obvious implementation would require allowing INDENT or DEDENT tokens anywhere, or at least in a widely expanded (and ill-defined) set of locations. While this is concern only for the internal parsing mechanism (rather than for users), it would be a major new source of complexity. Andrew Koenig then pointed out [#lexical]_ a better implementation strategy, and said that it had worked quite well in other languages. [#snocone]_ The improved suggestion boiled down to:: The whitespace that follows an (operator or) open bracket or parenthesis can include newline characters. It would be implemented at a very low lexical level -- even before the decision is made to turn a newline followed by spaces into an INDENT or DEDENT token. There is still some concern that it could mask bugs, as in this example [#guidobughide]_:: # Used to be y+1, the 1 got dropped. Syntax Error (today) # would become nonsense. x = y+ f(x) Requiring that the continuation be indented more than the initial line would add both safety and complexity. Open Issues =========== + Should ``\``-continuation be removed even inside strings? + Should the continuation markers be expanced from just ([{}]) to include lines ending with an operator? + As a safety measure, should the continuation line be required to be more indented than the initial line? References ========== .. [#dedent] (email subject) PEP 30XZ: Simplified Parsing, van Rossum http://mail.python.org/pipermail/python-3000/2007-April/007063.html .. [#lexical] (email subject) PEP-3125 -- remove backslash continuation, Koenig http://mail.python.org/pipermail/python-3000/2007-May/007237.html .. [#snocone] The Snocone Programming Language, Koenig http://www.snobol4.com/report.htm .. [#guidobughide] (email subject) PEP-3125 -- remove backslash continuation, van Rossum http://mail.python.org/pipermail/python-3000/2007-May/007244.html Copyright ========= This document has been placed in the public domain. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From baptiste13 at altern.org Fri May 4 22:16:17 2007 From: baptiste13 at altern.org (Baptiste Carvello) Date: Fri, 04 May 2007 22:16:17 +0200 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: <4638B151.6020901@voidspace.org.uk> Message-ID: Steven Bethard a ?crit : > On 5/2/07, Michael Foord wrote: >> Implicit string concatenation is massively useful for creating long >> strings in a readable way though: >> >> call_something("first part\n" >> "second line\n" >> "third line\n") >> >> I find it an elegant way of building strings and would be sad to see it >> go. Adding trailing '+' signs is ugly. > > You'll still have textwrap.dedent:: > > call_something(dedent('''\ > first part > second line > third line > ''')) > > And using textwrap.dedent, you don't have to remember to add the \n at > the end of every line. 
> > STeVe maybe we could have a "dedent" literal that would remove the first newline and all indentation so that you can just write: call_something( d''' first part second line third line ''' ) Cheers Baptiste From mike.klaas at gmail.com Fri May 4 22:45:00 2007 From: mike.klaas at gmail.com (Mike Klaas) Date: Fri, 4 May 2007 13:45:00 -0700 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: References: <4638B151.6020901@voidspace.org.uk> Message-ID: <3d2ce8cb0705041345m5b5d2b30oe11b0d392e7324cd@mail.gmail.com> On 5/4/07, Baptiste Carvello wrote: > maybe we could have a "dedent" literal that would remove the first newline and > all indentation so that you can just write: > > call_something( d''' > first part > second line > third line > ''' ) Surely from textwrap import dedent as d is close enough? -Mike From arigo at tunes.org Sat May 5 12:55:18 2007 From: arigo at tunes.org (Armin Rigo) Date: Sat, 5 May 2007 12:55:18 +0200 Subject: [Python-Dev] PyInt_AsSsize_t on x64 In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CD57DB7E@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CD57DB7E@exchis.ccp.ad.local> Message-ID: <20070505105518.GA26182@code0.codespeak.net> Hi Kristj?n, On Thu, May 03, 2007 at 03:57:26PM +0000, Kristj?n Valur J?nsson wrote: > if (nb->nb_long != 0) { > io = (PyIntObject*) (*nb->nb_long) (op); > } else { > io = (PyIntObject*) (*nb->nb_int) (op); > } > Now, how to fix this? Should the code in intobject.c catch the > AttributeError, maybe, and continue to the nb_int? The problem is specific to old-style classes: in principle, only them have all slots non-null even if they don't really define the corresponding special methods. With anything else than old-style classes you don't get an AttributeError if a special method doesn't exist, which makes checking for AttributeError in PyInt_AsSsize_t() strange. Given that the __long__ vs __int__ difference doesn't really make sense any more nowadays, I'd think it would be saner to change the nb_long slot of old-style instances instead of PyInt_AsSsize_t(). The logic would be that if the nb_long slot implementation finds no "__long__" attribute on the instance it tries with "__int__" instead. A bientot, Armin. From arigo at tunes.org Sat May 5 12:57:28 2007 From: arigo at tunes.org (Armin Rigo) Date: Sat, 5 May 2007 12:57:28 +0200 Subject: [Python-Dev] x64 and the testsuite In-Reply-To: <463A1DFC.6000503@gmail.com> References: <4E9372E6B2234D4F859320D896059A9508CD57DB99@exchis.ccp.ad.local> <463A1DFC.6000503@gmail.com> Message-ID: <20070505105728.GB26182@code0.codespeak.net> Hi Kristj?n, On Thu, May 03, 2007 at 07:38:04PM +0200, ?iga Seilnacht wrote: > Those tests should be fixed to use test.test_support.MAX_Py_ssize_t instead of sys.maxint. See also the bigmemtest() and bigaddrspacetest() decorators in test_support. A bientot, Armin. 
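A minimal sketch of the suggested change, assuming a test that needs a 64-bit Py_ssize_t to run; MAX_Py_ssize_t is the test_support attribute mentioned above, and REQUIRED_SIZE is only an illustrative threshold:

    import sys
    from test import test_support

    # sys.maxint reflects the platform's C long (still 2**31 - 1 on 64-bit
    # Windows), while MAX_Py_ssize_t reflects Py_ssize_t, so the latter is
    # the right bound when deciding whether a huge-object test can run.
    REQUIRED_SIZE = 2 ** 32 + 10

    if test_support.MAX_Py_ssize_t < REQUIRED_SIZE:
        print "skipping: requires a 64-bit Py_ssize_t"
    else:
        print "would allocate %d items here" % REQUIRED_SIZE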
From kristjan at ccpgames.com Sat May 5 13:24:26 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 5 May 2007 11:24:26 +0000 Subject: [Python-Dev] PyInt_AsSsize_t on x64 In-Reply-To: <20070505105518.GA26182@code0.codespeak.net> References: <4E9372E6B2234D4F859320D896059A9508CD57DB7E@exchis.ccp.ad.local> <20070505105518.GA26182@code0.codespeak.net> Message-ID: <4E9372E6B2234D4F859320D896059A9508CDDDECB4@exchis.ccp.ad.local> > -----Original Message----- > From: Armin Rigo [mailto:arigo at tunes.org] > Given that the __long__ vs __int__ difference doesn't really make sense > any more nowadays, I'd think it would be saner to change the nb_long > slot of old-style instances instead of PyInt_AsSsize_t(). The logic > would be that if the nb_long slot implementation finds no "__long__" > attribute on the instance it tries with "__int__" instead. > That sounds fine to me. I'll be happy to make the change (and revert the change to intobject.c) if everyone concurs. From mal at egenix.com Sat May 5 14:49:51 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Sat, 05 May 2007 14:49:51 +0200 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> Message-ID: <463C7D6F.80705@egenix.com> On 2007-05-04 19:51, Guido van Rossum wrote: > [-python-dev] > > On 5/4/07, Fred L. Drake, Jr. wrote: >> On Friday 04 May 2007, M.-A. Lemburg wrote: >> > I also suggest making all bytes literals immutable to avoid running >> > into any issues like the above. >> >> +1 from me. > > Rather than adding immutability to bytes objects (which has big > implementation and type checking implications), consider using > buffer(b"123") as an immutable bytes literal. You can freely > concatenate and compare buffer objects with bytes objects. I like Georg's idea of having an immutable bytes subclass. b"abc" could then be a shortcut constructor for this subclass. In general, I don't think it's a good idea to have literals turn into mutable objects, since literals are normally perceived as being constant. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 05 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From aahz at pythoncraft.com Sat May 5 17:00:35 2007 From: aahz at pythoncraft.com (Aahz) Date: Sat, 5 May 2007 08:00:35 -0700 Subject: [Python-Dev] Byte literals (was Re: [Python-checkins] Changing string constants to byte arrays ( r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py )) In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> Message-ID: <20070505150035.GA16303@panix.com> On Fri, May 04, 2007, Guido van Rossum wrote: > > [-python-dev] > > On 5/4/07, Fred L. Drake, Jr. wrote: >> On Friday 04 May 2007, M.-A. 
Lemburg wrote: >>> >>> I also suggest making all bytes literals immutable to avoid running >>> into any issues like the above. >> >> +1 from me. > > Rather than adding immutability to bytes objects (which has big > implementation and type checking implications), consider using > buffer(b"123") as an immutable bytes literal. You can freely > concatenate and compare buffer objects with bytes objects. I'm with MAL and Fred on making literals immutable -- that's safe and lots of newbies will need to use byte literals early in their Python experience if they pick up Python to operate on network data. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From steven.bethard at gmail.com Sat May 5 18:11:56 2007 From: steven.bethard at gmail.com (Steven Bethard) Date: Sat, 5 May 2007 10:11:56 -0600 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: <463C7D6F.80705@egenix.com> References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> Message-ID: On 5/5/07, M.-A. Lemburg wrote: > On 2007-05-04 19:51, Guido van Rossum wrote: > > [-python-dev] > > > > On 5/4/07, Fred L. Drake, Jr. wrote: > >> On Friday 04 May 2007, M.-A. Lemburg wrote: > >> > I also suggest making all bytes literals immutable to avoid running > >> > into any issues like the above. > >> > >> +1 from me. > > > > Rather than adding immutability to bytes objects (which has big > > implementation and type checking implications), consider using > > buffer(b"123") as an immutable bytes literal. You can freely > > concatenate and compare buffer objects with bytes objects. > > I like Georg's idea of having an immutable bytes subclass. > b"abc" could then be a shortcut constructor for this subclass. > > In general, I don't think it's a good idea to have literals > turn into mutable objects, since literals are normally perceived > as being constant. Does that mean you want list literals to be immutable too? lst = ['a', 'b', 'c'] lst.append('d') # raises an error? STeVe -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy From fdrake at acm.org Sat May 5 19:34:44 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sat, 5 May 2007 13:34:44 -0400 Subject: [Python-Dev] Byte literals (was Re: [Python-checkins] Changing string constants to byte arrays ( r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py )) In-Reply-To: <20070505150035.GA16303@panix.com> References: <20070504130514.640D01E4018@bag.python.org> <20070505150035.GA16303@panix.com> Message-ID: <200705051334.45120.fdrake@acm.org> On Saturday 05 May 2007, Aahz wrote: > I'm with MAL and Fred on making literals immutable -- that's safe and > lots of newbies will need to use byte literals early in their Python > experience if they pick up Python to operate on network data. Yes; there are lots of places where bytes literals will be used the way str literals are today. buffer(b'...') might be good enough, but it seems more than a little idiomatic, and doesn't seem particularly readable. I'm not suggesting that /all/ literals result in constants, but bytes literals seem like a case where what's wanted is the value. If b'...' 
results in a new object on every reference, that's a lot of overhead for a network protocol implementation, where the data is just going to be written to a socket or concatenated with other data. An immutable bytes type would be very useful as a dictionary key as well, and more space-efficient than tuple(b'...'). Whether there should be one type with a flag indicating mutability, or two separate types (as with set and frozenset), I'm not sure. The later offers some small performance benefits, but I don't expect there's enough to really matter there. -Fred -- Fred L. Drake, Jr. From mal at egenix.com Sat May 5 21:47:45 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Sat, 05 May 2007 21:47:45 +0200 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> Message-ID: <463CDF61.5070709@egenix.com> On 2007-05-05 18:11, Steven Bethard wrote: > On 5/5/07, M.-A. Lemburg wrote: >> On 2007-05-04 19:51, Guido van Rossum wrote: >> > [-python-dev] >> > >> > On 5/4/07, Fred L. Drake, Jr. wrote: >> >> On Friday 04 May 2007, M.-A. Lemburg wrote: >> >> > I also suggest making all bytes literals immutable to avoid running >> >> > into any issues like the above. >> >> >> >> +1 from me. >> > >> > Rather than adding immutability to bytes objects (which has big >> > implementation and type checking implications), consider using >> > buffer(b"123") as an immutable bytes literal. You can freely >> > concatenate and compare buffer objects with bytes objects. >> >> I like Georg's idea of having an immutable bytes subclass. >> b"abc" could then be a shortcut constructor for this subclass. >> >> In general, I don't think it's a good idea to have literals >> turn into mutable objects, since literals are normally perceived >> as being constant. > > Does that mean you want list literals to be immutable too? > > lst = ['a', 'b', 'c'] > lst.append('d') # raises an error? Sorry, I was referring to Python literals: http://docs.python.org/ref/literals.html ie. strings and numeric constant values defined in a Python program. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 05 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From jcarlson at uci.edu Sat May 5 21:52:27 2007 From: jcarlson at uci.edu (Josiah Carlson) Date: Sat, 05 May 2007 12:52:27 -0700 Subject: [Python-Dev] Byte literals (was Re: [Python-checkins] Changing string constants to byte arrays ( r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py )) In-Reply-To: <200705051334.45120.fdrake@acm.org> References: <20070505150035.GA16303@panix.com> <200705051334.45120.fdrake@acm.org> Message-ID: <20070505124008.648D.JCARLSON@uci.edu> "Fred L. Drake, Jr." 
wrote: > > On Saturday 05 May 2007, Aahz wrote: > > I'm with MAL and Fred on making literals immutable -- that's safe and > > lots of newbies will need to use byte literals early in their Python > > experience if they pick up Python to operate on network data. > > Yes; there are lots of places where bytes literals will be used the way str > literals are today. buffer(b'...') might be good enough, but it seems more > than a little idiomatic, and doesn't seem particularly readable. > > I'm not suggesting that /all/ literals result in constants, but bytes literals > seem like a case where what's wanted is the value. If b'...' results in a > new object on every reference, that's a lot of overhead for a network > protocol implementation, where the data is just going to be written to a > socket or concatenated with other data. An immutable bytes type would be > very useful as a dictionary key as well, and more space-efficient than > tuple(b'...'). I was saying the exact same thing last summer. See my discussion with Martin about parsing/unmarshaling. What I expect will happen with bytes as dictionary keys is that people will end up subclassing dictionaries (with varying amounts of success and correctness) to do something like the following... class bytesKeys(dict): ... def __setitem__(self, key, value): if isinstance(key, bytes): key = key.decode('latin-1') else: raise KeyError("only bytes can be used as keys") dict.__setitem__(self, key, value) ... Is it optimal? No. Would it be nice to have immtable bytes? Yes. Do I think it will really be a problem in parsing/unmarshaling? I don't know, but the fact that there now exists a reasonable literal syntax b'...' rather than the previous bytes([1, 2, 3, ...]) means that we are coming much closer to having what really is about the best way to handle this; Python 2.x str. - Josiah From martin at v.loewis.de Sat May 5 23:20:15 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 05 May 2007 23:20:15 +0200 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> Message-ID: <463CF50F.2040603@v.loewis.de> >> In general, I don't think it's a good idea to have literals >> turn into mutable objects, since literals are normally perceived >> as being constant. > > Does that mean you want list literals to be immutable too? > > lst = ['a', 'b', 'c'] > lst.append('d') # raises an error? That's not a literal, it's a display. The difference is that a literal denotes the same object every time it is executed. A display creates a new object every time it is executed. (another difference is that a display is a constructed thing which may contain runtime-computed components, unlike a literal). So if bytes are mutable and also have source-level representation, they should be displays, not literals. Regards, Martin From steve at holdenweb.com Sat May 5 23:30:31 2007 From: steve at holdenweb.com (Steve Holden) Date: Sat, 05 May 2007 17:30:31 -0400 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> Message-ID: Steven Bethard wrote: > On 5/5/07, M.-A. Lemburg wrote: >> On 2007-05-04 19:51, Guido van Rossum wrote: >>> [-python-dev] >>> >>> On 5/4/07, Fred L. Drake, Jr. 
wrote: >>>> On Friday 04 May 2007, M.-A. Lemburg wrote: >>>> > I also suggest making all bytes literals immutable to avoid running >>>> > into any issues like the above. >>>> >>>> +1 from me. >>> Rather than adding immutability to bytes objects (which has big >>> implementation and type checking implications), consider using >>> buffer(b"123") as an immutable bytes literal. You can freely >>> concatenate and compare buffer objects with bytes objects. >> I like Georg's idea of having an immutable bytes subclass. >> b"abc" could then be a shortcut constructor for this subclass. >> >> In general, I don't think it's a good idea to have literals >> turn into mutable objects, since literals are normally perceived >> as being constant. > > Does that mean you want list literals to be immutable too? > > lst = ['a', 'b', 'c'] > lst.append('d') # raises an error? > > STeVe I think the point is rather that changes to the list linked by lst after the initial assignment shouldn't result in the assignemtn of a different value to lst if the statement is executed again (as part of a function body or in a loop, for example). regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From steven.bethard at gmail.com Sun May 6 00:51:12 2007 From: steven.bethard at gmail.com (Steven Bethard) Date: Sat, 5 May 2007 16:51:12 -0600 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: <463CF50F.2040603@v.loewis.de> References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> <463CF50F.2040603@v.loewis.de> Message-ID: On 5/5/07, "Martin v. L?wis" wrote: > >> In general, I don't think it's a good idea to have literals > >> turn into mutable objects, since literals are normally perceived > >> as being constant. > > > > Does that mean you want list literals to be immutable too? > > > > lst = ['a', 'b', 'c'] > > lst.append('d') # raises an error? > > That's not a literal, it's a display. The difference is that > a literal denotes the same object every time it is executed. > A display creates a new object every time it is executed. > (another difference is that a display is a constructed thing > which may contain runtime-computed components, unlike a > literal). > > So if bytes are mutable and also have source-level > representation, they should be displays, not literals. So is having mutable bytes just a matter of calling them "byte displays" instead of "byte literals" or does that also require changing something in the back end? STeVe -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. 
--- Bucky Katt, Get Fuzzy From greg.ewing at canterbury.ac.nz Sun May 6 01:25:26 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 06 May 2007 11:25:26 +1200 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> Message-ID: <463D1266.3090709@canterbury.ac.nz> Steven Bethard wrote: > Does that mean you want list literals to be immutable too? There are no "list literals" in Python, only expressions that construct lists. You might argue that b"abc" is not a literal either, but an expression that constructs a bytes object. However, it *looks* so much like a string literal that this would be a difficult distinction to keep in mind, and very likely to trip people up. -- Greg From status at bugs.python.org Sun May 6 02:00:55 2007 From: status at bugs.python.org (Tracker) Date: Sun, 6 May 2007 00:00:55 +0000 (UTC) Subject: [Python-Dev] Summary of Tracker Issues Message-ID: <20070506000055.7023278224@psf.upfronthosting.co.za> ACTIVITY SUMMARY (04/29/07 - 05/06/07) Tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 1650 open ( +1) / 8584 closed ( +0) / 10234 total ( +1) Average duration of open issues: 784 days. Median duration of open issues: 736 days. Open Issues Breakdown open 1650 ( +1) pending 0 ( +0) Issues Created Or Reopened (1) ______________________________ yahoo 05/02/07 http://bugs.python.org/issue1030 created step -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070506/909ec738/attachment.htm From kbk at shore.net Sun May 6 05:34:37 2007 From: kbk at shore.net (Kurt B. 
Kaiser) Date: Sat, 5 May 2007 23:34:37 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200705060334.l463Ybt2011846@bayview.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 360 open ( +4) / 3760 closed ( +4) / 4120 total ( +8) Bugs : 971 open ( +3) / 6683 closed (+10) / 7654 total (+13) RFE : 257 open ( +3) / 282 closed ( +0) / 539 total ( +3) New / Reopened Patches ______________________ test_1686475 of test_os & pagefile.sys (2007-04-28) http://python.org/sf/1709112 opened by A.B., Khalid run test_1565150(test_os.py) only on NTFS (2007-04-29) http://python.org/sf/1709599 opened by Hirokazu Yamamoto Update locale.__all__ (2007-04-30) CLOSED http://python.org/sf/1710352 opened by Humberto Di?genes PEP 318 -- add resolution and XRef (2007-05-01) CLOSED http://python.org/sf/1710853 opened by Jim Jewett PEP 3132: extended unpacking (2007-05-02) http://python.org/sf/1711529 opened by Georg Brandl syslog syscall support for SysLogLogger (2007-05-02) http://python.org/sf/1711603 opened by Luke-Jr fix for bug 1712742 (2007-05-04) http://python.org/sf/1713041 opened by Raghuram Devarakonda Fix warnings related to PyLong_FromVoidPtr (2007-05-05) http://python.org/sf/1713234 opened by Hirokazu Yamamoto Patches Closed ______________ Picky floats (2006-04-28) http://python.org/sf/1478364 closed by loewis Update locale.__all__ (2007-05-01) http://python.org/sf/1710352 closed by gbrandl Use MoveFileEx() to implement os.rename() on windows (2007-04-20) http://python.org/sf/1704547 closed by loewis PEP 318 -- add resolution and XRef (2007-05-01) http://python.org/sf/1710853 closed by gbrandl New / Reopened Bugs ___________________ test_1686475 fails when pagefile.sys does not exist (2007-04-28) CLOSED http://python.org/sf/1709282 opened by Calvin Spealman test_1686475 fails because pagefile.sys does not exist (2007-04-28) http://python.org/sf/1709284 opened by Calvin Spealman struct.calcsize() incorrect (2007-04-29) CLOSED http://python.org/sf/1709506 opened by JoelBondurant Tutorial - Section 8.3 - (2007-04-30) CLOSED http://python.org/sf/1710295 opened by elrond79 zipfile.ZipFile behavior inconsistent. 
(2007-05-01) http://python.org/sf/1710703 opened by Mark Flacy Ctrl+Shift block marking by words (2007-05-01) http://python.org/sf/1710718 opened by zorkin subprocess must escape redirection characters under win32 (2007-05-01) CLOSED http://python.org/sf/1710802 opened by Patrick M?zard CGIHttpServer leaves traces of previous requests in env (2007-05-03) http://python.org/sf/1711605 opened by Steve Cassidy CGIHttpServer fails if python exe has spaces (2007-05-03) http://python.org/sf/1711608 opened by Steve Cassidy SequenceMatcher bug with insert/delete block after "replace" (2007-05-03) http://python.org/sf/1711800 opened by Christian Hammond __getslice__ changes integer arguments (2007-05-03) http://python.org/sf/1712236 opened by Imri Goldberg Cannot use dict with unicode keys as keyword arguments (2007-05-04) http://python.org/sf/1712419 opened by Viktor Ferenczi urllib.quote throws exception on Unicode URL (2007-05-04) http://python.org/sf/1712522 opened by John Nagle pprint handles depth argument incorrectly (2007-05-04) http://python.org/sf/1712742 opened by Dmitrii Tisnek character set in Japanese on Ubuntu distribution (2007-05-04) http://python.org/sf/1713252 opened by Christopher Grell Error inside logging module's documentation (2007-05-05) CLOSED http://python.org/sf/1713535 opened by billiejoex Bugs Closed ___________ TimedRotatingFileHandler's doRollover opens file in "w" mode (2007-04-27) http://python.org/sf/1708538 closed by vsajip test_1686475 fails when pagefile.sys does not exist (2007-04-28) http://python.org/sf/1709282 deleted by ironfroggy struct.calcsize() incorrect (2007-04-29) http://python.org/sf/1709506 closed by loewis Tutorial - Section 8.3 - (2007-04-30) http://python.org/sf/1710295 closed by gbrandl Portability issue: os.rename behaves differently on win32 (2007-04-03) http://python.org/sf/1693753 closed by loewis subprocess must escape redirection characters under win32 (2007-05-01) http://python.org/sf/1710802 closed by astrand Bypassing __dict__ readonlyness (2005-09-24) http://python.org/sf/1303614 closed by arigo subclassing ModuleType and another built-in type (2005-04-01) http://python.org/sf/1174712 closed by arigo Error inside logging module documentation (2007-05-05) http://python.org/sf/1713535 closed by gbrandl New / Reopened RFE __________________ commands module (2007-05-05) http://python.org/sf/1713624 opened by Joseph Armbruster From martin at v.loewis.de Sun May 6 08:53:13 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 06 May 2007 08:53:13 +0200 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> <463CF50F.2040603@v.loewis.de> Message-ID: <463D7B59.6090405@v.loewis.de> >> That's not a literal, it's a display. The difference is that >> a literal denotes the same object every time it is executed. >> A display creates a new object every time it is executed. >> (another difference is that a display is a constructed thing >> which may contain runtime-computed components, unlike a >> literal). >> >> So if bytes are mutable and also have source-level >> representation, they should be displays, not literals. > > So is having mutable bytes just a matter of calling them "byte > displays" instead of "byte literals" or does that also require > changing something in the back end? It's certainly also an issue of language semantics (i.e. 
changes to interpreter code). There are a number of options: 1. don't support creation of byte values through syntax. Instead, create bytes through a constructor function. 2. if there is syntax support, make it a display: every time you execute a bytes display, create a new value, which can then be mutated. 3. if you want it to be a literal, make it immutable: change the type, or add a flag so that it is immutable. Then put it into the co_consts array of the code object. The original complaint was that it shouldn't be in co_consts if it is mutable. In case these three options aren't clear yet, some examples: 1. def foo(): return bytes([1,2,3]) print foo() is foo() # False x = foo() x[0] = 5 # supported 2. def foo(): return b"\x01\x02\x03" print foo() is foo() # False x = foo() x[0] = 5 # supported 3. def foo(): return b"\x01\x02\x03" print foo() is foo() # True x = foo() x[0] = 5 # TypeError HTH, Martin From g.brandl at gmx.net Sun May 6 19:23:47 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 06 May 2007 19:23:47 +0200 Subject: [Python-Dev] ImportError on no permission Message-ID: Today, I got a request regarding importing semantics. When a module file cannot be opened because of, say, lacking read permission, the rest of sys.path will be tried, and if nothing else is found, you get "no module named foo". The reporter claimed, and I understand that, that this is a pain to debug and it would be good to at least add a better message to the import error. Now, why don't we change the semantics as follows: if a file with matching name exists (in import.c::find_module), but opening fails, ImportError is raised immediately with the concrete error message, and without trying the rest of sys.path. That shouldn't cause any working and sane setup to break, or did I overlook something obvious here? Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From martin at v.loewis.de Sun May 6 20:39:45 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 06 May 2007 20:39:45 +0200 Subject: [Python-Dev] ImportError on no permission In-Reply-To: References: Message-ID: <463E20F1.5000102@v.loewis.de> > Now, why don't we change the semantics as follows: if a file with matching name > exists (in import.c::find_module), but opening fails, ImportError is raised > immediately with the concrete error message, and without trying the rest of > sys.path. That shouldn't cause any working and sane setup to break, or did I > overlook something obvious here? I wonder how this would behave if a directory on sys.path was unreadable. You might get an ImportError on *any* import, as it tries the unreadable directory first, gets a permission error, and immediately aborts. Now, I think it is quite possible that you have inaccessible directories on sys.path, e.g. when you inherit PYTHONPATH from a parent process. So I would rather let importing proceed, and add a note to the error message that some files could not be read. Regards, Martin From brett at python.org Sun May 6 20:51:32 2007 From: brett at python.org (Brett Cannon) Date: Sun, 6 May 2007 11:51:32 -0700 Subject: [Python-Dev] ImportError on no permission In-Reply-To: <463E20F1.5000102@v.loewis.de> References: <463E20F1.5000102@v.loewis.de> Message-ID: On 5/6/07, "Martin v. 
L?wis" wrote: > > > Now, why don't we change the semantics as follows: if a file with > matching name > > exists (in import.c::find_module), but opening fails, ImportError is > raised > > immediately with the concrete error message, and without trying the rest > of > > sys.path. That shouldn't cause any working and sane setup to break, or > did I > > overlook something obvious here? > > I wonder how this would behave if a directory on sys.path was > unreadable. You might get an ImportError on *any* import, as > it tries the unreadable directory first, gets a permission error, > and immediately aborts. > > Now, I think it is quite possible that you have inaccessible > directories on sys.path, e.g. when you inherit PYTHONPATH from > a parent process. > > So I would rather let importing proceed, and add a note to the > error message that some files could not be read. How about an ImportWarning instead? That way people can have either have import halt immediately, or continue (with or without a message). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070506/124fc4a9/attachment.html From martin at v.loewis.de Sun May 6 21:06:50 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 06 May 2007 21:06:50 +0200 Subject: [Python-Dev] ImportError on no permission In-Reply-To: References: <463E20F1.5000102@v.loewis.de> Message-ID: <463E274A.5080207@v.loewis.de> > How about an ImportWarning instead? That way people can have either > have import halt immediately, or continue (with or without a message). If I put my dislike of warnings aside: yes, that would also work. Martin From tjreedy at udel.edu Sun May 6 21:48:09 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 6 May 2007 15:48:09 -0400 Subject: [Python-Dev] ImportError on no permission References: <463E20F1.5000102@v.loewis.de> Message-ID: ""Martin v. L?wis"" wrote in message news:463E20F1.5000102 at v.loewis.de... |> Now, why don't we change the semantics as follows: if a file with matching name | > exists (in import.c::find_module), but opening fails, ImportError is raised | > immediately with the concrete error message, and without trying the rest of | > sys.path. That shouldn't cause any working and sane setup to break, or did I | > overlook something obvious here? | | I wonder how this would behave if a directory on sys.path was | unreadable. I understood Brett to be talking about a different case where the directory *is* readable and the target file shows up in the directory list. In this limited case, stopping seems sane to me. | You might get an ImportError on *any* import, as | it tries the unreadable directory first, gets a permission error, | and immediately aborts. Not if the patch is properly and narrowly written to only apply to unreadable files in readable directories. tjr From g.brandl at gmx.net Sun May 6 23:18:34 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 06 May 2007 23:18:34 +0200 Subject: [Python-Dev] ImportError on no permission In-Reply-To: <463E20F1.5000102@v.loewis.de> References: <463E20F1.5000102@v.loewis.de> Message-ID: Martin v. L?wis schrieb: >> Now, why don't we change the semantics as follows: if a file with matching name >> exists (in import.c::find_module), but opening fails, ImportError is raised >> immediately with the concrete error message, and without trying the rest of >> sys.path. 
That shouldn't cause any working and sane setup to break, or did I >> overlook something obvious here? > > I wonder how this would behave if a directory on sys.path was > unreadable. You might get an ImportError on *any* import, as > it tries the unreadable directory first, gets a permission error, > and immediately aborts. That case should be handled differently, yes. My case is that you have a file in the directory with the correct name, but opening it fails (this obviously requires a two-step process, first find the file, then open it). > Now, I think it is quite possible that you have inaccessible > directories on sys.path, e.g. when you inherit PYTHONPATH from > a parent process. > > So I would rather let importing proceed, and add a note to the > error message that some files could not be read. The warning idea is also fine with me, if it's limited to the above case. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From tdelaney at avaya.com Mon May 7 00:34:52 2007 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Mon, 7 May 2007 08:34:52 +1000 Subject: [Python-Dev] Pre-pre PEP for 'super' keyword Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1ED82@au3010avexu1.global.avaya.com> Steve Holden wrote: > Tim Delaney wrote: >> BTW, one of my test cases involves multiple super calls in the same >> method - there is a *very* large performance improvement by >> instantiating it once. >> > And how does speed deteriorate for methods with no uses of super at > all (which will, I suspect, be in the majority)? Zero - in those cases, no super instance is instantiated. There is a small one-time cost when the class is constructed in the reference implementation (due to the need to parse the bytecode to determine if if 'super' is used) but in the final implementation that information will be gathered during compilation. Tim Delaney From ironfroggy at gmail.com Mon May 7 06:41:29 2007 From: ironfroggy at gmail.com (Calvin Spealman) Date: Mon, 7 May 2007 00:41:29 -0400 Subject: [Python-Dev] Commit Keys Message-ID: <76fd5acf0705062141u19e221c6pac97a6096f61f5d1@mail.gmail.com> I lost the key I originally gave for commiting my summaries. Who do I talk to about fixing that? -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://ironfroggy-code.blogspot.com/ From nnorwitz at gmail.com Mon May 7 06:46:11 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 6 May 2007 21:46:11 -0700 Subject: [Python-Dev] Commit Keys In-Reply-To: <76fd5acf0705062141u19e221c6pac97a6096f61f5d1@mail.gmail.com> References: <76fd5acf0705062141u19e221c6pac97a6096f61f5d1@mail.gmail.com> Message-ID: On 5/6/07, Calvin Spealman wrote: > I lost the key I originally gave for commiting my summaries. Who do I > talk to about fixing that? Send your new key to pydotorg. -- n From martin at v.loewis.de Mon May 7 07:32:54 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2007 07:32:54 +0200 Subject: [Python-Dev] Commit Keys In-Reply-To: References: <76fd5acf0705062141u19e221c6pac97a6096f61f5d1@mail.gmail.com> Message-ID: <463EBA06.2090209@v.loewis.de> Neal Norwitz schrieb: > On 5/6/07, Calvin Spealman wrote: >> I lost the key I originally gave for commiting my summaries. 
Who do I >> talk to about fixing that? > > Send your new key to pydotorg. -- n In doing so, please indicate whether you just lost it, or somebody else may have found it. Regards, Martin From skip at pobox.com Sun May 6 14:09:31 2007 From: skip at pobox.com (skip at pobox.com) Date: Sun, 6 May 2007 07:09:31 -0500 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: <463D7B59.6090405@v.loewis.de> References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> <463CF50F.2040603@v.loewis.de> <463D7B59.6090405@v.loewis.de> Message-ID: <17981.50555.467520.132107@montanaro.dyndns.org> >> So is having mutable bytes just a matter of calling them "byte >> displays" instead of "byte literals" or does that also require >> changing something in the back end? Martin> It's certainly also an issue of language semantics (i.e. changes Martin> to interpreter code). There are a number of options: Martin> 1. don't support creation of byte values through syntax. Instead, Martin> create bytes through a constructor function. I don't read the py3k mailing list. I presume the distinction between "display" and "literal" is old hat to those folks. I've never seen the term. Can someone explain it? Thx, Skip From ncoghlan at gmail.com Mon May 7 18:00:02 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 08 May 2007 02:00:02 +1000 Subject: [Python-Dev] Changing string constants to byte arrays in Py3k In-Reply-To: <17981.50555.467520.132107@montanaro.dyndns.org> References: <20070504130514.640D01E4018@bag.python.org> <463B6FB0.5070604@egenix.com> <200705041342.46028.fdrake@acm.org> <463C7D6F.80705@egenix.com> <463CF50F.2040603@v.loewis.de> <463D7B59.6090405@v.loewis.de> <17981.50555.467520.132107@montanaro.dyndns.org> Message-ID: <463F4D02.5080208@gmail.com> skip at pobox.com wrote: > >> So is having mutable bytes just a matter of calling them "byte > >> displays" instead of "byte literals" or does that also require > >> changing something in the back end? > > Martin> It's certainly also an issue of language semantics (i.e. changes > Martin> to interpreter code). There are a number of options: > Martin> 1. don't support creation of byte values through syntax. Instead, > Martin> create bytes through a constructor function. > > I don't read the py3k mailing list. I presume the distinction between > "display" and "literal" is old hat to those folks. I've never seen the > term. Can someone explain it? A literal refers to an immutable constant (i.e. 'assert 1 is 1' is allowed to be true) A display always creates a new mutable object (i.e. 'assert [] is []' is *required* to be false) The question is whether we have byte literals or displays in Py3k, and if we make them literals, whether it is still permissible for them to be mutable. Mutable objects pose all sorts of caching and aliasing problems that just don't come up with immutable objects like strings or numbers. For the work I do with low level hardware control, I suspect not having an immutable bytes variant to throw in a dictionary would be something of an inconvenience (that said, work only switched to Python 2.4 relatively recently, so I doubt Py3k will pose me any significant practical concerns on that front for quite some time :). I would personally like an interoperable bytes/frozenbytes pair (along the lines of set/frozenset) with a literal syntax to produce instances of the latter. 
However, I don't have a great deal of development time to devote to helping to make that happen. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From guido at python.org Mon May 7 19:42:41 2007 From: guido at python.org (Guido van Rossum) Date: Mon, 7 May 2007 10:42:41 -0700 Subject: [Python-Dev] Byte literals (was Re: [Python-checkins] Changing string constants to byte arrays ( r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py )) In-Reply-To: <20070505124008.648D.JCARLSON@uci.edu> References: <20070505150035.GA16303@panix.com> <200705051334.45120.fdrake@acm.org> <20070505124008.648D.JCARLSON@uci.edu> Message-ID: [+python-3000; replies please remove python-dev] On 5/5/07, Josiah Carlson wrote: > > "Fred L. Drake, Jr." wrote: > > > > On Saturday 05 May 2007, Aahz wrote: > > > I'm with MAL and Fred on making literals immutable -- that's safe and > > > lots of newbies will need to use byte literals early in their Python > > > experience if they pick up Python to operate on network data. > > > > Yes; there are lots of places where bytes literals will be used the way str > > literals are today. buffer(b'...') might be good enough, but it seems more > > than a little idiomatic, and doesn't seem particularly readable. > > > > I'm not suggesting that /all/ literals result in constants, but bytes literals > > seem like a case where what's wanted is the value. If b'...' results in a > > new object on every reference, that's a lot of overhead for a network > > protocol implementation, where the data is just going to be written to a > > socket or concatenated with other data. An immutable bytes type would be > > very useful as a dictionary key as well, and more space-efficient than > > tuple(b'...'). > > I was saying the exact same thing last summer. See my discussion with > Martin about parsing/unmarshaling. What I expect will happen with bytes > as dictionary keys is that people will end up subclassing dictionaries > (with varying amounts of success and correctness) to do something like > the following... > > class bytesKeys(dict): > ... > def __setitem__(self, key, value): > if isinstance(key, bytes): > key = key.decode('latin-1') > else: > raise KeyError("only bytes can be used as keys") > dict.__setitem__(self, key, value) > ... > > Is it optimal? No. Would it be nice to have immtable bytes? Yes. Do > I think it will really be a problem in parsing/unmarshaling? I don't > know, but the fact that there now exists a reasonable literal syntax b'...' > rather than the previous bytes([1, 2, 3, ...]) means that we are coming > much closer to having what really is about the best way to handle this; > Python 2.x str. I don't know how this will work out yet. I'm not convinced that having both mutable and immutable bytes is the right thing to do; but I'm also not convinced of the opposite. I am slowly working on the string/unicode unification, and so far, unfortunately, it is quite daunting to get rid of 8-bit strings even at the Python level let alone at the C level. 
I suggest that the following exercise, to be carried out in the py3k-struni branch, might be helpful: (1) change the socket module to return bytes instead of strings (it already takes bytes, by virtue of the buffer protocol); (2) change its makefile() method so that it uses the new io.py library, in particular the SocketIO wrapper there; (3) fix up the httplib module and perhaps other similar ones. Take copious notes while doing this. Anyone up for this? I will listen! (I'd do it myself but I don't know where I'd find the time). -- --Guido van Rossum (home page: http://www.python.org/~guido/) From nick at craig-wood.com Tue May 8 00:26:46 2007 From: nick at craig-wood.com (Nick Craig-Wood) Date: Mon, 7 May 2007 23:26:46 +0100 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <3d2ce8cb0705041345m5b5d2b30oe11b0d392e7324cd@mail.gmail.com> References: <4638B151.6020901@voidspace.org.uk> <3d2ce8cb0705041345m5b5d2b30oe11b0d392e7324cd@mail.gmail.com> Message-ID: <20070507222647.157A714C571@irishsea.home.craig-wood.com> Mike Klaas wrote: > On 5/4/07, Baptiste Carvello wrote: > > > maybe we could have a "dedent" literal that would remove the first newline and > > all indentation so that you can just write: > > > > call_something( d''' > > first part > > second line > > third line > > ''' ) > > Surely > > from textwrap import dedent as d > > is close enough? Apart from it happening at run time rather than compile time. -- Nick Craig-Wood -- http://www.craig-wood.com/nick From anthony at interlink.com.au Tue May 8 01:14:02 2007 From: anthony at interlink.com.au (Anthony Baxter) Date: Tue, 8 May 2007 09:14:02 +1000 Subject: [Python-Dev] best practices stdlib: purging xrange Message-ID: <200705080914.04578.anthony@interlink.com.au> I'd like to suggest that we remove all (or nearly all) uses of xrange from the stdlib. A quick scan shows that most of the usage of it is unnecessary. With it going away in 3.0, and it being informally deprecated anyway, it seems like a good thing to go away where possible. Any objections? Anthony -- Anthony Baxter It's never too late to have a happy childhood. From guido at python.org Tue May 8 02:03:43 2007 From: guido at python.org (Guido van Rossum) Date: Mon, 7 May 2007 17:03:43 -0700 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: <200705080914.04578.anthony@interlink.com.au> References: <200705080914.04578.anthony@interlink.com.au> Message-ID: But why bother? The 2to3 converter can do this for you. In a sense using range() is more likely to produce broken results in 3.0: if your code depends on the fact that range() returns a list, it is broken in 3.0, and 2to3 cannot help you here. But if you use list(xrange()) today, the converter will turn this into list(range()) in 3.0 and that will continue to work correctly. --Guido On 5/7/07, Anthony Baxter wrote: > I'd like to suggest that we remove all (or nearly all) uses of > xrange from the stdlib. A quick scan shows that most of the usage > of it is unnecessary. With it going away in 3.0, and it being > informally deprecated anyway, it seems like a good thing to go away > where possible. 
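To make the distinction concrete, a small illustrative sketch; the comments describing 3.0 behaviour assume the planned lazy range(), not current CPython:

    # Converts cleanly: 2to3 rewrites xrange() to range() and the meaning
    # is unchanged, because only iteration is relied upon.
    total = sum(i * i for i in xrange(1000))

    # Also safe: list(xrange(256)) becomes list(range(256)), still a list.
    table = list(xrange(256))

    # Relies on range() returning a list, so it silently changes meaning
    # once range() becomes lazy; 2to3 cannot flag this.
    indices = range(10)
    indices.append(10)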
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From skip at pobox.com Tue May 8 02:03:50 2007 From: skip at pobox.com (skip at pobox.com) Date: Mon, 7 May 2007 19:03:50 -0500 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <20070507222647.157A714C571@irishsea.home.craig-wood.com> References: <4638B151.6020901@voidspace.org.uk> <3d2ce8cb0705041345m5b5d2b30oe11b0d392e7324cd@mail.gmail.com> <20070507222647.157A714C571@irishsea.home.craig-wood.com> Message-ID: <17983.48742.128669.51208@montanaro.dyndns.org> >> Surely >> >> from textwrap import dedent as d >> >> is close enough? Nick> Apart from it happening at run time rather than compile time. And as someone else pointed out, what if you don't want each chunk of text terminated by a newline? Skip From ferringb at gmail.com Tue May 8 02:10:22 2007 From: ferringb at gmail.com (Brian Harring) Date: Mon, 7 May 2007 17:10:22 -0700 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: <200705080914.04578.anthony@interlink.com.au> References: <200705080914.04578.anthony@interlink.com.au> Message-ID: <20070508001022.GA25195@seldon> On Tue, May 08, 2007 at 09:14:02AM +1000, Anthony Baxter wrote: > I'd like to suggest that we remove all (or nearly all) uses of > xrange from the stdlib. A quick scan shows that most of the usage > of it is unnecessary. With it going away in 3.0, and it being > informally deprecated anyway, it seems like a good thing to go away > where possible. > > Any objections? Punt it when it's no longer useful (py3k); xrange exists for a reason- most usage just needs to iterate over a range of numbers (xrange), not instantiate a list of the range, then iterate over said range (range). Don't much see the point in making stdlib more wasteful in runtime for an "informally deprecated" func that lots of folks in the real world still use. ~brian -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070507/37a5ada5/attachment.pgp From tjreedy at udel.edu Tue May 8 02:21:31 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 7 May 2007 20:21:31 -0400 Subject: [Python-Dev] best practices stdlib: purging xrange References: <200705080914.04578.anthony@interlink.com.au> Message-ID: "Guido van Rossum" wrote in message news:ca471dc20705071703p54a9afc3yfe9693c5fe6e2f23 at mail.gmail.com... | But why bother? The 2to3 converter can do this for you. | | In a sense using range() is more likely to produce broken results in | 3.0: if your code depends on the fact that range() returns a list, it | is broken in 3.0, and 2to3 cannot help you here. But if you use | list(xrange()) today, the converter will turn this into list(range()) | in 3.0 and that will continue to work correctly. Just curious why 2to3 would not replace range() with list(range())? tjr From python at rcn.com Tue May 8 02:57:06 2007 From: python at rcn.com (Raymond Hettinger) Date: Mon, 7 May 2007 20:57:06 -0400 (EDT) Subject: [Python-Dev] best practices stdlib: purging xrange Message-ID: <20070507205706.BIJ25137@ms09.lnh.mail.rcn.net> > I'd like to suggest that we remove all (or nearly all) uses of > xrange from the stdlib. A quick scan shows that most of the usage > of it is unnecessary. With it going away in 3.0, and it being > informally deprecated anyway, it seems like a good thing to go away > where possible. > >Any objections? 
-1 It isn't "informally deprecated". The xrange() builtin has different performance characteristics and is still needed in Py2.x. Only in Py3k where range() becomes lazy like will the need disappear. Seriously, we should make every effort to make sure that Py3k doesn't unnecessarily backpropagate into an otherwise very stable codebase. An unwarranted s/xrange/range/g would be just one more reason to not upgrade to Py2.6. Raymond From guido at python.org Tue May 8 04:09:54 2007 From: guido at python.org (Guido van Rossum) Date: Mon, 7 May 2007 19:09:54 -0700 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: References: <200705080914.04578.anthony@interlink.com.au> Message-ID: On 5/7/07, Terry Reedy wrote: > > "Guido van Rossum" wrote in message > news:ca471dc20705071703p54a9afc3yfe9693c5fe6e2f23 at mail.gmail.com... > | But why bother? The 2to3 converter can do this for you. > | > | In a sense using range() is more likely to produce broken results in > | 3.0: if your code depends on the fact that range() returns a list, it > | is broken in 3.0, and 2to3 cannot help you here. But if you use > | list(xrange()) today, the converter will turn this into list(range()) > | in 3.0 and that will continue to work correctly. > > Just curious why 2to3 would not replace range() with list(range())? That's a good idea. But I'd like someone else to implement it... -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Tue May 8 14:45:05 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 08 May 2007 22:45:05 +1000 Subject: [Python-Dev] PEP 30XZ: Simplified Parsing In-Reply-To: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> References: <02b401c78d15$f04b6110$090a0a0a@enfoldsystems.local> Message-ID: <464070D1.6040701@gmail.com> Mark Hammond wrote: > Please add my -1 to the chorus here, for the same reasons already expressed. Another -1 here - while I agree there are benefits to removing backslash continuations and string literal concatenation, I don't think they're significant enough to justify the hassle of making it happen. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From arigo at tunes.org Tue May 8 14:49:21 2007 From: arigo at tunes.org (Armin Rigo) Date: Tue, 8 May 2007 14:49:21 +0200 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: <200705080914.04578.anthony@interlink.com.au> References: <200705080914.04578.anthony@interlink.com.au> Message-ID: <20070508124921.GA25986@code0.codespeak.net> Hi Anthony, On Tue, May 08, 2007 at 09:14:02AM +1000, Anthony Baxter wrote: > I'd like to suggest that we remove all (or nearly all) uses of > xrange from the stdlib. A quick scan shows that most of the usage > of it is unnecessary. With it going away in 3.0, and it being > informally deprecated anyway, it seems like a good thing to go away > where possible. The first step is to focus the question on the places where replacing xrange() with range() can really make no difference at all, as far as we can see. This is not the case of "nearly all" the uses of xrange() from the stdlib - but it's still the case of a number of them: ''.join(chr(x) for x in xrange(256)) # at global module level or: for i in xrange(self.firstweekday, self.firstweekday + 7): I personally think that replacing these with range() is a clean-up, but I also know that not everybody agrees to that. 
So: should we, or should we not, replace xrange() with range() as a matter of clean-up when the difference between the two is really completely irrelevant? A bientot, Armin. From guido at python.org Tue May 8 16:04:08 2007 From: guido at python.org (Guido van Rossum) Date: Tue, 8 May 2007 07:04:08 -0700 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: <20070508124921.GA25986@code0.codespeak.net> References: <200705080914.04578.anthony@interlink.com.au> <20070508124921.GA25986@code0.codespeak.net> Message-ID: On 5/8/07, Armin Rigo wrote: > On Tue, May 08, 2007 at 09:14:02AM +1000, Anthony Baxter wrote: > > I'd like to suggest that we remove all (or nearly all) uses of > > xrange from the stdlib. A quick scan shows that most of the usage > > of it is unnecessary. With it going away in 3.0, and it being > > informally deprecated anyway, it seems like a good thing to go away > > where possible. > > The first step is to focus the question on the places where replacing > xrange() with range() can really make no difference at all, as far as we > can see. This is not the case of "nearly all" the uses of xrange() from > the stdlib - but it's still the case of a number of them: > > ''.join(chr(x) for x in xrange(256)) # at global module level > > or: > > for i in xrange(self.firstweekday, self.firstweekday + 7): > > I personally think that replacing these with range() is a clean-up, but > I also know that not everybody agrees to that. So: should we, or should > we not, replace xrange() with range() as a matter of clean-up when the > difference between the two is really completely irrelevant? I'm all for that -- personally, I wouldn't have written xrange() in the first place in such cases! -- --Guido van Rossum (home page: http://www.python.org/~guido/) From foom at fuhm.net Tue May 8 17:18:44 2007 From: foom at fuhm.net (James Y Knight) Date: Tue, 8 May 2007 11:18:44 -0400 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: <20070508124921.GA25986@code0.codespeak.net> References: <200705080914.04578.anthony@interlink.com.au> <20070508124921.GA25986@code0.codespeak.net> Message-ID: On May 8, 2007, at 8:49 AM, Armin Rigo wrote: > On Tue, May 08, 2007 at 09:14:02AM +1000, Anthony Baxter wrote: >> I'd like to suggest that we remove all (or nearly all) uses of >> xrange from the stdlib. A quick scan shows that most of the usage >> of it is unnecessary. With it going away in 3.0, and it being >> informally deprecated anyway, it seems like a good thing to go away >> where possible. > > I personally think that replacing these with range() is a clean-up, > but > I also know that not everybody agrees to that. So: should we, or > should > we not, replace xrange() with range() as a matter of clean-up when the > difference between the two is really completely irrelevant? But doesn't doing this now this make the conversion to Py3 *harder*? If 2to3 is going to rewrite xrange() as range(), and range() to list (range()), then moving towards xrange where possible would actually be preferable, wouldn't it? Or is there no plan to run 2to3 on the stdlib? James From kristjan at ccpgames.com Tue May 8 17:33:22 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 8 May 2007 15:33:22 +0000 Subject: [Python-Dev] svn logs Message-ID: <4E9372E6B2234D4F859320D896059A9508CDDDF0D1@exchis.ccp.ad.local> Hello there. Does anyone know why getting the SVN logs for a project is so excruciatingly slow? 
Is this an inherent SVN problem or are the python.org servers simply overloaded? It takes something like 10 minutes for me to get the log messages window up for the root of a branch. Cheers, Kristj?n -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070508/5e2e4119/attachment.html From steve at holdenweb.com Tue May 8 18:21:46 2007 From: steve at holdenweb.com (Steve Holden) Date: Tue, 08 May 2007 12:21:46 -0400 Subject: [Python-Dev] Byte literals (was Re: [Python-checkins] Changing string constants to byte arrays ( r55119 - in python/branches/py3k-struni/Lib: codecs.py test/test_codecs.py )) In-Reply-To: References: <20070505150035.GA16303@panix.com> <200705051334.45120.fdrake@acm.org> <20070505124008.648D.JCARLSON@uci.edu> Message-ID: Guido van Rossum wrote: > [+python-3000; replies please remove python-dev] > > On 5/5/07, Josiah Carlson wrote: >> "Fred L. Drake, Jr." wrote: >>> On Saturday 05 May 2007, Aahz wrote: >>> > I'm with MAL and Fred on making literals immutable -- that's safe and >>> > lots of newbies will need to use byte literals early in their Python >>> > experience if they pick up Python to operate on network data. >>> >>> Yes; there are lots of places where bytes literals will be used the way str >>> literals are today. buffer(b'...') might be good enough, but it seems more >>> than a little idiomatic, and doesn't seem particularly readable. >>> >>> I'm not suggesting that /all/ literals result in constants, but bytes literals >>> seem like a case where what's wanted is the value. If b'...' results in a >>> new object on every reference, that's a lot of overhead for a network >>> protocol implementation, where the data is just going to be written to a >>> socket or concatenated with other data. An immutable bytes type would be >>> very useful as a dictionary key as well, and more space-efficient than >>> tuple(b'...'). >> I was saying the exact same thing last summer. See my discussion with >> Martin about parsing/unmarshaling. What I expect will happen with bytes >> as dictionary keys is that people will end up subclassing dictionaries >> (with varying amounts of success and correctness) to do something like >> the following... >> >> class bytesKeys(dict): >> ... >> def __setitem__(self, key, value): >> if isinstance(key, bytes): >> key = key.decode('latin-1') >> else: >> raise KeyError("only bytes can be used as keys") >> dict.__setitem__(self, key, value) >> ... >> >> Is it optimal? No. Would it be nice to have immtable bytes? Yes. Do >> I think it will really be a problem in parsing/unmarshaling? I don't >> know, but the fact that there now exists a reasonable literal syntax b'...' >> rather than the previous bytes([1, 2, 3, ...]) means that we are coming >> much closer to having what really is about the best way to handle this; >> Python 2.x str. > > I don't know how this will work out yet. I'm not convinced that having > both mutable and immutable bytes is the right thing to do; but I'm > also not convinced of the opposite. I am slowly working on the > string/unicode unification, and so far, unfortunately, it is quite > daunting to get rid of 8-bit strings even at the Python level let > alone at the C level. 
> > I suggest that the following exercise, to be carried out in the > py3k-struni branch, might be helpful: (1) change the socket module to > return bytes instead of strings (it already takes bytes, by virtue of > the buffer protocol); (2) change its makefile() method so that it uses > the new io.py library, in particular the SocketIO wrapper there; (3) > fix up the httplib module and perhaps other similar ones. Take copious > notes while doing this. Anyone up for this? I will listen! (I'd do it > myself but I don't know where I'd find the time). > I'm having a hard time understanding why bytes literals would be a good thing. OK, displays require the work of creating a new object (since bytes types will be mutable) but surely a mutable literal is always going to make programs potentially hard to read. If you want a representation of a bytes object in your program text doesn't that always (like other mutable types) have to represent the same value, creating new objects as necessary if previously-created objects could have been mutated. What am I missing here? regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From dustin at v.igoro.us Tue May 8 18:51:39 2007 From: dustin at v.igoro.us (dustin at v.igoro.us) Date: Tue, 8 May 2007 11:51:39 -0500 Subject: [Python-Dev] svn logs In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CDDDF0D1@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CDDDF0D1@exchis.ccp.ad.local> Message-ID: <20070508165139.GE5902@v.igoro.us> On Tue, May 08, 2007 at 03:33:22PM +0000, Kristj?n Valur J?nsson wrote: > Does anyone know why getting the SVN logs for a project is so > excruciatingly slow? > > Is this an inherent SVN problem or are the python.org servers simply > overloaded? I believe it's because there are multiple requests required to get the whole thing, but I don't know the details. You'll notice that svn annotate is also really slow. One thing you can do to help is to specify a range of revisions you'd like to see. This has been the case with just about every remote repository I've ever accessed. Dustin From nnorwitz at gmail.com Tue May 8 19:37:54 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 8 May 2007 10:37:54 -0700 Subject: [Python-Dev] svn logs In-Reply-To: <4E9372E6B2234D4F859320D896059A9508CDDDF0D1@exchis.ccp.ad.local> References: <4E9372E6B2234D4F859320D896059A9508CDDDF0D1@exchis.ccp.ad.local> Message-ID: Part of the problem might be that we are using an old version of svn (1.1) AFAIK. IIRC these operations were sped up in later versions. n -- On 5/8/07, Kristj?n Valur J?nsson wrote: > > > > > Hello there. > > Does anyone know why getting the SVN logs for a project is so excruciatingly > slow? > > Is this an inherent SVN problem or are the python.org servers simply > overloaded? > > It takes something like 10 minutes for me to get the log messages window up > for the root > > of a branch. 
> > > > Cheers, > > Kristj?n > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com > > From martin at v.loewis.de Wed May 9 06:46:50 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 09 May 2007 06:46:50 +0200 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: References: <200705080914.04578.anthony@interlink.com.au> Message-ID: <4641523A.2070106@v.loewis.de> > Just curious why 2to3 would not replace range() with list(range())? In most usages of range(), using the 3.0 range() will work just as well, and be more efficient. If I wanted to write code that works in both versions (which I understand is not the 2to3 objective), then I would use range(). If I worry about creating a list in 2.x, I would write try: xrange except NameError: xrange=range at the top of the file. Regards, Martin From nnorwitz at gmail.com Wed May 9 08:59:10 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 8 May 2007 23:59:10 -0700 Subject: [Python-Dev] \code or \constant in tex markup Message-ID: Which is correct? $ egrep '(False|True)' Doc/lib/*.tex | grep -c \\constant{ 74 $ egrep '(False|True)' Doc/lib/*.tex | grep -c \\code{ 204 $ egrep 'None' Doc/lib/*.tex | grep -c \\code{ 512 $ egrep 'None' Doc/lib/*.tex | grep -c \\constant{ 83 n From rasky at develer.com Wed May 9 10:36:53 2007 From: rasky at develer.com (Giovanni Bajo) Date: Wed, 09 May 2007 10:36:53 +0200 Subject: [Python-Dev] svn logs In-Reply-To: References: <4E9372E6B2234D4F859320D896059A9508CDDDF0D1@exchis.ccp.ad.local> Message-ID: On 08/05/2007 19.37, Neal Norwitz wrote: > Part of the problem might be that we are using an old version of svn > (1.1) AFAIK. IIRC these operations were sped up in later versions. Yes they were. If that's the case, then probably the server should be updated. -- Giovanni Bajo From tjreedy at udel.edu Wed May 9 17:05:36 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 9 May 2007 11:05:36 -0400 Subject: [Python-Dev] best practices stdlib: purging xrange References: <200705080914.04578.anthony@interlink.com.au> <4641523A.2070106@v.loewis.de> Message-ID: ""Martin v. L?wis"" wrote in message news:4641523A.2070106 at v.loewis.de... |> Just curious why 2to3 would not replace range() with list(range())? | | In most usages of range(), using the 3.0 range() will work just as | well, and be more efficient. If so, which it would seem from range2x functionally equal to list(range3), then the suggestion of the subject line is backwards. What should be purged eventually is range in for statement headers (or list(range) after conversion). It seems that what some consider best practice now (make a list unless it is long and un-needed) is different from what will be best practice in Py3 (do not make a list unless actually need it). 
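For code that has to run unchanged on both 2.x and 3.0, Martin's idiom above might be spelled out like this (a sketch, not a recommendation for the stdlib itself):

    try:
        xrange                      # Python 2.x: keep the lazy version
    except NameError:
        xrange = range              # Python 3.0: range() is already lazy

    for i in xrange(3):             # plain iteration never needs a real list
        print(i)

    evens = list(range(0, 10, 2))   # an actual list is wanted, so ask for one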
tjr From orsenthil at users.sourceforge.net Thu May 10 06:16:14 2007 From: orsenthil at users.sourceforge.net (O.R.Senthil Kumaran) Date: Thu, 10 May 2007 09:46:14 +0530 Subject: [Python-Dev] best practices stdlib: purging xrange In-Reply-To: References: <200705080914.04578.anthony@interlink.com.au> <20070508124921.GA25986@code0.codespeak.net> Message-ID: <20070510041614.GC3327@gmail.com> * James Y Knight [2007-05-08 11:18:44]: > On May 8, 2007, at 8:49 AM, Armin Rigo wrote: > > On Tue, May 08, 2007 at 09:14:02AM +1000, Anthony Baxter wrote: > >> I'd like to suggest that we remove all (or nearly all) uses of > >> xrange from the stdlib. A quick scan shows that most of the usage > >> of it is unnecessary. With it going away in 3.0, and it being > >> informally deprecated anyway, it seems like a good thing to go away > >> where possible. > > > > I personally think that replacing these with range() is a clean-up, > > but > > I also know that not everybody agrees to that. So: should we, or > > should > > we not, replace xrange() with range() as a matter of clean-up when the > > difference between the two is really completely irrelevant? > > But doesn't doing this now this make the conversion to Py3 *harder*? > If 2to3 is going to rewrite xrange() as range(), and range() to list > (range()), then moving towards xrange where possible would actually > be preferable, wouldn't it? Or is there no plan to run 2to3 on the > stdlib? Looking at xrange() and range() definitions and from this discussion, it seems to me that xrange() to be preferable over range(). Its common that most of the code have range() because its simple use in iteration, but if same functionality is provided with xrange as an object. And doing :s/xrange/range/g would make sense also. ( Am I right in understanding this?) Why range or xrange and why not xrange or range? Or is this discussion about why having two functions with similar (or rather same) functionality, and lets move to one and in which case either of them is fine. -- O.R.Senthil Kumaran From fdrake at acm.org Thu May 10 15:27:34 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 10 May 2007 09:27:34 -0400 Subject: [Python-Dev] \code or \constant in tex markup In-Reply-To: References: Message-ID: <200705100927.34612.fdrake@acm.org> On Wednesday 09 May 2007, Neal Norwitz wrote: > Which is correct? \constant was introduced much more recently than \code (though it's not really new anymore). The intent for \constant when it was introduced was that it be used for names that were treated as constants in code (such as string.ascii_letters or doctest.REPORT_NDIFF), not syntactic literals like 3 or "abc". At the time, None, True, and False were just named values in the __builtin__ module. I don't think the support for None as a "real" constant should change the status of the value as "just another named constant" other than in the implementation details. So I think \constant is right for all three; we just haven't gone back and changed all the older instances of \code{None}, \code{True}, and \code{False}. We've generally resisted that sort of blanket change, but consistency is valuable too. Perhaps it's time to make the change across the board. -Fred -- Fred L. Drake, Jr. 
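A rough sketch of the blanket change Fred suggests, assuming a plain regex pass over the LaTeX sources is good enough (it makes no attempt to skip verbatim environments):

    import glob
    import re

    code_literal = re.compile(r"\\code\{(None|True|False)\}")

    for path in glob.glob("Doc/lib/*.tex"):
        text = open(path).read()
        fixed = code_literal.sub(r"\\constant{\1}", text)
        if fixed != text:
            open(path, "w").write(fixed)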
From barry at python.org Thu May 10 17:10:52 2007 From: barry at python.org (Barry Warsaw) Date: Thu, 10 May 2007 11:10:52 -0400 Subject: [Python-Dev] Official version support statement Message-ID: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 This came up in a different context. I originally emailed this to the python.org admins, but Aahz rightly points out that we should first agree here that this actually /is/ our official stance. - -----snip----- We have an "official unofficial" policy of supporting only Python 2.current and 2.(current - 1), and /not/ supporting anything earlier. Do we already have an official statement to this effect on the website? The closest thing I could find was on the download page, but that's not really definitive. What do you think about adding something like the following to the top of the download page: "The Python Software Foundation officially supports the current stable major release and one prior major release. Currently, Python 2.5 and 2.4 are officially supported. Where appropriate and necessary, patches for earlier releases may be made available, but no earlier versions are officially supported by the PSF. We do not make releases of unsupported versions, although patched versions may become available through other vendors." - -Barry P.S. On re-reading this, I realize this text would need amending when Python 3.x is released, but I don't care about that right now. pdo admins: I also slightly changed the proposed text. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkM1/XEjvBPtnXfVAQJufwP9E4A7cbHyDRk0v/hONjlt3ZJ2eCtwYZL6 hlitMIBb8YxJsGNo6p1kC0KkB/DObqmCarTy8YXIM+v8j32UmEbiRmJFxexuKdVw I0EqziHnYdSkkp3cN+EGP2jahfLFVMaA2A2Ohj0o0mLGZEQU7TTF4F6U33PlooXs G6zKmDzuLT4= =BWxm -----END PGP SIGNATURE----- From guido at python.org Thu May 10 17:14:18 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 08:14:18 -0700 Subject: [Python-Dev] Hard-to-find problem with set().test_c_api() Message-ID: Could anyone help me debug the following? This is in a debug build of the trunk. I've been banging my head against the wrong wall for too long to "see" the issue here... :-( $ ./python -S -c "set('abc').test_c_api()" [6872 refs] Fatal Python error: ../Objects/stringobject.c:4971 object at 0xb7f66ca0 has negative ref count -606348326 Aborted -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Thu May 10 18:53:29 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 10 May 2007 12:53:29 -0400 Subject: [Python-Dev] Official version support statement References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> Message-ID: "Barry Warsaw" wrote in message news:343F26A5-7DFE-4757-BA4D-2A666CE85215 at python.org... | -----BEGIN PGP SIGNED MESSAGE----- | Hash: SHA1 | | This came up in a different context. I originally emailed this to | the python.org admins, but Aahz rightly points out that we should | first agree here that this actually /is/ our official stance. | | - -----snip----- | | We have an "official unofficial" policy of supporting only Python | 2.current and 2.(current - 1), and /not/ supporting anything earlier. | | Do we already have an official statement to this effect on the | website? The closest thing I could find was on the download page, | but that's not really definitive. 
| | What do you think about adding something like the following to the | top of the download page: | | "The Python Software Foundation officially supports the current | stable major release and one prior major release. Currently, Python | 2.5 and 2.4 are officially supported. Where appropriate and | necessary, patches for earlier releases may be made available, but no | earlier versions are officially supported by the PSF. We do not make | releases of unsupported versions, although patched versions may | become available through other vendors." This strikes me as a bit over-officious (the 'officially' adds nothing to me except a bit of stuffiness). Worse, it seems wrong and hence, to me, misleading. The current de facto policy is that when a new major release comes out, there is a *final* minor, bugfix release of the previous major version. Thus, 2.5 is being supported while 2.6 is being worked on. As I understand it, there are no more plans to touch 2.4 than 2.3 and so on. So the current message is: "If you want a 2.5 bug fixed, find it, report it, and help get it fixed now before 2.6 is released." I am aware that if a trustworthy person or persons were to backport some substantial numbers of fixes from 2.5 to 2.4, greenlight the test suite on several systems, cut release candidates, and repond to reports, the file would appear on the official Python site. But currently, as far as I know, this 'support' is as empty as the Official Help-Yourself Plate of Donated Cookies on my kitchen table. The reason, is seems to me, that prior major releases do not get support is that they do not much need it. For practical purposes, core CPython is pretty much bug free. Module bugs get reported and fixed or worked around. Old users can upgrade if they want fixes that appear later. And new users generally start with the current major release. Terry Jan Reedy From guido at python.org Thu May 10 19:20:53 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 10:20:53 -0700 Subject: [Python-Dev] Hard-to-find problem with set().test_c_api() In-Reply-To: References: Message-ID: Never mind, I found it via bisection of the offending function, and fixed it: Committed revision 55227. --Guido On 5/10/07, Guido van Rossum wrote: > Could anyone help me debug the following? This is in a debug build of > the trunk. I've been banging my head against the wrong wall for too > long to "see" the issue here... :-( > > $ ./python -S -c "set('abc').test_c_api()" > [6872 refs] > Fatal Python error: ../Objects/stringobject.c:4971 object at > 0xb7f66ca0 has negative ref count -606348326 > Aborted > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Thu May 10 20:45:57 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 11:45:57 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals Message-ID: I just discovered that, in all versions of Python as far back as I have access to (2.0), \uXXXX escapes are interpreted inside raw unicode strings. Thus: >>> a = ur"\u1234" >>> len(a) 1 >>> Contrast this with: >>> a = ur"\x12" >>> len(a) 4 >>> The \U escape has the same behavior, in versions that support it. Does anyone remember why it is done this way? 
The reference manual describes this behavior, but doesn't give an explanation: """ When an "r" or "R" prefix is used in conjunction with a "u" or "U" prefix, then the \uXXXX and \UXXXXXXXX escape sequences are processed while all other backslashes are left in the string. For example, the string literal ur"\u0062\n" consists of three Unicode characters: `LATIN SMALL LETTER B', `REVERSE SOLIDUS', and `LATIN SMALL LETTER N'. Backslashes can be escaped with a preceding backslash; however, both remain in the string. As a result, \uXXXX escape sequences are only recognized when there are an odd number of backslashes. """ -- --Guido van Rossum (home page: http://www.python.org/~guido/) From p.f.moore at gmail.com Thu May 10 20:53:38 2007 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 10 May 2007 19:53:38 +0100 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: Message-ID: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> On 10/05/07, Guido van Rossum wrote: > I just discovered that, in all versions of Python as far back as I > have access to (2.0), \uXXXX escapes are interpreted inside raw > unicode strings. Thus: [...] > Does anyone remember why it is done this way? The reference manual > describes this behavior, but doesn't give an explanation: My memory is so dim as to be more speculation than anything else, but I suspect it's simply because there's no other way of including characters outside the ASCII range in a raw string. Paul. From mal at egenix.com Thu May 10 21:07:14 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 10 May 2007 21:07:14 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> Message-ID: <46436D62.2040700@egenix.com> On 2007-05-10 20:53, Paul Moore wrote: > On 10/05/07, Guido van Rossum wrote: >> I just discovered that, in all versions of Python as far back as I >> have access to (2.0), \uXXXX escapes are interpreted inside raw >> unicode strings. Thus: > [...] >> Does anyone remember why it is done this way? The reference manual >> describes this behavior, but doesn't give an explanation: > > My memory is so dim as to be more speculation than anything else, but > I suspect it's simply because there's no other way of including > characters outside the ASCII range in a raw string. This is per design (see PEP 100) and was done for the reason given by Paul. The motivation for the chosen approach was to make Python's raw Unicode strings compatible to Java's raw Unicode strings: http://java.sun.com/docs/books/jls/second_edition/html/lexical.doc.html -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 10 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From fdrake at acm.org Thu May 10 21:32:22 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) 
Date: Thu, 10 May 2007 15:32:22 -0400 Subject: [Python-Dev] Official version support statement In-Reply-To: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> Message-ID: <200705101532.23249.fdrake@acm.org> On Thursday 10 May 2007, Barry Warsaw wrote: > This came up in a different context. I originally emailed this to > the python.org admins, but Aahz rightly points out that we should > first agree here that this actually /is/ our official stance. +1 -Fred -- Fred L. Drake, Jr. From tjreedy at udel.edu Thu May 10 23:24:57 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 10 May 2007 17:24:57 -0400 Subject: [Python-Dev] New operations in Decimal References: Message-ID: "Facundo Batista" wrote in message news:f18754$2c2$1 at sea.gmane.org... | Nick Maclaren wrote: | | > Am I losing my marbles, or is this a proposal to add the logical | > operations to FLOATING-POINT? | | Sort of. This is a proposal to keep compliant with the General Decimal | Arithmetic Specification, as we promised. | | http://www2.hursley.ibm.com/decimal/ I oppose adding this illogical nonsense to Python. Who would ever use it? An intention and promise to keep compliant with a *decimal arithmetic* standard cannot sanely be a blind, open-ended promise to add whatever *non-decimal* functions that IBM puts where they do not belong as part of its commercial strategem. To me, the same would go for any other standard similarly twisted. Supposed IBM defined a mapping between pairs of decimal digits and an ascii subset (printables and the few control chars actually used by most people). Suppose IBM further defined string functions for decimal nuumbers intrerpreted as strings. An example might be 'capitalize', such that capitalize(010203) == 010203 capitalize(121110) == 424140 # 10='a', 40 = 'A', etc And suppose that IBM shoved this into the decimal standard the same way it did with the decimal-interpreded-as-binary-string' functions. Would you really add them to be 'compliant' with IBM? If you really do put them in, turn 'invert' into 'prefix_not'. For the prefix, please not 'logical' but something like 'lu' (for 'lunatic') or, less provocatively, 'ibm'. Terry Jan Reedy From guido at python.org Fri May 11 00:11:37 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 15:11:37 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <46436D62.2040700@egenix.com> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> Message-ID: On 5/10/07, M.-A. Lemburg wrote: > On 2007-05-10 20:53, Paul Moore wrote: > > On 10/05/07, Guido van Rossum wrote: > >> I just discovered that, in all versions of Python as far back as I > >> have access to (2.0), \uXXXX escapes are interpreted inside raw > >> unicode strings. Thus: > > [...] > >> Does anyone remember why it is done this way? The reference manual > >> describes this behavior, but doesn't give an explanation: > > > > My memory is so dim as to be more speculation than anything else, but > > I suspect it's simply because there's no other way of including > > characters outside the ASCII range in a raw string. > > This is per design (see PEP 100) and was done for the reason given > by Paul. 
The motivation for the chosen approach was to make Python's > raw Unicode strings compatible to Java's raw Unicode strings: > > http://java.sun.com/docs/books/jls/second_edition/html/lexical.doc.html I'm not sure what Java compatibility buys us. It is also far from perfect -- IIUC, in Java if you write \u0022 (that's the " character) it counts as an opening or closing quote, and if you write \u005c (a backslash) it can be used to escape the following character. OTOH, in Python, you can write ur"C:\Program Files\u005c" and voila, a raw string terminating in a backslash. (In Java this would escape the " instead.) However, I understand the other reason (inclusion of non-ASCII characters in raw strings) and I reluctantly agree with it. Reluctantly, because it means I can't create a raw string containing a \ followed by u or U -- I needed one of those today. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at egenix.com Fri May 11 00:35:09 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 11 May 2007 00:35:09 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> Message-ID: <46439E1D.50401@egenix.com> On 2007-05-11 00:11, Guido van Rossum wrote: > On 5/10/07, M.-A. Lemburg wrote: >> On 2007-05-10 20:53, Paul Moore wrote: >>> On 10/05/07, Guido van Rossum wrote: >>>> I just discovered that, in all versions of Python as far back as I >>>> have access to (2.0), \uXXXX escapes are interpreted inside raw >>>> unicode strings. Thus: >>> [...] >>>> Does anyone remember why it is done this way? The reference manual >>>> describes this behavior, but doesn't give an explanation: >>> My memory is so dim as to be more speculation than anything else, but >>> I suspect it's simply because there's no other way of including >>> characters outside the ASCII range in a raw string. >> This is per design (see PEP 100) and was done for the reason given >> by Paul. The motivation for the chosen approach was to make Python's >> raw Unicode strings compatible to Java's raw Unicode strings: >> >> http://java.sun.com/docs/books/jls/second_edition/html/lexical.doc.html > > I'm not sure what Java compatibility buys us. It is also far from > perfect -- IIUC, in Java if you write \u0022 (that's the " character) > it counts as an opening or closing quote, and if you write \u005c (a > backslash) it can be used to escape the following character. OTOH, in > Python, you can write ur"C:\Program Files\u005c" and voila, a raw > string terminating in a backslash. (In Java this would escape the " > instead.) http://mail.python.org/pipermail/python-dev/1999-November/001346.html http://mail.python.org/pipermail/python-dev/1999-November/001392.html and all the other postings in that month related to this. > However, I understand the other reason (inclusion of non-ASCII > characters in raw strings) and I reluctantly agree with it. > Reluctantly, because it means I can't create a raw string containing a > \ followed by u or U -- I needed one of those today. >>> print ur"\u005cu" \u -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 11 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From martin at v.loewis.de Fri May 11 00:46:27 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 11 May 2007 00:46:27 +0200 Subject: [Python-Dev] Official version support statement In-Reply-To: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> Message-ID: <4643A0C3.4070408@v.loewis.de> > "The Python Software Foundation officially supports the current > stable major release and one prior major release. Currently, Python > 2.5 and 2.4 are officially supported. If you take "officially supported" to mean "there will be further bugfix releases", then no: 2.4 is not anymore officially supported. Only 2.5 is officially supported. There may, of course, be security patches released for 2.4 if there is a need. Regards, Martin From martin at v.loewis.de Fri May 11 00:50:23 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 11 May 2007 00:50:23 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> Message-ID: <4643A1AF.5080305@v.loewis.de> > However, I understand the other reason (inclusion of non-ASCII > characters in raw strings) and I reluctantly agree with it. I actually disagree with that. It is fairly easy to include non-ASCII characters in a raw Unicode string - just type them in. Or, if that fails, use string concatenation with a non-raw string: r"foo\uhallo" "\u20ac" r"welt" Regards, Martin From python at rcn.com Fri May 11 00:59:24 2007 From: python at rcn.com (Raymond Hettinger) Date: Thu, 10 May 2007 18:59:24 -0400 (EDT) Subject: [Python-Dev] New operations in Decimal Message-ID: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> | > Am I losing my marbles, or is this a proposal to add the logical | > operations to FLOATING-POINT? | | Sort of. This is a proposal to keep compliant with the General Decimal | Arithmetic Specification, as we promised. | | http://www2.hursley.ibm.com/decimal/ > I oppose adding this illogical nonsense to Python. Who would ever use it? Doesn't matter. What is more important is that we provide a module that is fully compliant with the specification and passes all of its tests. The value is in the compliance, not in the relative value of individual parts of the spec. This is somewhat akin to modules supporting RFC specs or internet protocols. It is more important to be standard than it is to pick and choose the parts you like. The same is true of writing ANSI spec compilers -- you write to the spec, not to the language you wish had been adopted. While I question the sanity of the spec writers in this case, I do trust that overall, they have provided an extremely well thought-out spec, have gone through extensive discussion/feedback cycles, and have provided a thorough test-suite. It is as good as it gets. 
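For readers wondering what is actually being specified here: the logical operations work digit by digit on "logical operands", i.e. Decimal values whose digits are all zeros and ones. A pure-Python sketch of the AND case (the function name, width argument and error handling are illustrative, not the spec's API):

    from decimal import Decimal

    def logical_and(a, b, width=9):
        # Digit-wise AND of two Decimals whose digits are all 0 or 1.
        xs = str(int(a)).zfill(width)
        ys = str(int(b)).zfill(width)
        if set(xs + ys) - set("01"):
            raise ValueError("operands must contain only 0 and 1 digits")
        bits = "".join("1" if x == y == "1" else "0" for x, y in zip(xs, ys))
        return Decimal(bits)

    print(logical_and(Decimal("1010"), Decimal("1100")))   # prints 1000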
Raymond Hettinger From guido at python.org Fri May 11 01:03:22 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 16:03:22 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4643A1AF.5080305@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> Message-ID: On 5/10/07, "Martin v. L?wis" wrote: > > However, I understand the other reason (inclusion of non-ASCII > > characters in raw strings) and I reluctantly agree with it. > > I actually disagree with that. It is fairly easy to include non-ASCII > characters in a raw Unicode string - just type them in. That violates the convention used in many places that source code should only contain printable ASCII, and all non-ASCII or unprintable characters should be written using \x or \u escapes. > Or, if that > fails, use string concatenation with a non-raw string: > > r"foo\uhallo" "\u20ac" r"welt" That makes for pretty unreadable source code though. Looking for a third opinion, -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Fri May 11 01:26:35 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 11 May 2007 01:26:35 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> Message-ID: <4643AA2B.1080301@v.loewis.de> >> I actually disagree with that. It is fairly easy to include non-ASCII >> characters in a raw Unicode string - just type them in. > > That violates the convention used in many places that source code > should only contain printable ASCII, and all non-ASCII or unprintable > characters should be written using \x or \u escapes. Following that convention: How do you get a non-ASCII byte into a raw byte string in Python 2.x? You can't - so why should you be able to get a non-ASCII character into a raw Unicode string? Regards, Martin From guido at python.org Fri May 11 01:34:11 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 16:34:11 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4643AA2B.1080301@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> Message-ID: On 5/10/07, "Martin v. L?wis" wrote: > >> I actually disagree with that. It is fairly easy to include non-ASCII > >> characters in a raw Unicode string - just type them in. > > > > That violates the convention used in many places that source code > > should only contain printable ASCII, and all non-ASCII or unprintable > > characters should be written using \x or \u escapes. > > Following that convention: How do you get a non-ASCII byte into > a raw byte string in Python 2.x? > > You can't - so why should you be able to get a non-ASCII character > into a raw Unicode string? Fair enough. 
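A small interactive sketch (Python 2.x) of the asymmetry Martin points out: no escape can put a non-ASCII byte into a raw byte string, while \u does put a non-ASCII character into a raw unicode string:

    >>> len(r"\x80")      # raw byte string: the backslash survives, 4 characters
    4
    >>> len("\x80")       # non-raw: a single non-ASCII byte
    1
    >>> len(ur"\x80")     # \x is not special in raw unicode strings either
    4
    >>> len(ur"\u20ac")   # ...but \u is, so this is a single character
    1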
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From greg.ewing at canterbury.ac.nz Fri May 11 03:30:37 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 11 May 2007 13:30:37 +1200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4643AA2B.1080301@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> Message-ID: <4643C73D.1010909@canterbury.ac.nz> Martin v. L?wis wrote: > why should you be able to get a non-ASCII character > into a raw Unicode string? The analogous question would be why can't you get a non-Unicode character into a raw Unicode string. That wouldn't make sense, since Unicode strings can't even hold non-Unicode characters (or at least they're not meant to). But it doesn't seem unreasonable to want to put Unicode characters into a raw Unicode string. After all, if it only contains ASCII characters there's no need for it to be a Unicode string in the first place. -- Greg From guido at python.org Fri May 11 04:11:48 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 10 May 2007 19:11:48 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4643C73D.1010909@canterbury.ac.nz> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> Message-ID: On 5/10/07, Greg Ewing wrote: > Martin v. L?wis wrote: > > why should you be able to get a non-ASCII character > > into a raw Unicode string? > > The analogous question would be why can't you get a > non-Unicode character into a raw Unicode string. That > wouldn't make sense, since Unicode strings can't even > hold non-Unicode characters (or at least they're not > meant to). > > But it doesn't seem unreasonable to want to put > Unicode characters into a raw Unicode string. After > all, if it only contains ASCII characters there's > no need for it to be a Unicode string in the first > place. This is what prompted my question, actually: in Py3k, in the str/unicode unification branch, r"\u1234" changes meaning: before the unification, this was an 8-bit string, where the \u was not special, but now it is a unicode string, where \u *is* special. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Fri May 11 05:45:31 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 10 May 2007 23:45:31 -0400 Subject: [Python-Dev] New operations in Decimal References: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> Message-ID: "Raymond Hettinger" wrote in message news:20070510185924.BIU37629 at ms09.lnh.mail.rcn.net... | > I oppose adding this illogical nonsense to Python. Who would ever use it? | | Doesn't matter. What is more important is that we provide a module that is | fully compliant with the specification and passes all of its tests. The value | is in the compliance, not in the relative value of individual parts of the spec. To repeat my further question: if IBM adds string functions or anything else to the 'decimal arithmetic' spec, should we unthinkingly add them also? Is there there no limit to the size of the camel that comes in with the nose? Suppose the next edition of the spec contains decimal versions of the functions in numpy (BLAS, LINPACK, FTTPACK, and so on). 
Should they be included in the standard lib even while numpy is excluded. We supposedly have a standard for additions to the standard lib. I cannot think of any other module being admitted with what amounts to an unlimited blank check for further additions. | This is somewhat akin to modules supporting RFC specs or internet | protocols. It is more important to be standard than it is to pick and choose | the parts you like. My impresssion from reading this list is that some of the modules supporting such specs/protocols are not complete and that there has been some picking and choosing. Wasn't there recently discussion about DOM level compliance? In any case, once RFCs are finalized, they does not, as far as I know, grow with additions, sane or crazy. Nex stuff goes in a new RFC which can be evaluated separately against our normal criteria for stdlib additions. | While I question the sanity of the spec writers in this case, I do trust that | overall, they have provided an extremely well thought-out spec, have gone | through extensive discussion/feedback cycles, and have provided a thorough | test-suite. It is as good as it gets. I had the same opinion until I saw the logic stuff. But I have known IBM and its products, good and bad, for over 40 years, so it does not surprise me when they act somewhat imperialistically for commercial advantage. This strikes me as likely such a case. But I may give M.C. a chance to better educate me. In the meanwhile here is my suggestion. Segregate the binary digit functions, and anything else of their ilk, in a separate module, say decimal_extras, and make it available on PyPI. In decimal, add try: from decimal_extras import * except ImportError: pass Then the tests could pass without junking up the stdlib with stuff that would never even be proposed, let alone accepted. And should I be proved wrong, and these functions find favor with the community and usage in production code, then they can be seamlessly moved into the stdlib and the decimal module after having met the test of other additions. Terry Jan Reedy From martin at v.loewis.de Fri May 11 07:38:38 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 11 May 2007 07:38:38 +0200 Subject: [Python-Dev] New operations in Decimal In-Reply-To: References: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> Message-ID: <4644015E.50909@v.loewis.de> > We supposedly have a standard for additions to the standard lib. I cannot > think of any other module being admitted with what amounts to an unlimited > blank check for further additions. xml.dom.minidom, xml.sax, posix, htmlentitydefs, Tkinter. Regards, Martin From martin at v.loewis.de Fri May 11 07:41:53 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 11 May 2007 07:41:53 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4643C73D.1010909@canterbury.ac.nz> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> Message-ID: <46440221.4010004@v.loewis.de> Greg Ewing schrieb: > Martin v. L?wis wrote: >> why should you be able to get a non-ASCII character >> into a raw Unicode string? > > The analogous question would be why can't you get a > non-Unicode character into a raw Unicode string. No, that would not be analogous. The string type in Python is not an ASCII string type, but a byte string type. 
It does not necessarily only hold ASCII characters, but can (and, in hundreds of applications) does hold arbitrary bytes. There is (in the non-raw form) support of filling arbitrary bytes into a byte string literal. So no, this is not analogous. Regards, Martin From martin at v.loewis.de Fri May 11 07:52:39 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 11 May 2007 07:52:39 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> Message-ID: <464404A7.3000804@v.loewis.de> > This is what prompted my question, actually: in Py3k, in the > str/unicode unification branch, r"\u1234" changes meaning: before the > unification, this was an 8-bit string, where the \u was not special, > but now it is a unicode string, where \u *is* special. That is true for non-raw strings also: the meaning of "\u1234" also changes. However, traditionally, there was *no* escaping mechanism in raw strings in Python, and I feel that this is a good principle, because it is easy to learn (if you leave out the detail that \ can't be the last character in a raw string - which should get fixed also, IMO). So I think in Py3k, "\u1234" should continue to be a string with 6 characters. Otherwise, people will complain that os.stat(r"c:\windows\system32\user32.dll") fails. Telling them to write os.stat(r"c:\windows\system32\u005Cuser32.dll") will just cause puzzled faces. Windows path names are one of the two primary applications of raw strings (the other being regexes). Regards, Martin From rrr at ronadam.com Fri May 11 09:59:42 2007 From: rrr at ronadam.com (Ron Adam) Date: Fri, 11 May 2007 02:59:42 -0500 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <464404A7.3000804@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: <4644226E.9080701@ronadam.com> Martin v. L?wis wrote: >> This is what prompted my question, actually: in Py3k, in the >> str/unicode unification branch, r"\u1234" changes meaning: before the >> unification, this was an 8-bit string, where the \u was not special, >> but now it is a unicode string, where \u *is* special. > > That is true for non-raw strings also: the meaning of "\u1234" also > changes. > > However, traditionally, there was *no* escaping mechanism in raw strings > in Python, and I feel that this is a good principle, because it is > easy to learn (if you leave out the detail that \ can't be the last > character in a raw string - which should get fixed also, IMO). So I > think in Py3k, "\u1234" should continue to be a string with 6 > characters. Otherwise, people will complain that > os.stat(r"c:\windows\system32\user32.dll") fails. Telling them to write > os.stat(r"c:\windows\system32\u005Cuser32.dll") will just cause puzzled > faces. > > Windows path names are one of the two primary applications of raw > strings (the other being regexes). I think regular expressions become easier to read if they don't also contain python escape characters because then you don't have to mentally parse which ones are part of the regular expression and which ones are evaluated by python. 
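A tiny sketch of the double escaping being described, for the common case of matching a single literal backslash:

    import re

    target = "a\\b"                    # the text contains exactly one backslash

    plain = re.compile("\\\\")         # escaped once for Python, once for re
    raw   = re.compile(r"\\")          # only the regex-level escape is left to read

    assert plain.search(target) and raw.search(target)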
The re module can still evaluate r"\uxxxx", r"\'", and r'\"' sequences even if python doesn't. I experimented with tokanize.c to see if the trailing '\' could be special cased in raw strings. The minimum change I could come up with was to have it not respect slash-quote sequences, (for finding the end of a string), if the quote is the same type as the quote used to define the string. The following strings in the library needed to be adjusted after that change. I don't think this is the best solution, but the list of strings needing changed might be useful for the discussion. - r'(\'[^\']*\'|"[^"]*"|[][\-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~\'"@]*))?') + r'''(\'[^\']*\'|"[^"]*"|[][\-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~\'"@]*))?''') -_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_declstringlit_match = re.compile(r'''(\'[^\']*\'|"[^"]*")\s*''').match - r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))') # em-dash + r'''(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))''') # em-dash - r'[\"\']?' # optional end-of-quote + r'''[\"\']?''' # optional end-of-quote - _wordchars_re = re.compile(r'[^\\\'\"%s ]*' % string.whitespace) + _wordchars_re = re.compile(r'''[^\\\'\"%s ]*''' % string.whitespace) -HEADER_QUOTED_VALUE_RE = re.compile(r"^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"") +HEADER_QUOTED_VALUE_RE = re.compile(r'''^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"''') -HEADER_JOIN_ESCAPE_RE = re.compile(r"([\"\\])") +HEADER_JOIN_ESCAPE_RE = re.compile(r'([\"\\])') - quote_re = re.compile(r"([\"\\])") + quote_re = re.compile(r'([\"\\])') - return re.sub(r'((\\[\\abfnrtv\'"]|\\[0-9]..|\\x..|\\u....)+)', + return re.sub(r'''((\\[\\abfnrtv\'"]|\\[0-9]..|\\x..|\\u....)+)''', - _OPTION_DIRECTIVE_RE = re.compile(r'#\s*doctest:\s*([^\n\'"]*)$', + _OPTION_DIRECTIVE_RE = re.compile(r'''#\s*doctest:\s*([^\n\'"]*)$''', re.MULTILINE) - s = unicode(r'\x00="\'a\\b\x80\xff\u0000\u0001\u1234', 'unicode-escape') + s = unicode(r'''\x00="\'a\\b\x80\xff\u0000\u0001\u1234''', d - _escape = re.compile(r"[&<>\"\x80-\xff]+") # 1.5.2 + _escape = re.compile(r'[&<>\"\x80-\xff]+') # 1.5.2 - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'''(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?''') I also noticed that python handles the '\' escape character differently than re does in regular strings. In regular expressions, a single '\' is always an escape character. If the following character is not a special character, then the two character combination becomes the second non-special character. "\'" --> ' "\\" --> \ "\q" --> q ('q' not special so '\q' is 'q') This isn't how python does it. >>> '\'' "'" >>> "\\" '\\' >>> "\q" ('q' not special, so Back slash is not an escape.) '\q' So it might be good to have it always be an escape in regular strings, and never be an escape in raw strings. Ron From mal at egenix.com Fri May 11 10:49:56 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 11 May 2007 10:49:56 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <464404A7.3000804@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: <46442E34.4020909@egenix.com> On 2007-05-11 07:52, Martin v. 
L?wis wrote: >> This is what prompted my question, actually: in Py3k, in the >> str/unicode unification branch, r"\u1234" changes meaning: before the >> unification, this was an 8-bit string, where the \u was not special, >> but now it is a unicode string, where \u *is* special. > > That is true for non-raw strings also: the meaning of "\u1234" also > changes. > > However, traditionally, there was *no* escaping mechanism in raw strings > in Python, and I feel that this is a good principle, because it is > easy to learn (if you leave out the detail that \ can't be the last > character in a raw string - which should get fixed also, IMO). So I > think in Py3k, "\u1234" should continue to be a string with 6 > characters. Otherwise, people will complain that > os.stat(r"c:\windows\system32\user32.dll") fails. Telling them to write > os.stat(r"c:\windows\system32\u005Cuser32.dll") will just cause puzzled > faces. Using double backslashes won't cause that reaction: os.stat("c:\\windows\\system32\\user32.dll") Also note that Windows is smart enough nowadays to parse the good old Unix forward slash: os.stat("c:/windows/system32/user32.dll") > Windows path names are one of the two primary applications of raw > strings (the other being regexes). IMHO the primary use case are regexps and for those you'd definitely want to be able to put Unicode characters into your expressions. BTW, if you use ur"..." for your expressions today (which you should if you parse text), then nothing will change when removing the 'u' prefix in Py3k. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 11 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From g.brandl at gmx.net Fri May 11 11:46:57 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 11 May 2007 11:46:57 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <46442E34.4020909@egenix.com> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> Message-ID: M.-A. Lemburg schrieb: >> Windows path names are one of the two primary applications of raw >> strings (the other being regexes). > > IMHO the primary use case are regexps and for those you'd > definitely want to be able to put Unicode characters into your > expressions. Except if sre_parse would recognize \u and \U escapes, just like it does now with \x escapes. 
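[Editor's note: to make Georg's suggestion concrete, here is a rough sketch of that kind of pre-processing done outside the re module. The helper name compile_uni is made up for this example, and the implementation deliberately ignores corner cases (such as a doubled backslash right before the u), so read it as an illustration rather than a proposal for sre_parse itself.]

    import re

    # Hypothetical helper: expand \uXXXX and \UXXXXXXXX escapes in a (raw)
    # pattern string before handing it to re.compile, leaving all other
    # backslashes alone. Naive sketch: it does not special-case an escaped
    # backslash such as \\u1234, and it assumes the expanded characters are
    # not regex metacharacters.
    _UNI_ESCAPE = re.compile(r"\\u([0-9a-fA-F]{4})|\\U([0-9a-fA-F]{8})")

    def compile_uni(pattern, flags=0):
        expand = lambda m: chr(int(m.group(1) or m.group(2), 16))  # unichr() on 2.x
        return re.compile(_UNI_ESCAPE.sub(expand, pattern), flags)

    # A docutils-style pattern (one like it appears later in this thread),
    # written as a raw string but compiled with real bullet characters:
    bullet = compile_uni(r"[-+*\u2022\u2023\u2043]( +|$)")
    assert bullet.match(u"\u2022 item")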
Georg From theller at ctypes.org Fri May 11 13:05:05 2007 From: theller at ctypes.org (Thomas Heller) Date: Fri, 11 May 2007 13:05:05 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <46442E34.4020909@egenix.com> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> Message-ID: M.-A. Lemburg schrieb: > On 2007-05-11 07:52, Martin v. L?wis wrote: >>> This is what prompted my question, actually: in Py3k, in the >>> str/unicode unification branch, r"\u1234" changes meaning: before the >>> unification, this was an 8-bit string, where the \u was not special, >>> but now it is a unicode string, where \u *is* special. >> >> That is true for non-raw strings also: the meaning of "\u1234" also >> changes. >> >> However, traditionally, there was *no* escaping mechanism in raw strings >> in Python, and I feel that this is a good principle, because it is >> easy to learn (if you leave out the detail that \ can't be the last >> character in a raw string - which should get fixed also, IMO). So I >> think in Py3k, "\u1234" should continue to be a string with 6 >> characters. Otherwise, people will complain that >> os.stat(r"c:\windows\system32\user32.dll") fails. Telling them to write >> os.stat(r"c:\windows\system32\u005Cuser32.dll") will just cause puzzled >> faces. > > Using double backslashes won't cause that reaction: > > os.stat("c:\\windows\\system32\\user32.dll") Sure. But I want to use raw strings for Windows path names; it's much easier to type. > Also note that Windows is smart enough nowadays to parse > the good old Unix forward slash: > > os.stat("c:/windows/system32/user32.dll") In my opinion this is a windows bug and not a features. Especially because there are Windows api functions (the shell functions, IIRC) that do NOT accept forward slashes. Would you say that *nix is dumb because it doesn't parse "\\usr\\include"? >> Windows path names are one of the two primary applications of raw >> strings (the other being regexes). > Thomas From mal at egenix.com Fri May 11 13:27:26 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 11 May 2007 13:27:26 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> Message-ID: <4644531E.3090901@egenix.com> On 2007-05-11 13:05, Thomas Heller wrote: > M.-A. Lemburg schrieb: >> On 2007-05-11 07:52, Martin v. L?wis wrote: >>>> This is what prompted my question, actually: in Py3k, in the >>>> str/unicode unification branch, r"\u1234" changes meaning: before the >>>> unification, this was an 8-bit string, where the \u was not special, >>>> but now it is a unicode string, where \u *is* special. >>> That is true for non-raw strings also: the meaning of "\u1234" also >>> changes. >>> >>> However, traditionally, there was *no* escaping mechanism in raw strings >>> in Python, and I feel that this is a good principle, because it is >>> easy to learn (if you leave out the detail that \ can't be the last >>> character in a raw string - which should get fixed also, IMO). 
So I >>> think in Py3k, "\u1234" should continue to be a string with 6 >>> characters. Otherwise, people will complain that >>> os.stat(r"c:\windows\system32\user32.dll") fails. Telling them to write >>> os.stat(r"c:\windows\system32\u005Cuser32.dll") will just cause puzzled >>> faces. >> Using double backslashes won't cause that reaction: >> >> os.stat("c:\\windows\\system32\\user32.dll") > > Sure. But I want to use raw strings for Windows path names; it's much easier > to type. But think of the price to pay if we disable use of Unicode escapes in raw strings. And all of this just because of the one special case: having a file name that starts with a U and needs to be referenced literally in a Python application together with a path leading up to it. BTW, there's an easy work-around for this special case: os.stat(os.path.join(r"c:\windows\system32", "user32.dll")) >> Also note that Windows is smart enough nowadays to parse >> the good old Unix forward slash: >> >> os.stat("c:/windows/system32/user32.dll") > > In my opinion this is a windows bug and not a features. Especially because there > are Windows api functions (the shell functions, IIRC) that do NOT accept > forward slashes. > > Would you say that *nix is dumb because it doesn't parse "\\usr\\include"? Sorry, I wasn't trying to imply that Windows is/was a dumb system. I think it's nice that you can use forward slashes on Windows - makes writing code that works in both worlds (Unix and Windows) a lot easier. >>> Windows path names are one of the two primary applications of raw >>> strings (the other being regexes). -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 11 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From guido at python.org Fri May 11 16:30:33 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 11 May 2007 07:30:33 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <464404A7.3000804@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: On 5/10/07, "Martin v. L?wis" wrote: > Windows path names are one of the two primary applications of raw > strings (the other being regexes). I disagree with this use case; the r"..." notation was not invented for this purpose. I won't compromise the escaping of quotes to accommodate it. Nevertheless I think that \u and \U should lose their special-ness in 3.0. I'd like to hear from anyone who has access to *real code* that uses \u or \U in a raw unicode string. 
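[Editor's note: for readers who have not bumped into it, the quote-escaping behaviour Guido refers to is the one raw literals already have: a backslash keeps the following quote from terminating the literal, but the backslash itself stays in the string. A tiny illustration, mine rather than the thread's:]

    s = r"can\'t"
    assert len(s) == 6       # six characters: c a n \ ' t
    assert s == "can\\'t"
    # This is also why a raw literal cannot end in a single backslash:
    # r"foo\" is a syntax error, because the \" does not close the literal.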
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From goodger at python.org Fri May 11 21:59:20 2007 From: goodger at python.org (David Goodger) Date: Fri, 11 May 2007 19:59:20 +0000 (UTC) Subject: [Python-Dev] \u and \U escapes in raw unicode string literals References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: Guido van Rossum python.org> writes: > I'd like to hear from anyone who has access to *real code* that uses > \u or \U in a raw unicode string. Docutils uses it in the docutils.parsers.rst.states module, Body class: patterns = { 'bullet': ur'[-+*\u2022\u2023\u2043]( +|$)', ... attribution_pattern = re.compile(ur'(---?(?!-)|\u2014) *(?=[^ \n])') -- David Goodger From guido at python.org Fri May 11 22:06:15 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 11 May 2007 13:06:15 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: On 5/11/07, David Goodger wrote: > Guido van Rossum python.org> writes: > > I'd like to hear from anyone who has access to *real code* that uses > > \u or \U in a raw unicode string. > > Docutils uses it in the docutils.parsers.rst.states module, Body class: > > patterns = { > 'bullet': ur'[-+*\u2022\u2023\u2043]( +|$)', > ... > > attribution_pattern = re.compile(ur'(---?(?!-)|\u2014) *(?=[^ \n])') But wouldn't it be just as handy to teach the re module about \u and \U, just as it already knows about \x (and \123 octals)? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From goodger at python.org Fri May 11 22:10:36 2007 From: goodger at python.org (David Goodger) Date: Fri, 11 May 2007 20:10:36 +0000 (UTC) Subject: [Python-Dev] \u and \U escapes in raw unicode string literals References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: > Guido van Rossum python.org> writes: > > I'd like to hear from anyone who has access to *real code* that uses > > \u or \U in a raw unicode string. David Goodger python.org> writes: > Docutils uses it in the docutils.parsers.rst.states module, Body class: > > patterns = { > 'bullet': ur'[-+*\u2022\u2023\u2043]( +|$)', > ... > > attribution_pattern = re.compile(ur'(---?(?!-)|\u2014) *(?=[^ \n])') Although admittedly, these don't *have* to be raw strings, since they don't contain backslashes as regexp syntax. They were made raw by reflex, because they contain regular expressions. -- DG From goodger at python.org Fri May 11 22:12:32 2007 From: goodger at python.org (David Goodger) Date: Fri, 11 May 2007 16:12:32 -0400 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: <4335d2c40705111312g73e94481l629fafc88263fb86@mail.gmail.com> > On 5/11/07, David Goodger wrote: > > Docutils uses it in the docutils.parsers.rst.states module, Body class: > > > > patterns = { > > 'bullet': ur'[-+*\u2022\u2023\u2043]( +|$)', > > ... 
> > > > attribution_pattern = re.compile(ur'(---?(?!-)|\u2014) *(?=[^ \n])') On 5/11/07, Guido van Rossum wrote: > But wouldn't it be just as handy to teach the re module about \u and > \U, just as it already knows about \x (and \123 octals)? Could be. I'm just providing examples, as requested. I leave the heavy thinking to others ;-) -- David Goodger From barry at python.org Sat May 12 00:10:42 2007 From: barry at python.org (Barry Warsaw) Date: Fri, 11 May 2007 18:10:42 -0400 Subject: [Python-Dev] Official version support statement In-Reply-To: References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> Message-ID: <604B92EE-6BB0-4B28-9529-F3649ABA7B27@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 10, 2007, at 12:53 PM, Terry Reedy wrote: > This strikes me as a bit over-officious (the 'officially' adds > nothing to > me except a bit of stuffiness). > > Worse, it seems wrong and hence, to me, misleading. The current de > facto > policy is that when a new major release comes out, there is a *final* > minor, bugfix release of the previous major version. Thus, 2.5 is > being > supported while 2.6 is being worked on. As I understand it, there > are no > more plans to touch 2.4 than 2.3 and so on. So the current message > is: > "If you want a 2.5 bug fixed, find it, report it, and help get it > fixed now > before 2.6 is released." > > I am aware that if a trustworthy person or persons were to backport > some > substantial numbers of fixes from 2.5 to 2.4, greenlight the test > suite on > several systems, cut release candidates, and repond to reports, the > file > would appear on the official Python site. But currently, as far as > I know, > this 'support' is as empty as the Official Help-Yourself Plate of > Donated > Cookies on my kitchen table. I'm happy to document whatever we decide the policy is, but I think we should decide and produce an official statement as such. It helps users and vendors to know what they can count on us for and what they might be on their own for. It's also not that big of a deal if we amend the policy later because we have volunteer release managers for earlier versions. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkTp43EjvBPtnXfVAQJCigP9GtmkyIpI7NadM0pfPIIsLkLCvqFA8sNe oMVv5cGMtkDaw6x4kuITv+EL0CCXopXgz89vTq1bFrrmBbJpR1Bk3ToB9L2VvWPl kjxzExIwaS8xtywfw7j5Mn2vfBVpK7lewL5POwOg9QQ1r51cHcTjoL/28FD1yqf2 5rUahZFDTLY= =xQjh -----END PGP SIGNATURE----- From barry at python.org Sat May 12 00:17:12 2007 From: barry at python.org (Barry Warsaw) Date: Fri, 11 May 2007 18:17:12 -0400 Subject: [Python-Dev] Official version support statement In-Reply-To: <4643A0C3.4070408@v.loewis.de> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> Message-ID: <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 10, 2007, at 6:46 PM, Martin v. L?wis wrote: >> "The Python Software Foundation officially supports the current >> stable major release and one prior major release. Currently, Python >> 2.5 and 2.4 are officially supported. > > If you take "officially supported" to mean "there will be further > bugfix > releases", then no: 2.4 is not anymore officially supported. Only 2.5 > is officially supported. There may, of course, be security patches > released for 2.4 if there is a need. Is this draft any better? "The Python Software Foundation officially supports the current stable major release of Python. 
By "supports" we mean that the PSF will produce bug fix releases of this version, currently Python 2.5. We may release patches for earlier versions if necessary, such as to fix security problems, but we generally do not make releases of such unsupported versions. Patch releases of earlier Python versions may be made available through third parties, including OS vendors." - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkTraHEjvBPtnXfVAQLsZAP/eOCIn77Z+wfbdX0ys8N1LWCd242alW/p 6ws9qa6BQ2At5tDBAZtwjR2QX8LKn4zlM2CgaqvVrEjvZ8hgtSDv4hC0jfa0YCHQ bUrNzu3pZfJm62S0SFs753jtcImRFwBMBHHBU47N6GqXOB+mAlMgvAtch3Zakidq 37by4Iefj84= =28A4 -----END PGP SIGNATURE----- From fuzzyman at voidspace.org.uk Sat May 12 00:27:51 2007 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 11 May 2007 23:27:51 +0100 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <464404A7.3000804@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: <4644EDE7.9050208@voidspace.org.uk> Martin v. L?wis wrote: >> This is what prompted my question, actually: in Py3k, in the >> str/unicode unification branch, r"\u1234" changes meaning: before the >> unification, this was an 8-bit string, where the \u was not special, >> but now it is a unicode string, where \u *is* special. >> > > That is true for non-raw strings also: the meaning of "\u1234" also > changes. > > However, traditionally, there was *no* escaping mechanism in raw strings > in Python, and I feel that this is a good principle, because it is > easy to learn (if you leave out the detail that \ can't be the last > character in a raw string - which should get fixed also, IMO). +1 Michael Foord From martin at v.loewis.de Sat May 12 00:48:00 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 00:48:00 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <46442E34.4020909@egenix.com> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> Message-ID: <4644F2A0.2080909@v.loewis.de> > Using double backslashes won't cause that reaction: > > os.stat("c:\\windows\\system32\\user32.dll") Please refer to the subject. We are talking about raw strings. >> Windows path names are one of the two primary applications of raw >> strings (the other being regexes). > > IMHO the primary use case are regexps It's not a matter of opinion. It's a statistical fact that these are the two cases where people use raw strings most. > and for those you'd > definitely want to be able to put Unicode characters into your > expressions. For regular expressions, you don't need them as part of the string literal syntax: The re parser itself could support \u, just like it supports \x today. > BTW, if you use ur"..." for your expressions today (which you should > if you parse text), then nothing will change when removing the > 'u' prefix in Py3k. How do you know? Py3k hasn't been released, yet. 
Regards, Martin From martin at v.loewis.de Sat May 12 00:51:26 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 00:51:26 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4644531E.3090901@egenix.com> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644531E.3090901@egenix.com> Message-ID: <4644F36E.6000708@v.loewis.de> > BTW, there's an easy work-around for this special case: > > os.stat(os.path.join(r"c:\windows\system32", "user32.dll")) No matter what the decision is, there are always work-arounds. The question is what language suits the users most. Being able to specify characters by ordinal IMO has much less value than the a consistent, concise definition of raw strings has. > I think it's nice that you can use forward slashes on Windows - > makes writing code that works in both worlds (Unix and Windows) > a lot easier. But, as Thomas says: you can't. You may be able to do so when using the API directly, however, it fails if you pass the file name in a command line of some tool that takes /foo to mean a command line option "foo". Regards. Martin From martin at v.loewis.de Sat May 12 00:58:36 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 00:58:36 +0200 Subject: [Python-Dev] Official version support statement In-Reply-To: <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> Message-ID: <4644F51C.8070000@v.loewis.de> > "The Python Software Foundation officially supports the current > stable major release of Python. By "supports" we mean that the PSF > will produce bug fix releases of this version, currently Python 2.5. > We may release patches for earlier versions if necessary, such as to > fix security problems, but we generally do not make releases of such > unsupported versions. Patch releases of earlier Python versions may > be made available through third parties, including OS vendors." If such an official statement still can be superseded by an even more official PEP, it's fine with me. However, I would prefer to not use the verb "support" at all. We (the PSF) don't provide any technical support for *any* version ever released: '''PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.''' The more I think about it: no, there is no official support for the current stable release. We will like produce more bug fix releases, but then, we may not if the volunteers doing so lose time or interest, and 2.6 comes out earlier than planned. Why do you need such a statement? 
Regards, Martin From guido at python.org Sat May 12 01:16:26 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 11 May 2007 16:30:33 -0700 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4644F36E.6000708@v.loewis.de> References: <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644531E.3090901@egenix.com> <4644F36E.6000708@v.loewis.de> Message-ID: I think I'm going to break my own rules and ask Martin to write up a PEP. Given the pragmatics that Windows pathnames *are* a common use case, I'm willing to allow the trailing \ in the string. A regular expression containing a quote could be written using triple quotes, e.g. r"""(["'])[^"']*\1""". (A single " in a regular expression can always be rewritten as ["] AFAIK.) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at egenix.com Sat May 12 01:30:52 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Sat, 12 May 2007 01:30:52 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4644F2A0.2080909@v.loewis.de> References: <79990c6b0705101153j582a0ef2k387618c1f86e1aa8@mail.gmail.com> <46436D62.2040700@egenix.com> <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644F2A0.2080909@v.loewis.de> Message-ID: <4644FCAC.30805@egenix.com> On 2007-05-12 00:48, Martin v. Löwis wrote: >> Using double backslashes won't cause that reaction: >> >> os.stat("c:\\windows\\system32\\user32.dll") > > Please refer to the subject. We are talking about raw strings. If you'd leave the context in place, the reason for my suggestion would become evident. >>> Windows path names are one of the two primary applications of raw >>> strings (the other being regexes). >> IMHO the primary use case are regexps > > It's not a matter of opinion. It's a statistical fact that these > are the two cases where people use raw strings most. Ah, statistics :-) It always depends on who you ask: a Windows user will obviously have more use for the raw-string use-case you gave than a Unix user. At the end of the day, I still believe that the regexp use-case is by far more common than the Windows path name one. FWIW: Zope has 2 uses of raw strings for Windows path names (if I counted correctly) and around 100 for regexps. Python itself has maybe 10-20 Windows path name (and registry name) uses of raw strings (in the msi lib and distutils) vs. around 300 uses for regexps. >> and for those you'd >> definitely want to be able to put Unicode characters into your >> expressions. > > For regular expressions, you don't need them as part of the > string literal syntax: The re parser itself could support \u, > just like it supports \x today. True and perhaps that's the right path to follow. You'd still have the problem of writing Windows path names with embedded Unicode characters, but I guess that's something we can fix another day ;-) >> BTW, if you use ur"..." for your expressions today (which you should >> if you parse text), then nothing will change when removing the >> 'u' prefix in Py3k. > > How do you know? Py3k hasn't been released, yet. Sorry, I wasn't clear: if the raw-unicode-escape codec continues to work the way it does now, you won't run into trouble in Py3k.
[and later:] >> BTW, there's an easy work-around for this special case: >> > >> > os.stat(os.path.join(r"c:\windows\system32", "user32.dll")) > > No matter what the decision is, there are always work-arounds. > The question is what language suits the users most. Being > able to specify characters by ordinal IMO has much less value > than the a consistent, concise definition of raw strings has. I wonder how we managed to survive all these years with the existing consistent and concise definition of the raw-unicode-escape codec ;-) There are two options: * no one really uses Unicode raw strings nowadays * none of the existing users has ever stumbled across the "problem case" that triggered all this Both ways, we're discussing a non-issue. >> > I think it's nice that you can use forward slashes on Windows - >> > makes writing code that works in both worlds (Unix and Windows) >> > a lot easier. > > But, as Thomas says: you can't. You may be able to do so > when using the API directly, however, it fails if you > pass the file name in a command line of some tool that > takes /foo to mean a command line option "foo". Strange. I've doing exactly that for years. Maybe it's just because I stick to common os module APIs. So far, I've never run into any problem with it. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 12 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From greg.ewing at canterbury.ac.nz Sat May 12 01:46:36 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 12 May 2007 11:46:36 +1200 Subject: [Python-Dev] New operations in Decimal In-Reply-To: References: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> Message-ID: <4645005C.4060201@canterbury.ac.nz> Terry Reedy wrote: > "Raymond Hettinger" wrote in message > | While I question the sanity of the spec writers in this case, I do trust > that > | overall, they have provided an extremely well thought-out spec, have gone > | through extensive discussion/feedback cycles, and have provided a > thorough > | test-suite. It is as good as it gets. > > I had the same opinion until I saw the logic stuff. The only rationale I can think of for such a thing is that maybe they're trying to accommodate the possibility of a machine built entirely around a hardware implementation of the spec, that doesn't have any other way of doing bitwise logical operations. If that's the case, then Python clearly has no need for it. -- Greg From python at rcn.com Sat May 12 02:11:30 2007 From: python at rcn.com (Raymond Hettinger) Date: Fri, 11 May 2007 20:11:30 -0400 (EDT) Subject: [Python-Dev] New operations in Decimal Message-ID: <20070511201130.BIX91386@ms09.lnh.mail.rcn.net> > The only rationale I can think of for such a thing is > that maybe they're trying to accommodate the possibility > of a machine built entirely around a hardware implementation > of the spec, that doesn't have any other way of doing > bitwise logical operations. If that's the case, then Python > clearly has no need for it. Doesn't matter. 
My intention for the module is to be fully compliant with the spec and all of its tests. Code written in other languages which support the spec should expect to be transferrable to Python and run exactly as they did in the original language. The module itself makes that promise: "this module should be kept in sync with the latest updates of the IBM specification as it evolves. Those updates will be treated as bug fixes (deviation from the spec is considered a compatibility, usability bug)" If I still have any say in the matter, please consider this a pronouncement. Tim, if you're listening, please chime in. Raymond From ncoghlan at gmail.com Sat May 12 02:43:49 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 12 May 2007 10:43:49 +1000 Subject: [Python-Dev] New operations in Decimal In-Reply-To: <4645005C.4060201@canterbury.ac.nz> References: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> <4645005C.4060201@canterbury.ac.nz> Message-ID: <46450DC5.90508@gmail.com> Greg Ewing wrote: > Terry Reedy wrote: >> I had the same opinion until I saw the logic stuff. > > The only rationale I can think of for such a thing is > that maybe they're trying to accommodate the possibility > of a machine built entirely around a hardware implementation > of the spec, that doesn't have any other way of doing > bitwise logical operations. > > If that's the case, then Python clearly has no need > for it. That is my interpretation of the spec as well. I'd prefer to see a decimal.BitSequence subtype added to the module rather than lumping the functionality into the main decimal.Decimal type. The specification itself makes it clear that these operations are not supported for the full range of decimal numbers: "The logical operations (and, invert, or, and xor) take logical operands, which are finite numbers with a sign of 0, an exponent of 0, and a coefficient whose digits must all be either 0 or 1." The footnote attached to the quoted sentence also makes it clear that this is about still being able to do binary logic operations with a purely decimal ALU: "This digit-wise representation of bits in a decimal representation has been used since the 1950s; see, for example, Binary and truth-function operations on a decimal computer with an extract command, William H. Kautz, Communications of the ACM, Vol. 1 #5, pp12-13, ACM Press, May 1958." Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From amcnabb at mcnabbs.org Sat May 12 02:42:54 2007 From: amcnabb at mcnabbs.org (Andrew McNabb) Date: Fri, 11 May 2007 18:42:54 -0600 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4644FCAC.30805@egenix.com> References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644F2A0.2080909@v.loewis.de> <4644FCAC.30805@egenix.com> Message-ID: <20070512004254.GN27728@mcnabbs.org> On Sat, May 12, 2007 at 01:30:52AM +0200, M.-A. Lemburg wrote: > > I wonder how we managed to survive all these years with > the existing consistent and concise definition of the > raw-unicode-escape codec ;-) > > There are two options: > > * no one really uses Unicode raw strings nowadays > > * none of the existing users has ever stumbled across the > "problem case" that triggered all this > > Both ways, we're discussing a non-issue. Sure, it's a non-issue for Python 2.x. 
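[Editor's note: to see what those digit-wise operations actually compute, here is a short illustration. The logical_and/logical_or/logical_xor/logical_invert method names are the ones the decimal module gained once the spec update was implemented; they are not in the 2.5 module this thread starts from, so read it as a sketch of the spec's semantics rather than of the code under discussion.]

    from decimal import Decimal

    # "Logical operands" per the spec: sign 0, exponent 0, every digit 0 or 1.
    a = Decimal("1100")
    b = Decimal("1010")

    print(a.logical_and(b))     # -> 1000  (digit-wise AND)
    print(a.logical_or(b))      # -> 1110
    print(a.logical_xor(b))     # -> 110   (leading zero digits are dropped)
    print(a.logical_invert())   # digit-wise NOT, padded to the context precision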
However, when Python 3 comes along, and all strings are Unicode, there will likely be a lot more users stumbling into the problem case. -- Andrew McNabb http://www.mcnabbs.org/andrew/ PGP Fingerprint: 8A17 B57C 6879 1863 DE55 8012 AB4D 6098 8826 6868 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 186 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070511/62e01acd/attachment.pgp From tonynelson at georgeanelson.com Sat May 12 03:38:27 2007 From: tonynelson at georgeanelson.com (Tony Nelson) Date: Fri, 11 May 2007 21:38:27 -0400 Subject: [Python-Dev] Official version support statement In-Reply-To: <4644F51C.8070000@v.loewis.de> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> Message-ID: At 12:58 AM +0200 5/12/07, Martin v. L?wis wrote: >> "The Python Software Foundation officially supports the current >> stable major release of Python. By "supports" we mean that the PSF >> will produce bug fix releases of this version, currently Python 2.5. >> We may release patches for earlier versions if necessary, such as to >> fix security problems, but we generally do not make releases of such >> unsupported versions. Patch releases of earlier Python versions may >> be made available through third parties, including OS vendors." > >If such an official statement still can be superseded by an even more >official PEP, it's fine with me. > >However, I would prefer to not use the verb "support" at all. We (the >PSF) don't provide any technical support for *any* version ever >released: '''PSF is making Python available to Licensee on an "AS IS" >basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR >IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND >DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS >FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT >INFRINGE ANY THIRD PARTY RIGHTS.''' > >The more I think about it: no, there is no official support for the >current stable release. We will like produce more bug fix releases, >but then, we may not if the volunteers doing so lose time or >interest, and 2.6 comes out earlier than planned. > >Why do you need such a statement? I think Fedora might want it, per recent discussions on fedora-devel-list. My impertinent attempt: "The Python Software Foundation maintains the current stable major release of Python. By "maintains" we mean that the PSF will produce bug fix releases of that version, currently Python 2.5. We have released patches for earlier versions as necessary, such as to fix security problems, but we generally do not make releases of such prior versions. Patched releases of earlier Python versions may be made available through third parties, including OS vendors." 
-- ____________________________________________________________________ TonyN.:' ' From skip at pobox.com Sat May 12 05:35:24 2007 From: skip at pobox.com (skip at pobox.com) Date: Fri, 11 May 2007 22:35:24 -0500 Subject: [Python-Dev] Official version support statement In-Reply-To: References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> Message-ID: <17989.13820.305068.738176@montanaro.dyndns.org> Tony> "The Python Software Foundation maintains the current stable major Tony> release of Python. By "maintains" we mean that the PSF will Tony> produce bug fix releases of that version, currently Python 2.5. Tony> We have released patches for earlier versions as necessary, such Tony> as to fix security problems, but we generally do not make releases Tony> of such prior versions. Patched releases of earlier Python Tony> versions may be made available through third parties, including OS Tony> vendors." Since there is (generally?) an attempt to make one last bug fix release of the previous version after the next major version is released, should that be mentioned? To make it concrete, I believe shortly after 2.5.0 was released the final bug fix release of 2.4 (2.4.4?) was released. Skip From tjreedy at udel.edu Sat May 12 05:59:55 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 11 May 2007 23:59:55 -0400 Subject: [Python-Dev] Official version support statement References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org><4643A0C3.4070408@v.loewis.de><6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org><4644F51C.8070000@v.loewis.de> Message-ID: "Tony Nelson" wrote in message news:p04330101c26ac5b79f21@[192.168.123.162]... At 12:58 AM +0200 5/12/07, Martin v. L?wis wrote: |>However, I would prefer to not use the verb "support" at all. agreed |"The Python Software Foundation maintains the current stable major |release of Python. By "maintains" we mean that the PSF will produce |bug fix releases of that version, currently Python 2.5. This strikes me as an improvement, but 'maintain' is close to 'support' and seems to make a promise that might also have unintended legal consequences. But that is what your legal consel is for. The actuality is that the legal fiction called the PSF *releases* new versions produced by a collection of volunteers, some of whom are PSF members and who perhaps consider that they do their work 'in the name of' PSF, and some of whom are not PSF members and perhaps do not have such a consideration. Defining CPython as a PSF rather than volunteer community product might discourage volunteer contributions. 'Official' statements need both motivation (what is there that is actually broken?) and care (to not break something else). | We have released patches for earlier versions as necessary, such as to fix security problems, Funny thing here is that the security releases, by necessity, are more a PSF product than normal releases. Terry Jan Reedy From tjreedy at udel.edu Sat May 12 06:23:04 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 12 May 2007 00:23:04 -0400 Subject: [Python-Dev] New operations in Decimal References: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> <4645005C.4060201@canterbury.ac.nz> Message-ID: "Greg Ewing" wrote in message news:4645005C.4060201 at canterbury.ac.nz... 
| The only rationale I can think of for such a thing is | that maybe they're trying to accommodate the possibility | of a machine built entirely around a hardware implementation | of the spec, that doesn't have any other way of doing | bitwise logical operations. That is what I meant by 'commercial strategy' in my previous post. The IBM site pages mentioned the possibility of a decimal-based processor back when I read it, which was back when the decimal module was being developed. It would fit both their decades of product history and their name (*business machines*). Nothing nefarious, exactly, .. | If that's the case, then Python clearly has no need for it. but think it should at least be discussed whether to add useless functions while somewhat useful functions are being deleted and other useful functions are being denied entry. Terry Jan Reedy From stephen at xemacs.org Sat May 12 06:14:13 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 12 May 2007 13:14:13 +0900 Subject: [Python-Dev] Official version support statement In-Reply-To: <4644F51C.8070000@v.loewis.de> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> Message-ID: <17989.16149.218750.557144@uwakimon.sk.tsukuba.ac.jp> "Martin v. L?wis" writes: > However, I would prefer to not use the verb "support" at all. We (the > PSF) don't provide any technical support for *any* version ever > released: '''PSF is making Python available to Licensee on an "AS IS" > basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES [...].''' Of course the PSF provides *excellent* technical support; you just don't acknowledge any *obligation* to do it. A declaration of support is not a warranty, of course. So I see no problem with using the word "support". You may wish to clarify with terms like "resources available". > Why do you need such a statement? Because it expresses what IMO *actually happens* clearly, and clarifies the intent of the PSF to continue in the same way. This is useful to users making decisions, even though the PSF owns few to none of the resources needed. The generosity of the contributors and their loyalty to Python and to each other practically guarantees availability. Python can dispose of a raft of bugs present only in the older versions with WONTFIX at release of a new stable version (after double-checking that they don't exist in the stable version). Developers can respond to reports of bugs in the immediate past version with "I'm sorry, but we try to concentrate our limited resources on supporting the current version, and it is unlikely that it will be fixed. Please post to c.l.p for help." Users are disappointed, but it builds trust, and more so if supported by an official statement. From python at rcn.com Sat May 12 06:52:49 2007 From: python at rcn.com (Raymond Hettinger) Date: Fri, 11 May 2007 21:52:49 -0700 Subject: [Python-Dev] New operations in Decimal References: <20070510185924.BIU37629@ms09.lnh.mail.rcn.net> <4645005C.4060201@canterbury.ac.nz> Message-ID: <00a201c79451$65c32c60$f001a8c0@RaymondLaptop1> > The only rationale I can think of for such a thing is > that maybe they're trying to accommodate the possibility > of a machine built entirely around a hardware implementation > of the spec, that doesn't have any other way of doing > bitwise logical operations. Nonsense. The logical operations are there for environments where decimal is the only available numeric type. 
IIRC, this is how Rexx works with decimal implemented around strings where what you type is what you get. Raymond From tim.peters at gmail.com Sat May 12 06:57:25 2007 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 12 May 2007 00:57:25 -0400 Subject: [Python-Dev] New operations in Decimal In-Reply-To: <20070511201130.BIX91386@ms09.lnh.mail.rcn.net> References: <20070511201130.BIX91386@ms09.lnh.mail.rcn.net> Message-ID: <1f7befae0705112157w7dd4ba05we4441b12dc00485@mail.gmail.com> [Raymond Hettinger] > ... > My intention for the module is to be fully compliant with the spec and all of its > tests. Code written in other languages which support the spec should expect > to be transferrable to Python and run exactly as they did in the original language. > > The module itself makes that promise: > > "this module should be kept in sync with the latest updates > of the IBM specification as it evolves. Those updates will > be treated as bug fixes (deviation from the spec is considered > a compatibility, usability bug)" > > If I still have any say in the matter, please consider this a pronouncement. Tim, > if you're listening, please chime in. That was one of the major goals in agreeing to adopt an external standard for decimal: tedious arguments are left to the standard creators instead of clogging python-dev. Glad to see that's working exactly as planned ;-) I'm with Raymond on this one, especially given the triviality of implementing the revised spec's new logical operations. I personally wish they would have added more transcendental functions to the spec instead. That's bread-and-butter stuff for FP applications, while I don't see much use for the new "bit" operations. But if I felt strongly enough about that, I'd direct my concerns to the folks creating this standard. As slippery slopes go, this less than a handful of trivial new operations isn't steep enough to measure, let alone cause a landslide. From anthony at interlink.com.au Sat May 12 07:41:44 2007 From: anthony at interlink.com.au (Anthony Baxter) Date: Sat, 12 May 2007 15:41:44 +1000 Subject: [Python-Dev] Official version support statement In-Reply-To: <17989.13820.305068.738176@montanaro.dyndns.org> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <17989.13820.305068.738176@montanaro.dyndns.org> Message-ID: <200705121541.47317.anthony@interlink.com.au> On Saturday 12 May 2007, skip at pobox.com wrote: > Since there is (generally?) an attempt to make one last bug fix > release of the previous version after the next major version is > released, should that be mentioned? To make it concrete, I > believe shortly after 2.5.0 was released the final bug fix > release of 2.4 (2.4.4?) was released. Correct. Note that this is only something that's been in place while I've been doing it. The current "standard" for how we do releases is just something I made up as I went along, including - regular 6-monthly bugfix releases - only one maintenance branch (most recent) for the bugfix releases - the last bugfix release of the previous release after a new major release. I'm OK with these being formalised - but any additional requirements I'd like to discuss first :-) Anthony -- Anthony Baxter It's never too late to have a happy childhood. 
From martin at v.loewis.de Sat May 12 09:09:48 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 09:09:48 +0200 Subject: [Python-Dev] Official version support statement In-Reply-To: References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> Message-ID: <4645683C.1040505@v.loewis.de> >> Why do you need such a statement? > > I think Fedora might want it, per recent discussions on fedora-devel-list. In that case, I would rather have somebody official of the Fedora list state the request explicitly, than basing it on hearsay. > "The Python Software Foundation maintains the current stable major > release of Python. Ah, maintains is indeed better than supports. That's what we do: we maintain Python. Regards, Martin From stephen at xemacs.org Sat May 12 09:27:39 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 12 May 2007 16:27:39 +0900 Subject: [Python-Dev] Official version support statement In-Reply-To: References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> Message-ID: <17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> Terry Reedy writes: > This strikes me as an improvement, but 'maintain' is close to > 'support' and seems to make a promise that might also have > unintended legal consequences. But that is what your legal consel > is for. Unilateral statements on a web page do not constitute a contract. Implied warrantees *are* a problem, but those are taken care of by the license. (IANAL, etc.) What's left is purely an issue of marketing, ie, to real people a promise is a promise whether legally enforced or not. No matter how weaselly the wording on the website, users expect support and deprecate projects that don't provide it. And in Python practice they get it, and will continue calling it "support". Unless there are real legal consequences, I think it's a good idea for the PSF to define explicitly how the resources it either controls or can elicit from volunteers will be used, to the extent that PSF can do so. And it's best if the words "support" and "maintain" are used, because that's how the users think about it. > 'Official' statements need both motivation (what is there that is > actually broken?) The impression that many people (including python-dev regulars) have that there is a "policy" of "support" for both the current release (2.5) and the (still very widely used) previous release (2.4) is a real problem, and needs to be addressed. From martin at v.loewis.de Sat May 12 09:18:42 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 09:18:42 +0200 Subject: [Python-Dev] Official version support statement In-Reply-To: <17989.16149.218750.557144@uwakimon.sk.tsukuba.ac.jp> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.16149.218750.557144@uwakimon.sk.tsukuba.ac.jp> Message-ID: <46456A52.9080101@v.loewis.de> > Python can dispose of a raft of bugs present only in the older > versions with WONTFIX at release of a new stable version (after > double-checking that they don't exist in the stable version). 
I'm all in favor of formalizing a policy of when Python releases are produced, and what Python releases, and what kinds of changes they may contain. However, such a policy should be addressed primarily to contributors, as a guidance, not to users, as a promise. So I have problems with both "official" and "support" still. > Developers can respond to reports of bugs in the immediate past > version with "I'm sorry, but we try to concentrate our limited > resources on supporting the current version, and it is unlikely that > it will be fixed. Please post to c.l.p for help." Users are > disappointed, but it builds trust, and more so if supported by an > official statement. The way we make policy statements is through the PEP process. This is important, because it involves the community in setting the policy. The PSF board has often explicitly tried to stay out of managing the Python development itself, and has deferred that to python-dev and its readership. I had meant to propose a PEP on maintenance of Python releases for quite some time now; perhaps this is the time to actually write this PEP. Regards, Martin From martin at v.loewis.de Sat May 12 10:29:44 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 10:29:44 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases Message-ID: <46457AF8.7070900@v.loewis.de> This PEP attempts to formalize the existing practice, but goes beyond it in introducing security releases. The addition of security releases addresses various concerns I heard over the last year about Python being short-lived. Those concerns are typically raised by Linux distributors which see that they have to maintain Python releases much longer than python-dev does, and are now concerned about the manpower and Python expertise they need. When looking in detail, they are primarily concerned with security fixes. They will not add new features to old releases, and they can ignore bug fixes, but they cannot ignore security fixes. So what I really think they want is some form of commitment that security fixes are still considered for a much longer period of time. In discussions, people often consider "short-lived" as "shorter than five years". So the PEP proposes to produce security releases for five years after the initial release; this would mean that we are willing to make security releases for 2.3 (until July 2008) and 2.4 (until November 2009). To reduce the work-load, the PEP promises no more than one security release per branch per year (if no security fixes get contributed, no release needs to be made). Notice that this setup significantly relies on *no* bug fixes (other than security fixes) being committed to a branch after the final bugfix release. Addition of bug fixes would require much more extensive testing, with release candidates and everything, so it is essential that there are very very few new patches in each security release. Please let me know what you think. Regards, Martin PEP: XX Title: Maintenance of Python Releases Version: $Revision$ Last-Modified: $Date$ Author: Martin v. L?wis Discussions-To: Status: Draft Type: Process Content-Type: text/x-rst Created: 12-May-2005 Abstract ======== This PEP defines the policy for the maintenance of release branches for Python. Overview ======== The Python core developers maintain two major branches at any point in time: the trunk, which will eventually become the next major release, and the current release branch, from which bug-fix releases are made. 
As a special case, the number of branches may double while Python 2.x and Python 3.x are developed in parallel. Older releases of Python see no regular maintenance; however, security flaws will be fixed in older releases, eventually resulting in a security release. Major Releases ============== Major releases are numbered x.y (despite the .y being commonly named "minor" release in other projects). The release process is described in PEP 101. Major releases should be produced roughly every 18 months, but the actual release frequency may vary significantly. With a major release x.y, this release becomes the current release. For the previous current release (x.(y-1)), a final bug fix release is produced shortly after the x.y release. Bugfix Releases =============== Bug fix releases (numbered x.y.z) are described in PEP 6. As stated there, they may only include bug fixes (no new features), and they are produced typically every six months. Bug fixes should be committed to the release branch as an ongoing activity (e.g. along with fixing them in the trunk), rather than being backported just before the release. After the final bug fix release for a branch has been made, the branch becomes unmaintained except for security fixes. Bug fix patches must not be applied to the branch. Security Releases ================= As users operate Python releases long after the release branch has become unmaintained, they still like to see security flaws fixed. The Python source repository can act as a repository for security fixes even after the branch has become unmaintained. Users wishing to contribute security patches must clearly indicate that a certain patch is a security patch, and to what branches they want to see it applied. Commit messages of security patches on unmaintained branches must indicate that the commit was done for security reasons; an explicit description of potential exploits should be avoided. A security fix must not risk the releasability of the branch, i.e. the maintenance branch should be in a shape to produce a release out of it as-is at all times. From time to time, security releases will be made from an unmaintained branch. A security release will be a source-only release (i.e. no Windows or Macintosh binaries are provided); it will follow the numbering of minor releases (i.e. x.y.z). Security releases should be made at most one year after a security patch has been committed to the branch; users wishing to deploy security patches earlier can safely export the maintenance branch, or otherwise incorporate all committed security fixes into their code base. Security releases should be made for a period of five years after the initial major release. Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From ncoghlan at gmail.com Sat May 12 10:43:57 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 12 May 2007 18:43:57 +1000 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <46457AF8.7070900@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> Message-ID: <46457E4D.20907@gmail.com> Martin v. Löwis wrote: > Please let me know what you think. This appears to be an accurate description of the way releases have been handled for the last few years (which seems to be working well), so +1 here. Regards, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Sat May 12 10:56:12 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 12 May 2007 18:56:12 +1000 Subject: [Python-Dev] New operations in Decimal In-Reply-To: <1f7befae0705112157w7dd4ba05we4441b12dc00485@mail.gmail.com> References: <20070511201130.BIX91386@ms09.lnh.mail.rcn.net> <1f7befae0705112157w7dd4ba05we4441b12dc00485@mail.gmail.com> Message-ID: <4645812C.5000306@gmail.com> Tim Peters wrote: > [Raymond Hettinger] >> ... >> My intention for the module is to be fully compliant with the spec and all of its >> tests. Code written in other languages which support the spec should expect >> to be transferrable to Python and run exactly as they did in the original language. > I'm with Raymond on this one, especially given the triviality of > implementing the revised spec's new logical operations. After thinking about it some more, I'm also supporting maintaining full compliance (and withdrawing my suggestion of using a separate subclass for the logical operands). Be maintaining full compliance, it should be possible for a developer to prototype an algorithm using Python's decimal module and then use that exact same algorithm on any GDS compliant arithmetic logic unit. While *Python* has the luxury of other means of doing logical operations, an embedded algorithm with only a decimal ALU available may not be so fortunate. Regards, Nick. P.S. Spending an hour at work yesterday discussing some of the ways the bus architecture of a digital signal processor can affect algorithm performance may have had more than a little to do with my change of heart ;) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From stephen at xemacs.org Sat May 12 11:32:23 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 12 May 2007 18:32:23 +0900 Subject: [Python-Dev] Official version support statement In-Reply-To: <46456A52.9080101@v.loewis.de> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.16149.218750.557144@uwakimon.sk.tsukuba.ac.jp> <46456A52.9080101@v.loewis.de> Message-ID: <17989.35239.995882.611734@uwakimon.sk.tsukuba.ac.jp> "Martin v. L?wis" writes: > I'm all in favor of formalizing a policy of when Python releases > are produced, and what Python releases, and what kinds of changes > they may contain. However, such a policy should be addressed > primarily to contributors, as a guidance, not to users, as > a promise. So I have problems with both "official" and "support" > still. I see your point, but I don't see how you propose to keep the users from viewing the guidelines to developers as official policy regarding support, albeit hard to interpret. Also, it may just be me, but I don't see an official statement as a "promise". It's a "clarification". '''This is what we're trying to do, so you can make well-informed plans, and not be surprised when you ask for something and we say "but we never thought about doing that, and don't intend to".''' > The way we make policy statements is through the PEP process. Creating the statement that way is important. But publishing a PEP is not enough. Non-developer users don't read PEPs. 
After thinking about it a bit, I do agree that "maintain" is more appropriate than "support" (this is after my reply to Terry Reedy, where I wrote that support was OK). Support implies education and adaptation to user needs, but even if that is done by the PSF, it's a separate activity from the development and release processes. While maintenance does include response to user bug reports as part of the development/release process. From martin at v.loewis.de Sat May 12 12:47:56 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 12 May 2007 12:47:56 +0200 Subject: [Python-Dev] Official version support statement In-Reply-To: <17989.35239.995882.611734@uwakimon.sk.tsukuba.ac.jp> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.16149.218750.557144@uwakimon.sk.tsukuba.ac.jp> <46456A52.9080101@v.loewis.de> <17989.35239.995882.611734@uwakimon.sk.tsukuba.ac.jp> Message-ID: <46459B5C.9040406@v.loewis.de> > > I'm all in favor of formalizing a policy of when Python releases > > are produced, and what Python releases, and what kinds of changes > > they may contain. However, such a policy should be addressed > > primarily to contributors, as a guidance, not to users, as > > a promise. So I have problems with both "official" and "support" > > still. > > I see your point, but I don't see how you propose to keep the users > from viewing the guidelines to developers as official policy regarding > support, albeit hard to interpret. And that's fine if they do. I don't mind if a statement is considered official if it is - a BDFL pronouncement, or - the result of a PSF board or members vote Otherwise, it isn't "official". There are other "officers" which can make official statements, e.g. the release manager can also make official statements, but anybody else's statement is just an opinion. > > The way we make policy statements is through the PEP process. > > Creating the statement that way is important. But publishing a PEP is > not enough. Non-developer users don't read PEPs. Right. It's fine to rephrase (para-phrase?) the consensus achieved in a PEP. However, that rephrasing cannot precede the PEP. > After thinking about it a bit, I do agree that "maintain" is more > appropriate than "support" (this is after my reply to Terry Reedy, > where I wrote that support was OK). Support implies education and > adaptation to user needs, but even if that is done by the PSF, it's a > separate activity from the development and release processes. That was exactly my concern about "support". I associate with "support" that there is a hotline I can call and they will help me. I've used various support infrastructures in the past years (from Microsoft, Dell, Veritas/Symantec), and in all cases, "support" meant that somebody would help me with a specific problem. "Unsupported product" then means "if you have a problem with that product, we won't help". There is good and bad support, of course, and I know which companies provided me good support and which didn't. There are indeed various support channels for python: comp.lang.python, python-tutors, and python-help, and none of them have the notion of "unsupported Python releases". Thing become unsupported by no volunteer being willing to offer help. 
It's also important to understand that the bug tracker is *not* a means of user support, even though users sometimes mistake it to be so, and end their "bug report" with a call for help. It's vice versa: a bug report is a *contribution* by the user, i.e. a means for giving a gift, not for requesting one. Regards, Martin From stephen at xemacs.org Sat May 12 15:02:53 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 12 May 2007 22:02:53 +0900 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <46457AF8.7070900@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> Message-ID: <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> "Martin v. L?wis" writes: > A security fix must not risk the releasability of the branch, i.e. the > maintenance branch should be in a shape to produce a release out of it > as-is at all times. [...] > Security releases should be made at most one year after a security > patch has been committed to the branch; users wishing to deploy > security patches earlier can safely export the maintenance branch, or > otherwise incorporate all committed security fixes into their code > base. Security releases should be made for a period of five years > after the initial major release. I don't understand the point of a "security release" made up to a year after commit, especially in view of the first quoted paragraph. A commit may not be made without confirming *immediate* releasability. Isn't that the painful part of a release? If so, shouldn't an immediate release should be made, and not be that much burden? (At least in my practice, all that's left is an announcement -- which is going to be about 2 lines of boilerplate, since detailed explanations are prohibited -- and rolling tarballs.) If rolling tarballs etc is considered a burden, a "tag release" could be made. OS distributors are going to import into a vendor branch anyway, what they want is python-dev's certification that it's been checked and (as much as possible given the urgency of a security patch) is safe to apply to their package trees. From barry at python.org Sat May 12 16:57:30 2007 From: barry at python.org (Barry Warsaw) Date: Sat, 12 May 2007 10:57:30 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <46457AF8.7070900@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> Message-ID: <4E57177E-FDDE-45A3-AFB4-0E586BB4974E@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 12, 2007, at 4:29 AM, Martin v. L?wis wrote: > This PEP attempts to formalize the existing practice, but goes beyond > it in introducing security releases. The addition of security releases > addresses various concerns I heard over the last year about Python > being short-lived. Those concerns are typically raised by Linux > distributors which see that they have to maintain Python releases > much longer than python-dev does, and are now concerned about the > manpower and Python expertise they need. Martin, I like this PEP; it addresses the issues I was trying to get at with my initial posting[1]. Stephen brings up some interesting points which I'll comment on in a follow up to his post. Since one of the major focuses of this PEP is security releases, I wonder if we shouldn't mention that security issues should be reported to security at python dot org instead of public forums or trackers, so that the Python Security Response Team can take the appropriate and responsible actions? 
- -Barry [1] I still think we should craft some text for the website, but it can now be as simple as: "For the policy on Python version maintenance and release, see PEP XXX." -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) iD8DBQFGRdXb2YZpQepbvXERAudHAKCxlTXyO15aRS0GypVKbP0U/y3bCACfVrX6 2TcbU5/oe7GiIwhesRsT45g= =dcr9 -----END PGP SIGNATURE----- From barry at python.org Sat May 12 17:26:14 2007 From: barry at python.org (Barry Warsaw) Date: Sat, 12 May 2007 11:26:14 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 12, 2007, at 9:02 AM, Stephen J. Turnbull wrote: > I don't understand the point of a "security release" made up to a year > after commit, especially in view of the first quoted paragraph. A > commit may not be made without confirming *immediate* releasability. > Isn't that the painful part of a release? If so, shouldn't an > immediate release should be made, and not be that much burden? (At > least in my practice, all that's left is an announcement -- which is > going to be about 2 lines of boilerplate, since detailed explanations > are prohibited -- and rolling tarballs.) Security releases should be coordinated with the Python Security Response Team (security at python dot org). There are legitimate reasons for wanting to coordinate security releases with this team, such as to ensure adequate and responsible reporting to vendors and other security organizations. Once a set of patches have been generated and (after an embargo period) committed to the public repository, I think we should indeed make a release fairly quickly. > If rolling tarballs etc is considered a burden, a "tag release" could > be made. OS distributors are going to import into a vendor branch > anyway, what they want is python-dev's certification that it's been > checked and (as much as possible given the urgency of a security > patch) is safe to apply to their package trees. I don't think rolling out tarballs is all that much additional burden once everything else is said and done, so I think we should do it. I don't want to give Anthony more work than he wants to do, but I feel confident we can find volunteers to roll out the tarballs if necessary. I would certainly offer to do so. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) iD8DBQFGRdyX2YZpQepbvXERAm8RAJ9GhDaT6UKTY8YCLKRUPV75Nb0IgQCcCm38 O9/TyXRgB1sR8T97PhqxZ2I= =wA9j -----END PGP SIGNATURE----- From tjreedy at udel.edu Sat May 12 21:18:10 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 12 May 2007 15:18:10 -0400 Subject: [Python-Dev] Official version support statement References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org><4643A0C3.4070408@v.loewis.de><6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org><4644F51C.8070000@v.loewis.de> <17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> Message-ID: "Stephen J. Turnbull" wrote in message news:17989.27755.568352.465481 at uwakimon.sk.tsukuba.ac.jp... | The impression that many people (including python-dev regulars) have | that there is a "policy" of "support" for both the current release | (2.5) and the (still very widely used) previous release (2.4) is a | real problem, and needs to be addressed. I agree that such mis-understanding should be addressed. 
So I now think a paragraph summarizing Martin's info PEP, ending with "For details, see PEPxxx.", would be a good idea. tjr From status at bugs.python.org Sun May 13 02:00:49 2007 From: status at bugs.python.org (Tracker) Date: Sun, 13 May 2007 00:00:49 +0000 (UTC) Subject: [Python-Dev] Summary of Tracker Issues Message-ID: <20070513000049.79ECB78236@psf.upfronthosting.co.za> ACTIVITY SUMMARY (05/06/07 - 05/13/07) Tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 1650 open ( +0) / 8584 closed ( +0) / 10234 total ( +0) Average duration of open issues: 791 days. Median duration of open issues: 743 days. Open Issues Breakdown open 1650 ( +0) pending 0 ( +0) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070513/d98f595d/attachment.htm From stephen at xemacs.org Sun May 13 02:42:19 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 13 May 2007 09:42:19 +0900 Subject: [Python-Dev] Official version support statement In-Reply-To: References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> Message-ID: <17990.24299.900967.926771@uwakimon.sk.tsukuba.ac.jp> Terry Reedy writes: > "Stephen J. Turnbull" wrote in message > news:17989.27755.568352.465481 at uwakimon.sk.tsukuba.ac.jp... > | The impression that many people (including python-dev regulars) have > | that there is a "policy" of "support" for both the current release > | (2.5) and the (still very widely used) previous release (2.4) is a > | real problem, and needs to be addressed. > I agree that such mis-understanding should be addressed. So I now think a > paragraph summarizing Martin's info PEP, ending with "For details, see > PEPxxx.", would be a good idea. FWIW, after Martin's explanation, and considering the annoyance of keeping updates sync'ed (can PEPs be amended after acceptance, or only superseded by a new PEP, like IETF RFCs?), I tend to support Barry's suggestion of a brief listing of current releases and next planned, and "Python policy concerning release planning is defined by [the current version of] PEPxxx", with a link. From guido at python.org Sun May 13 02:36:34 2007 From: guido at python.org (Guido van Rossum) Date: Sat, 12 May 2007 17:36:34 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070513000049.79ECB78236@psf.upfronthosting.co.za> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> Message-ID: On 5/12/07, Tracker wrote: [...] I clicked on the tracker link out of curiosity noticed that the tracker has been spammed -- issues 1028, 1029 and 1030 are all spam (1028 seems a test by the spammer). These issues should be deleted and their creator's accounts disabled. BTW What's the hold-up for making roundup live? I'm sick of sf. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Sun May 13 04:15:08 2007 From: brett at python.org (Brett Cannon) Date: Sat, 12 May 2007 19:15:08 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> Message-ID: On 5/12/07, Guido van Rossum wrote: > > On 5/12/07, Tracker wrote: > [...] 
> > I clicked on the tracker link out of curiosity noticed that the > tracker has been spammed -- issues 1028, 1029 and 1030 are all spam > (1028 seems a test by the spammer). We know. Skip is working on something for this. These issues should be deleted and their creator's accounts disabled. The user accounts will be created from scratch when we do the actual transition. Plus all of the existing tracker items will be wiped. BTW What's the hold-up for making roundup live? I'm sick of sf. Well, the tracker you are sick of is holding us up. The data dump that we were depending on stopped working properly last month. We are trying to bug them to fix it as they don't see any issues with it while all of us cannot get a complete dump. -Brett -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070512/376d5cc0/attachment.htm From tjreedy at udel.edu Sun May 13 04:39:57 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 12 May 2007 22:39:57 -0400 Subject: [Python-Dev] Official version support statement References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org><4643A0C3.4070408@v.loewis.de><6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org><4644F51C.8070000@v.loewis.de><17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> <17990.24299.900967.926771@uwakimon.sk.tsukuba.ac.jp> Message-ID: "Stephen J. Turnbull" wrote in message news:17990.24299.900967.926771 at uwakimon.sk.tsukuba.ac.jp... | FWIW, after Martin's explanation, and considering the annoyance of | keeping updates sync'ed (can PEPs be amended after acceptance, or only | superseded by a new PEP, like IETF RFCs?), Informational PEPs often get updated (starting with PEP 1!) From martin at v.loewis.de Sun May 13 09:21:08 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 13 May 2007 09:21:08 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> Message-ID: <4646BC64.4030801@v.loewis.de> > I clicked on the tracker link out of curiosity noticed that the > tracker has been spammed -- issues 1028, 1029 and 1030 are all spam > (1028 seems a test by the spammer). > > These issues should be deleted and their creator's accounts disabled. (Notice that the spammer hasn't been as successful as he thinks - the spam downloads as plain text, not as HTML as he had hoped). That's actually an issue that will like require continuous volunteer efforts. Unless an automated spam filtering materializes (which may or may not happen), people will need to clean out spam manually. It's not that easy for a spammer to submit the spam: we require a registration with an email roundtrip - which used to be sufficient, but isn't anymore, as the spammers now have email accounts which they use for signing up. We have some machinery to detect spambots performing registration, and that already filters out a lot attempts (at least the spam frequency went down when this got activated), but some spammers get still past it. Now it's up to volunteers to do ongoing spam clearing, and we don't have that much volunteers. 
I think a single-click button "Spammer" should allow committers to lock an account and hide all messages and files that he sent, but that still requires somebody to implement it. Regards, Martin From mal at egenix.com Sun May 13 14:25:01 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Sun, 13 May 2007 14:25:01 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <20070512004254.GN27728@mcnabbs.org> References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644F2A0.2080909@v.loewis.de> <4644FCAC.30805@egenix.com> <20070512004254.GN27728@mcnabbs.org> Message-ID: <4647039D.1050708@egenix.com> On 2007-05-12 02:42, Andrew McNabb wrote: > On Sat, May 12, 2007 at 01:30:52AM +0200, M.-A. Lemburg wrote: >> I wonder how we managed to survive all these years with >> the existing consistent and concise definition of the >> raw-unicode-escape codec ;-) >> >> There are two options: >> >> * no one really uses Unicode raw strings nowadays >> >> * none of the existing users has ever stumbled across the >> "problem case" that triggered all this >> >> Both ways, we're discussing a non-issue. > > > Sure, it's a non-issue for Python 2.x. However, when Python 3 comes > along, and all strings are Unicode, there will likely be a lot more > users stumbling into the problem case. In the first case, changing the codec won't affect much code when ported to Py3k. In the second case, a change to the codec is not necessary. Please also consider the following: * without the Unicode escapes, the only way to put non-ASCII code points into a raw Unicode string is via a source code encoding of say UTF-8 or UTF-16, pretty much defeating the original requirement of writing ASCII code only * non-ASCII code points in text are not uncommon, they occur in most European scripts, all Asian scripts, many scientific texts and in also texts meant for the web (just have a look at the HTML entities, or think of Word exports using quotes) * adding Unicode escapes to the re module will break code already using "...\u..." in the regular expressions for other purposes; writing conversion tools that detect this usage is going to be hard * OTOH, writing conversion tools that simply work on string literals in general is easy Thanks, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 13 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From tomerfiliba at gmail.com Sun May 13 16:56:15 2007 From: tomerfiliba at gmail.com (tomer filiba) Date: Sun, 13 May 2007 16:56:15 +0200 Subject: [Python-Dev] generators and with Message-ID: <1d85506f0705130756j396a197bhf2f694d75ba06101@mail.gmail.com> why not add __enter__ and __exit__ to generator objects? it's really a trivial addition: __enter__ returns self, __exit__ calls close(). it would be used to ensure close() is called when the generator is disposed, instead of doing that manually. 
typical usage would be: with mygenerator() as g: g.next() bar = g.send("foo") -tomer -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070513/df02d804/attachment.html From dustin at v.igoro.us Sun May 13 17:33:45 2007 From: dustin at v.igoro.us (dustin at v.igoro.us) Date: Sun, 13 May 2007 10:33:45 -0500 Subject: [Python-Dev] generators and with In-Reply-To: <1d85506f0705130756j396a197bhf2f694d75ba06101@mail.gmail.com> References: <1d85506f0705130756j396a197bhf2f694d75ba06101@mail.gmail.com> Message-ID: <20070513153345.GB29172@v.igoro.us> On Sun, May 13, 2007 at 04:56:15PM +0200, tomer filiba wrote: > why not add __enter__ and __exit__ to generator objects? > it's really a trivial addition: __enter__ returns self, __exit__ calls > close(). > it would be used to ensure close() is called when the generator is > disposed, > instead of doing that manually. typical usage would be: > with mygenerator() as g: > g.next() > bar = g.send("foo") > -tomer A better example may help to make your case. Would this do? with mygeneratorfn() as g: x = get_datum() while g.send(x): x = get_next(x) The idea then is that you can't just use a 'for' loop (which will call close() itself, IIRC) because you want access to the generator itself, not just the return values from g.next(). I wouldn't have a problem with this proposal, but I consider the snippet above to be fairly obscure Python already; the requirement to call g.close() is not a great burden on someone capable of using g.send() et al. Dustin From martin at v.loewis.de Sun May 13 18:04:44 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 13 May 2007 18:04:44 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4647039D.1050708@egenix.com> References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644F2A0.2080909@v.loewis.de> <4644FCAC.30805@egenix.com> <20070512004254.GN27728@mcnabbs.org> <4647039D.1050708@egenix.com> Message-ID: <4647371C.8000600@v.loewis.de> > * without the Unicode escapes, the only way to put non-ASCII > code points into a raw Unicode string is via a source code encoding > of say UTF-8 or UTF-16, pretty much defeating the original > requirement of writing ASCII code only That's no problem, though - just don't put the Unicode character into a raw string. Use plain strings if you have a need to include Unicode characters, and are not willing to leave ASCII. For Python 3, the default source encoding is UTF-8, so it is much easier to use non-ASCII characters in the source code. The original requirement may not be as strong anymore as it used to be. > * non-ASCII code points in text are not uncommon, they occur > in most European scripts, all Asian scripts, > many scientific texts and in also texts meant for the web > (just have a look at the HTML entities, or think of Word > exports using quotes) And you are seriously telling me that people who commonly use non-ASCII code points in their source code are willing to refer to them by Unicode ordinal number (which, of course, they all know by heart, from 1 to 65536)? > * adding Unicode escapes to the re module will break code > already using "...\u..." 
in the regular expressions for > other purposes; writing conversion tools that detect this > usage is going to be hard It's unlikely to occur in code today - \u just means the same as u (so \u1234 matches u1234); if you want a backslash followed by u in your regular expression, you should write \\u. It would be possible to future-warn about \u in 2.6, catching these cases. Authors then would either have to remove the backslash, or duplicate it, depending on what they want to express. Regards, Martin From duncan.booth at suttoncourtenay.org.uk Sun May 13 18:47:17 2007 From: duncan.booth at suttoncourtenay.org.uk (Duncan Booth) Date: Sun, 13 May 2007 11:47:17 -0500 Subject: [Python-Dev] generators and with References: <1d85506f0705130756j396a197bhf2f694d75ba06101@mail.gmail.com> Message-ID: "tomer filiba" wrote in news:1d85506f0705130756j396a197bhf2f694d75ba06101 at mail.gmail.com: > why not add __enter__ and __exit__ to generator objects? > it's really a trivial addition: __enter__ returns self, __exit__ calls > close(). > it would be used to ensure close() is called when the generator is > disposed, instead of doing that manually. typical usage would be: > > with mygenerator() as g: > g.next() > bar = g.send("foo") > You can already ensure that the close() method is called quite easily: with contextlib.closing(mygenerator()) as g: g.next() bar = g.send("foo") From guido at python.org Sun May 13 21:18:30 2007 From: guido at python.org (Guido van Rossum) Date: Sun, 13 May 2007 12:18:30 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <4646BC64.4030801@v.loewis.de> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> Message-ID: On 5/13/07, "Martin v. L?wis" wrote: > Now it's up to volunteers to do ongoing spam clearing, and we don't > have that much volunteers. I think a single-click button "Spammer" > should allow committers to lock an account and hide all messages > and files that he sent, but that still requires somebody to implement > it. I'd expect that to be pretty effective -- like graffiti artists, spammers want their work to be seen, and a site that quickly removes them will not be worth the effort for them. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Sun May 13 21:19:37 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 13 May 2007 21:19:37 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <464764C9.8090707@v.loewis.de> > I don't understand the point of a "security release" made up to a year > after commit, especially in view of the first quoted paragraph. The objective is to reduce load for the release manager. Any kind of release that is worth anything takes several hours to produce, in my experience (if it could be made completely automatic, it wouldn't be good, since glitches would not be detected). I would like get Anthony's opinion on this aspect. > A > commit may not be made without confirming *immediate* releasability. > Isn't that the painful part of a release? If so, shouldn't an > immediate release should be made, and not be that much burden? (At > least in my practice, all that's left is an announcement -- which is > going to be about 2 lines of boilerplate, since detailed explanations > are prohibited -- and rolling tarballs.) See PEP 101. 
A release involves many more steps, and it's not clear whether a release candidate could be skipped. I think we would need to restrict the total number of releases made per year. The one-year limit may be debatable, and immediate releases might be possible, as long as there is some provision that releases are not made at a too-high rate. > If rolling tarballs etc is considered a burden, a "tag release" could > be made. OS distributors are going to import into a vendor branch > anyway, what they want is python-dev's certification that it's been > checked and (as much as possible given the urgency of a security > patch) is safe to apply to their package trees. I think OS distributors typically *do* use official tar balls, even if they import them as a vendor branch somewhere. Also, creating the tar ball is not the only activity: creating a new web page on pydotorg, running the test suite, etc. all is still necessary. In any case, the patch gets certified by being committed (with the indication that it is a security fix), so if they want certified patches, they can just import the maintenance branch. Regards, Martin From joeedh at gmail.com Sun May 13 21:59:06 2007 From: joeedh at gmail.com (Joe Eagar) Date: Sun, 13 May 2007 12:59:06 -0700 Subject: [Python-Dev] Strange behaviour with PyEval_EvalCode Message-ID: <46476E0A.7050603@gmail.com> Hi I'm getting extremely odd behavior. First of all, why isn't PyEval_EvalCode documented anywhere? Anyway, I'm working on blender's python integration (it embeds python, as opposed to python embedding it). I have a function that executes a string buffer of python code, fetches a function from its global dictionary then calls it. When the function code returns a local variable, PyObject_Call() appears to be returning garbage. Strangely this is only happening with internal blender types, yet try however I might I can't find any refcounting errors to account for this. The initial implementation used the same dictionary for the global and local dicts. I tried using separate dicts, but then the function wasn't being called at all (or at least I tested it by putting a "print "bleh"" in there, and it didn't work). Also, when I printed the refcount of the garbage data, it was garbage as well (so the entire piece of memory is bad, not just the data portion). I've tested with both python 2.4 and 2.5. Mostly with 2.4. This bug may be cropping up in other experimental blender python code as well. Here's the code in the string buffer: #BPYCONSTRAINT from Blender import * from Blender.Mathutils import * print "d" def doConstraint(inmat, tarmat, prop): a = Matrix() a.identity() a = a * TranslationMatrix(Vector(0, 0, 0)) print "t" a = tarmat return inmat print doConstraint(Matrix(), Matrix(), 0) Here's the code that executes the string buffer: PyObject *RunPython2( Text * text, PyObject * globaldict, PyObject *localdict ) { char *buf = NULL; /* The script text is compiled to Python bytecode and saved at text->compiled * to speed-up execution if the user executes the script multiple times */ if( !text->compiled ) { // if it wasn't already compiled, do it now buf = txt_to_buf( text ); text->compiled = Py_CompileString( buf, GetName( text ), Py_file_input ); MEM_freeN( buf ); if( PyErr_Occurred( ) ) { BPY_free_compiled_text( text ); return NULL; } } return PyEval_EvalCode( text->compiled, globaldict, localdict ); } . . 
.and heres the (rather long, and somewhat in a debugging state) function that calls the function in the script's global dictionary: void BPY_pyconstraint_eval(bPythonConstraint *con, float obmat[][4], short ownertype, void *ownerdata, float targetmat[][4]) { PyObject *srcmat, *tarmat, *idprop; PyObject *globals, *locals; PyObject *gkey, *gval; PyObject *retval; MatrixObject *retmat; Py_ssize_t ppos = 0; int row, col; if ( !con->text ) return; globals = CreateGlobalDictionary(); srcmat = newMatrixObject( (float*)obmat, 4, 4, Py_NEW ); tarmat = newMatrixObject( (float*)targetmat, 4, 4, Py_NEW ); idprop = BPy_Wrap_IDProperty( NULL, &con->prop, NULL); /* since I can't remember what the armature weakrefs do, I'll just leave this here commented out. Since this function was based on pydrivers. if( !setup_armature_weakrefs()){ fprintf( stderr, "Oops - weakref dict setup\n"); return result; } */ retval = RunPython2( con->text, globals, globals); if (retval) {Py_XDECREF( retval );} if ( retval == NULL ) { BPY_Err_Handle(con->text->id.name); ReleaseGlobalDictionary( globals ); /*free temp objects*/ Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); return; } /*Now for the fun part! Try and find the functions we need.*/ while ( PyDict_Next(globals, &ppos, &gkey, &gval) ) { if ( PyString_Check(gkey) && strcmp(PyString_AsString(gkey), "doConstraint")==0 ) { if (PyFunction_Check(gval) ) { retval = PyObject_CallObject(gval, Py_BuildValue("OOO", srcmat, tarmat, idprop)); Py_XDECREF( retval ); } else { printf("ERROR: doConstraint is supposed to be a function!\n"); } break; } } if (!retval) { BPY_Err_Handle(con->text->id.name); /*free temp objects*/ ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); return; } if (!PyObject_TypeCheck(retval, &matrix_Type)) { printf("Error in pyconstraint: Wrong return type for a pyconstraint!\n"); ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); Py_XDECREF( retval ); return; } retmat = (MatrixObject*) retval; if (retmat->rowSize != 4 || retmat->colSize != 4) { printf("Error in pyconstraint: Matrix is the wrong size!\n"); ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); Py_XDECREF( retval ); return; } //this is the reverse of code taken from newMatrix(). for(row = 0; row < 4; row++) { for(col = 0; col < 4; col++) { if (retmat->wrapped) obmat[row][col] = retmat->data.blend_data[row*4+col]; //[row][col]; else obmat[row][col] = retmat->data.py_data[row*4+col]; //[row][col]; } } /*clear globals*/ //ReleaseGlobalDictionary( globals ); /*free temp objects*/ //Py_XDECREF( idprop ); //Py_XDECREF( srcmat ); //Py_XDECREF( tarmat ); //Py_XDECREF( retval ); //PyDict_Clear(locals); //Py_XDECREF(locals); } Joe From aahz at pythoncraft.com Sun May 13 22:13:18 2007 From: aahz at pythoncraft.com (Aahz) Date: Sun, 13 May 2007 13:13:18 -0700 Subject: [Python-Dev] Strange behaviour with PyEval_EvalCode In-Reply-To: <46476E0A.7050603@gmail.com> References: <46476E0A.7050603@gmail.com> Message-ID: <20070513201318.GA29706@panix.com> On Sun, May 13, 2007, Joe Eagar wrote: > > Hi I'm getting extremely odd behavior. First of all, why isn't > PyEval_EvalCode documented anywhere? Anyway, I'm working on blender's > python integration (it embeds python, as opposed to python embedding > it). I have a function that executes a string buffer of python code, > fetches a function from its global dictionary then calls it. 
python-dev is probably not the best place to discuss this -- python-dev is primarily for discussions of *future* versions of Python. You would probably get better resuts posting to comp.lang.python. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From tjreedy at udel.edu Sun May 13 22:34:38 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 13 May 2007 16:34:38 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases References: <46457AF8.7070900@v.loewis.de><87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> Message-ID: ""Martin v. L?wis"" wrote in message news:464764C9.8090707 at v.loewis.de... |> I don't understand the point of a "security release" made up to a year | > after commit, especially in view of the first quoted paragraph. A security release is presumably a response to a serious problem. | I think we would need to restrict the total number of releases | made per year. The one-year limit may be debatable, and immediate | releases might be possible, as long as there is some provision | that releases are not made at a too-high rate. I would agree, but... has there been more that the one security release that I know about? tjr From tjreedy at udel.edu Sun May 13 22:46:30 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 13 May 2007 16:46:30 -0400 Subject: [Python-Dev] Strange behaviour with PyEval_EvalCode References: <46476E0A.7050603@gmail.com> Message-ID: "Joe Eagar" wrote in message news:46476E0A.7050603 at gmail.com... | why isn't PyEval_EvalCode documented anywhere? Most likely just overlooked and never reported. See Appendix A of the Python/C API http://docs.python.org/api/reporting-bugs.html If you make a report, please provide a link to the page where you think an entry should best go and, if you have figured it out, a draft entry. (plain text, no markup needed). For the rest, see Aahz's post. tjr From mal at egenix.com Sun May 13 22:54:48 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Sun, 13 May 2007 22:54:48 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4647371C.8000600@v.loewis.de> References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644F2A0.2080909@v.loewis.de> <4644FCAC.30805@egenix.com> <20070512004254.GN27728@mcnabbs.org> <4647039D.1050708@egenix.com> <4647371C.8000600@v.loewis.de> Message-ID: <46477B18.2010808@egenix.com> On 2007-05-13 18:04, Martin v. L?wis wrote: >> * without the Unicode escapes, the only way to put non-ASCII >> code points into a raw Unicode string is via a source code encoding >> of say UTF-8 or UTF-16, pretty much defeating the original >> requirement of writing ASCII code only > > That's no problem, though - just don't put the Unicode character > into a raw string. Use plain strings if you have a need to include > Unicode characters, and are not willing to leave ASCII. > > For Python 3, the default source encoding is UTF-8, so it is > much easier to use non-ASCII characters in the source code. > The original requirement may not be as strong anymore as it > used to be. You can do that today: Just put the "# coding: utf-8" marker at the top of the file. However, in some cases, your editor may not be capable of displaying or letting you enter the Unicode text you have in mind. 
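For illustration only (an editorial sketch, not part of the original message): a minimal
Python 2.x example of the two alternatives just described; the variable names and literal
values are invented for the example.

    # -*- coding: utf-8 -*-

    # With a source encoding declaration, the non-ASCII character can be
    # written directly -- provided the editor can display and enter it:
    cafe_direct = u"café"

    # In an ASCII-only source file the same code point has to be spelled
    # as an escape; in Python 2.x the \u escape is interpreted even
    # inside a *raw* unicode literal:
    cafe_escaped = ur"caf\u00e9"

    assert cafe_direct == cafe_escaped == u"caf\xe9"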
In other cases, there may be a corporate coding standard in place that prohibits using non-ASCII text in source code, or fixes the encoding to e.g. Latin-1. In all those cases, it's necessary to be able to enter the Unicode code points which do cannot be used in the source code using other means and the easiest way to do this is by using Unicode escapes. >> * non-ASCII code points in text are not uncommon, they occur >> in most European scripts, all Asian scripts, >> many scientific texts and in also texts meant for the web >> (just have a look at the HTML entities, or think of Word >> exports using quotes) > > And you are seriously telling me that people who commonly > use non-ASCII code points in their source code are willing > to refer to them by Unicode ordinal number (which, of course, > they all know by heart, from 1 to 65536)? No, I'm not. I'm saying that non-ASCII code points are in common use and (together with the above bullet) that there are situations where you can't put the relevant code point directly into your source code. Using Unicode escapes for these will always be a cludge, but it's still better than not being able to enter the code points at all. >> * adding Unicode escapes to the re module will break code >> already using "...\u..." in the regular expressions for >> other purposes; writing conversion tools that detect this >> usage is going to be hard > > It's unlikely to occur in code today - \u just means the same > as u (so \u1234 matches u1234); if you want a backslash > followed by u in your regular expression, you should write > \\u. > > It would be possible to future-warn about \u in 2.6, catching > these cases. Authors then would either have to remove the > backslash, or duplicate it, depending on what they want to > express. Good idea. The re module would then have to implement the same escaping scheme as the raw-unicode-escape code (only an odd number of backslashes causes the escaping code to trigger). -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 13 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From joeedh at gmail.com Sun May 13 23:07:34 2007 From: joeedh at gmail.com (Joe Eagar) Date: Sun, 13 May 2007 14:07:34 -0700 Subject: [Python-Dev] Strange behaviour with PyEval_EvalCode Message-ID: <46477E16.1050902@gmail.com> Hi I'm getting extremely odd behavior. First of all, why isn't PyEval_EvalCode documented anywhere? Anyway, I'm working on blender's python integration (it embeds python, as opposed to python embedding it). I have a function that executes a string buffer of python code, fetches a function from its global dictionary then calls it. When the function code returns a local variable, PyObject_Call() appears to be returning garbage. Strangely this is only happening with internal blender types, yet try however I might I can't find any refcounting errors to account for this. However adding an extra incref to a blender type creation function did appear to fix things and consistently return an object with reference count 1. 
The initial implementation used the same dictionary for the global and local dicts. I tried using separate dicts, but then the function wasn't being called at all (or at least I tested it by putting a "print "bleh"" in there, and it didn't work). Also, when I printed the refcount of the garbage data, it was garbage as well (so the entire piece of memory is bad, not just the data portion). Even more odd, the refcount of printing returned internal python types (which appears to otherwise work) steadily increase after each function call, despite being supposedly different objects. I'm not decrefing the variable in my debug versions, but nonetheless I would expect each consecutive execution of the function to return new objects with reference 1. I've tested with both python 2.4 and 2.5. Mostly with 2.4. This bug may be cropping up in other experimental blender python code as well. Here's the code in the string buffer: #BPYCONSTRAINT from Blender import * from Blender.Mathutils import * print "d" def doConstraint(inmat, tarmat, prop): a = Matrix() a.identity() a = a * TranslationMatrix(Vector(0, 0, 0)) print "t" a = tarmat return inmat print doConstraint(Matrix(), Matrix(), 0) Here's the code that executes the string buffer: PyObject *RunPython2( Text * text, PyObject * globaldict, PyObject *localdict ) { char *buf = NULL; /* The script text is compiled to Python bytecode and saved at text->compiled * to speed-up execution if the user executes the script multiple times */ if( !text->compiled ) { // if it wasn't already compiled, do it now buf = txt_to_buf( text ); text->compiled = Py_CompileString( buf, GetName( text ), Py_file_input ); MEM_freeN( buf ); if( PyErr_Occurred( ) ) { BPY_free_compiled_text( text ); return NULL; } } return PyEval_EvalCode( text->compiled, globaldict, localdict ); } . . .and heres the (rather long, and somewhat in a debugging state) function that calls the function in the script's global dictionary: void BPY_pyconstraint_eval(bPythonConstraint *con, float obmat[][4], short ownertype, void *ownerdata, float targetmat[][4]) { PyObject *srcmat, *tarmat, *idprop; PyObject *globals, *locals; PyObject *gkey, *gval; PyObject *retval; MatrixObject *retmat; Py_ssize_t ppos = 0; int row, col; if ( !con->text ) return; globals = CreateGlobalDictionary(); srcmat = newMatrixObject( (float*)obmat, 4, 4, Py_NEW ); tarmat = newMatrixObject( (float*)targetmat, 4, 4, Py_NEW ); idprop = BPy_Wrap_IDProperty( NULL, &con->prop, NULL); /* since I can't remember what the armature weakrefs do, I'll just leave this here commented out. Since this function was based on pydrivers. if( !setup_armature_weakrefs()){ fprintf( stderr, "Oops - weakref dict setup\n"); return result; } */ retval = RunPython2( con->text, globals, globals); if (retval) {Py_XDECREF( retval );} if ( retval == NULL ) { BPY_Err_Handle(con->text->id.name); ReleaseGlobalDictionary( globals ); /*free temp objects*/ Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); return; } /*Now for the fun part! 
Try and find the functions we need.*/ while ( PyDict_Next(globals, &ppos, &gkey, &gval) ) { if ( PyString_Check(gkey) && strcmp(PyString_AsString(gkey), "doConstraint")==0 ) { if (PyFunction_Check(gval) ) { retval = PyObject_CallObject(gval, Py_BuildValue("OOO", srcmat, tarmat, idprop)); Py_XDECREF( retval ); } else { printf("ERROR: doConstraint is supposed to be a function!\n"); } break; } } if (!retval) { BPY_Err_Handle(con->text->id.name); /*free temp objects*/ ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); return; } if (!PyObject_TypeCheck(retval, &matrix_Type)) { printf("Error in pyconstraint: Wrong return type for a pyconstraint!\n"); ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); Py_XDECREF( retval ); return; } retmat = (MatrixObject*) retval; if (retmat->rowSize != 4 || retmat->colSize != 4) { printf("Error in pyconstraint: Matrix is the wrong size!\n"); ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); Py_XDECREF( retval ); return; } //this is the reverse of code taken from newMatrix(). for(row = 0; row < 4; row++) { for(col = 0; col < 4; col++) { if (retmat->wrapped) obmat[row][col] = retmat->data.blend_data[row*4+col]; //[row][col]; else obmat[row][col] = retmat->data.py_data[row*4+col]; //[row][col]; } } /*clear globals*/ //ReleaseGlobalDictionary( globals ); /*free temp objects*/ //Py_XDECREF( idprop ); //Py_XDECREF( srcmat ); //Py_XDECREF( tarmat ); //Py_XDECREF( retval ); //PyDict_Clear(locals); //Py_XDECREF(locals); } Joe From joeedh at gmail.com Sun May 13 23:09:56 2007 From: joeedh at gmail.com (Joe Eagar) Date: Sun, 13 May 2007 14:09:56 -0700 Subject: [Python-Dev] Strange behaviour with PyEval_EvalCode In-Reply-To: <46477E16.1050902@gmail.com> References: <46477E16.1050902@gmail.com> Message-ID: <46477EA4.8030505@gmail.com> Joe Eagar wrote: > Hi I'm getting extremely odd behavior. First of all, why isn't > PyEval_EvalCode documented anywhere? Anyway, I'm working on blender's > python integration (it embeds python, as opposed to python embedding > it). I have a function that executes a string buffer of python code, > fetches a function from its global dictionary then calls it. > > Eh sorry about the double post. Joe From kbk at shore.net Mon May 14 02:40:38 2007 From: kbk at shore.net (Kurt B. Kaiser) Date: Sun, 13 May 2007 20:40:38 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200705140040.l4E0ect2013867@bayview.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 362 open ( +2) / 3766 closed ( +6) / 4128 total ( +8) Bugs : 968 open ( -3) / 6692 closed ( +9) / 7660 total ( +6) RFE : 256 open ( -1) / 286 closed ( +4) / 542 total ( +3) New / Reopened Patches ______________________ Fix off-by-one error in Modules/socketmodule.c (2007-05-06) CLOSED http://python.org/sf/1713797 opened by Bryan ?stergaard Patch for PEP 3109 (2007-05-06) http://python.org/sf/1713889 opened by wpy os.linesep needs clarification (2007-05-07) CLOSED http://python.org/sf/1714700 opened by Gabriel Genellina x64 clean compile patch for _ctypes (2007-05-09) http://python.org/sf/1715718 opened by Kristj?n Valur "Really print?" Dialog (2007-05-11) http://python.org/sf/1717170 opened by Tal Einat textView code cleanup (2007-05-13) http://python.org/sf/1718043 opened by Tal Einat PEP 3123 implementation (2007-05-13) http://python.org/sf/1718153 opened by Martin v. 
L?wis Patches Closed ______________ Fix off-by-one error in Modules/socketmodule.c (2007-05-06) http://python.org/sf/1713797 closed by gbrandl make range be xrange (2006-04-18) http://python.org/sf/1472639 closed by nnorwitz xrange that supports longs, etc (2006-08-24) http://python.org/sf/1546078 closed by nnorwitz os.linesep needs clarification (2007-05-08) http://python.org/sf/1714700 closed by gbrandl PEP 3132: extended unpacking (2007-05-02) http://python.org/sf/1711529 closed by gbrandl Bastion and rexec message out-of-date (2007-04-12) http://python.org/sf/1698951 closed by gbrandl New / Reopened Bugs ___________________ smtplib starttls() didn't do TLS Close Notify when quit() (2007-05-07) CLOSED http://python.org/sf/1713993 opened by AndCycle Multiple re.compile flags cause error (2007-05-06) CLOSED http://python.org/sf/1713999 opened by Saul Spatz character set in Japanese on Ubuntu distribution (2007-05-04) http://python.org/sf/1713252 reopened by cgrell Universal line ending mode duplicates all line endings (2007-05-07) CLOSED http://python.org/sf/1714381 opened by Geoffrey Bache subprocess.py problems errors when calling cmd.exe (2007-05-07) http://python.org/sf/1714451 opened by tzellman python throws an error when unpacking bz2 file (2007-05-08) http://python.org/sf/1714773 opened by runnig datetime.date won't accept 08 or 09 as valid days. (2007-05-08) CLOSED http://python.org/sf/1715302 opened by Erik Wickstrom Const(None) in compiler.ast.Return.value (2007-05-09) http://python.org/sf/1715581 opened by Ali Gholami Rudi Destructor behavior faulty (2007-05-12) http://python.org/sf/1717900 opened by Wolf Rogner posixpath and friends have uses that should be documented (2007-05-13) http://python.org/sf/1718017 opened by Gabriel de Perthuis Bugs Closed ___________ smtplib starttls() didn't do TLS Close Notify when quit() (2007-05-07) http://python.org/sf/1713993 deleted by andcycle Multiple re.compile flags cause error (2007-05-07) http://python.org/sf/1713999 closed by gbrandl Universal line ending mode duplicates all line endings (2007-05-07) http://python.org/sf/1714381 closed by gbrandl datetime.date won't accept 08 or 09 as valid days. 
(2007-05-08) http://python.org/sf/1715302 closed by fdrake Segfaults on memory error (2007-04-10) http://python.org/sf/1697916 closed by gbrandl types.InstanceType is missing but used by pydoc (2007-04-10) http://python.org/sf/1697782 closed by gbrandl improving distutils swig support - documentation (2004-10-14) http://python.org/sf/1046945 closed by gbrandl New / Reopened RFE __________________ Expose callback API in readline module (2007-05-06) http://python.org/sf/1713877 opened by strank if something as x: (2007-05-07) http://python.org/sf/1714448 opened by k0wax RFE Closed __________ commands module (2007-05-06) http://python.org/sf/1713624 closed by gbrandl additions to commands lib (2003-11-11) http://python.org/sf/840034 closed by gbrandl A large block of commands after an "if" cannot be (2003-03-28) http://python.org/sf/711268 closed by gbrandl Please add .bz2 in encodings_map (in the module mimetypes) (2007-04-25) http://python.org/sf/1707693 closed by gbrandl From greg.ewing at canterbury.ac.nz Mon May 14 02:55:52 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon, 14 May 2007 12:55:52 +1200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: <4647039D.1050708@egenix.com> References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> <46442E34.4020909@egenix.com> <4644F2A0.2080909@v.loewis.de> <4644FCAC.30805@egenix.com> <20070512004254.GN27728@mcnabbs.org> <4647039D.1050708@egenix.com> Message-ID: <4647B398.2000303@canterbury.ac.nz> M.-A. Lemburg wrote: > * non-ASCII code points in text are not uncommon, they occur > in most European scripts, all Asian scripts, In an Asian script, almost every character is likely to be non-ascii, which is going to be pretty hard to read as a string of unicode escapes. Maybe what we want is a new kind of string literal in which *everything* is a unicode escape. A sufficiently smart editor could then display it using the appropriate characters, yet it could still be dealt with as ascii- only in a pinch. -- Greg From skip at pobox.com Mon May 14 05:16:30 2007 From: skip at pobox.com (skip at pobox.com) Date: Sun, 13 May 2007 22:16:30 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> Message-ID: <17991.54414.768930.125733@montanaro.dyndns.org> >> Now it's up to volunteers to do ongoing spam clearing, and we don't >> have that much volunteers. I think a single-click button "Spammer" >> should allow committers to lock an account and hide all messages and >> files that he sent, but that still requires somebody to implement it. Guido> I'd expect that to be pretty effective -- like graffiti artists, Guido> spammers want their work to be seen, and a site that quickly Guido> removes them will not be worth the effort for them. I'm still (slowly) working on adding SpamBayes to Roundup. I've exchanged one or two messages with Richard Jones. In the meantime (thinking out loud here), would it be possible to keep search engines from seeing a submission or an edit until a trusted person has had a chance to approve it? It should also be possible for trusted users to mark other users as trusted. Trusted users' submissions and edits should not require approval. In a rather short period of time I think you'd settle on a fairly static group of trusted users who are responsible for most changes. 
Only new submissions from previously unknown users would require approval. Skip From martin at v.loewis.de Mon May 14 07:27:14 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 14 May 2007 07:27:14 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <17991.54414.768930.125733@montanaro.dyndns.org> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <17991.54414.768930.125733@montanaro.dyndns.org> Message-ID: <4647F332.3000605@v.loewis.de> > In the meantime (thinking out loud here), would it be possible to keep > search engines from seeing a submission or an edit until a trusted person > has had a chance to approve it? It would be possible, but I would strongly oppose it. A bug tracker where postings need to be approved is just unacceptable. Regards, Martin From tcdelaney at optusnet.com.au Mon May 14 09:23:52 2007 From: tcdelaney at optusnet.com.au (Tim Delaney) Date: Mon, 14 May 2007 17:23:52 +1000 Subject: [Python-Dev] PEP 367: New Super Message-ID: <003001c795f8$d5275060$0201a8c0@mshome.net> Here is my modified version of PEP 367. The reference implementation in it is pretty long, and should probably be split out to somewhere else (esp. since it can't fully implement the semantics). Cheers, Tim Delaney PEP: 367 Title: New Super Version: $Revision$ Last-Modified: $Date$ Author: Calvin Spealman Author: Tim Delaney Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 28-Apr-2007 Python-Version: 2.6 Post-History: 28-Apr-2007, 29-Apr-2007 (1), 29-Apr-2007 (2), 14-May-2007 Abstract ======== This PEP proposes syntactic sugar for use of the ``super`` type to automatically construct instances of the super type binding to the class that a method was defined in, and the instance (or class object for classmethods) that the method is currently acting upon. The premise of the new super usage suggested is as follows:: super.foo(1, 2) to replace the old:: super(Foo, self).foo(1, 2) and the current ``__builtin__.super`` be aliased to ``__builtin__.__super__`` (with ``__builtin__.super`` to be removed in Python 3.0). It is further proposed that assignment to ``super`` become a ``SyntaxError``, similar to the behaviour of ``None``. Rationale ========= The current usage of super requires an explicit passing of both the class and instance it must operate from, requiring a breaking of the DRY (Don't Repeat Yourself) rule. This hinders any change in class name, and is often considered a wart by many. Specification ============= Within the specification section, some special terminology will be used to distinguish similar and closely related concepts. "super type" will refer to the actual builtin type named "super". A "super instance" is simply an instance of the super type, which is associated with a class and possibly with an instance of that class. Because the new ``super`` semantics are not backwards compatible with Python 2.5, the new semantics will require a ``__future__`` import:: from __future__ import new_super The current ``__builtin__.super`` will be aliased to ``__builtin__.__super__``. This will occur regardless of whether the new ``super`` semantics are active. It is not possible to simply rename ``__builtin__.super``, as that would affect modules that do not use the new ``super`` semantics. In Python 3.0 it is proposed that the name ``__builtin__.super`` will be removed. 
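For concreteness, the following small, runnable sketch (an illustration added here, not text from the PEP; the classes mirror the test classes used later in the reference implementation) shows the call the proposal replaces, with the proposed spelling in a comment::

    # Python 2.x today: the defining class must be repeated inside the method.
    class A(object):
        def f(self):
            return 'A'

    class B(A):
        def f(self):
            return 'B' + super(B, self).f()

    assert B().f() == 'BA'

    # Under this PEP (after 'from __future__ import new_super'), the same
    # method body would read simply:
    #
    #     def f(self):
    #         return 'B' + super.f()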
Replacing the old usage of super, calls to the next class in the MRO (method resolution order) can be made without explicitly creating a ``super`` instance (although doing so will still be supported via ``__super__``). Every function will have an implicit local named ``super``. This name behaves identically to a normal local, including use by inner functions via a cell, with the following exceptions: 1. Assigning to the name ``super`` will raise a ``SyntaxError`` at compile time; 2. Calling a static method or normal function that accesses the name ``super`` will raise a ``TypeError`` at runtime. Every function that uses the name ``super``, or has an inner function that uses the name ``super``, will include a preamble that performs the equivalent of:: super = __builtin__.__super__(<class>, <instance>) where ``<class>`` is the class that the method was defined in, and ``<instance>`` is the first parameter of the method (normally ``self`` for instance methods, and ``cls`` for class methods). For static methods and normal functions, ``<class>`` will be ``None``, resulting in a ``TypeError`` being raised during the preamble. Note: The relationship between ``super`` and ``__super__`` is similar to that between ``import`` and ``__import__``. Much of this was discussed in the "Fixing super anyone?" thread on the python-dev list [1]_. Open Issues ----------- Determining the class object to use ''''''''''''''''''''''''''''''''''' The exact mechanism for associating the method with the defining class is not specified in this PEP, and should be chosen for maximum performance. For CPython, it is suggested that the class instance be held in a C-level variable on the function object which is bound to one of ``NULL`` (not part of a class), ``Py_None`` (static method) or a class object (instance or class method). Should ``super`` actually become a keyword? ''''''''''''''''''''''''''''''''''''''''''' With this proposal, ``super`` would become a keyword to the same extent that ``None`` is a keyword. It is possible that further restricting the ``super`` name may simplify implementation; however, some are against the actual keywordization of super. The simplest solution is often the correct solution, and the simplest solution may well not be adding additional keywords to the language when they are not needed. Still, it may solve other open issues. Closed Issues ------------- super used with __call__ attributes ''''''''''''''''''''''''''''''''''' It was considered that instantiating super instances the classic way might be a problem, because calling the resulting instance would look up the __call__ attribute and thus try to perform an automatic super lookup to the next class in the MRO. However, this was found to be false, because calling an object only looks up the __call__ method directly on the object's type. The following example shows this in action. :: class A(object): def __call__(self): return '__call__' def __getattribute__(self, attr): if attr == '__call__': return lambda: '__getattribute__' a = A() assert a() == '__call__' assert a.__call__() == '__getattribute__' In any case, with the renaming of ``__builtin__.super`` to ``__builtin__.__super__`` this issue goes away entirely. Reference Implementation ======================== It is impossible to implement the above specification entirely in Python. This reference implementation has the following differences from the specification: 1. New ``super`` semantics are implemented using bytecode hacking. 2. Assignment to ``super`` is not a ``SyntaxError``. Also see point #4. 3.
Classes must either use the metaclass ``autosuper_meta`` or inherit from the base class ``autosuper`` to acquire the new ``super`` semantics. 4. ``super`` is not an implicit local variable. In particular, for inner functions to be able to use the super instance, there must be an assignment of the form ``super = super`` in the method. The reference implementation assumes that it is being run on Python 2.5+. :: #!/usr/bin/env python # # autosuper.py from array import array import dis import new import types import __builtin__ __builtin__.__super__ = __builtin__.super del __builtin__.super # We need these for modifying bytecode from opcode import opmap, HAVE_ARGUMENT, EXTENDED_ARG LOAD_GLOBAL = opmap['LOAD_GLOBAL'] LOAD_NAME = opmap['LOAD_NAME'] LOAD_CONST = opmap['LOAD_CONST'] LOAD_FAST = opmap['LOAD_FAST'] LOAD_ATTR = opmap['LOAD_ATTR'] STORE_FAST = opmap['STORE_FAST'] LOAD_DEREF = opmap['LOAD_DEREF'] STORE_DEREF = opmap['STORE_DEREF'] CALL_FUNCTION = opmap['CALL_FUNCTION'] STORE_GLOBAL = opmap['STORE_GLOBAL'] DUP_TOP = opmap['DUP_TOP'] POP_TOP = opmap['POP_TOP'] NOP = opmap['NOP'] JUMP_FORWARD = opmap['JUMP_FORWARD'] ABSOLUTE_TARGET = dis.hasjabs def _oparg(code, opcode_pos): return code[opcode_pos+1] + (code[opcode_pos+2] << 8) def _bind_autosuper(func, cls): co = func.func_code name = func.func_name newcode = array('B', co.co_code) codelen = len(newcode) newconsts = list(co.co_consts) newvarnames = list(co.co_varnames) # Check if the global 'super' keyword is already present try: sn_pos = list(co.co_names).index('super') except ValueError: sn_pos = None # Check if the varname 'super' keyword is already present try: sv_pos = newvarnames.index('super') except ValueError: sv_pos = None # Check if the callvar 'super' keyword is already present try: sc_pos = list(co.co_cellvars).index('super') except ValueError: sc_pos = None # If 'super' isn't used anywhere in the function, we don't have anything to do if sn_pos is None and sv_pos is None and sc_pos is None: return func c_pos = None s_pos = None n_pos = None # Check if the 'cls_name' and 'super' objects are already in the constants for pos, o in enumerate(newconsts): if o is cls: c_pos = pos if o is __super__: s_pos = pos if o == name: n_pos = pos # Add in any missing objects to constants and varnames if c_pos is None: c_pos = len(newconsts) newconsts.append(cls) if n_pos is None: n_pos = len(newconsts) newconsts.append(name) if s_pos is None: s_pos = len(newconsts) newconsts.append(__super__) if sv_pos is None: sv_pos = len(newvarnames) newvarnames.append('super') # This goes at the start of the function. It is: # # super = __super__(cls, self) # # If 'super' is a cell variable, we store to both the # local and cell variables (i.e. STORE_FAST and STORE_DEREF). # preamble = [ LOAD_CONST, s_pos & 0xFF, s_pos >> 8, LOAD_CONST, c_pos & 0xFF, c_pos >> 8, LOAD_FAST, 0, 0, CALL_FUNCTION, 2, 0, ] if sc_pos is None: # 'super' is not a cell variable - we can just use the local variable preamble += [ STORE_FAST, sv_pos & 0xFF, sv_pos >> 8, ] else: # If 'super' is a cell variable, we need to handle LOAD_DEREF. preamble += [ DUP_TOP, STORE_FAST, sv_pos & 0xFF, sv_pos >> 8, STORE_DEREF, sc_pos & 0xFF, sc_pos >> 8, ] preamble = array('B', preamble) # Bytecode for loading the local 'super' variable. 
load_super = array('B', [ LOAD_FAST, sv_pos & 0xFF, sv_pos >> 8, ]) preamble_len = len(preamble) need_preamble = False i = 0 while i < codelen: opcode = newcode[i] need_load = False remove_store = False if opcode == EXTENDED_ARG: raise TypeError("Cannot use 'super' in function with EXTENDED_ARG opcode") # If the opcode is an absolute target it needs to be adjusted # to take into account the preamble. elif opcode in ABSOLUTE_TARGET: oparg = _oparg(newcode, i) + preamble_len newcode[i+1] = oparg & 0xFF newcode[i+2] = oparg >> 8 # If LOAD_GLOBAL(super) or LOAD_NAME(super) then we want to change it into # LOAD_FAST(super) elif (opcode == LOAD_GLOBAL or opcode == LOAD_NAME) and _oparg(newcode, i) == sn_pos: need_preamble = need_load = True # If LOAD_FAST(super) then we just need to add the preamble elif opcode == LOAD_FAST and _oparg(newcode, i) == sv_pos: need_preamble = need_load = True # If LOAD_DEREF(super) then we change it into LOAD_FAST(super) because # it's slightly faster. elif opcode == LOAD_DEREF and _oparg(newcode, i) == sc_pos: need_preamble = need_load = True if need_load: newcode[i:i+3] = load_super i += 1 if opcode >= HAVE_ARGUMENT: i += 2 # No changes needed - get out. if not need_preamble: return func # Our preamble will have 3 things on the stack co_stacksize = max(3, co.co_stacksize) # Conceptually, our preamble is on the `def` line. co_lnotab = array('B', co.co_lnotab) if co_lnotab: co_lnotab[0] += preamble_len co_lnotab = co_lnotab.tostring() # Our code consists of the preamble and the modified code. codestr = (preamble + newcode).tostring() codeobj = new.code(co.co_argcount, len(newvarnames), co_stacksize, co.co_flags, codestr, tuple(newconsts), co.co_names, tuple(newvarnames), co.co_filename, co.co_name, co.co_firstlineno, co_lnotab, co.co_freevars, co.co_cellvars) func.func_code = codeobj func.func_class = cls return func class autosuper_meta(type): def __init__(cls, name, bases, clsdict): UnboundMethodType = types.UnboundMethodType for v in vars(cls): o = getattr(cls, v) if isinstance(o, UnboundMethodType): _bind_autosuper(o.im_func, cls) class autosuper(object): __metaclass__ = autosuper_meta if __name__ == '__main__': class A(autosuper): def f(self): return 'A' class B(A): def f(self): return 'B' + super.f() class C(A): def f(self): def inner(): return 'C' + super.f() # Needed to put 'super' into a cell super = super return inner() class D(B, C): def f(self, arg=None): var = None return 'D' + super.f() assert D().f() == 'DBCA' Disassembly of B.f and C.f reveals the different preambles used when ``super`` is simply a local variable compared to when it is used by an inner function. :: >>> dis.dis(B.f) 214 0 LOAD_CONST 4 () 3 LOAD_CONST 2 () 6 LOAD_FAST 0 (self) 9 CALL_FUNCTION 2 12 STORE_FAST 1 (super) 215 15 LOAD_CONST 1 ('B') 18 LOAD_FAST 1 (super) 21 LOAD_ATTR 1 (f) 24 CALL_FUNCTION 0 27 BINARY_ADD 28 RETURN_VALUE :: >>> dis.dis(C.f) 218 0 LOAD_CONST 4 () 3 LOAD_CONST 2 () 6 LOAD_FAST 0 (self) 9 CALL_FUNCTION 2 12 DUP_TOP 13 STORE_FAST 1 (super) 16 STORE_DEREF 0 (super) 219 19 LOAD_CLOSURE 0 (super) 22 LOAD_CONST 1 () 25 MAKE_CLOSURE 0 28 STORE_FAST 2 (inner) 223 31 LOAD_FAST 1 (super) 34 STORE_DEREF 0 (super) 224 37 LOAD_FAST 2 (inner) 40 CALL_FUNCTION 0 43 RETURN_VALUE Note that in the final implementation, the preamble would not be part of the bytecode of the method, but would occur immediately following unpacking of parameters. 
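The disassembly above can be reproduced with a short driver script, assuming the reference implementation is saved alongside it as ``autosuper.py`` and behaves as posted (an untested sketch, not part of the PEP)::

    import dis
    from autosuper import autosuper

    class A(autosuper):
        def f(self):
            return 'A'

    class B(A):
        def f(self):
            return 'B' + super.f()   # rewritten by autosuper_meta at class creation

    print B().f()   # expected: 'BA'
    dis.dis(B.f)    # shows the injected LOAD_CONST/LOAD_FAST/CALL_FUNCTION preamble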
Alternative Proposals ===================== No Changes ---------- Although it's always attractive to just keep things how they are, people have sought a change in the usage of super calls for some time, and for good reason, all mentioned previously. - Decoupling from the class name (which might not even be bound to the right class anymore!) - Simpler looking, cleaner super calls would be better Dynamic attribute on super type ------------------------------- The proposal adds a dynamic attribute lookup to the super type, which will automatically determine the proper class and instance parameters. Each super attribute lookup identifies these parameters and performs the super lookup on the instance, as the current super implementation does with the explicit invocation of a super instance upon a class and instance. This proposal relies on sys._getframe(), which is not appropriate for anything except a prototype implementation. super(__this_class__, self) --------------------------- This is nearly an anti-proposal, as it basically relies on the acceptance of the __this_class__ PEP, which proposes a special name that would always be bound to the class within which it is used. If that is accepted, __this_class__ could simply be used instead of the class' name explicitly, solving the name binding issues [2]_. self.__super__.foo(\*args) -------------------------- The __super__ attribute is mentioned in this PEP in several places, and could be a candidate for the complete solution, actually using it explicitly instead of any super usage directly. However, double-underscore names are usually an internal detail, and are generally kept out of everyday code. super(self, \*args) or __super__(self, \*args) ---------------------------------------------- This solution only solves the problem of the type indication, does not handle differently named super methods, and is explicit about the name of the instance. It is less flexible, since it cannot be applied to other method names in cases where that is needed. One use case this fails is where a base class has a factory classmethod and a subclass has two factory classmethods, both of which need to properly make super calls to the one in the base class. super.foo(self, \*args) ----------------------- This variation actually eliminates the problems with locating the proper instance, and if any of the alternatives were pushed into the spotlight, I would want it to be this one. super or super() ---------------- This proposal leaves no room for different names, signatures, or application to other classes, or instances. A way to allow some similar use alongside the normal proposal would be favorable, encouraging good design of multiple inheritance trees and compatible methods. super(\*p, \*\*kw) ------------------ There has been the proposal that directly calling ``super(*p, **kw)`` would be equivalent to calling the method on the ``super`` object with the same name as the method currently being executed, i.e. the following two methods would be equivalent: :: def f(self, *p, **kw): super.f(*p, **kw) :: def f(self, *p, **kw): super(*p, **kw) There is strong sentiment for and against this, but implementation and style concerns are obvious. Guido has suggested that this should be excluded from this PEP on the principle of KISS (Keep It Simple Stupid). History ======= 29-Apr-2007 - Changed title from "Super As A Keyword" to "New Super" - Updated much of the language and added a terminology section for clarification in confusing places.
- Added reference implementation and history sections. 06-May-2007 - Updated by Tim Delaney to reflect discussions on the python-3000 and python-dev mailing lists. References ========== .. [1] Fixing super anyone? (http://mail.python.org/pipermail/python-3000/2007-April/006667.html) .. [2] PEP 3130: Access to Module/Class/Function Currently Being Defined (this) (http://mail.python.org/pipermail/python-ideas/2007-April/000542.html) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From hrvoje.niksic at avl.com Mon May 14 12:40:40 2007 From: hrvoje.niksic at avl.com (Hrvoje =?UTF-8?Q?Nik=C5=A1i=C4=87?=) Date: Mon, 14 May 2007 12:40:40 +0200 Subject: [Python-Dev] \u and \U escapes in raw unicode string literals In-Reply-To: References: <4643A1AF.5080305@v.loewis.de> <4643AA2B.1080301@v.loewis.de> <4643C73D.1010909@canterbury.ac.nz> <464404A7.3000804@v.loewis.de> Message-ID: <1179139240.6077.40.camel@localhost> On Fri, 2007-05-11 at 13:06 -0700, Guido van Rossum wrote: > > attribution_pattern = re.compile(ur'(---?(?!-)|\u2014) *(?=[^ \n])') > > But wouldn't it be just as handy to teach the re module about \u and > \U, just as it already knows about \x (and \123 octals)? And \n, \r, etc. Implementing \u in re is not only useful, but also consistent. From aahz at pythoncraft.com Mon May 14 13:55:26 2007 From: aahz at pythoncraft.com (Aahz) Date: Mon, 14 May 2007 04:55:26 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <4647F332.3000605@v.loewis.de> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <17991.54414.768930.125733@montanaro.dyndns.org> <4647F332.3000605@v.loewis.de> Message-ID: <20070514115526.GA6296@panix.com> On Mon, May 14, 2007, "Martin v. L?wis" wrote: > Skip(?): >> >> In the meantime (thinking out loud here), would it be possible to keep >> search engines from seeing a submission or an edit until a trusted person >> has had a chance to approve it? > > It would be possible, but I would strongly oppose it. A bug tracker > where postings need to be approved is just unacceptable. Could you expand this, please? It sounds like Skip is just talking about a dynamic robots.txt, essentially. Anyone coming in to the tracker itself should still see everything. Moreover, this restriction only comes into play for postings from new people, which should limit the load. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From facundo at taniquetil.com.ar Mon May 14 14:41:36 2007 From: facundo at taniquetil.com.ar (Facundo Batista) Date: Mon, 14 May 2007 12:41:36 +0000 (UTC) Subject: [Python-Dev] New operations in Decimal References: <20070511201130.BIX91386@ms09.lnh.mail.rcn.net> <1f7befae0705112157w7dd4ba05we4441b12dc00485@mail.gmail.com> Message-ID: Tim Peters wrote: > I'm with Raymond on this one, especially given the triviality of > implementing the revised spec's new logical operations. Exactly. I already implemented part of it, and took less than read this thread, ;). The cost of having it is lines of code in decimal.py. The benefit is that you can claim you comply to the standard. Regards, -- . Facundo . 
Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From stephen at xemacs.org Mon May 14 17:32:39 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 15 May 2007 00:32:39 +0900 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <464764C9.8090707@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> Message-ID: <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> "Martin v. L?wis" writes: > The objective is to reduce load for the release manager. Any kind of > release that is worth anything takes several hours to produce, in > my experience (if it could be made completely automatic, it wouldn't > be good, since glitches would not be detected). I absolutely agree. > See PEP 101. A release involves many more steps, and it's not clear > whether a release candidate could be skipped. My point is that if those steps are required for a release, the branch is not "immediately releasable" by my standards if they're not done. Especially if a release candidate is required. I guess you only meant that a security commit must meet the technical requirements of any other commit (plus being a security fix, of course), so that the release process can be started at any time? In general, I recognize the burden on the release engineer, and obviously any burdensome policy needs his OK. But I think the policy should be *effective* too, and I just don't see that a policy that allows such long lags is a more effective security response than a policy that says "the tarballs are deprecated due to security fixes; get your Python by importing the branch, not by fetching a tarball." From barry at python.org Mon May 14 17:45:24 2007 From: barry at python.org (Barry Warsaw) Date: Mon, 14 May 2007 11:45:24 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 14, 2007, at 11:32 AM, Stephen J. Turnbull wrote: > In general, I recognize the burden on the release engineer, and > obviously any burdensome policy needs his OK. But I think the policy > should be *effective* too, and I just don't see that a policy that > allows such long lags is a more effective security response than a > policy that says "the tarballs are deprecated due to security fixes; > get your Python by importing the branch, not by fetching a tarball." Like many other activities we do, if we find ourselves blocking because of resource constraints, we should recruit additional volunteers to reduce the load on any one person. Anthony does a masterful job as release manager, but maybe he would rather someone else perform security releases. (It's not a bad idea anyway so that others have experience doing releases too.) We should decide what's right for security releases and then assess whether we need to recruit in order to perform that activity the way we want to. 
- -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkiEFXEjvBPtnXfVAQL1TQP+IbelPCGvkd8IEGvDLIguJxM4B437AJPh I6sluVGP3EjOcVbHTh8EgiqvWn+DaKQUIIkxqt+CEX/ghOXwv4X2z73Qnc8VB5jG W6ghV6diiYwmD8xOGUUvuIk4Rr+qV4Me22p38E1aZY7UP9ub9o6ofsGe19rjNjoX nQBs7PUMqPQ= =Onzb -----END PGP SIGNATURE----- From facundo at taniquetil.com.ar Mon May 14 18:24:27 2007 From: facundo at taniquetil.com.ar (Facundo Batista) Date: Mon, 14 May 2007 16:24:27 +0000 (UTC) Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> Message-ID: Martin v. L?wis wrote: >> I don't understand the point of a "security release" made up to a year >> after commit, especially in view of the first quoted paragraph. > > The objective is to reduce load for the release manager. Any kind of > release that is worth anything takes several hours to produce, in You can always can make a checkout of the security-manteined-only branch, if you're in a particular hurry (maybe the PEP should say something about this). Regards, -- . Facundo . Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From pje at telecommunity.com Mon May 14 18:58:51 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 14 May 2007 12:58:51 -0400 Subject: [Python-Dev] [Python-3000] PEP 367: New Super In-Reply-To: <003001c795f8$d5275060$0201a8c0@mshome.net> References: <003001c795f8$d5275060$0201a8c0@mshome.net> Message-ID: <20070514165704.4F8D23A4036@sparrow.telecommunity.com> At 05:23 PM 5/14/2007 +1000, Tim Delaney wrote: >Determining the class object to use >''''''''''''''''''''''''''''''''''' > >The exact mechanism for associating the method with the defining class is >not >specified in this PEP, and should be chosen for maximum performance. For >CPython, it is suggested that the class instance be held in a C-level >variable >on the function object which is bound to one of ``NULL`` (not part of a >class), >``Py_None`` (static method) or a class object (instance or class method). Another open issue here: is the decorated class used, or the undecorated class? From martin at v.loewis.de Mon May 14 23:29:33 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 14 May 2007 23:29:33 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4648D4BD.9050600@v.loewis.de> > My point is that if those steps are required for a release, the branch > is not "immediately releasable" by my standards if they're not done. > Especially if a release candidate is required. But how does that help in practice? If you find after the release that the branch was not in a releasable state, will you fire your employee that caused the mess-up? Even though no problems are expected, you still have to *check* whether there are problems, and that is time-consuming. Better safe than sorry (at least, this is what I understand Anthony Baxter's position on release engineering is - and I agree with that view). > I guess you only meant that a security commit must meet the technical > requirements of any other commit (plus being a security fix, of > course), so that the release process can be started at any time? Exactly. 
I wouldn't require the release manager to actually commit all security patches - and requiring so would be the only way to guarantee that the branch is releasable (i.e. you have to release it to be sure). > In general, I recognize the burden on the release engineer, and > obviously any burdensome policy needs his OK. But I think the policy > should be *effective* too, and I just don't see that a policy that > allows such long lags is a more effective security response than a > policy that says "the tarballs are deprecated due to security fixes; > get your Python by importing the branch, not by fetching a tarball." In effect, this is what the PEP says. That's intentional (i.e. it is my intention - others may have different intentions). It's the repository that holds the security patches; the tarballs (and the version number bumps) are just a convenience. Regards, Martin From martin at v.loewis.de Mon May 14 23:32:08 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 14 May 2007 23:32:08 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4648D558.60404@v.loewis.de> > We should decide what's right for security releases and then assess > whether we need to recruit in order to perform that activity the way we > want to. I disagree. If you would like to see a certain policy implemented, you need to locate the volunteers *first*, and only then you can start setting a policy that these volunteers can agree to. When the volunteers then run away, or become inactive, the policy needs revisiting. Regards, Martin From barry at python.org Mon May 14 23:43:47 2007 From: barry at python.org (Barry Warsaw) Date: Mon, 14 May 2007 17:43:47 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <4648D558.60404@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> <4648D558.60404@v.loewis.de> Message-ID: <066FA759-5344-4FF2-B7FA-8021DFE91D21@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 14, 2007, at 5:32 PM, Martin v. L?wis wrote: >> We should decide what's right for security releases and then assess >> whether we need to recruit in order to perform that activity the >> way we >> want to. > > I disagree. If you would like to see a certain policy implemented, you > need to locate the volunteers *first*, and only then you can start > setting a policy that these volunteers can agree to. When the > volunteers > then run away, or become inactive, the policy needs revisiting. These are not mutually exclusive positions, but that's unimportant because in this specific case, I'm confident we can summon the necessary manpower. Still, I'm in agreement with you that the repository holds the security patches and that the tarballs are a convenience. They are an important convenience though, so I would say that they should be released in a timely manner after the commit of the security patches. I don't think we need to be that exact about spelling out when that happens. (I personally would like to see it within "weeks" of a security patch, not "months" or "years".) Also, I would like to document explicit that it is the responsibility of the PSRT (or its designate) to commit security patches to revision control. 
The act of committing these patches is a public event and has an important impact on any embargoes agreed upon by the PSRT with other organizations. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkjYFHEjvBPtnXfVAQIAfAQAq8052/15WnMqrEyReXJRgeJqtklKzg3f xwVaOdEQjnp0QXAg7tMf29kCxLq6kW6al8DMUPHQcaV9cH7sQcMAon0V9LwiXlwU 3d0Mbvb5RUlpRmfDniQeGljCyCLJZbk+nUbrWbLAtIsrzMaW4FaPUkTUza1ZSIHX nKhsh7fifiM= =kYxd -----END PGP SIGNATURE----- From martin at v.loewis.de Tue May 15 01:19:58 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 01:19:58 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <066FA759-5344-4FF2-B7FA-8021DFE91D21@python.org> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> <4648D558.60404@v.loewis.de> <066FA759-5344-4FF2-B7FA-8021DFE91D21@python.org> Message-ID: <4648EE9E.2090701@v.loewis.de> > Still, I'm in agreement with you that the repository holds the security > patches and that the tarballs are a convenience. They are an important > convenience though, so I would say that they should be released in a > timely manner after the commit of the security patches. I don't think > we need to be that exact about spelling out when that happens. > > (I personally would like to see it within "weeks" of a security patch, > not "months" or "years".) Couldn't that lead to many more 2.x.y releases in the "security fixes" period than in the "active maintenance" period? For active maintenance, we don't do roll security fixes into releases more often than every six months. I would dislike a quick succession of 2.3.7, 2.3.8, 2.3.9, etc. (also because these numbers should not grow above 10 :-) > Also, I would like to document explicit that it is the responsibility of > the PSRT (or its designate) to commit security patches to revision > control. The act of committing these patches is a public event and has > an important impact on any embargoes agreed upon by the PSRT with other > organizations. I also disagree. Regular committers should continue to do that. I haven't seen a single activity from the PSRT (or, perhaps one), so for all I can tell, it doesn't exist. If a security patch is reported to the Python bug tracker, it's as public as it can get. If PSRT members (who are they, anyway) also happen to be committers, they can commit these changes at the time the PSRT deems appropriate. If they are not committers, they need to post the patch to SF as anybody else. (you can tell that I come from a country where people are quite skeptical about the secret service). Regards, Martin From andrewm at object-craft.com.au Tue May 15 05:47:42 2007 From: andrewm at object-craft.com.au (Andrew McNamara) Date: Tue, 15 May 2007 13:47:42 +1000 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> Message-ID: <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> >> I think a single-click button "Spammer" >> should allow committers to lock an account and hide all messages >> and files that he sent, but that still requires somebody to implement >> it. > >I'd expect that to be pretty effective -- like graffiti artists, >spammers want their work to be seen, and a site that quickly removes >them will not be worth the effort for them. 
Unfortunately, the spammers are using automated tools to locate, register on and post to victim sites. The tools are distributed (running on compromised PCs) and massively parallel, so they really don't care that some of their posts are never seen. I'm reluctant to mention the name of one particular tool I'm aware of, but as well as the above, it also has OCR to defeat CAPTCHA, and automatically creates throw-away e-mail accounts with a range of free web-mail providers for registration purposes. -- Andrew McNamara, Senior Developer, Object Craft http://www.object-craft.com.au/ From stephen at xemacs.org Tue May 15 06:38:09 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 15 May 2007 13:38:09 +0900 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <4648D4BD.9050600@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> <4648D4BD.9050600@v.loewis.de> Message-ID: <87646ubt1q.fsf@uwakimon.sk.tsukuba.ac.jp> "Martin v. L?wis" writes: > > In general, I recognize the burden on the release engineer, and > > obviously any burdensome policy needs his OK. But I think the policy > > should be *effective* too, and I just don't see that a policy that > > allows such long lags is a more effective security response than a > > policy that says "the tarballs are deprecated due to security fixes; > > get your Python by importing the branch, not by fetching a tarball." > > In effect, this is what the PEP says. That's intentional (i.e. it > is my intention - others may have different intentions). It's the > repository that holds the security patches; the tarballs (and the > version number bumps) are just a convenience. It's not the intentions of the Python developers that is my concern here. In effect, I can read this PEP as saying "we don't take security seriously enough to release in a timely fashion, why should you go to the effort of getting sources and applying patches?" and I fear that many users will do so. I think that the label of "release" is important. From martin at v.loewis.de Tue May 15 06:55:35 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 06:55:35 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <87646ubt1q.fsf@uwakimon.sk.tsukuba.ac.jp> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> <4648D4BD.9050600@v.loewis.de> <87646ubt1q.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <46493D47.2030304@v.loewis.de> > > In effect, this is what the PEP says. That's intentional (i.e. it > > is my intention - others may have different intentions). It's the > > repository that holds the security patches; the tarballs (and the > > version number bumps) are just a convenience. > > It's not the intentions of the Python developers that is my concern > here. In effect, I can read this PEP as saying "we don't take > security seriously enough to release in a timely fashion, why should > you go to the effort of getting sources and applying patches?" and I > fear that many users will do so. I think that the label of "release" > is important. [Not sure who "you" is above: who should or should not go to the effort of getting sources, and what patches should they apply?] 
I don't think I can be more plain than that: yes, I do not take security seriously enough to release security fixes for old Python versions more than once a year. As a user, it's easy to demand things, and people really have to learn that in open source, all things are done by volunteers, and that demanding gets you nowhere. To get a better service, somebody really has to volunteer and offer it. Regards, Martin From martin at v.loewis.de Tue May 15 06:59:08 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 06:59:08 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: References: <46457AF8.7070900@v.loewis.de><87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> Message-ID: <46493E1C.7070400@v.loewis.de> > | I think we would need to restrict the total number of releases > | made per year. The one-year limit may be debatable, and immediate > | releases might be possible, as long as there is some provision > | that releases are not made at a too-high rate. > > I would agree, but... > has there been more that the one security release that I know about? No, but I believe this partly due to the fact that Linux distributors don't provide their security patches, or that they get applied only to the current maintenance branch, and not to older branches. If this policy gets implemented, I believe there would be more security patches than those we had seen - this single one was only in response to some demanding users. Regards, Martin From martin at v.loewis.de Tue May 15 07:00:45 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 07:00:45 +0200 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> Message-ID: <46493E7D.5040700@v.loewis.de> > You can always can make a checkout of the security-manteined-only > branch, if you're in a particular hurry (maybe the PEP should say > something about this). Indeed. I can add explicit wording to say that. Regards, Martin From martin at v.loewis.de Tue May 15 07:09:44 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 07:09:44 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070514115526.GA6296@panix.com> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <17991.54414.768930.125733@montanaro.dyndns.org> <4647F332.3000605@v.loewis.de> <20070514115526.GA6296@panix.com> Message-ID: <46494098.20907@v.loewis.de> Aahz schrieb: > On Mon, May 14, 2007, "Martin v. L?wis" wrote: >> Skip(?): >>> In the meantime (thinking out loud here), would it be possible to keep >>> search engines from seeing a submission or an edit until a trusted person >>> has had a chance to approve it? >> It would be possible, but I would strongly oppose it. A bug tracker >> where postings need to be approved is just unacceptable. > > Could you expand this, please? It sounds like Skip is just talking about > a dynamic robots.txt, essentially. Anyone coming in to the tracker > itself should still see everything. I must have misunderstood Skip then - I thought he had a scheme in mind where an editor would have approve postings before they become visible to tracker users; the tracker itself cannot distinguish between a search engine and a regular (anonymous) user. 
As for a dynamically-expanding robots.txt - I think that would be difficult to implement (close to being impossible). At best, we can have robots.txt filter out entire issues, not individual messages within an issue. So if a spammer posts to an existing issue, no proper robots.txt can be written. Even for new issues: they can be added to robots.txt only after they have been created. As search engines are allowed to cache robots.txt, they might not see that it has been changed, and fetch the issue that was supposed to be blocked. Regards, Martin From martin at v.loewis.de Tue May 15 07:12:54 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 07:12:54 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: <46494156.30406@v.loewis.de> > I'm reluctant to mention the name of one particular tool I'm aware > of, but as well as the above, it also has OCR to defeat CAPTCHA, and > automatically creates throw-away e-mail accounts with a range of free > web-mail providers for registration purposes. Right. We considered CAPTCHA, but some people were immediately opposed to using it, both for the reason that spammers still get past it in an automated manner, and that it might lock out certain groups of legitimate users. So I have personally given up on that path. Regards, Martin From tjreedy at udel.edu Tue May 15 08:00:29 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 15 May 2007 02:00:29 -0400 Subject: [Python-Dev] Summary of Tracker Issues References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <46494156.30406@v.loewis.de> Message-ID: ""Martin v. L?wis"" wrote in message news:46494156.30406 at v.loewis.de... |> I'm reluctant to mention the name of one particular tool I'm aware | > of, but as well as the above, it also has OCR to defeat CAPTCHA, and | > automatically creates throw-away e-mail accounts with a range of free | > web-mail providers for registration purposes. | | Right. We considered CAPTCHA, but some people were immediately opposed | to using it, both for the reason that spammers still get past it in | an automated manner, and that it might lock out certain groups of | legitimate users. So I have personally given up on that path. I have not noticed any spam on the very public SF tracker (or have I just missed it?) while I saw some my first visit to our hardly public trial site. Any ideas on why the difference? tjr From tjreedy at udel.edu Tue May 15 08:32:06 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 15 May 2007 02:32:06 -0400 Subject: [Python-Dev] Summary of Tracker Issues References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: "Andrew McNamara" wrote in message news:20070515034743.2030F5CC4B5 at longblack.object-craft.com.au... | I'm reluctant to mention the name of one particular tool I'm aware | of, but as well as the above, it also has OCR to defeat CAPTCHA, and How about asking a Python specific question, with answered filled in rather that multiple choice selected: I would be willing to make up a bunch. The initials of Python's founder. 
____ The keyword for looping by condition. ____ The char that signals a name-binding statement. ____ (I am intentionally avoiding question words and ? that would signal Test Question to automated software.) If we anticipate users rather than programmers to register (as if so, it would be nice to collect that info to formulate sensible responses), then questions like The orb that shines in the sky during the day. ____ | automatically creates throw-away e-mail accounts with a range of free | web-mail providers for registration purposes. Either don't accept registrations from such accounts (as other sites have done), or require extra verification steps or require approval of the first post. How many current legitimate registered users use such? Terry Jan Reedy From g.brandl at gmx.net Tue May 15 09:32:06 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 15 May 2007 09:32:06 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: Terry Reedy schrieb: > "Andrew McNamara" wrote in message > news:20070515034743.2030F5CC4B5 at longblack.object-craft.com.au... > | I'm reluctant to mention the name of one particular tool I'm aware > | of, but as well as the above, it also has OCR to defeat CAPTCHA, and > > How about asking a Python specific question, with answered filled in rather > that multiple choice selected: I would be willing to make up a bunch. > > The initials of Python's founder. ____ > The keyword for looping by condition. ____ > The char that signals a name-binding statement. ____ > (I am intentionally avoiding question words and ? that would signal Test > Question to automated software.) There are two problems with this: * The set of questions is limited, and bots can be programmed to know them all. * Even programmers might not immediately know an answer, and I can understand them turning away on that occasion (take for example the "name-binding" term). > If we anticipate users rather than programmers to register (as if so, it > would be nice to collect that info to formulate sensible responses), then > questions like > The orb that shines in the sky during the day. ____ > > | automatically creates throw-away e-mail accounts with a range of free > | web-mail providers for registration purposes. > > Either don't accept registrations from such accounts (as other sites have > done), or require extra verification steps or require approval of the first > post. How many current legitimate registered users use such? This is impossible to find out, I think, since SF.net does not publicly show real e-mail addresses, instead, each user has an alias username at sourceforge.net. Georg From castironpi at comcast.net Tue May 15 13:52:41 2007 From: castironpi at comcast.net (Aaron Brady) Date: Tue, 15 May 2007 06:52:41 -0500 Subject: [Python-Dev] Functools Defaults (was Python-ideas parameter omit) In-Reply-To: Message-ID: <20070515115303.D6B081E4002@bag.python.org> > -----Original Message----- > From: Steven Bethard [mailto:steven.bethard at gmail.com] > Sent: Tuesday, May 15, 2007 1:54 AM > > On 5/15/07, Aaron Brady wrote: > > You might be able to get away without a PEP, but you'll definitely > need to post an implementation patch to the bug tracker > (http://sourceforge.net/tracker/?group_id=5470&atid=105470). 
Once > you've posted your implementation, you should send an email to > python-dev asking folks what they think about it. Be sure to give > some code examples that using this decorator would simplify. Code with proposal are in SourceForge [ 1719222 ] new functools. Python feature Functools gains a new decorator. `Defaults' allows its caller to placehold non-None defaults; it becomes unnecessary to know the value a place defaults to. It might be useful in cases where you want the calling signature to look alike for a group of dispatched functions and the added overhead the decorator adds isn't a problem. But you probably wouldn't want that overhead all the time, so having it as an optional decorator would be good. -Ron Adam What -do- you think about it? From ncoghlan at gmail.com Tue May 15 15:51:59 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 15 May 2007 23:51:59 +1000 Subject: [Python-Dev] Functools Defaults (was Python-ideas parameter omit) In-Reply-To: <20070515115303.D6B081E4002@bag.python.org> References: <20070515115303.D6B081E4002@bag.python.org> Message-ID: <4649BAFF.6040602@gmail.com> Aaron Brady wrote: > It might be useful in cases where you want the calling signature to look > alike for a group of dispatched functions and the added overhead the > decorator adds isn't a problem. But you probably wouldn't want that > overhead all the time, so having it as an optional decorator would be good. > -Ron Adam > > What -do- you think about it? -1 It took me a couple of rereads to actually figure out what this decorator was trying to do (which is simply allowing callers to skip parameters in a function call without using keyword arguments). I think it would be significantly clearer (and far more efficient) to simply use keyword arguments at the call site where the parameters are being skipped rather than significantly slowing down every single call to a function simply to permit some sloppy coding. To take the example from the SF tracker: .>>> @defaults. .>>> def f(a=123, b=None, c='abc'): .>>> return a, b, c .>>> .>>> use_default = defaults.absent .>>> f(123, use_default, 'abc') 123, None, 'abc' As opposed to: .>>> def f(a=123, b=None, c='abc'): .>>> return a, b, c .>>> .>>> f(123, c='abc') 123, None, 'abc' Keyword only parameters (coming in Py3k and possibly in 2.6) will be an even cleaner solution for cases where you don't want callers to have to care about a default value, but still give them the ability to override it if needed. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From barry at python.org Tue May 15 16:01:40 2007 From: barry at python.org (Barry Warsaw) Date: Tue, 15 May 2007 10:01:40 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <4648EE9E.2090701@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> <4648D558.60404@v.loewis.de> <066FA759-5344-4FF2-B7FA-8021DFE91D21@python.org> <4648EE9E.2090701@v.loewis.de> Message-ID: <66ECE492-BF64-4271-810C-7362558FE877@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 14, 2007, at 7:19 PM, Martin v. L?wis wrote: >> Still, I'm in agreement with you that the repository holds the >> security >> patches and that the tarballs are a convenience. 
They are an >> important >> convenience though, so I would say that they should be released in a >> timely manner after the commit of the security patches. I don't >> think >> we need to be that exact about spelling out when that happens. >> >> (I personally would like to see it within "weeks" of a security >> patch, >> not "months" or "years".) > > Couldn't that lead to many more 2.x.y releases in the "security fixes" > period than in the "active maintenance" period? For active > maintenance, > we don't do roll security fixes into releases more often than every > six months. > > I would dislike a quick succession of 2.3.7, 2.3.8, 2.3.9, etc. > (also because these numbers should not grow above 10 :-) I think we could reasonably batch security releases, if we know that there are a few critical issues in the pipeline. That ought to be part of the job of the PSRT. Personally, I don't care much if the micro release number goes above 10 although I know there are many here who don't like that. We should definitely try to reduce churn. It's a balancing act. >> Also, I would like to document explicit that it is the >> responsibility of >> the PSRT (or its designate) to commit security patches to revision >> control. The act of committing these patches is a public event >> and has >> an important impact on any embargoes agreed upon by the PSRT with >> other >> organizations. > > I also disagree. Regular committers should continue to do that. I > haven't seen a single activity from the PSRT (or, perhaps one), so > for all I can tell, it doesn't exist. > > If a security patch is reported to the Python bug tracker, it's as > public as it can get. Right, but hopefully people know to report them to security at python dot org instead. Also hopefully, the new tracker will support private/security bug reports that won't be made public (I don't actually know if this is the case or not). > If PSRT members (who are they, anyway) also > happen to be committers, they can commit these changes at the > time the PSRT deems appropriate. If they are not committers, they > need to post the patch to SF as anybody else. > > (you can tell that I come from a country where people are quite > skeptical about the secret service). There's no secret police here, since almost anyone who's foolish enough to volunteer to do some work could easily infiltrate that most cloistered of organizations. I believe there is value in having a PSRT that coordinates security reports, fixes, and disclosures. If the community disagrees, that's cool. But in that case there's not much point in a security at python dot org mailing list or a PSRT, so let's disband them. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkm9RHEjvBPtnXfVAQKO5gP+IE+AhsUo28ayVojGWbIyupV0eIYBrOke R+Hvulllcr9LAVmlxlWNZV+TeReavKL+SSzmoyzj/Dv2U5szvTRld7Ca4PBl+mJ8 mfyjqg6uWp1At4OVhf93J6JCrLZkw2sY1lH+yAfcvmxivTr7Rf5+vugDJ822enUt pKtcowVQCwI= =ms5P -----END PGP SIGNATURE----- From skip at pobox.com Tue May 15 16:07:38 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 15 May 2007 09:07:38 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <46494098.20907@v.loewis.de> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <17991.54414.768930.125733@montanaro.dyndns.org> <4647F332.3000605@v.loewis.de> <20070514115526.GA6296@panix.com> <46494098.20907@v.loewis.de> Message-ID: <17993.48810.775316.900835@montanaro.dyndns.org> >> On Mon, May 14, 2007, "Martin v. 
L?wis" wrote: >>> Skip(?): >>>> In the meantime (thinking out loud here), would it be possible to >>>> keep search engines from seeing a submission or an edit until a >>>> trusted person has had a chance to approve it? >>> It would be possible, but I would strongly oppose it. A bug tracker >>> where postings need to be approved is just unacceptable. >> >> Could you expand this, please? It sounds like Skip is just talking >> about a dynamic robots.txt, essentially. Anyone coming in to the >> tracker itself should still see everything. Martin> I must have misunderstood Skip then - I thought he had a scheme Martin> in mind where an editor would have approve postings before they Martin> become visible to tracker users; the tracker itself cannot Martin> distinguish between a search engine and a regular (anonymous) Martin> user. Okay, let me expand. ;-) I didn't mean do dynamically update robots.txt. I meant to modify Roundup to restrict view of items which have not yet been explicitly or implicitly approved. I envision three classes of users: 1. People with no special credentials (I include anonymous users such as search engine spiders in this class) 2. Tracker admins (Erik, Aahz, Martin, me, etc) 3. Other trusted users (include admins in this group - they are the root of the trust network). Anyone can submit an item or edit an item, however, if that person is not trusted, their submissions need to be approved by a trusted user before they are made visible to the unwashed masses in group 1. Also, such users will not be able to see any unapproved items. (That thwarts the desire of the spammers for visibility - search engine spiders will not know their submissions exist, and anonymous users will just get 404 responses when they try to access unapproved attachments or submissions.) The intent is that this would be done by modifying Roundup. True, initially, lots of submissions would be held for review, but I think we would fairly quickly expand the trust network to a larger, fairly static group of users. Once someone adds Guido to the trust network, any pending and future modifications of his will be visible to the world. Once trusted, Guido can extend the trust network himself, by, for example adding Georg to the network. Also, once trusted, a user would see everything and would be able to approve individual submissions. Again, as I indicated, I was thinking out loud. I don't think this is a trivial undertaking. I suspect the approach might work for a number of similar systems (Trac, MoinMoin, etc), not just Roundup though. Skip From barry at python.org Tue May 15 16:08:10 2007 From: barry at python.org (Barry Warsaw) Date: Tue, 15 May 2007 10:08:10 -0400 Subject: [Python-Dev] Draft PEP: Maintenance of Python Releases In-Reply-To: <46493D47.2030304@v.loewis.de> References: <46457AF8.7070900@v.loewis.de> <87bqgqb3eq.fsf@uwakimon.sk.tsukuba.ac.jp> <464764C9.8090707@v.loewis.de> <878xbrbeug.fsf@uwakimon.sk.tsukuba.ac.jp> <4648D4BD.9050600@v.loewis.de> <87646ubt1q.fsf@uwakimon.sk.tsukuba.ac.jp> <46493D47.2030304@v.loewis.de> Message-ID: <69BA09B4-DACC-4F7A-A350-E47EC627BEA7@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 15, 2007, at 12:55 AM, Martin v. L?wis wrote: > I don't think I can be more plain than that: yes, I do not take > security > seriously enough to release security fixes for old Python versions > more > than once a year. 
As a user, it's easy to demand things, and people > really have to learn that in open source, all things are done by > volunteers, and that demanding gets you nowhere. To get a better > service, somebody really has to volunteer and offer it. I've volunteered, and I contend that this community is big enough that we can recruit more people if necessary. So the question really comes down to what is in the best interest of Python. If resources weren't an issue, would you still say that doing security releases once a year is enough? If so, and if that represents the consensus of the community, then that's what we'll do. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRkm+ynEjvBPtnXfVAQJW/gQAnzhTEwt9/YCydkRTqI51Z9iAQTikaDpI /2YMpvv6nxJX7dUoDQam08T8BoZ0Vt2iXFXMQ90GD99nYOevFTKMSx7u4l/kY/Do U0a4BG8lVaIZUS5ipW/7suvrQtlkEDqLQ9qpms2LP+6J/32wugw6YLPEi5PyiurM Hax4oeJB37A= =fLgb -----END PGP SIGNATURE----- From ct at gocept.com Tue May 15 15:05:01 2007 From: ct at gocept.com (Christian Theune) Date: Tue, 15 May 2007 13:05:01 +0000 (UTC) Subject: [Python-Dev] updated for gdbinit Message-ID: Hi, I tried to use gdbinit today and found that the "fragile" pystacks macro didn't work anymore. I don't know gdb very well, but this turned out to work a bit more reliably: # print the entire Python call stack define pystack set $last=0 while $sp != $last if $pc > PyEval_EvalFrame && $pc < PyEval_EvalCodeEx pyframe end set $last=$sp up-silently 1 end select-frame 0 end Just in case anybody might want to use this and update the existing gdbinit. Christian PS: I'm not subscribed to this list. From skip at pobox.com Tue May 15 16:26:55 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 15 May 2007 09:26:55 -0500 Subject: [Python-Dev] updated for gdbinit In-Reply-To: References: Message-ID: <17993.49967.807609.243250@montanaro.dyndns.org> Christian> I tried to use gdbinit today and found that the "fragile" Christian> pystacks macro didn't work anymore. I don't know gdb very Christian> well, but this turned out to work a bit more reliably: ... Thanks. I'll give it a try and check it in if it checks out. Skip From exarkun at divmod.com Tue May 15 16:42:51 2007 From: exarkun at divmod.com (Jean-Paul Calderone) Date: Tue, 15 May 2007 10:42:51 -0400 Subject: [Python-Dev] updated for gdbinit In-Reply-To: <17993.49967.807609.243250@montanaro.dyndns.org> Message-ID: <20070515144251.30678.1822196285.divmod.quotient.517@ohm> On Tue, 15 May 2007 09:26:55 -0500, skip at pobox.com wrote: > Christian> I tried to use gdbinit today and found that the "fragile" > Christian> pystacks macro didn't work anymore. I don't know gdb very > Christian> well, but this turned out to work a bit more reliably: > > ... > >Thanks. I'll give it a try and check it in if it checks out. > It would also be nice if it handled non-main threads. This is accomplished by additionally checking if the pc is in t_bootstrap (ie, between that and thread_PyThread_start_new_thread). Jean-Paul From nnorwitz at gmail.com Tue May 15 23:05:21 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 15 May 2007 14:05:21 -0700 Subject: [Python-Dev] marshal and ssize_t (PEP 353) Message-ID: Martin, I'm looking at Python/marshal.c and there are a lot of places that don't support sequences that are larger than would fit into size(int). I looked for marshal referenced in the PEP and didn't find anything. Was this an oversight or intentional? 
To give you some examples of what I mean from the code: (line 255) n = PyString_GET_SIZE(v); if (n > INT_MAX) { /* huge strings are not supported */ p->depth--; p->error = 1; return; } w_long((long)n, p); w_string(PyString_AS_STRING(v), (int)n, p); ... (line 717) n = r_long(p); if (n < 0 || n > INT_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data"); return NULL; } v = PyTuple_New((int)n); if (v == NULL) return v; for (i = 0; i < n; i++) { v2 = r_object(p); if ( v2 == NULL ) { if (!PyErr_Occurred()) PyErr_SetString(PyExc_TypeError, "NULL object in marshal data"); Py_DECREF(v); v = NULL; break; } PyTuple_SET_ITEM(v, (int)i, v2); Also, the PEP references the ssize_t branch which no longer exists. Is it possible to reference the specific revision: 42382? Thanks, n From martin at v.loewis.de Tue May 15 23:41:56 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 23:41:56 +0200 Subject: [Python-Dev] marshal and ssize_t (PEP 353) In-Reply-To: References: Message-ID: <464A2924.8050806@v.loewis.de> > I'm looking at Python/marshal.c and there are a lot of places that > don't support sequences that are larger than would fit into size(int). > I looked for marshal referenced in the PEP and didn't find anything. > Was this an oversight or intentional? These changes were only made after merging the ssize_t branch, namely in r42883. They were intentional, in the sense that the ssize_t changes were meant to *only* change the API. Supporting larger strings would have been a change to the marshal format as well, and that was not within the mandate of PEP 353. Now, if you think the marshal format should change as well to support large strings, that may be worth considering. There are two design alternatives: - change the 's', 't', and 'u' codes to use an 8-byte argument That would be an incompatible change that would also blow up marshal data which don't need it (by 4 bytes per string value). - introduce additional codes (like 'S', 'T', and 'U') that take 8-byte lengths. That would be (forward?) compatible, in that old marshal data can be still read in new implementations, and mostly backwards-compatible, assuming that S/T/U get used only when needed. However, it would complicate the implementation. I'm still leaning towards "don't change", since I don't expect that such string objects occur in source code, and since I still think source code / .pyc is/should be the major application of marshal. Regards, Martin From martin at v.loewis.de Tue May 15 23:45:06 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 23:45:06 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: <464A29E2.1060207@v.loewis.de> > If we anticipate users rather than programmers to register (as if so, it > would be nice to collect that info to formulate sensible responses), then > questions like > The orb that shines in the sky during the day. ____ This question I could not answer, because I don't know what an orb is (it's not an object request broker, right?) Is the answer "sun"? 
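Coming back to the marshal format alternatives a few messages up: Martin's second option (leave the 's', 't' and 'u' codes alone and add wide codes that carry 8-byte lengths) is easy to picture with a small sketch. The 'S' code and the helper names below are purely illustrative and are not the real marshal.c implementation:

import struct

INT_MAX = 2**31 - 1

def w_string_record(data):
    # existing narrow record: code 's' + 4-byte little-endian length
    n = len(data)
    if n <= INT_MAX:
        return b's' + struct.pack('<i', n) + data
    # hypothetical wide record: code 'S' + 8-byte little-endian length,
    # emitted only when the narrow form cannot represent the size
    return b'S' + struct.pack('<q', n) + data

def r_string_record(buf):
    code, rest = buf[:1], buf[1:]
    if code == b's':
        n, = struct.unpack('<i', rest[:4])
        return rest[4:4 + n]
    if code == b'S':
        n, = struct.unpack('<q', rest[:8])
        return rest[8:8 + n]
    raise ValueError("bad marshal data")

New readers would keep understanding old data, and old readers would only break on files that actually needed the wide codes, which is the compatibility trade-off Martin describes.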
Regards, Martin From martin at v.loewis.de Tue May 15 23:49:50 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 May 2007 23:49:50 +0200 Subject: [Python-Dev] marshal and ssize_t (PEP 353) In-Reply-To: <464A2924.8050806@v.loewis.de> References: <464A2924.8050806@v.loewis.de> Message-ID: <464A2AFE.3060807@v.loewis.de> Ah, forgot to mention that a browsable version of the branch is at http://svn.python.org/view/python/branches/ssize_t/?rev=42382 Unfortunately, you cannot check out that URL. OTOH, you can checkout "peg revisions" (I have no clue what a peg is) http://svn.python.org/projects/python/branches/ssize_t at 42382 but that URL is, unfortunately, not browsable. Regards, Martin From guido at python.org Wed May 16 00:08:02 2007 From: guido at python.org (Guido van Rossum) Date: Tue, 15 May 2007 15:08:02 -0700 Subject: [Python-Dev] marshal and ssize_t (PEP 353) In-Reply-To: <464A2924.8050806@v.loewis.de> References: <464A2924.8050806@v.loewis.de> Message-ID: On 5/15/07, "Martin v. L?wis" wrote: > > I'm looking at Python/marshal.c and there are a lot of places that > > don't support sequences that are larger than would fit into size(int). > > I looked for marshal referenced in the PEP and didn't find anything. > > Was this an oversight or intentional? > > These changes were only made after merging the ssize_t branch, > namely in r42883. > > They were intentional, in the sense that the ssize_t changes were > meant to *only* change the API. Supporting larger strings would > have been a change to the marshal format as well, and that was not > within the mandate of PEP 353. > > Now, if you think the marshal format should change as well to > support large strings, that may be worth considering. There > are two design alternatives: > - change the 's', 't', and 'u' codes to use an 8-byte argument > That would be an incompatible change that would also blow up > marshal data which don't need it (by 4 bytes per string value). > - introduce additional codes (like 'S', 'T', and 'U') that take > 8-byte lengths. That would be (forward?) compatible, in that > old marshal data can be still read in new implementations, > and mostly backwards-compatible, assuming that S/T/U get used > only when needed. However, it would complicate the > implementation. > > I'm still leaning towards "don't change", since I don't expect > that such string objects occur in source code, and since I still > think source code / .pyc is/should be the major application > of marshal. Agreed. I see little use to changing .pyc files to support >2G literals or bytecode. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From skip at pobox.com Wed May 16 00:55:43 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 15 May 2007 17:55:43 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464A29E2.1060207@v.loewis.de> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> Message-ID: <17994.14959.508083.953123@montanaro.dyndns.org> >> The orb that shines in the sky during the day. ____ Martin> This question I could not answer, because I don't know what an Martin> orb is (it's not an object request broker, right?) Martin> Is the answer "sun"? It is indeed. I would use "star" instead of "orb". It might be reasonable to have a translate the questions into a handful of other languages and let the user select the language. 
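Free-text answers like "sun" versus "star" only work if the check is forgiving about wording; a minimal sketch, with an invented question bank and simple normalization of case and leading articles:

QUESTIONS = {
    "q1": ("The star that shines in the sky during the day. ____",
           {"sun"}),
    "q2": ("What Python keyword is used to define a function?",
           {"def"}),
}

def normalize(answer):
    words = answer.strip().lower().split()
    # tolerate a leading article, so "the sun" still matches "sun"
    if words and words[0] in ("a", "an", "the"):
        words = words[1:]
    return " ".join(words)

def check_answer(question_id, answer):
    accepted = QUESTIONS[question_id][1]
    return normalize(answer) in accepted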
Skip From mike.klaas at gmail.com Wed May 16 01:11:24 2007 From: mike.klaas at gmail.com (Mike Klaas) Date: Tue, 15 May 2007 16:11:24 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: On 15-May-07, at 12:32 AM, Georg Brandl wrote: > > There are two problems with this: > * The set of questions is limited, and bots can be programmed to > know them all. Sure, but if someone is customizing their bot to python's issue tracker, in all likelyhood they would have to be dealt with specially anyway. Foiling automated bots shoud be the first priority--they should represent the vast majority of cases. > * Even programmers might not immediately know an answer, and I can > understand > them turning away on that occasion (take for example the "name- > binding" term). It would be hard to make it so easy that anyone with business submitting a bug report should know the answer: What python keyword is used to define a function? What file extension is typically used for python source files? etc. If there is still worry, then a failed answer could simply be the moderation trigger. -Mike From nnorwitz at gmail.com Wed May 16 06:27:24 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 15 May 2007 21:27:24 -0700 Subject: [Python-Dev] marshal and ssize_t (PEP 353) In-Reply-To: <464A2924.8050806@v.loewis.de> References: <464A2924.8050806@v.loewis.de> Message-ID: On 5/15/07, "Martin v. L?wis" wrote: > > I'm looking at Python/marshal.c and there are a lot of places that > > don't support sequences that are larger than would fit into size(int). > > I looked for marshal referenced in the PEP and didn't find anything. > > Was this an oversight or intentional? > > I'm still leaning towards "don't change", since I don't expect > that such string objects occur in source code, and since I still > think source code / .pyc is/should be the major application > of marshal. I agree this is fine. I'll update the PEP with this rationale and the link(s) you provided unless anyone objects. That way we have it clearly documented that this was intentional. I didn't remember this discussion back when ssize_t was done. n From stephen at xemacs.org Wed May 16 06:49:50 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 16 May 2007 13:49:50 +0900 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <17994.14959.508083.953123@montanaro.dyndns.org> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <17994.14959.508083.953123@montanaro.dyndns.org> Message-ID: <87k5v99xu9.fsf@uwakimon.sk.tsukuba.ac.jp> skip at pobox.com writes: > >> The orb that shines in the sky during the day. ____ > > Martin> This question I could not answer, because I don't know what an > Martin> orb is (it's not an object request broker, right?) > > Martin> Is the answer "sun"? > > It is indeed. I would use "star" instead of "orb". And what happens if the user writes "the sun"? Everyday knowledge is pretty slippery. > It might be reasonable to have a translate the questions into a > handful of other languages and let the user select the language. 
Since English is the common language used in the community, I think a better source of questions would be the English language itself, such as: How many words are in the question on this line? ___ten___ John threw a ball at Mark. Who threw it? __John___ John was thrown a ball by Mark. Who threw it? __Mark___ I think most human readers able to use the tracker would be able to handle even the passive "was thrown" construction without too much trouble. We could also use easy "reading comprehension" questions, say from the Iowa achievement test for 11-year-olds. :-) Or even the SAT (GMAT, LSAT); there must be banks of practice questions for those. (Copyright might be a problem, though. Any fifth-grade teachers who write drill programs for their kids out there?) You could also have the user evaluate a simple Python program fragment. Probably it should contain an obvious typo or two to foil a program that evals it. It would be sad if somebody who could write a program to handle any of those couldn't find a better job than working for spammers .... From tjreedy at udel.edu Wed May 16 06:39:26 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 16 May 2007 00:39:26 -0400 Subject: [Python-Dev] Summary of Tracker Issues References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: "Georg Brandl" wrote in message news:f2bnlr$b14$1 at sea.gmane.org... | Terry Reedy schrieb: | > How about asking a Python specific question, with answered filled in rather | > that multiple choice selected: I would be willing to make up a bunch. And I would spend longer than a couple of minutes at 3am to do so. | There are two problems with this: | * The set of questions is limited, but unbounded. I would aim for, say, 50 to start | and bots can be programmed to know them all. by hacking into the site? or by repeated failed attempts? Then someone has to answer the questions correctly to put the correct answers in. A lot of work for very temporary access (a day?) to one site. Then maybe I reword the questions or add new ones, so more programming is needed. | * Even programmers might not immediately know an answer, and I can | understand them turning away on that occasion (take for example the "name- | binding" term). I would expect and want review by others, including non-American, non-native-English speakers to weed out unintended obscurities and ambiguities. | > | automatically creates throw-away e-mail accounts with a range of free | > | web-mail providers for registration purposes. | > | > Either don't accept registrations from such accounts (as other sites have | > done), or require extra verification steps or require approval of the first | > post. How many current legitimate registered users use such? | | This is impossible to find out, I think, since SF.net does not publicly show | real e-mail addresses, instead, each user has an alias username at sourceforge.net. If the list of registrants we got from SF does not have real emails, we will need such to validate accounts on our tracker so *it* can send emails. Terry From tjreedy at udel.edu Wed May 16 06:47:03 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 16 May 2007 00:47:03 -0400 Subject: [Python-Dev] Summary of Tracker Issues References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> Message-ID: ""Martin v. 
L?wis"" wrote in message news:464A29E2.1060207 at v.loewis.de... |> If we anticipate users rather than programmers to register (as if so, it | > would be nice to collect that info to formulate sensible responses), then | > questions like | > The orb that shines in the sky during the day. ____ | | This question I could not answer, because I don't know what an orb is | (it's not an object request broker, right?) Ugh. The sort of reason why I would want review both by myself when not half asleep and by others ;-) | Is the answer "sun"? Yes. Skip is right about 'star' My underlying point: seeing porno spam on the practice site gave me a bad itch both because I detest spammers in general and because I would not want visitors turned off to Python by something that is completely out of place and potentially offensive to some. So I am willing to help us not throw up our hands in surrender. Terry Jan Reedy From talin at acm.org Wed May 16 07:51:04 2007 From: talin at acm.org (Talin) Date: Tue, 15 May 2007 22:51:04 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> Message-ID: <464A9BC8.5070703@acm.org> Terry Reedy wrote: > My underlying point: seeing porno spam on the practice site gave me a bad > itch both because I detest spammers in general and because I would not want > visitors turned off to Python by something that is completely out of place > and potentially offensive to some. So I am willing to help us not throw up > our hands in surrender. Typically spammers don't go through the effort to do a custom login script for each different site. Instead, they do a custom login script for each of the various software applications that support end-user comments. So for example, there's a script for WordPress, and one for PHPNuke, and so on. For applications that allow entries to be added via the web, the solution to spam is pretty simple, which is to make the comment submission form deviate from the normal submission process for that package. For example, in WordPress, you could rename the PHP URL that posts a comment to an article to a non-standard name. The spammer's script generally isn't smart enough to figure out how to post based on an examination of the page, it just knows that for WordPress, the way to submit comments is via a particular URL with particular params. There are various other solutions. The spammer's client isn't generally a full browser, it's just a bare HTTP robot, so if there's some kind of Javascript that is required to post, then the spammer probably won't be able to execute it. For example, you could have a hidden field which is a hash of the bug summary line, calculated by the Javascript in the web form, which is checked by the server. (For people who have JS turned off, failing the check would fall back to a captcha or some other manual means of identification.) Preventing spam that comes in via the email gateway is a little harder. One method is to have email submissions mail back a confirmation mail which must be responded to in some semi-intelligent way. Note that this confirmation step need only be done the first time a new user submits a bug, which can automatically add them to a whitelist for future bug submissions. 
-- Talin From g.brandl at gmx.net Wed May 16 08:27:55 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 16 May 2007 08:27:55 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: Terry Reedy schrieb: > "Georg Brandl" wrote in message > news:f2bnlr$b14$1 at sea.gmane.org... > | Terry Reedy schrieb: > | > How about asking a Python specific question, with answered filled in > rather > | > that multiple choice selected: I would be willing to make up a bunch. > > And I would spend longer than a couple of minutes at 3am to do so. > > | There are two problems with this: > | * The set of questions is limited, > > but unbounded. I would aim for, say, 50 to start > > | and bots can be programmed to know them all. > > by hacking into the site? or by repeated failed attempts? By requesting a registration form over and over, and recording all questions. A human would then answer them, which is easily done for 50 questions (provided that they are *not* targeted at experienced Python programmers, which shouldn't be done). > Then someone > has to answer the questions correctly to put the correct answers in. A lot > of work for very temporary access (a day?) to one site. Assuming that you don't replace all questions after killing the bot account, he can get the access again very easily. > Then maybe I reword the questions or add new ones, so more programming is > needed. > > | * Even programmers might not immediately know an answer, and I can > | understand them turning away on that occasion (take for example the > "name- > | binding" term). > > I would expect and want review by others, including non-American, > non-native-English speakers to weed out unintended obscurities and > ambiguities. That's necessary in any case. Georg From stephen at xemacs.org Wed May 16 11:07:36 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 16 May 2007 18:07:36 +0900 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> Message-ID: <87d5119lwn.fsf@uwakimon.sk.tsukuba.ac.jp> Georg Brandl writes: > By requesting a registration form over and over, and recording all > questions. A human would then answer them, which is easily done for > 50 questions (provided that they are *not* targeted at experienced > Python programmers, which shouldn't be done). We are not going to publish all 50 questions at once. ISTM you need one only question requiring human attention at a time, because once a spammer assigns a human (or inhuman of equivalent intelligence) to cracking you, you're toast. Use it for a short period of time (days, maybe even weeks). The crucial thing is that questions (or site-specific answers that require reading comprehension to obtain from the page) differ across sites; they must not be shared. Now it's much faster for the human to simply do the registration on the current question, and then point the spambot at the site and vandalize 10,000 or so issues. If we can force them to do that, though, I think we're going to win. (In a "scorched earth" sense, maybe, but the spammers will get burned too.) 
Note that one crucial aspect is to record the ID of the question that each account authenticated with (at creation, not at login -- the person's password is a different token). Then have a Big Red Switch that hides[1] all data entered by accounts that authenticated with that question. Of course admins only throw the switch on actually seeing the spam, but since all data is associated with a creation token, you can nuke all of it, even if the spammer has had forethought to create multiple accounts with the Question of the Day, with *one* switch. And if they try to save such an account for tomorrow, cool! they're busted right there. You can get smarter than that (ie, by only barring access to data by accounts that touch more than a small number of issues in a short period of time), if you wish, but that should be sufficient unless you're getting dozens of new users during the validity period for a given question. I guess there will need to be a special token, available only to accounts confirmed by admins, to recover accounts for people who happen to have the same "birthday" as a spammer. Footnotes: [1] Ie, requires user action to become visible, and is tagged as "possible spam". This requires a new attribute on data items, and some programming, but since roundup has to recreate the page for every request (even if it caches, it has to do so for every new item; it's not a problem to invalidate the cache and recreate, I bet), I think it's probably not going to require huge amounts of extra effort or changes in the basic design. [2] Probabilistically. If the spammers are cracking your site on average every 10 days, rotate the question every 5 days. 50 questions means protection for most of a year in that case. From castironpi at comcast.net Wed May 16 11:13:13 2007 From: castironpi at comcast.net (Aaron Brady) Date: Wed, 16 May 2007 04:13:13 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <87d5119lwn.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20070516091329.192F01E4010@bag.python.org> > -----Original Message----- > From: python-dev-bounces+castironpi=comcast.net at python.org [mailto:python- > dev-bounces+castironpi=comcast.net at python.org] On Behalf Of Stephen J. > Turnbull > > ISTM you need one only question requiring human attention at a time, > because once a spammer assigns a human (or inhuman of equivalent > intelligence) to cracking you, you're toast. I can't believe this is still profitable. It's either lucrative or fulfilling, and malice, if the latter. From kristjan at ccpgames.com Wed May 16 11:54:37 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 16 May 2007 09:54:37 +0000 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070516091329.192F01E4010@bag.python.org> References: <87d5119lwn.fsf@uwakimon.sk.tsukuba.ac.jp> <20070516091329.192F01E4010@bag.python.org> Message-ID: <4E9372E6B2234D4F859320D896059A9508DAACBA96@exchis.ccp.ad.local> > -----Original Message----- > > ISTM you need one only question requiring human attention at a time, > > because once a spammer assigns a human (or inhuman of equivalent > > intelligence) to cracking you, you're toast. > > I can't believe this is still profitable. It's either lucrative or > fulfilling, and malice, if the latter. At any rate, it is hardly such an urgent problem that it needs all this brainpower poured into it. And it almost certainly doesn't require novel solutions. 
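For what it's worth, the bookkeeping for Stephen's rotating-question scheme above is modest; a rough sketch, with data structures invented for illustration rather than taken from Roundup:

class RegistrationQuestions:
    def __init__(self, questions):
        self.questions = questions        # rotated every few days
        self.current = 0
        self.creation_question = {}       # account -> question id used at signup
        self.burned = set()               # questions hit by the Big Red Switch

    def rotate(self):
        self.current = (self.current + 1) % len(self.questions)

    def register(self, account):
        self.creation_question[account] = self.current

    def big_red_switch(self, question_id):
        # a spammer got in under this question: hide everything entered
        # by accounts that registered with it
        self.burned.add(question_id)

    def is_visible(self, author):
        return self.creation_question.get(author) not in self.burned

Legitimate users who happen to share a "birthday" with a spammer would then need an admin to clear their account, as Stephen notes.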
Kristj?n From stephen at xemacs.org Wed May 16 12:09:52 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 16 May 2007 19:09:52 +0900 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070516091329.192F01E4010@bag.python.org> References: <87d5119lwn.fsf@uwakimon.sk.tsukuba.ac.jp> <20070516091329.192F01E4010@bag.python.org> Message-ID: <87bqgl9j0v.fsf@uwakimon.sk.tsukuba.ac.jp> Aaron Brady writes: > > ISTM you need one only question requiring human attention at a time, > > because once a spammer assigns a human (or inhuman of equivalent > > intelligence) to cracking you, you're toast. > > I can't believe this is still profitable. It's either lucrative or > fulfilling, and malice, if the latter. That's precisely my point. I don't think it is profitable, and therefore at a reasonable expense to us (one of us makes up a question every couple of days) we can make the tracker an unprofitable target for spammers, and probably avoid most spam. There's ample evidence of malicious behavior by spammers who feel threatened or thwarted, though. From castironpi at comcast.net Wed May 16 12:07:36 2007 From: castironpi at comcast.net (Aaron Brady) Date: Wed, 16 May 2007 05:07:36 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <87bqgl9j0v.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20070516100740.B0A2D1E4002@bag.python.org> > -----Original Message----- > From: Stephen J. Turnbull [mailto:stephen at xemacs.org] > Sent: Wednesday, May 16, 2007 5:10 AM > To: Aaron Brady > Cc: 'Georg Brandl'; python-dev at python.org > Subject: Re: [Python-Dev] Summary of Tracker Issues > > Aaron Brady writes: > > > > ISTM you need one only question requiring human attention at a time, > > > because once a spammer assigns a human (or inhuman of equivalent > > > intelligence) to cracking you, you're toast. > > > > I can't believe this is still profitable. It's either lucrative or > > fulfilling, and malice, if the latter. > > That's precisely my point. I don't think it is profitable, and > therefore at a reasonable expense to us (one of us makes up a question > every couple of days) we can make the tracker an unprofitable target > for spammers, and probably avoid most spam. > > There's ample evidence of malicious behavior by spammers who feel > threatened or thwarted, though. Can we spam back? /blink/ Click here for free therapy. //blink/ From greg.ewing at canterbury.ac.nz Wed May 16 12:39:43 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 16 May 2007 22:39:43 +1200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464A29E2.1060207@v.loewis.de> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> Message-ID: <464ADF6F.8060005@canterbury.ac.nz> Martin v. L?wis wrote: > This question I could not answer, because I don't know what an orb is An orb is a sphere. 
-- Greg From steve at holdenweb.com Wed May 16 13:34:18 2007 From: steve at holdenweb.com (Steve Holden) Date: Wed, 16 May 2007 07:34:18 -0400 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DAACBA96@exchis.ccp.ad.local> References: <87d5119lwn.fsf@uwakimon.sk.tsukuba.ac.jp> <20070516091329.192F01E4010@bag.python.org> <4E9372E6B2234D4F859320D896059A9508DAACBA96@exchis.ccp.ad.local> Message-ID: Kristj?n Valur J?nsson wrote: >> -----Original Message----- >>> ISTM you need one only question requiring human attention at a time, >>> because once a spammer assigns a human (or inhuman of equivalent >>> intelligence) to cracking you, you're toast. >> I can't believe this is still profitable. It's either lucrative or >> fulfilling, and malice, if the latter. > > At any rate, it is hardly such an urgent problem that it needs all this > brainpower poured into it. And it almost certainly doesn't require > novel solutions. > Possibly so, but I can't see c.l.p.dev passing up the chance to discuss this particular bicycle shed. It gets kind of personal when someone is spamming *your* tracker ... ;-) I have already been criticized on c.l.py for suggesting there should be at least one day of the year when we should be allowed to hang spammers up by the nuts (assuming they have any) - "not very welcoming" was the phrase, IIRC. So maybe I'm no longer rational on this topic. or-any-other-for-that-matter-ly y'rs - steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From steve at holdenweb.com Wed May 16 13:55:03 2007 From: steve at holdenweb.com (Steve Holden) Date: Wed, 16 May 2007 07:55:03 -0400 Subject: [Python-Dev] Official version support statement In-Reply-To: <17990.24299.900967.926771@uwakimon.sk.tsukuba.ac.jp> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> <17990.24299.900967.926771@uwakimon.sk.tsukuba.ac.jp> Message-ID: Stephen J. Turnbull wrote: > Terry Reedy writes: > > > "Stephen J. Turnbull" wrote in message > > news:17989.27755.568352.465481 at uwakimon.sk.tsukuba.ac.jp... > > | The impression that many people (including python-dev regulars) have > > | that there is a "policy" of "support" for both the current release > > | (2.5) and the (still very widely used) previous release (2.4) is a > > | real problem, and needs to be addressed. > > > I agree that such mis-understanding should be addressed. So I now think a > > paragraph summarizing Martin's info PEP, ending with "For details, see > > PEPxxx.", would be a good idea. > > FWIW, after Martin's explanation, and considering the annoyance of > keeping updates sync'ed (can PEPs be amended after acceptance, or only > superseded by a new PEP, like IETF RFCs?), I tend to support Barry's > suggestion of a brief listing of current releases and next planned, > and "Python policy concerning release planning is defined by [the > current version of] PEPxxx", with a link. 
In which case doesn't it make more sense to use the existing mechanism of PEP 356 (Release Schedule)? If something isn't listed in there (even without dates) then there are no current plans to release it, and that tells the reader everything they need to know. At the moment the PEP begins with "This document describes the development and release schedule for Python 2.5." but it could just as easily say "future releases of the Python 2.X series" or something similar. Which reminds me, that PEP needs updating! regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From ncoghlan at gmail.com Wed May 16 15:22:35 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 May 2007 23:22:35 +1000 Subject: [Python-Dev] Official version support statement In-Reply-To: References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> <17990.24299.900967.926771@uwakimon.sk.tsukuba.ac.jp> Message-ID: <464B059B.9030505@gmail.com> Steve Holden wrote: > In which case doesn't it make more sense to use the existing mechanism > of PEP 356 (Release Schedule)? If something isn't listed in there (even > without dates) then there are no current plans to release it, and that > tells the reader everything they need to know. > > At the moment the PEP begins with "This document describes the > development and release schedule for Python 2.5." but it could just as > easily say "future releases of the Python 2.X series" or something similar. > > Which reminds me, that PEP needs updating! Those release schedule PEPs are mainly a TODO list leading up to the 2.x.0 releases, though - there's a new one for each major version bump: PEP 160 - Python 1.6 PEP 200 - Python 2.0 PEP 226 - Python 2.1 PEP 251 - Python 2.2 PEP 283 - Python 2.3 PEP 320 - Python 2.4 PEP 356 - Python 2.5 PEP 361 - Python 2.6 Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From steve at holdenweb.com Wed May 16 15:33:57 2007 From: steve at holdenweb.com (Steve Holden) Date: Wed, 16 May 2007 09:33:57 -0400 Subject: [Python-Dev] Official version support statement In-Reply-To: <464B059B.9030505@gmail.com> References: <343F26A5-7DFE-4757-BA4D-2A666CE85215@python.org> <4643A0C3.4070408@v.loewis.de> <6A40659D-B011-4198-B6FD-5D28A18B57EA@python.org> <4644F51C.8070000@v.loewis.de> <17989.27755.568352.465481@uwakimon.sk.tsukuba.ac.jp> <17990.24299.900967.926771@uwakimon.sk.tsukuba.ac.jp> <464B059B.9030505@gmail.com> Message-ID: <464B0845.6060603@holdenweb.com> Nick Coghlan wrote: > Steve Holden wrote: >> In which case doesn't it make more sense to use the existing mechanism >> of PEP 356 (Release Schedule)? If something isn't listed in there >> (even without dates) then there are no current plans to release it, >> and that tells the reader everything they need to know. 
>> >> At the moment the PEP begins with "This document describes the >> development and release schedule for Python 2.5." but it could just as >> easily say "future releases of the Python 2.X series" or something >> similar. >> >> Which reminds me, that PEP needs updating! > > Those release schedule PEPs are mainly a TODO list leading up to the > 2.x.0 releases, though - there's a new one for each major version bump: > > PEP 160 - Python 1.6 > PEP 200 - Python 2.0 > PEP 226 - Python 2.1 > PEP 251 - Python 2.2 > PEP 283 - Python 2.3 > PEP 320 - Python 2.4 > PEP 356 - Python 2.5 > PEP 361 - Python 2.6 > > Cheers, > Nick. > Thanks, it wouldn't be appropriate then (and 361 *doesn't* need updating). regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From jcarlson at uci.edu Wed May 16 18:38:25 2007 From: jcarlson at uci.edu (Josiah Carlson) Date: Wed, 16 May 2007 09:38:25 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464A9BC8.5070703@acm.org> References: <464A9BC8.5070703@acm.org> Message-ID: <20070516091407.8587.JCARLSON@uci.edu> Talin wrote: > Terry Reedy wrote: > > My underlying point: seeing porno spam on the practice site gave me a bad > > itch both because I detest spammers in general and because I would not want > > visitors turned off to Python by something that is completely out of place > > and potentially offensive to some. So I am willing to help us not throw up > > our hands in surrender. > > There are various other solutions. The spammer's client isn't generally > a full browser, it's just a bare HTTP robot, so if there's some kind of > Javascript that is required to post, then the spammer probably won't be > able to execute it. For example, you could have a hidden field which is > a hash of the bug summary line, calculated by the Javascript in the web > form, which is checked by the server. (For people who have JS turned > off, failing the check would fall back to a captcha or some other manual > means of identification.) I'm not sure how effective the question/answer stuff is, but a bit of javascript seems to be a good idea. What has also worked on a phpbb forum that I admin is "Stop Spambot Registration". As the user is registering, it tells them not enter in any profile information when they are registering, that they should do that later. Anyone who enters any profile information is flagged as a spammer, their registration rejected, and I get an email (of the 35 rejections I've received, none have been legitimate users, and only one smart spambot got through, but he had a drug-related name and was easy to toss). If we include fake profile entries during registration that we tell people not to fill in (like 'web page', 'interests', etc.), we may catch some foolish spambots. Of course there is the other *really* simple option of just renaming registration form entry names. Have a 'username' field, but make it hidden and empty by default, rejecting registration if it is not empty. The real login form name could be generated uniquely for each registration attempt, and verified against another hidden form with minimal backend database support. 
While it would only take a marginally intelligent spambot to defeat it, it should thwart the stupid spambots. - Josiah From martin at v.loewis.de Wed May 16 20:13:26 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 16 May 2007 20:13:26 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> Message-ID: <464B49C6.8030009@v.loewis.de> > My underlying point: seeing porno spam on the practice site gave me a bad > itch both because I detest spammers in general and because I would not want > visitors turned off to Python by something that is completely out of place > and potentially offensive to some. So I am willing to help us not throw up > our hands in surrender. Would that help go so far as to provide patches to the roundup installation? Regards, Martin From aahz at pythoncraft.com Thu May 17 02:58:42 2007 From: aahz at pythoncraft.com (Aahz) Date: Wed, 16 May 2007 17:58:42 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070516091407.8587.JCARLSON@uci.edu> References: <464A9BC8.5070703@acm.org> <20070516091407.8587.JCARLSON@uci.edu> Message-ID: <20070517005842.GA19714@panix.com> On Wed, May 16, 2007, Josiah Carlson wrote: > > I'm not sure how effective the question/answer stuff is, but a bit of > javascript seems to be a good idea. Just for the record (and to few people's surprise, I'm sure), I am entirely opposed to any use of JavaScript. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From anthony at interlink.com.au Thu May 17 05:18:21 2007 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 17 May 2007 13:18:21 +1000 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070517005842.GA19714@panix.com> References: <20070516091407.8587.JCARLSON@uci.edu> <20070517005842.GA19714@panix.com> Message-ID: <200705171318.22153.anthony@interlink.com.au> On Thursday 17 May 2007, Aahz wrote: > On Wed, May 16, 2007, Josiah Carlson wrote: > > I'm not sure how effective the question/answer stuff is, but a > > bit of javascript seems to be a good idea. > > Just for the record (and to few people's surprise, I'm sure), I > am entirely opposed to any use of JavaScript. What about flash, instead, then? /ducks -- Anthony Baxter It's never too late to have a happy childhood. From andrewm at object-craft.com.au Thu May 17 07:00:02 2007 From: andrewm at object-craft.com.au (Andrew McNamara) Date: Thu, 17 May 2007 15:00:02 +1000 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464A9BC8.5070703@acm.org> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> Message-ID: <20070517050002.E9D5C600045@longblack.object-craft.com.au> >Typically spammers don't go through the effort to do a custom login >script for each different site. Instead, they do a custom login script >for each of the various software applications that support end-user >comments. So for example, there's a script for WordPress, and one for >PHPNuke, and so on. 
In my experience, what you say is true - the bulk of the spam comes via generic spamming software that has been hard-coded to work with a finite number of applications. However - once you knock these out, there is still a steady stream of what are clearly human generated spams. The mind boggles at the economics or desperation that make this worthwhile. -- Andrew McNamara, Senior Developer, Object Craft http://www.object-craft.com.au/ From talin at acm.org Thu May 17 07:17:49 2007 From: talin at acm.org (Talin) Date: Wed, 16 May 2007 22:17:49 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070517050002.E9D5C600045@longblack.object-craft.com.au> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> Message-ID: <464BE57D.9050103@acm.org> Andrew McNamara wrote: >> Typically spammers don't go through the effort to do a custom login >> script for each different site. Instead, they do a custom login script >> for each of the various software applications that support end-user >> comments. So for example, there's a script for WordPress, and one for >> PHPNuke, and so on. > > In my experience, what you say is true - the bulk of the spam comes via > generic spamming software that has been hard-coded to work with a finite > number of applications. > > However - once you knock these out, there is still a steady stream of > what are clearly human generated spams. The mind boggles at the economics > or desperation that make this worthwhile. Actually, it doesn't cost that much, because typically the spammer can trick other humans into doing their work for them. Here's a simple method: Put up a free porn site, with a front page that says "you must be 18 or older to enter". The page also has a captcha to verify that you are a real person. But here's the trick: The captcha is actually a proxy to some other site that the spammer is trying to get access to. When the human enters in the correct word, the spammer's server sends that word to the target site, which result in a successful login/registration. Now that the spammer is in, they can post comments or whatever they need to do. -- Talin From andrewm at object-craft.com.au Thu May 17 07:30:43 2007 From: andrewm at object-craft.com.au (Andrew McNamara) Date: Thu, 17 May 2007 15:30:43 +1000 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464BE57D.9050103@acm.org> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za><4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> Message-ID: <20070517053043.4155C600045@longblack.object-craft.com.au> >> However - once you knock these out, there is still a steady stream of >> what are clearly human generated spams. The mind boggles at the economics >> or desperation that make this worthwhile. > >Actually, it doesn't cost that much, because typically the spammer can >trick other humans into doing their work for them. > >Here's a simple method: Put up a free porn site, with a front page that >says "you must be 18 or older to enter". The page also has a captcha to >verify that you are a real person. 
But here's the trick: The captcha is >actually a proxy to some other site that the spammer is trying to get >access to. When the human enters in the correct word, the spammer's >server sends that word to the target site, which result in a successful >login/registration. Now that the spammer is in, they can post comments >or whatever they need to do. Yep - I was aware of this trick, but the ones I'm talking about have also got through filling out questionnaires, and whatnot. Certainly the same technique could be used, but my suspicion is that real people are being paid a pittance to sit in front of a PC and spam anything that moves. -- Andrew McNamara, Senior Developer, Object Craft http://www.object-craft.com.au/ From hrvoje.niksic at avl.com Thu May 17 09:53:45 2007 From: hrvoje.niksic at avl.com (Hrvoje =?UTF-8?Q?Nik=C5=A1i=C4=87?=) Date: Thu, 17 May 2007 09:53:45 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464BE57D.9050103@acm.org> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> Message-ID: <1179388425.6077.58.camel@localhost> On Wed, 2007-05-16 at 22:17 -0700, Talin wrote: > Here's a simple method: Put up a free porn site [...] Is it known that someone actually implemented this? It's a neat trick, but as far as I know, it started as a thought experiment of what *could* be done to fairly easily defeat the captchas, as well as all other circumvention methods that make use of human intelligence. From greg.ewing at canterbury.ac.nz Thu May 17 12:15:01 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 May 2007 22:15:01 +1200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464BE57D.9050103@acm.org> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> Message-ID: <464C2B25.5030908@canterbury.ac.nz> Talin wrote: > Here's a simple method: Put up a free porn site, with a front page that > says "you must be 18 or older to enter". The page also has a captcha to > verify that you are a real person. But here's the trick: The captcha is > actually a proxy to some other site that the spammer is trying to get > access to. The "python-related question" technique would probably be somewhat resistant to this, as your average porn surfer probably doesn't know anything about Python. (At least until CP4E takes off and everyone knows Python...) -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiem! | Christchurch, New Zealand | (I'm not a morning person.) 
| greg.ewing at canterbury.ac.nz +--------------------------------------+ From guido at python.org Thu May 17 16:35:15 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 17 May 2007 07:35:15 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <1179388425.6077.58.camel@localhost> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> Message-ID: On 5/17/07, Hrvoje Nik?i? wrote: > On Wed, 2007-05-16 at 22:17 -0700, Talin wrote: > > Here's a simple method: Put up a free porn site [...] > > Is it known that someone actually implemented this? It's a neat trick, > but as far as I know, it started as a thought experiment of what *could* > be done to fairly easily defeat the captchas, as well as all other > circumvention methods that make use of human intelligence. I don't have hard data but it's been related to me as true by Googlers who should have first-hand experience. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From scott+python-dev at scottdial.com Thu May 17 17:04:46 2007 From: scott+python-dev at scottdial.com (Scott Dial) Date: Thu, 17 May 2007 11:04:46 -0400 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <1179388425.6077.58.camel@localhost> References: <20070513000049.79ECB78236@psf.upfronthosting.co.za> <4646BC64.4030801@v.loewis.de> <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> Message-ID: <464C6F0E.5010908@scottdial.com> Hrvoje Niksic wrote: > On Wed, 2007-05-16 at 22:17 -0700, Talin wrote: >> Here's a simple method: Put up a free porn site [...] > > Is it known that someone actually implemented this? I moderate a discussion forum which was abused with this exact attack. At the time, it was a phpBB forum which had the standard graphical captcha. After switching to a different forum package, the attacks went away. I will assume because (as it has been said) it was no longer a well-known and common interface. However, it may also be because instead of using a graphic (which is easily transplanted to another page), it uses ascii art which would require more effort to extract and move to another page. -Scott -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From orsenthil at users.sourceforge.net Thu May 17 19:07:04 2007 From: orsenthil at users.sourceforge.net (O.R.Senthil Kumaran) Date: Thu, 17 May 2007 22:37:04 +0530 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464C6F0E.5010908@scottdial.com> References: <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> Message-ID: <20070517170703.GC9779@gmail.com> * Scott Dial [2007-05-17 11:04:46]: > However, it may also be because instead of using a graphic (which is > easily transplanted to another page), it uses ascii art which would > require more effort to extract and move to another page. 
Another approach would be a 'text scrambler' logic: You can aclltauy srlbcame the quiotesn psneeetrd wchih only a hmuan can uetrnnadsd pperlory. The quiotesn ovubolsiy slouhd be a vrey vrey slmipe one. And you can hvae a quiotesn form the quiotesn itslef. Site: What is the futorh word of tihs scnnteee? Answer: fourth. Site: You are intelligent, I shall allow you. -- O.R.Senthil Kumaran http://uthcode.sarovar.org From g.brandl at gmx.net Thu May 17 19:19:19 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 17 May 2007 19:19:19 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070517170703.GC9779@gmail.com> References: <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com> Message-ID: O.R.Senthil Kumaran schrieb: > * Scott Dial [2007-05-17 11:04:46]: > >> However, it may also be because instead of using a graphic (which is >> easily transplanted to another page), it uses ascii art which would >> require more effort to extract and move to another page. > > Another approach would be a 'text scrambler' logic: > > You can aclltauy srlbcame the quiotesn psneeetrd wchih only a hmuan can > uetrnnadsd pperlory. The quiotesn ovubolsiy slouhd be a vrey vrey slmipe one. > > And you can hvae a quiotesn form the quiotesn itslef. > > Site: What is the futorh word of tihs scnnteee? > > Answer: fourth. > > Site: You are intelligent, I shall allow you. Please bear in mind that non-native speakers who don't have had much exposure to the English language should be able to solve this problem too. I doubt that is the case for the kind of challenge you propose. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From mithrandi-python-dev at mithrandi.za.net Thu May 17 20:51:57 2007 From: mithrandi-python-dev at mithrandi.za.net (Tristan Seligmann) Date: Thu, 17 May 2007 20:51:57 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070517053043.4155C600045@longblack.object-craft.com.au> References: <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <20070517053043.4155C600045@longblack.object-craft.com.au> Message-ID: <20070517185157.GC26076@mithrandi.za.net> * Andrew McNamara [2007-05-17 15:30:43 +1000]: > technique could be used, but my suspicion is that real people are being > paid a pittance to sit in front of a PC and spam anything that moves. http://www.mturk.com/mturk/welcome Complete simple tasks that people do better than computers. And, get paid for it. Learn more. Choose from thousands of tasks, control when you work, and decide how much you earn. -- mithrandi, i Ainil en-Balandor, a faer Ambar -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 197 bytes Desc: Digital signature Url : http://mail.python.org/pipermail/python-dev/attachments/20070517/7eb683ca/attachment.pgp From greg.ewing at canterbury.ac.nz Fri May 18 03:06:41 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 18 May 2007 13:06:41 +1200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070517170703.GC9779@gmail.com> References: <20070515034743.2030F5CC4B5@longblack.object-craft.com.au> <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com> Message-ID: <464CFC21.8090806@canterbury.ac.nz> O.R.Senthil Kumaran wrote: > Site: What is the futorh word of tihs scnnteee? > > Answer: fourth. Are you sure it isn't "futorh"?-) -- Greg From orsenthil at users.sourceforge.net Fri May 18 04:46:02 2007 From: orsenthil at users.sourceforge.net (O.R.Senthil Kumaran) Date: Fri, 18 May 2007 08:16:02 +0530 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464CFC21.8090806@canterbury.ac.nz> References: <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com> <464CFC21.8090806@canterbury.ac.nz> Message-ID: <20070518024602.GA3268@gmail.com> * Greg Ewing [2007-05-18 13:06:41]: > > > Site: What is the futorh word of tihs scnnteee? > > Answer: fourth. > > Are you sure it isn't "futorh"?-) > :-) My idea was, a human got to answer it unscrambled as 'fourth' as he "understands" what the question is and gives the proper answer. Agreed, there could be confusion at first. For non-native speakers of English, this could be difficult if their experience with English is limited, but we will have to take a chance that anyone capable of reading English should be able to figure it out. Again, these are my thoughts and I don't have good data to prove it. From an implementation standpoint, this is one of the easiest I can think of. Thanks, -- O.R.Senthil Kumaran http://uthcode.sarovar.org From nnorwitz at gmail.com Fri May 18 07:21:05 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 17 May 2007 22:21:05 -0700 Subject: [Python-Dev] recursion limit in marshal Message-ID: I had a little argument with the marshal module on Windows last night; I eventually won. :-) A patch was checked in which would prevent blowing out the stack and segfaulting with this code: marshal.loads( 'c' + ('X' * 4*4) + '{' * 2**20) Originally, I didn't change the recursion limit, which was 5000. (See MAX_MARSHAL_STACK_DEPTH in Python/marshal.c) This is a constant in C code that cannot be changed at runtime. The fix worked on most platforms. However it didn't on some (Windows and MIPS?), presumably due to a smaller stack limit. I don't know what the stack limits are on each architecture. I dropped the limit to 4000; that still crashed. I eventually settled on 2000, which passed in 2.6. I don't think there is a test case for the recursion limit when dumping a deeply nested object. I suppose I should add one, because that could also blow the limit. The point of this message is to see if anyone thinks 2000 is unreasonable. It could probably be raised, but I'm not going to try it since I don't have access to a Windows box. Testing this remotely sucks.
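As a concrete illustration of the missing test (this sketch is not from the original message, and it assumes marshal.dumps() reports the depth limit by raising ValueError rather than crashing), something along these lines could be added to the test suite:

    import marshal

    def test_deeply_nested_dump():
        # Build a structure nested far deeper than MAX_MARSHAL_STACK_DEPTH.
        obj = None
        for _ in xrange(100000):
            obj = [obj]
        try:
            marshal.dumps(obj)
        except ValueError:
            pass    # hitting the depth check cleanly is the expected outcome
        else:
            raise AssertionError("dumps() of a deeply nested object should "
                                 "fail with ValueError, not crash")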
n From martin at v.loewis.de Fri May 18 07:52:40 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 18 May 2007 07:52:40 +0200 Subject: [Python-Dev] recursion limit in marshal In-Reply-To: References: Message-ID: <464D3F28.3040100@v.loewis.de> > The point of this message is to see if anyone thinks 2000 is > unreasonable. It could probably be raised, but I'm not going to try > it since I don't have access to a Windows box. Testing this remotely > sucks. If this turns out ever to be a limitation, I would challenge the reporter to rewrite marshal in a non-recursive manner. It shouldn't be that difficult: w_object should operate a queue of objects yet to be written, and Tuple, Dict, Set (why does marshal support writing sets, anyway?) would all just add things to the queue rather than recursively invoking w_object. The only challenge would be code objects, which have a w_long interspersed with the w_object calls. I would fix this by changing the marshal format of code objects, to move the co_firstlineno to the beginning. For reading, a heap-managed stack would be necessary, consisting of a (type, value, position, next) linked list. Again, code objects would need special casing, to allow construction with NULL pointers at first, followed by indexed setting of the code fields. For lists and tuples, position would define the next index to be filled; for dictionaries, it would not be needed, and for code objects, it would index the various elements of the code object. Regards, Martin From stephen at xemacs.org Fri May 18 10:09:51 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 18 May 2007 17:09:51 +0900 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070518024602.GA3268@gmail.com> References: <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com> <464CFC21.8090806@canterbury.ac.nz> <20070518024602.GA3268@gmail.com> Message-ID: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> O.R.Senthil Kumaran writes: > :-) My idea was, a human got to answer it unscrambled as 'fourth' as he > "understands" what the question is and gives the proper answer. > Agreed, there could be confusion at first. But for any given user, there's only going to be a first. Either they pass the test the first time and after that authenticate via personal password, or they say WTF!! In that case we could lose all the bug reports they were ever going to write. If we're going to do CAPTCHA, what we're looking for is something that any 4 year old does automatically, but machines can't do at all. Visual recognition used to be one, but isn't any more. The CAPTCHA literature claims that segmentation still is (dividing complex images into letters), but that's nontrivial for humans, too, and I think that machines will eventually catch up. (Ie, within a handful of months.) I think it would be better to do content. URLs come to mind; without something clickable, most commercial spam would be hamstrung. But few bug reports and patches need to contain URLs, except for specialized local ones pointing to related issues. For example, how about requiring user interaction to display any post containing an URL, until an admin approves it? Or you could provide a preview containing the first two non-empty lines not containing an URL. 
This *would* be inconvenient for large attachments and other data where the reporter prefers to provide an URL rather than the literal data, but OTOH only people who indicate they really want to see spam would see it. ;-) From jeff at taupro.com Fri May 18 17:23:46 2007 From: jeff at taupro.com (Jeff Rush) Date: Fri, 18 May 2007 10:23:46 -0500 Subject: [Python-Dev] Need Survey Answers from Core Developers Message-ID: <464DC502.5000700@taupro.com> Time is short and I'm still looking for answers to some questions about cPython, so that it makes a good showing in the Forrester survey. 1) How is the project governed? How does the community make decisions on what goes into a release? You know, I've been a member of the Python community for many years -- I know about PEPs, Guido as BDFL, and +1/-1. But I've never figured out exactly how -final- decisions are made on what goes into a release. I've never needed to, until now. Can someone explain in one paragraph? 2) Does the language have a formal defined release plan? I know Zope 3's release plan, every six months, but not that of Python. Is there a requirement to push a release out the door every N months, as some projects do, or is each release separately negotiated with developers around a planned set of features? 3) Some crude idea of how many new major and minor features were added in the last release? Yes, I know this is difficult -- the idea is to get some measure of the evolution/stability of cPython re features. Jython and IronPython are probably changing rapidly -- cPython, not so much. 4) How many committers to the cPython core are there? I don't have the necessary access to the pydotorg infrastructure to answer this -- can someone who does help me out here? Thanks for any one-line answers you can dash off to me today. Jeff Rush Python Advocacy Coordinator From guido at python.org Fri May 18 19:40:40 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 18 May 2007 10:40:40 -0700 Subject: [Python-Dev] Mass PEP status changes Message-ID: With the help of Neal Norwitz, Jeremy Hylton, Alex Martelli and Collin Winter, I've greatly reduced the set of open PEPs numbered less than 3000. Here's a summary. Please speak up if we've made a grave error; I take all responsibility for the final decisions. Positive Decisions (Marked Accepted or Final) --------------------------------------------- SF 237 Unifying Long Integers and Integers Zadka, GvR Marked as Final; there's no work left to be done. I 287 reStructuredText Docstring Format Goodger Status changed from Draft to Active. SA 302 New Import Hooks JvR, Moore Marked Accepted. Should this be marked Final, or is there still an unimplemented part? SF 331 Locale-Independent Float/String Conversions Reis Marked Final; this was done years ago (Python 2.3?). Negative Decisions (Rejected, Withdrawn or Deferred) ---------------------------------------------------- SW 228 Reworking Python's Numeric Model Zadka, GvR Withdrawn in favor of PEP 3141 (numeric ABCs). SR 256 Docstring Processing System Framework Goodger Rejected; seems to have run out of steam. SR 258 Docutils Design Specification Goodger Rejected; docutils is no longer slated for stdlib inclusion. SD 267 Optimized Access to Module Namespaces Hylton Deferred; no-one has had time for this. SR 268 Extended HTTP functionality and WebDAV Stein Rejected for lack of interest or progress. SD 280 Optimizing access to globals GvR Deferred; no-one has had time for this.
SW 296 Adding a bytes Object Type Gilbert Was withdrawn long ago in favor of PEP 358 (the bytes object). SR 297 Support for System Upgrades Lemburg Rejected; failed to generate support. SD 323 Copyable Iterators Martelli Deferred due to lack of interest. IR 350 Codetags Elliott Rejected for lack of will to standardize. XXX lives! SR 354 Enumerations in Python Finney Rejected; not enough interest, not sufficiently Pythonic. SR 355 Path - Object oriented filesystem paths Lindqvist Rejected as being the ultimate kitchen sink. SR 754 IEEE 754 Floating Point Special Values Warnes Rejected for lack of progress. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From jackdied at jackdied.com Fri May 18 19:39:13 2007 From: jackdied at jackdied.com (Jack Diederich) Date: Fri, 18 May 2007 13:39:13 -0400 Subject: [Python-Dev] Need Survey Answers from Core Developers In-Reply-To: <464DC502.5000700@taupro.com> References: <464DC502.5000700@taupro.com> Message-ID: <20070518173913.GC5429@performancedrivers.com> On Fri, May 18, 2007 at 10:23:46AM -0500, Jeff Rush wrote: > Time is short and I'm still looking for answers to some questions about > cPython, so that it makes a good showing in the Forrester survey. > [snip] > > 4) How many committers to the cPython core are there? > > I don't have the necessary access to the pydotorg infrastructure > to answer this -- can someone who does help me out here? http://www.python.org/dev/committers If the last modified date can be trusted there are currently 77 committers. From brett at python.org Fri May 18 20:00:23 2007 From: brett at python.org (Brett Cannon) Date: Fri, 18 May 2007 11:00:23 -0700 Subject: [Python-Dev] [PEPs] Mass PEP status changes In-Reply-To: References: Message-ID: On 5/18/07, Guido van Rossum wrote: > > With the help of Neal Norwitz, Jeremy Hylton, Alex Martelli and Collin > Winter, I've greatly reduced the set of open PEPs numbered less than > 3000. Here's a summary. Please speak up if we've made a grave error; I > take all responsibility for the final decisions. > > Positive Decisions (Marked Accepted or Final) > --------------------------------------------- > > SF 237 Unifying Long Integers and Integers Zadka, GvR > Marked as Final; there's no work left to be done. > > I 287 reStructuredText Docstring Format Goodger > Status changed from Draft to Active. > > SA 302 New Import Hooks JvR, Moore > Marked Accepted. Should this be marked Final, or is there still an > unimplemented part? Everything added by this PEP is not covered in the official docs. Otherwise the PEP is implemented. There is mention of a possible "part 2" where the built-in import gets refactored to use what this PEP introduces, but that can be a separate PEP for possible changes to import itself. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070518/c5b68547/attachment.html From guido at python.org Fri May 18 20:02:56 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 18 May 2007 11:02:56 -0700 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? Message-ID: While reviewing PEPs, I stumbled over PEP 335 ( Overloadable Boolean Operators) by Greg Ewing. I am of two minds of this -- on the one hand, it's been a long time without any working code or anything. OTOH it might be quite useful to e.g. numpy folks. It is time to reject it due to lack of interest, or revive it! 
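For readers who have not opened the PEP: it proposes two-phase hooks (__and1__/__and2__, __or1__/__or2__, a __not__ hook, and a NeedOtherOperand marker) so that classes can intercept 'and'/'or'. The sketch below is illustrative only -- none of it runs on any released Python, and the semantics are simplified from the draft -- but it shows the kind of symbolic-expression use case that array and query-builder classes have in mind:

    class NeedOtherOperand(object):
        """Stand-in for the marker object the PEP proposes."""

    class Expr(object):
        def __init__(self, text):
            self.text = text

        def __and1__(self):
            # Phase 1: only the left operand is known.  Returning the marker
            # asks for the right operand to be evaluated and __and2__ called,
            # instead of short-circuiting.
            return NeedOtherOperand

        def __and2__(self, other):
            # Phase 2: both operands are available; combine them symbolically.
            return Expr('(%s AND %s)' % (self.text, other.text))

        def __or1__(self):
            return NeedOtherOperand

        def __or2__(self, other):
            return Expr('(%s OR %s)' % (self.text, other.text))

    # Today Expr('x > 1') and Expr('y < 2') simply evaluates to the second
    # operand; under the PEP it could instead yield Expr('(x > 1 AND y < 2)').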
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Fri May 18 20:03:59 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 18 May 2007 11:03:59 -0700 Subject: [Python-Dev] [PEPs] Mass PEP status changes In-Reply-To: References: Message-ID: On 5/18/07, Brett Cannon wrote: > > > On 5/18/07, Guido van Rossum wrote: > > With the help of Neal Norwitz, Jeremy Hylton, Alex Martelli and Collin > > Winter, I've greatly reduced the set of open PEPs numbered less than > > 3000. Here's a summary. Please speak up if we've made a grave error; I > > take all responsibility for the final decisions. > > > > Positive Decisions (Marked Accepted or Final) > > --------------------------------------------- > > > > SF 237 Unifying Long Integers and Integers Zadka, GvR > > Marked as Final; there's no work left to be done. > > > > I 287 reStructuredText Docstring Format Goodger > > Status changed from Draft to Active. > > > > SA 302 New Import Hooks > JvR, Moore > > Marked Accepted. Should this be marked Final, or is there still an > > unimplemented part? > > Everything added by this PEP is not covered in the official docs. Otherwise > the PEP is implemented. OK, I'll mark it final then. Docs are really not the PEP's business. > There is mention of a possible "part 2" where the built-in import gets > refactored to use what this PEP introduces, but that can be a separate PEP > for possible changes to import itself. Yes, please. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Fri May 18 20:10:19 2007 From: brett at python.org (Brett Cannon) Date: Fri, 18 May 2007 11:10:19 -0700 Subject: [Python-Dev] Need Survey Answers from Core Developers In-Reply-To: <464DC502.5000700@taupro.com> References: <464DC502.5000700@taupro.com> Message-ID: On 5/18/07, Jeff Rush wrote: > > Time is short and I'm still looking for answers to some questions about > cPython, so that it makes a good showing in the Forrester survey. > > 1) How is the project governed? How does the community make decisions > on what goes into a release? > > You know, I've been a member of the Python community for many years > -- I know about PEPs, Guido as BDFL, and +1/-1. But I've never > figured out exactly how -final- decisions are made on what goes > into a release. I've never needed to, until now. Can someone > explain in one paragraph? Consensus is reached on python-dev or Guido says so. =) Honestly, someone proposes an idea to python-dev. It gets discussed. Either a consensus is reached and the person goes ahead and moves forward with it, or Guido explicitly says OK. Occasionally there is a minor revolt and Guido backs down, but usually that leads to the wrong decision winning out. =) How much extra work is needed to present to python-dev depends on the level of the change. PEP is needed for language changes. New additions to the stdlib require community consensus that it is best-of-breed. Small additions usually should get python-dev approval. Patches for fixes just happen. More details are in http://www.python.org/dev/intro . 2) Does the language have a formal defined release plan? > > I know Zope 3's release plan, every six months, but not that of > Python. Is there a requirement to push a release out the door > every N months, as some projects do, or is each release > separately negotiated with developers around a planned set > of features? The latter. We aim for every 12 - 18 months, but it depends on whether there are any specific features we want in a release.
3) Some crude idea of how many new major and minor features were > added in the last release? Yes, I know this is difficult -- the > idea it so get some measure of the evolution/stability of cPython > re features. Jython and IronPython are probably changing rapidly > -- cPython, not such much. Going by http://www.python.org/download/releases/2.5/highlights/ , roughly 8 or so major features. Don't know what to say about minor since I don't know how you want to count stdlib additions. 4) How many committers to the cPython core are there? > > I don't have the necessary access to the pydotorg infrastructure > to answer this -- can someone who does help me out here? According to http://www.ohloh.net/projects/26/analyses/latest/contributors , 92 people over the life of the project, but 51 over the last year. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070518/066cb24c/attachment-0001.html From g.brandl at gmx.net Fri May 18 20:25:37 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 18 May 2007 20:25:37 +0200 Subject: [Python-Dev] Need Survey Answers from Core Developers In-Reply-To: References: <464DC502.5000700@taupro.com> Message-ID: Brett Cannon schrieb: > 4) How many committers to the cPython core are there? > > I don't have the necessary access to the pydotorg infrastructure > to answer this -- can someone who does help me out here? > > > According to > http://www.ohloh.net/projects/26/analyses/latest/contributors , 92 > people over the life of the project, but 51 over the last year. Heh. I was about to point that out too, but somehow 51 seemed a very large number to me... Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From skip at pobox.com Fri May 18 20:39:44 2007 From: skip at pobox.com (skip at pobox.com) Date: Fri, 18 May 2007 13:39:44 -0500 Subject: [Python-Dev] Need Survey Answers from Core Developers In-Reply-To: <464DC502.5000700@taupro.com> References: <464DC502.5000700@taupro.com> Message-ID: <17997.62192.311458.672697@montanaro.dyndns.org> Jeff> 1) How is the project governed? How does the community make Jeff> decisions on what goes into a release? Jeff> You know, I've been a member of the Python community for many Jeff> years -- I know about PEPs, Guido as BDFL, and +1/-1. But I've Jeff> never figured out exactly how -final- decisions are made on Jeff> what goes into a release. I've never needed to, until now. Jeff> Can someone explain in one paragraph? Consensus (most of the time) and GvR pronouncements for significant changes. There are situations where Guido has simply pronounced when the community seemed unable to settle on one solution. Decorators come to mind. Jeff> 2) Does the language have a formal defined release plan? Jeff> I know Zope 3's release plan, every six months, but not that of Jeff> Python. Is there a requirement to push a release out the door Jeff> every N months, as some projects do, or is each release Jeff> separately negotiated with developers around a planned set Jeff> of features? PEP 6? PEP 101? PEP 102? There is no hard-and-fast time schedule. I believe minor releases leave the station approximately every 18-24 months, micro releases roughly every six months. 
Jeff> 3) Some crude idea of how many new major and minor features were Jeff> added in the last release? Yes, I know this is difficult -- the Jeff> idea it so get some measure of the evolution/stability of cPython Jeff> re features. Jython and IronPython are probably changing rapidly Jeff> -- cPython, not such much. Haven't the slightest idea. Check Andrew's What's New document. Jeff> 4) How many committers to the cPython core are there? 71, according to the Assignee menu in SourceForge. I suspect at most a quarter of them are active. Skip From barry at python.org Fri May 18 21:13:44 2007 From: barry at python.org (Barry Warsaw) Date: Fri, 18 May 2007 15:13:44 -0400 Subject: [Python-Dev] Mass PEP status changes In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 18, 2007, at 1:40 PM, Guido van Rossum wrote: > SR 354 Enumerations in Python Finney > Rejected; not enough interest, not sufficiently Pythonic. I have a competing proposal for enumerations which I just haven't gotten around to spec'ing out yet. Check the the cheeseshop for the 'munepy' package if you're interested (mune == enum backwards, since 'enum' was already taken). Guido, can you tell me whether the concept of enums for Python is being rejected, or this specific proposal? My proposal would be quite different, and I think, more Pythonic. Should I bother submitting a PEP? - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRk366XEjvBPtnXfVAQKL1wP8D/iUag8jCCjFTT1Qa+Z1iwFcGCFgHq7c +ZzR2PrqkG8+2f3MxEa31GMZZpyNyBxh50QSpSeEx/NLfFSLyWtrY1q58BwSfay2 b7ClZmvjC4wlLJzuTxpO05MXhu2nbS5TQ0h2ut+HDvKe8bCHVs1Me48mEYa/BYF0 rvSCShoa37o= =DAER -----END PGP SIGNATURE----- From nnorwitz at gmail.com Fri May 18 21:16:06 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 18 May 2007 12:16:06 -0700 Subject: [Python-Dev] accepted peps that should be final? Message-ID: Are the following accepted PEPs implemented and should be marked final: SA 358 The "bytes" Object Schemenauer, GvR SA 3106 Revamping dict.keys(), .values() & .items() GvR SA 3109 Raising Exceptions in Python 3000 Winter SA 3110 Catching Exceptions in Python 3000 Winter SA 3111 Simple input built-in in Python 3000 Roberge SA 3112 Bytes literals in Python 3000 Orendorff n From guido at python.org Fri May 18 21:17:34 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 18 May 2007 12:17:34 -0700 Subject: [Python-Dev] Mass PEP status changes In-Reply-To: References: Message-ID: On 5/18/07, Barry Warsaw wrote: > On May 18, 2007, at 1:40 PM, Guido van Rossum wrote: > > > SR 354 Enumerations in Python Finney > > Rejected; not enough interest, not sufficiently Pythonic. > > I have a competing proposal for enumerations which I just haven't > gotten around to spec'ing out yet. Check the the cheeseshop for the > 'munepy' package if you're interested (mune == enum backwards, since > 'enum' was already taken). > > Guido, can you tell me whether the concept of enums for Python is > being rejected, or this specific proposal? My proposal would be > quite different, and I think, more Pythonic. Should I bother > submitting a PEP? That's really up to the rest of the community. If their response to your proposal is going to be as lackluster as to PEP 354, don't bother. Perhaps you can test the waters on python-ideas or c.l.py. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Fri May 18 21:19:34 2007 From: guido at python.org (Guido van Rossum) Date: Fri, 18 May 2007 12:19:34 -0700 Subject: [Python-Dev] accepted peps that should be final? In-Reply-To: References: Message-ID: On 5/18/07, Neal Norwitz wrote: > Are the following accepted PEPs implemented and should be marked final: > > SA 358 The "bytes" Object Schemenauer, GvR > SA 3106 Revamping dict.keys(), .values() & .items() GvR Not yet -- the implementations of these two are incomplete. > SA 3109 Raising Exceptions in Python 3000 Winter > SA 3110 Catching Exceptions in Python 3000 Winter I believe Collin is still working on a patch. > SA 3111 Simple input built-in in Python 3000 Roberge Final. > SA 3112 Bytes literals in Python 3000 Orendorff Let's keep this open since it is still a subject of debate, e.g. some folks feel that b"..." should be immutable. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From collinw at gmail.com Fri May 18 21:22:14 2007 From: collinw at gmail.com (Collin Winter) Date: Fri, 18 May 2007 12:22:14 -0700 Subject: [Python-Dev] accepted peps that should be final? In-Reply-To: References: Message-ID: <43aa6ff70705181222g2ff42ff0j99cb238145ae1ba6@mail.gmail.com> On 5/18/07, Neal Norwitz wrote: > Are the following accepted PEPs implemented and should be marked final: > > SA 3109 Raising Exceptions in Python 3000 Winter Not yet implemented, will be this weekend. > SA 3110 Catching Exceptions in Python 3000 Winter This is implemented (I'll update the PEP to reflect this). Has a decision been made as to whether 2.6 will support both "," and "as" in except statements? Collin Winter From barry at python.org Fri May 18 21:23:50 2007 From: barry at python.org (Barry Warsaw) Date: Fri, 18 May 2007 15:23:50 -0400 Subject: [Python-Dev] Mass PEP status changes In-Reply-To: References: Message-ID: <35B295E7-6ED6-4679-ACBA-60302983F0CE@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 18, 2007, at 3:17 PM, Guido van Rossum wrote: >> Guido, can you tell me whether the concept of enums for Python is >> being rejected, or this specific proposal? My proposal would be >> quite different, and I think, more Pythonic. Should I bother >> submitting a PEP? > > That's really up to the rest of the community. If their response to > your proposal is going to be as lackluster as to PEP 354, don't > bother. Perhaps you can test the waters on python-ideas or c.l.py. I guess the first step would be to announce the package on c.l.py.a :). But cool, that tells me what I wanted to know. Thanks, - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRk39RnEjvBPtnXfVAQJ+BAQAm5oc/gzHq1bcwUt961Rc/9Ga7SX0CQI3 qcpSgQTnUOQxOFgas71tfZ1KC0Hg8ceD/L+3OnTeANY+HVjN/3B+JFLTdELYrCu7 bjJNZvnlkq46/fjR8YXgPwoAH8LgFZgrOxaGLZw4IpTWU5p3MRJJXR9344lG/zR3 zKbtUssefZk= =bs9n -----END PGP SIGNATURE----- From nnorwitz at gmail.com Fri May 18 21:28:35 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 18 May 2007 12:28:35 -0700 Subject: [Python-Dev] accepted peps that should be final? In-Reply-To: <43aa6ff70705181222g2ff42ff0j99cb238145ae1ba6@mail.gmail.com> References: <43aa6ff70705181222g2ff42ff0j99cb238145ae1ba6@mail.gmail.com> Message-ID: On 5/18/07, Collin Winter wrote: > > > SA 3110 Catching Exceptions in Python 3000 Winter > > This is implemented (I'll update the PEP to reflect this). Has a > decision been made as to whether 2.6 will support both "," and "as" in > except statements? 
I think 'except as' should go into 2.6. I haven't heard any reason not to make this change. It should be easy to implement. n From jdahlin at async.com.br Fri May 18 21:49:11 2007 From: jdahlin at async.com.br (Johan Dahlin) Date: Fri, 18 May 2007 16:49:11 -0300 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: References: Message-ID: <464E0337.3000905@async.com.br> Guido van Rossum wrote: > While reviewing PEPs, I stumbled over PEP 335 ( Overloadable Boolean > Operators) by Greg Ewing. I am of two minds of this -- on the one > hand, it's been a long time without any working code or anything. OTOH > it might be quite useful to e.g. numpy folks. This kind of feature would also be useful for ORMs, to be able to map boolean operators to SQL. Johan From collinw at gmail.com Fri May 18 22:12:55 2007 From: collinw at gmail.com (Collin Winter) Date: Fri, 18 May 2007 13:12:55 -0700 Subject: [Python-Dev] accepted peps that should be final? In-Reply-To: References: <43aa6ff70705181222g2ff42ff0j99cb238145ae1ba6@mail.gmail.com> Message-ID: <43aa6ff70705181312j37fce8fft385cd46994e792b8@mail.gmail.com> On 5/18/07, Neal Norwitz wrote: > On 5/18/07, Collin Winter wrote: > > > > > SA 3110 Catching Exceptions in Python 3000 Winter > > > > This is implemented (I'll update the PEP to reflect this). Has a > > decision been made as to whether 2.6 will support both "," and "as" in > > except statements? > > I think 'except as' should go into 2.6. I haven't heard any reason > not to make this change. It should be easy to implement. Just the syntax, right, or should the end-of-suite cleanup code be backported, too? Collin Winter From nnorwitz at gmail.com Fri May 18 22:18:13 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 18 May 2007 13:18:13 -0700 Subject: [Python-Dev] accepted peps that should be final? In-Reply-To: <43aa6ff70705181312j37fce8fft385cd46994e792b8@mail.gmail.com> References: <43aa6ff70705181222g2ff42ff0j99cb238145ae1ba6@mail.gmail.com> <43aa6ff70705181312j37fce8fft385cd46994e792b8@mail.gmail.com> Message-ID: On 5/18/07, Collin Winter wrote: > On 5/18/07, Neal Norwitz wrote: > > On 5/18/07, Collin Winter wrote: > > > > > > > SA 3110 Catching Exceptions in Python 3000 Winter > > > > > > This is implemented (I'll update the PEP to reflect this). Has a > > > decision been made as to whether 2.6 will support both "," and "as" in > > > except statements? > > > > I think 'except as' should go into 2.6. I haven't heard any reason > > not to make this change. It should be easy to implement. > > Just the syntax, right, or should the end-of-suite cleanup code be > backported, too? Yes, just the syntax. That will make it similar to loop comprehension leaking the var in 2.6, but not 3.0. n
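For anyone skimming the thread, the two spellings under discussion look like this ('path' and 'handle_error' are placeholder names used only for illustration):

    # Python 2.5 and earlier accept only the comma form:
    try:
        f = open(path)
    except IOError, exc:
        handle_error(exc)

    # PEP 3110 form, used by Python 3000 and -- per the messages above --
    # also to be accepted by Python 2.6:
    try:
        f = open(path)
    except IOError as exc:
        handle_error(exc)

    # Python 3000 additionally deletes 'exc' when the except block ends;
    # the agreement above is that 2.6 backports only the syntax, not that
    # end-of-suite cleanup.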
From tjreedy at udel.edu Sat May 19 00:03:49 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 18 May 2007 18:03:49 -0400 Subject: [Python-Dev] Summary of Tracker Issues References: <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org><20070517050002.E9D5C600045@longblack.object-craft.com.au><464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost><464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com><464CFC21.8090806@canterbury.ac.nz><20070518024602.GA3268@gmail.com> <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: "Stephen J. Turnbull" wrote in message news:87lkfm8sds.fsf at uwakimon.sk.tsukuba.ac.jp... | I think it would be better to do content. URLs come to mind; without | something clickable, most commercial spam would be hamstrung. But | few bug reports and patches need to contain URLs, except for | specialized local ones pointing to related issues. A bug is a disparity between promise and performance. Promise is often best demonstrated by a link to the relevant section of the docs. Doc patches should also contain such a link. So doc references should be included with local (to tracker) links and not filtered on. | For example, how about requiring user interaction to display any post | containing an URL, until an admin approves it? Why not simply embargo any post with an off-site link? Tho there might have been some, I can't remember a single example of such at SF. Anybody posting such could certainly understand "Because this post contains an off-site link, it will be embargoed until reviewed to ensure that it is legitimate." | Or you could provide a preview containing the first two non-empty lines | not containing an URL. | This *would* be inconvenient for large attachments and other | data where the reporter prefers to provide an URL rather than the | literal data, but OTOH only people who indicate they really want to | see spam would see it. ;-) I don't get this, but it sounds like more work than simple embargo. I think html attachments should also be embargoed (I believe this is what I saw a couple of months ago.) And perhaps the account uploading an html file. Terry Jan Reedy From brett at python.org Sat May 19 01:15:28 2007 From: brett at python.org (Brett Cannon) Date: Fri, 18 May 2007 16:15:28 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com> <464CFC21.8090806@canterbury.ac.nz> <20070518024602.GA3268@gmail.com> <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On 5/18/07, Terry Reedy wrote: > > > "Stephen J. Turnbull" wrote in message > news:87lkfm8sds.fsf at uwakimon.sk.tsukuba.ac.jp... > | I think it would be better to do content. URLs come to mind; without > | something clickable, most commercial spam would be hamstrung. But > | few bug reports and patches need to contain URLs, except for > | specialized local ones pointing to related issues. > > A bug is a disparity between promise and performance. Promise is often > best demonstrated by a link to the relevant section of the docs. Doc > patches should also contain such a link. So doc references should be > included with local (to tracker) links and not filtered on. > > | For example, how about requiring user interaction to display any post > | containing an URL, until an admin approves it? > > Why not simply embargo any post with an off-site link?
Tho there might > have been some, I can't remember a single example of such at SF. Anybody > posting such could certainly understand "Because this post contains an > off-site link, it will be embargoed until reviewed to ensure that it is > legitimate." > > | Or you could provide a preview containing the first two non-empty lines > | not containing an URL. > | This *would* be inconvenient for large attachments and other > | data where the reporter prefers to provide an URL rather than the > | literal data, but OTOH only people who indicate they really want to > | see spam would see it. ;-) > > I don't get this, but it sounds like more work than simple embargo. > > I think html attachments should also be embargoed (I believe this is what > I > saw a couple of months ago.) And perhaps the account uploading an html > file. If you guys want to see any of this happen please take this discussion over to the tracker-discuss mailing list. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070518/34cd4119/attachment.htm From python at rcn.com Sat May 19 03:34:34 2007 From: python at rcn.com (Raymond Hettinger) Date: Fri, 18 May 2007 21:34:34 -0400 (EDT) Subject: [Python-Dev] Py2.6 buildouts to the set API Message-ID: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Here some ideas that have been proposed for sets: * New method (proposed by Shane Holloway): s1.isdisjoint(s2). Logically equivalent to "not s1.intersection(s2)" but has an early-out if a common member is found. The speed-up is potentially large given two big sets that may largely overlap or may not intersect at all. There is also a memory savings since a new set does not have to be formed and then thrown away. * Additional optional arguments for basic set operations to allow chained operations. For example, s=s1.union(s2, s3, s4) would be logically equivalent to s=s1.union(s2).union(s3).union(s4) but would run faster because no intermediate sets are created, copied, and discarded. It would run as if written: s=s1.copy(); s.update(s2); s.update(s3); s.update(s4). * Make sets listenable for changes (proposed by Jason Wells): s = set(mydata) def callback(s): print 'Set %d now has %d items' % (id(s), len(s)) s.listeners.append(callback) s.add(existing_element) # no callback s.add(new_element) # callback Raymond From mike.klaas at gmail.com Sat May 19 03:50:24 2007 From: mike.klaas at gmail.com (Mike Klaas) Date: Fri, 18 May 2007 18:50:24 -0700 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Message-ID: On 18-May-07, at 6:34 PM, Raymond Hettinger wrote: > Here some ideas that have been proposed for sets: > > * New method (proposed by Shane Holloway): s1.isdisjoint(s2). > Logically equivalent to "not s1.intersection(s2)" but has an early- > out if a common member is found. The speed-up is potentially large > given two big sets that may largely overlap or may not intersect at > all. There is also a memory savings since a new set does not have > to be formed and then thrown away. +1. Disjointness verification is one of my main uses for set(), and though I don't think that the early-out condition would trigger often in my code, it would increase readability. > * Additional optional arguments for basic set operations to allow > chained operations. 
For example, s=s1.union(s2, s3, s4) would be > logically equivalent to s=s1.union(s2).union(s3).union(s4) but > would run faster because no intermediate sets are created, copied, > and discarded. It would run as if written: s=s1.copy(); s.update > (s2); s.update(s3); s.update(s4). It's too bad that this couldn't work with the binary operator spelling: s = s1 | s2 | s3 | s4 > * Make sets listenable for changes (proposed by Jason Wells): > > s = set(mydata) > def callback(s): > print 'Set %d now has %d items' % (id(s), len(s)) > s.listeners.append(callback) > s.add(existing_element) # no callback > s.add(new_element) # callback -1 on the base set type: it seems too complex for a base set type. Also, there are various possible semantics that might be desirable, such as receiving the added element, or returning False to prevent addition. The proper place is perhaps a subclass of set with a magic method (analogous to defaultdict/__missing__). -Mike From tjreedy at udel.edu Sat May 19 04:10:18 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 18 May 2007 22:10:18 -0400 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? References: Message-ID: "Guido van Rossum" wrote in message news:ca471dc20705181102q29329642qb166f076d6d93999 at mail.gmail.com... | While reviewing PEPs, I stumbled over PEP 335 ( Overloadable Boolean | Operators) by Greg Ewing. I am of two minds of this -- on the one | hand, it's been a long time without any working code or anything. OTOH | it might be quite useful to e.g. numpy folks. | | It is time to reject it due to lack of interest, or revive it! Rejection would currently be fine with me, so I do not intend this to indicate 'revived interest'. But having looked at the PEP now, I have some comments in case anyone else has such. Rationale: if the only extra time without the overloads is the check of Py_TPFLAGS_HAVE_BOOLEAN_OVERLOAD then I suppose it might be 'not appreciable', but in my ignorance, I would be more persuaded by timing data ;-) Motivation: another 'workaround' is the one used in symbolic logic, dating back to Boole himself. '+' for 'or' and '*' for 'and'. But these are also needed as-is in your motivating cases. A counter-motivation is that this proposal makes it even harder to reason about the behavior of Python programs. It adds one more caveat to stick in the back of one's mind. Other overloads do the same, but to me, overloading logic cuts a bit deeper. Special Methods: the proposal strikes me as clever but excessively baroque. In the absence of justification for the complexities, here is a *much* simpler version. Delete special casing of NotImplemented. __not__: substitute for default semantics when present. Delete NeedOtherOperand (where would it even live?). The current spelling is True for and and False for or, as with standard semantics. A class that does not want short circuiting, as in your motivating cases, simply defines __and1__ or __or1__ to return True or False. If the return value of __xxx1__ is not True/False, then it is the result. (I believe this matches your spec.) So the LOGICAL_XXX_1 bytecodes check for True/False identity without calling bool() on the result. Delete the reverse methods. They are only needed for mixed-type operations, like scalar*matrix. But such seems senseless here. In any case, they are not needed for any of your motivating applications, which would define both methods without mixing. If the second arg is evaluated, the result is __x2__(arg1,arg2) if defined, else arg2 as usual.
Delete the 'As a special case' sentence. Type Slots: someone else can decide if a new flag and 5 new slots are a significant price. Python/C API: 5 more to learn or ignore, but that does not affect me. I do not understand what they would do or how they relate to the byte codes. Other Implementations: any possible problems? (I have no idea.) Terry Jan Reedy From stephen at xemacs.org Sat May 19 04:50:47 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 19 May 2007 11:50:47 +0900 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org> <20070517050002.E9D5C600045@longblack.object-craft.com.au> <464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost> <464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com> <464CFC21.8090806@canterbury.ac.nz> <20070518024602.GA3268@gmail.com> <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87d50x8r20.fsf@uwakimon.sk.tsukuba.ac.jp> Terry Reedy writes: > Why not simply embargo any post with an off-site link? Tho there might > have been some, I can't remember a single example of such at SF. Fine by me; if it doesn't happen often, then embargoing them would be fine. My occasional experience with distro reporting processes shows that they happen a fair amount there (often as a reference to an upstream or downstream bug report). The major ones can probably be special-cased easily as needed. > I don't get [the short preview idea], but it sounds like more work > than simple embargo. It would be. It is a use-case that according to your explanation doesn't apply to Python's tracker, so a YAGNI until proved otherwise. > I think html attachments should also be embargoed (I believe this is what I > saw a couple of months ago.) And perhaps the account uploading an html > file. Sounds OK to me, except that there are some modules that handle HTML (and XML? can that be practically distinguished from HTML?), and I would suppose people would want upload examples and test cases. From castironpi at comcast.net Sat May 19 05:05:29 2007 From: castironpi at comcast.net (Aaron Brady) Date: Fri, 18 May 2007 22:05:29 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20070519030535.AB2D91E4004@bag.python.org> > -----Original Message----- > From: python-dev-bounces+castironpi=comcast.net at python.org [mailto:python- > dev-bounces+castironpi=comcast.net at python.org] On Behalf Of Stephen J. > Turnbull > Sent: Friday, May 18, 2007 3:10 AM > To: python-dev at python.org > Subject: Re: [Python-Dev] Summary of Tracker Issues > > O.R.Senthil Kumaran writes: > > > :-) My idea was, a human got to answer it unscrambled as 'fourth' as > he > > "understands" what the question is and gives the proper answer. > > > Agreed, there could be confusion at first. > > password, or they say WTF!! In that case we could lose all the bug > reports they were ever going to write. That's bad. > If we're going to do CAPTCHA, what we're looking for is something that > any 4 year old does automatically, but machines can't do at all. > Visual recognition used to be one, but isn't any more. The CAPTCHA > literature claims that segmentation still is (dividing complex images > into letters), but that's nontrivial for humans, too, and I think that > machines will eventually catch up. (Ie, within a handful of months.) Complex backgrounds used? Colorful foreground on a interior decorating background. 
Also gradient foreground, gradient background. > I think it would be better to do content. URLs come to mind; without > something clickable, most commercial spam would be hamstrung. But > few bug reports and patches need to contain URLs, except for > specialized local ones pointing to related issues. > > For example, how about requiring user interaction to display any post > containing an URL, until an admin approves it? Or you could provide a > preview containing the first two non-empty lines not containing an > URL. This *would* be inconvenient for large attachments and other > data where the reporter prefers to provide an URL rather than the > literal data, but OTOH only people who indicate they really want to > see spam would see it. ;-) Block spam or hide? Maybe a reader is what you want. "Posting a URL requires heavier spam-proofing. Click here to authenticate yourself." Takes you to ours- the PL question. From aahz at pythoncraft.com Sat May 19 05:40:25 2007 From: aahz at pythoncraft.com (Aahz) Date: Fri, 18 May 2007 20:40:25 -0700 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Message-ID: <20070519034025.GA12349@panix.com> On Fri, May 18, 2007, Raymond Hettinger wrote: > > Here some ideas that have been proposed for sets: > > * New method (proposed by Shane Holloway): s1.isdisjoint(s2). > Logically equivalent to "not s1.intersection(s2)" but has an early-out > if a common member is found. The speed-up is potentially large given > two big sets that may largely overlap or may not intersect at all. > There is also a memory savings since a new set does not have to be > formed and then thrown away. +1 > * Additional optional arguments for basic set operations to allow > chained operations. For example, s=s1.union(s2, s3, s4) would be > logically equivalent to s=s1.union(s2).union(s3).union(s4) but would > run faster because no intermediate sets are created, copied, and > discarded. It would run as if written: s=s1.copy(); s.update(s2); > s.update(s3); s.update(s4). +1 > * Make sets listenable for changes (proposed by Jason Wells): > > s = set(mydata) > def callback(s): > print 'Set %d now has %d items' % (id(s), len(s)) > s.listeners.append(callback) > s.add(existing_element) # no callback > s.add(new_element) # callback -3 -- that's an ugly wart on what is currently a nice, clean object. This seems like a good opportunity for a Cookbook Recipe for a generic listenable proxy class for container objects. Note that I could argue for the callback to be called even when the set doesn't actually change, only when the set is attempted to be changed, which to my mind pushes strongly for a recipe instead of extending sets. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." 
--John Cleese anticipates Usenet From castironpi at comcast.net Sat May 19 06:07:57 2007 From: castironpi at comcast.net (Aaron Brady) Date: Fri, 18 May 2007 23:07:57 -0500 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Message-ID: <20070519040802.926EC1E4004@bag.python.org> > -----Original Message----- > From: python-dev-bounces+castironpi=comcast.net at python.org [mailto:python- > dev-bounces+castironpi=comcast.net at python.org] On Behalf Of Raymond > Hettinger > Sent: Friday, May 18, 2007 8:35 PM > To: python-dev at python.org > Subject: [Python-Dev] Py2.6 buildouts to the set API > > Here some ideas that have been proposed for sets: > > * New method (proposed by Shane Holloway): s1.isdisjoint(s2). Logically > equivalent to "not s1.intersection(s2)" but has an early-out if a common > member is found. The speed-up is potentially large given two big sets > that may largely overlap or may not intersect at all. There is also a > memory savings since a new set does not have to be formed and then thrown > away. It sounds -really- good. > * Additional optional arguments for basic set operations to allow chained > operations. For example, s=s1.union(s2, s3, s4) would be logically > equivalent to s=s1.union(s2).union(s3).union(s4) but would run faster > because no intermediate sets are created, copied, and discarded. It would > run as if written: s=s1.copy(); s.update(s2); s.update(s3); s.update(s4). This pleads for elsewhere adding operation in chains. Sort on multiple keys is addressed by itemgetter (IMO also should be built-in). But dict.update, a list append, a deque pop could use these. When-do-you-ever is out of stock and ships in a week. > * Make sets listenable for changes (proposed by Jason Wells): > > s = set(mydata) > def callback(s): > print 'Set %d now has %d items' % (id(s), len(s)) > s.listeners.append(callback) > s.add(existing_element) # no callback > s.add(new_element) # callback This one calls for subclassing, a la observer pattern. In that vein, some subclassing operation could use a list of pattern-matching / semantic membership. E.g. def every_add_op( self, op, ***data ): call_a_hook( ***data ) op( ***data ) Rings of ML. Support could be provided with def __init__... for x in ( add, update, intersection_update ): def my_x( self, ***data ): call_a_hook( ***data ) x( ***data ) setattr( self, x, my_x ) But you need to know which operations are destructive/constructive, but we can't go back and annotate the whole stdlib. Though I agree that some level of programmatic interference could be useful. Academic concern which shoots 50-50 in the real world. I may be tempered with too much beauty (Beautiful is better than ugly.), not enough market. You're all in it for money. From martin at v.loewis.de Sat May 19 09:19:36 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 19 May 2007 09:19:36 +0200 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Message-ID: <464EA508.7040900@v.loewis.de> > * New method (proposed by Shane Holloway): s1.isdisjoint(s2). > Logically equivalent to "not s1.intersection(s2)" but has an > early-out if a common member is found. The speed-up is potentially > large given two big sets that may largely overlap or may not > intersect at all. There is also a memory savings since a new set > does not have to be formed and then thrown away. 
I'd rather see iterator versions of the set operations. Then you could do def isempty(i): try: i.next() except StopIteration: return True else: return False if isempty(s1.iter_intersection(s2)): ... > * Additional optional arguments for basic set operations to allow > chained operations. For example, s=s1.union(s2, s3, s4) would be > logically equivalent to s=s1.union(s2).union(s3).union(s4) but would > run faster because no intermediate sets are created, copied, and > discarded. It would run as if written: s=s1.copy(); s.update(s2); > s.update(s3); s.update(s4). I'd rather see this as collections.bigunion. > * Make sets listenable for changes (proposed by Jason Wells): -1, IAGNI. Martin From rasky at develer.com Sat May 19 13:33:11 2007 From: rasky at develer.com (Giovanni Bajo) Date: Sat, 19 May 2007 13:33:11 +0200 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Message-ID: On 19/05/2007 3.34, Raymond Hettinger wrote: > * Make sets listenable for changes (proposed by Jason Wells): > > s = set(mydata) > def callback(s): > print 'Set %d now has %d items' % (id(s), len(s)) > s.listeners.append(callback) > s.add(existing_element) # no callback > s.add(new_element) # callback -1 because I can't see why sets are so specials (compared to other containers or objects) to provide a builtin implementation of the observer pattern. In fact, in my experience, real-world use cases of this pattern often require more attention to details (eg: does the set keep a strong or weak reference to the callback? What if I need to do several *transactional* modifications in a row, and thus would like my callback to be called only once at the end?). -- Giovanni Bajo From ncoghlan at gmail.com Sat May 19 13:40:12 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 19 May 2007 21:40:12 +1000 Subject: [Python-Dev] Need Survey Answers from Core Developers In-Reply-To: <464DC502.5000700@taupro.com> References: <464DC502.5000700@taupro.com> Message-ID: <464EE21C.7070101@gmail.com> Jeff Rush wrote: > Time is short and I'm still looking for answers to some questions about > cPython, so that it makes a good showing in the Forrester survey. > > 1) How is the project governed? How does the community make decisions > on what goes into a release? > > You know, I've been a member of the Python community for many years > -- I know about PEPs, Guido as BDFL, and +1/-1. But I've never > figured out exactly how -final- decisions are made on what goes > into a release. I've never needed to, until now. Can someone > explain in one paragraph? As others have already pointed out, the ultimate authority rests with GvR. For any given release, Guido delegates a fair bit of authority to the release manager, and will often defer to the release manager's judgment (especially for maintenance releases). The current status of a release (including naming names for the release team) is tracked using an Informational track PEP. The initial Python 2.5 release was tracked in PEP 356. The Python 2.6 release is being tracked in PEP 361. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From jason.orendorff at gmail.com Sat May 19 14:41:32 2007 From: jason.orendorff at gmail.com (Jason Orendorff) Date: Sat, 19 May 2007 08:41:32 -0400 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: References: Message-ID: On 5/18/07, Guido van Rossum wrote: > While reviewing PEPs, I stumbled over PEP 335 ( Overloadable Boolean > Operators) by Greg Ewing. -1. "and" and "or" affect the flow of control. It's a matter of taste, but I feel the benefit is too small here to add another flow-control quirk. I like that part of the language to be simple. Anyway, if this *is* done, logically it should cover "(... if ... else ...)" as well. Same use cases. -j From g.brandl at gmx.net Sat May 19 19:14:09 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 19 May 2007 19:14:09 +0200 Subject: [Python-Dev] The docs, reloaded Message-ID: Hi, over the last few weeks I've hacked on a new approach to Python's documentation. As Python already has an excellent documentation framework, the docutils, with a readable yet extendable markup format, reST, I thought that it should be possible to use those instead of the current LaTeX->latex2html toolchain. For the impatient: the result can be seen at . I've written a converter tool that handles most of the LaTeX markup and turns it into reST, as well as a builder tool that adds many custom directives and roles, and also features like index generation and cross-document linking. (What you can see at the URL is a completely statical version of the docs, as it would be shipped with the distribution. For the real online docs, I have more plans; I'll come to that later.) So why the effort? Here's a partial list of things that have already been improved: - the source is much more readable (for examples, try the "view source" links in the left navbar) - all function/class/module names are properly cross-referenced - the HTML pages are generated from templates, using a language similar to Django's template language - Python and C code snippets are syntax highlighted - for the offline version, there's a JavaScript enabled search function - the index is generated over all the documentation, making it easier to find stuff you need - the toolchain is pure Python, therefore can easily be shipped What more? If there is support for this approach, I have plans for things that can be added to the online version: - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would redirect you to the matching location. - "interactive patching": provide an "propose edit" link, leading to a Wiki-like page where you can edit the source. From the result, a diff is generated, which can be accepted, edited or rejected by the development team. This is even more straightforward than plain old comments. - the same infrastructure could be used for developers, with automatic checkin into subversion. - of course, plain old comments can be useful too. Concluding, a small caveat: The conversion/build tools are, of course, not complete yet. There are a number of XXX comments in the text, most of them indicate that the converter could not handle a situation -- that would have to be corrected once after conversion is done. Waiting for comments! Cheers, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. 
Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From fuzzyman at voidspace.org.uk Sat May 19 19:19:46 2007 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 19 May 2007 18:19:46 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <464F31B2.2090402@voidspace.org.uk> Georg Brandl wrote: > Hi, > > over the last few weeks I've hacked on a new approach to Python's documentation. > As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . > Wow! Very impressive. Changing to ReST would encourage more contributions to the documentation and widen the range of people able to maintain them. Michael Foord From steven.bethard at gmail.com Sat May 19 19:28:31 2007 From: steven.bethard at gmail.com (Steven Bethard) Date: Sat, 19 May 2007 11:28:31 -0600 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: On 5/19/07, Georg Brandl wrote: > For the impatient: the result can be seen at . [snip] > Here's a partial list of things that have already been improved: > > - the source is much more readable (for examples, try the "view source" links in > the left navbar) > - all function/class/module names are properly cross-referenced > - the HTML pages are generated from templates, using a language similar to > Django's template language > - Python and C code snippets are syntax highlighted > - for the offline version, there's a JavaScript enabled search function > - the index is generated over all the documentation, making it easier to find > stuff you need > - the toolchain is pure Python, therefore can easily be shipped Very cool! I'd love to see the docs move to ReST. > If there is support for this approach, I have plans for things that can be added > to the online version: > > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > redirect you to the matching location. > - "interactive patching": provide an "propose edit" link, leading to a Wiki-like > page where you can edit the source. From the result, a diff is generated, > which can be accepted, edited or rejected by the development team. This is > even more straightforward than plain old comments. > - the same infrastructure could be used for developers, with automatic checkin > into subversion. Yes, these would all be outstanding features. STeVe -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy From jcarlson at uci.edu Sat May 19 19:37:19 2007 From: jcarlson at uci.edu (Josiah Carlson) Date: Sat, 19 May 2007 10:37:19 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070519030535.AB2D91E4004@bag.python.org> References: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> <20070519030535.AB2D91E4004@bag.python.org> Message-ID: <20070519101235.85AE.JCARLSON@uci.edu> "Aaron Brady" wrote: > "Stephen J. Turnbull" wrote: > > If we're going to do CAPTCHA, what we're looking for is something that > > any 4 year old does automatically, but machines can't do at all. > > Visual recognition used to be one, but isn't any more. 
The CAPTCHA > > literature claims that segmentation still is (dividing complex images > > into letters), but that's nontrivial for humans, too, and I think that > > machines will eventually catch up. (Ie, within a handful of months.) > > Complex backgrounds used? Colorful foreground on a interior decorating > background. > > Also gradient foreground, gradient background. Captchas like this are easily broken using computational methods, or even the porn site trick that was already mentioned. Never mind Stephen's stated belief, that you quoted, that he believes that even the hard captchas are going to be beaten by computational methods soon. Please try to pay attention to previous posts. - Josiah As an aside, while the '4 year old can do it' is a hard qualification to meet, add 10 years and there exists a fairly sexist method (-), that can be subjective (-), that seems to work quite well (+), but requires javascript (-); the 'hot or not' captcha. It fetches 9 random pictures from hot or not (hopefully changes their file names) and asks the user to pick the 4 hottest of the 9. A variant exists that asks "choose the 4 horses" or "select all of the iguanas", but it requires an ever-evolving number of tagged input images (which is why hot or not works so well as a source). From jcarlson at uci.edu Sat May 19 19:48:29 2007 From: jcarlson at uci.edu (Josiah Carlson) Date: Sat, 19 May 2007 10:48:29 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <20070519103928.85B4.JCARLSON@uci.edu> Georg Brandl wrote: > Hi, > > over the last few weeks I've hacked on a new approach to Python's documentation. > As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . I'm generally a curmudgeon when it comes to 'the docs could be done better'. But this? I like it. A lot. Especially if you can get these other features in: > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > redirect you to the matching location. > - "interactive patching": provide an "propose edit" link, leading to a Wiki-like > page where you can edit the source. From the result, a diff is generated, > which can be accepted, edited or rejected by the development team. This is > even more straightforward than plain old comments. > - the same infrastructure could be used for developers, with automatic checkin > into subversion. I'm a bit iffy on yet another tool, but if roundup integration could be done, I think it would be great. - Josiah From dustin at v.igoro.us Sat May 19 20:22:24 2007 From: dustin at v.igoro.us (Dustin J. Mitchell) Date: Sat, 19 May 2007 13:22:24 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070519103928.85B4.JCARLSON@uci.edu> References: <20070519103928.85B4.JCARLSON@uci.edu> Message-ID: <20070519182224.GD2940@v.igoro.us> On Sat, May 19, 2007 at 10:48:29AM -0700, Josiah Carlson wrote: > I'm generally a curmudgeon when it comes to 'the docs could be done > better'. But this? I like it. A lot. Especially if you can get these > other features in: > > > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > > redirect you to the matching location. Seconded! -- even if it's just for modules, this would be great. 
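The quick-dispatch idea floated above (docs.python.org/q?os.path.split, in the spirit of php.net/array_search) can be prototyped as a tiny WSGI redirector. A minimal sketch, assuming a made-up target URL layout; a real service would consult a prebuilt name index instead of importing whatever arrives in the query string::

    def longest_importable_prefix(dotted):
        """Longest importable prefix of a dotted name: 'os.path' for 'os.path.split'."""
        parts = dotted.split(".")
        for i in range(len(parts), 0, -1):
            candidate = ".".join(parts[:i])
            try:
                __import__(candidate)
                return candidate
            except ImportError:
                continue
        return None

    def quick_dispatch(environ, start_response):
        """WSGI app: redirect /q?NAME to the page documenting NAME's module."""
        query = environ.get("QUERY_STRING", "").strip()
        module = longest_importable_prefix(query) or query.split(".")[0]
        target = "http://docs.python.org/modules/%s.html#%s" % (module, query)
        start_response("302 Found", [("Location", target)])
        return []

    if __name__ == "__main__":
        from wsgiref.simple_server import make_server
        make_server("localhost", 8000, quick_dispatch).serve_forever()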
I can't count the times I've wished I could type e.g., 'docs.python.org/httplib' the way I can type 'php.net/array_search' to try to find out whether the needle comes before or after the haystack. Dustin From LeWiemann at gmail.com Sat May 19 22:11:52 2007 From: LeWiemann at gmail.com (Lea Wiemann) Date: Sat, 19 May 2007 16:11:52 -0400 Subject: [Python-Dev] [Doc-SIG] The docs, reloaded In-Reply-To: References: Message-ID: <464F5A08.5010507@gmail.com> Georg Brandl wrote: > over the last few weeks I've hacked on a new approach to Python's documentation. > As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . Awesome, that looks pretty amazing! I'd reeeally like to have a look at the source code (don't worry if it's not clean!). Can you publish or post it somewhere? If you'd like to store it in the Docutils sandboxes, just drop me a line and I'll give you SVN access. By the way, things get a lot easier for me if you place it in the public domain, because that's the license Docutils uses, and it's obviously compatible to every other license. I actually have a Google Summer of Code project, "Documenting Python Packages with Docutils", which I'll start working on May 28: . It has a somewhat different scope, so our projects will complement each other nicely I believe. (To the point where we may end up with a complete tool-chain to actually migrate the Python documentation to reST. Tr?s cool.) Your effort and mine only seem to have some limited overlap. I see that you added at least some markup to reST that allows documents to be marked up in a similar fashion as the current Python-specific LaTeX markup, which is on my list, too. If you see more overlap, please let me know, because I may need to adjust my time-line or project-scope (which is totally fine with me, by the way, so don't worry about "getting into the way of my project" or so!). May I suggest we continue the discussion on Doc-SIG only and prune Python-dev from the CC? I'm on Jabber/GoogleTalk (LeWiemann at gmail.com), by the way, so feel free to IM me. Best wishes, Lea [Rest of the quoted message below.] > I've written a converter tool that handles most of the LaTeX markup and turns it > into reST, as well as a builder tool that adds many custom directives and roles, > and also features like index generation and cross-document linking. > > (What you can see at the URL is a completely statical version of the docs, as it > would be shipped with the distribution. For the real online docs, I have more > plans; I'll come to that later.) > > So why the effort? > > Here's a partial list of things that have already been improved: > > - the source is much more readable (for examples, try the "view source" links in > the left navbar) > - all function/class/module names are properly cross-referenced > - the HTML pages are generated from templates, using a language similar to > Django's template language > - Python and C code snippets are syntax highlighted > - for the offline version, there's a JavaScript enabled search function > - the index is generated over all the documentation, making it easier to find > stuff you need > - the toolchain is pure Python, therefore can easily be shipped > > What more? 
> > If there is support for this approach, I have plans for things that can be added > to the online version: > > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > redirect you to the matching location. > - "interactive patching": provide an "propose edit" link, leading to a Wiki-like > page where you can edit the source. From the result, a diff is generated, > which can be accepted, edited or rejected by the development team. This is > even more straightforward than plain old comments. > - the same infrastructure could be used for developers, with automatic checkin > into subversion. > - of course, plain old comments can be useful too. > > Concluding, a small caveat: The conversion/build tools are, of course, not > complete yet. There are a number of XXX comments in the text, most of them > indicate that the converter could not handle a situation -- that would have > to be corrected once after conversion is done. > > Waiting for comments! > > Cheers, > Georg > > From rrr at ronadam.com Sat May 19 22:31:59 2007 From: rrr at ronadam.com (Ron Adam) Date: Sat, 19 May 2007 15:31:59 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <464F5EBF.4030209@ronadam.com> Georg Brandl wrote: > Hi, > > over the last few weeks I've hacked on a new approach to Python's documentation. > As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . Wow, very nice. I like it.. +1 I've been working on improving pydoc. (slowly but steadily) Maybe we can join efforts as I think the two projects overlap in a number of areas, and it sounds like we are thinking of some of the same things as far as the tool chain. So maybe there's some synergy we can take advantage of. Some of the things I've recently considered adding to pydoc. - To output individual sections for use in a template engine. - A reST formatter. - Use docutils to format reST doc strings to html in the html formatter. (as an option, not the default.) It looks like there may be a few areas where we can share code. - The html syntax highlighters. (Pydoc can use those) - A shared html style sheet. - Document locater. [1] - An HTMLServer for local (dynamic dispatching) html viewing. [2] - The reST source for viewing topics and keywords in pydoc. (Instead of scraping html pages. Ick) (1.) Pydoc has a locater function which finds the html docs and presents a link to the html page for an individual item. But it would be more reliable if the dispatcher where on the document end. Then pydoc would have a single place to link to. (Either locally or on line.) (2.) The server in pydoc will probably work as is. You just need to supply a callback to get the pages. It's a separate module now. Cheers, Ron > I've written a converter tool that handles most of the LaTeX markup and turns it > into reST, as well as a builder tool that adds many custom directives and roles, > and also features like index generation and cross-document linking. > > (What you can see at the URL is a completely statical version of the docs, as it > would be shipped with the distribution. For the real online docs, I have more > plans; I'll come to that later.) > > So why the effort? 
> > Here's a partial list of things that have already been improved: > > - the source is much more readable (for examples, try the "view source" links in > the left navbar) > - all function/class/module names are properly cross-referenced > - the HTML pages are generated from templates, using a language similar to > Django's template language > - Python and C code snippets are syntax highlighted > - for the offline version, there's a JavaScript enabled search function > - the index is generated over all the documentation, making it easier to find > stuff you need > - the toolchain is pure Python, therefore can easily be shipped > > What more? > > If there is support for this approach, I have plans for things that can be added > to the online version: > > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > redirect you to the matching location. > - "interactive patching": provide an "propose edit" link, leading to a Wiki-like > page where you can edit the source. From the result, a diff is generated, > which can be accepted, edited or rejected by the development team. This is > even more straightforward than plain old comments. > - the same infrastructure could be used for developers, with automatic checkin > into subversion. > - of course, plain old comments can be useful too. > > Concluding, a small caveat: The conversion/build tools are, of course, not > complete yet. There are a number of XXX comments in the text, most of them > indicate that the converter could not handle a situation -- that would have > to be corrected once after conversion is done. > > Waiting for comments! > > Cheers, > Georg From brett at python.org Sun May 20 00:11:32 2007 From: brett at python.org (Brett Cannon) Date: Sat, 19 May 2007 15:11:32 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: On 5/19/07, Georg Brandl wrote: > > Hi, > > over the last few weeks I've hacked on a new approach to Python's > documentation. > As Python already has an excellent documentation framework, the docutils, > with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . I really want this! >From a doc writer's perspective I find this reST approach much easier to grapple than the LaTeX one since I find reST markup nicer for simple things like lists and bolding. From a committer's POV I like this as it should hopefully get more people to help with changes and make it easy for me to build the docs locally to make sure the markup is correct. And from a lazy coder's POV I love it as Georg has already done all the work (and in Python so if I really have to change something I have a better chance of figuring out how). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070519/689c521e/attachment.html From status at bugs.python.org Sun May 20 02:00:50 2007 From: status at bugs.python.org (Tracker) Date: Sun, 20 May 2007 00:00:50 +0000 (UTC) Subject: [Python-Dev] Summary of Tracker Issues Message-ID: <20070520000050.047CC78038@psf.upfronthosting.co.za> ACTIVITY SUMMARY (05/13/07 - 05/20/07) Tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 1649 open ( +0) / 8584 closed ( +0) / 10233 total ( +0) Average duration of open issues: 799 days. 
Median duration of open issues: 750 days. Open Issues Breakdown open 1649 ( +0) pending 0 ( +0) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070520/c13c9f6f/attachment.htm From python-dev at zesty.ca Sun May 20 03:25:11 2007 From: python-dev at zesty.ca (Ka-Ping Yee) Date: Sat, 19 May 2007 20:25:11 -0500 (CDT) Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: On Sat, 19 May 2007, Georg Brandl wrote: > For the impatient: the result can be seen at . This is extremely impressive. Thank you for this work! If all the documentation is generated from a base format that is closer to text (reST instead of LaTeX), that will make it easier for volunteers to read diffs, make edits, and contribute patches. I agree that interactivity (online commenting and editing) will be a huge advantage. I could imagine this heading in a Wiki-like direction -- where a particular version is tagged as the official revision shipped with a particular Python release, but everyone can make edits online to yield new versions, eventually yielding the revision that will be released with the next Python release. -- ?!ng From talin at acm.org Sun May 20 03:29:48 2007 From: talin at acm.org (Talin) Date: Sat, 19 May 2007 18:29:48 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <464FA48C.4050108@acm.org> Georg Brandl wrote: > Hi, > > over the last few weeks I've hacked on a new approach to Python's documentation. > As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . > > I've written a converter tool that handles most of the LaTeX markup and turns it > into reST, as well as a builder tool that adds many custom directives and roles, > and also features like index generation and cross-document linking. Very impressive. I should say that although in the past I have argued strongly against the use of reST as a markup language for source-code comments (because the reST language only indicates presentation, not semantics), I am 100% supportive of the use of reST in reference documents such as these, especially considering that LaTeX is also a presentational markup (at least, that's the way it tends to be used.) I know that for myself, LaTeX has been a barrier to contributing to the Python documentation, and reST would be much less of a barrier. In fact, I have considered in the past asking whether or not the Python documentation could be migrated to a format with wider fluency, but I never actually posted on this issue because I was afraid that the answer would be that it's too hard / too late to do anything about it. I am glad to have been proven wrong. -- Talin From talin at acm.org Sun May 20 03:41:27 2007 From: talin at acm.org (Talin) Date: Sat, 19 May 2007 18:41:27 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <20070519101235.85AE.JCARLSON@uci.edu> References: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> <20070519030535.AB2D91E4004@bag.python.org> <20070519101235.85AE.JCARLSON@uci.edu> Message-ID: <464FA747.8060704@acm.org> Josiah Carlson wrote: > Captchas like this are easily broken using computational methods, or > even the porn site trick that was already mentioned. 
Never mind > Stephen's stated belief, that you quoted, that he believes that even the > hard captchas are going to be beaten by computational methods soon. Please > try to pay attention to previous posts. I think people are trying too hard here - in other words, they are putting more of computational science brainpower into the problem than it really merits. While it is true that there is an arms race between creators of social software applications and spammers, this arms race is only waged the largest scales - spammers simply won't spend the effort to go after individual sites, its not cost effective, especially when there are much more lucrative targets. Generally, sites are only vulnerable when they have a comment submission interface that is identical to thousands of other sites. All that one needs to do on the web side is to make the submission process slightly idiosyncratic compared to other sites. If one wants to put in extra effort, you can change the comment submission process on a regular basis. The real issue is comment submission via email, which I believe RoundUp supports (although I don't know if it's enabled for the Python tracker.) Because there's very little that you can do to "customize" an email submission interface (you have to work with standard email clients after all). Do we know how these spam comments entered the system? There's no point in spending any thought securing the web interface if the comments were submitted via email. And has there been any spam submitted since that point? If we're talking less than one spam a week on average, then this is all a moot point, its less effort for someone to just manually delete it than it is to come up with an automated system. -- Talin From blais at furius.ca Sun May 20 03:46:52 2007 From: blais at furius.ca (Martin Blais) Date: Sat, 19 May 2007 18:46:52 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <1b151690705191846o45604bc6jcdbc3159513feda1@mail.gmail.com> Hi Georg Super impressive work! :-) I haven't looked at it in depth yet, but I have a question. One concern from a long thread on Doc-Sig a long time ago, is that ReST did not at the time possess the ability to nicely markup the objects as LaTeX macros do. Is your transformation losing markup information from the original docs? e.g. are you still marking classes as classes and functions as functions in the ReST source, or is it converting from qualified markup to "style" markup (e.g., to generic literals instead of class/function/variable/keyword argument docutils roles, etc.). If you solved that problem, how did you solve it? Is the resulting ReST pretty? Do you think we can build a better index? My beef with using ReST for documentation, as much as I like ReST, is that unless we have roles and structure for declaring functions, classes, etc. it would remain inferior to the LaTeX macros, which in spite of being LaTeX, qualify the kinds of objects to some extent. Wow, it looks amazingly good. Amazing work. Very impressed. (Somewhat related, but another idea from back then, which was never implemented IMO, was to find a way to automatically pull and convert the docstrings from the source code into the documentation, thus unifying all the information in one place.) On 5/19/07, Georg Brandl wrote: > Hi, > > over the last few weeks I've hacked on a new approach to Python's documentation. 
> As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . > > I've written a converter tool that handles most of the LaTeX markup and turns it > into reST, as well as a builder tool that adds many custom directives and roles, > and also features like index generation and cross-document linking. > > (What you can see at the URL is a completely statical version of the docs, as it > would be shipped with the distribution. For the real online docs, I have more > plans; I'll come to that later.) > > So why the effort? > > Here's a partial list of things that have already been improved: > > - the source is much more readable (for examples, try the "view source" links in > the left navbar) > - all function/class/module names are properly cross-referenced > - the HTML pages are generated from templates, using a language similar to > Django's template language > - Python and C code snippets are syntax highlighted > - for the offline version, there's a JavaScript enabled search function > - the index is generated over all the documentation, making it easier to find > stuff you need > - the toolchain is pure Python, therefore can easily be shipped > > What more? > > If there is support for this approach, I have plans for things that can be added > to the online version: > > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > redirect you to the matching location. > - "interactive patching": provide an "propose edit" link, leading to a Wiki-like > page where you can edit the source. From the result, a diff is generated, > which can be accepted, edited or rejected by the development team. This is > even more straightforward than plain old comments. > - the same infrastructure could be used for developers, with automatic checkin > into subversion. > - of course, plain old comments can be useful too. > > Concluding, a small caveat: The conversion/build tools are, of course, not > complete yet. There are a number of XXX comments in the text, most of them > indicate that the converter could not handle a situation -- that would have > to be corrected once after conversion is done. > > Waiting for comments! > > Cheers, > Georg > > > -- > Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. > Four shall be the number of spaces thou shalt indent, and the number of thy > indenting shall be four. Eight shalt thou not indent, nor either indent thou > two, excepting that thou then proceed to four. Tabs are right out. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/blais%40furius.ca > From rrr at ronadam.com Sun May 20 03:57:54 2007 From: rrr at ronadam.com (Ron Adam) Date: Sat, 19 May 2007 20:57:54 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464FA747.8060704@acm.org> References: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> <20070519030535.AB2D91E4004@bag.python.org> <20070519101235.85AE.JCARLSON@uci.edu> <464FA747.8060704@acm.org> Message-ID: <464FAB22.7040002@ronadam.com> Talin wrote: > Josiah Carlson wrote: >> Captchas like this are easily broken using computational methods, or >> even the porn site trick that was already mentioned. 
Never mind >> Stephen's stated belief, that you quoted, that he believes that even the >> hard captchas are going to be beaten by computational methods soon. Please >> try to pay attention to previous posts. > > I think people are trying too hard here - in other words, they are > putting more of computational science brainpower into the problem than > it really merits. While it is true that there is an arms race between > creators of social software applications and spammers, this arms race is > only waged the largest scales - spammers simply won't spend the effort > to go after individual sites, its not cost effective, especially when > there are much more lucrative targets. [clip] > And has there been any spam submitted since that point? If we're talking > less than one spam a week on average, then this is all a moot point, its > less effort for someone to just manually delete it than it is to come up > with an automated system. I was thinking the same thing. Once we start using it, any spam that does get though won't stay there very long. At most maybe half a day, but likely only an hour or two. (or less) If it becomes a frequent problem, then it is the time to put the brain cells to work on this. So far we've only had one instance over how long? Lets spend the brain power on getting it up and running first. Cheers, Ron From steven.bethard at gmail.com Sun May 20 04:19:25 2007 From: steven.bethard at gmail.com (Steven Bethard) Date: Sat, 19 May 2007 20:19:25 -0600 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <1b151690705191846o45604bc6jcdbc3159513feda1@mail.gmail.com> References: <1b151690705191846o45604bc6jcdbc3159513feda1@mail.gmail.com> Message-ID: On 5/19/07, Martin Blais wrote: > I haven't looked at it in depth yet, but I have a question. One > concern from a long thread on Doc-Sig a long time ago, is that ReST > did not at the time possess the ability to nicely markup the objects > as LaTeX macros do. Is your transformation losing markup information > from the original docs? e.g. are you still marking classes as classes > and functions as functions in the ReST source, or is it converting > from qualified markup to "style" markup (e.g., to generic literals > instead of class/function/variable/keyword argument docutils roles, > etc.). If you solved that problem, how did you solve it? Is the > resulting ReST pretty? Looking at http://pydoc.gbrandl.de/modules/collections.txt, I can see it has markup like:: .. class:: deque([iterable]) Returns a new deque object initialized left-to-right (using :meth:`append()`) with data from `iterable`. If `iterable` is not specified, the new deque is empty. .. method:: deque.append(x) Add `x` to the right side of the deque. So he's clearly got some of the info in there with things like ``.. class::`` and ``:meth:``. STeVe -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy From python at rcn.com Sun May 20 00:17:00 2007 From: python at rcn.com (Raymond Hettinger) Date: Sat, 19 May 2007 15:17:00 -0700 Subject: [Python-Dev] Py2.6 buildouts to the set API References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> <464EA508.7040900@v.loewis.de> Message-ID: <003801c79a87$a835d070$f101a8c0@RaymondLaptop1> >> * New method (proposed by Shane Holloway): s1.isdisjoint(s2). >> Logically equivalent to "not s1.intersection(s2)" but has an >> early-out if a common member is found. [MvL] > I'd rather see iterator versions of the set operations. Interesting idea. 
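The early-out isdisjoint() quoted at the top of this exchange amounts to the following pure-Python sketch; the real method would of course live in C on the set type::

    def isdisjoint(s1, s2):
        """True if s1 and s2 share no members.

        Unlike bool(not s1.intersection(s2)) this builds no intermediate set
        and returns as soon as the first common member is found.
        """
        smaller, larger = (s1, s2) if len(s1) <= len(s2) else (s2, s1)
        for item in smaller:
            if item in larger:
                return False
        return True

    print(isdisjoint(set("abc"), set("xyz")))   # True
    print(isdisjoint(set("abc"), set("cde")))   # False, bails out at 'c'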
I'm not sure I see how to make it work. If s|t returned an iterator, then how would s|t|u work? Are you proposing lazy evaluation of unions, intersections, and differences? Raymond From martin at v.loewis.de Sun May 20 07:14:46 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 20 May 2007 07:14:46 +0200 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464FA747.8060704@acm.org> References: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> <20070519030535.AB2D91E4004@bag.python.org> <20070519101235.85AE.JCARLSON@uci.edu> <464FA747.8060704@acm.org> Message-ID: <464FD946.1000306@v.loewis.de> > Do we know how these spam comments entered the system? Through the web site. Submission through email is not an issue: you need to use a registered email address, and those are hard to guess. > And has there been any spam submitted since that point? One day after the tracker was renamed to bugs.python.org, there were 10 spam submissions, and new spam was entered at a high rate. We then added some anti-spam measures, and now new spam is added very infrequently. The real problem now is that people panic when they see spam in the tracker, demanding all kinds of immediate action, and wondering what bastards let the spam in in the first place. Regards, Martin From martin at v.loewis.de Sun May 20 07:36:10 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 20 May 2007 07:36:10 +0200 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: <003801c79a87$a835d070$f101a8c0@RaymondLaptop1> References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> <464EA508.7040900@v.loewis.de> <003801c79a87$a835d070$f101a8c0@RaymondLaptop1> Message-ID: <464FDE4A.40806@v.loewis.de> >> I'd rather see iterator versions of the set operations. > > Interesting idea. I'm not sure I see how to make it work. > If s|t returned an iterator, then how would s|t|u work? I don't think s.union(t) should return an iterator, if for no other reason than compatibility. Instead, there might be s.iunion(t) (or s.unioni(t), or s.iterating_union(t)). Then, you could spell s.iunion(t.iunion(u)). iunion is implemented as def iunion(self, other): for o in self: yield o for o in other: if o not in self: yield o So rather than writing x = a.union(b,c,d,e) you could write x = set(a.iunion(b.iunion(c.iunion(d.iunion(e))))) or x = a.union(b.iunion(c.iunion(d.iunion(e)))) Likewise def iintersection(self, other): for o in other: if o in self: yield o > Are you proposing lazy evaluation of unions, intersections, > and differences? I'm not so sure about differences: union and intersections are commutative, so one should typically be able to reorder the evaluation so that the left operand is a true set, and the other is the iterator. For difference, it would be necessary that the right operand is the true set; it couldn't be an iterator, as you need to test whether an element of self is in the other operand. Regards, Martin From tcdelaney at optusnet.com.au Sun May 20 08:25:52 2007 From: tcdelaney at optusnet.com.au (Tim Delaney) Date: Sun, 20 May 2007 16:25:52 +1000 Subject: [Python-Dev] [Python-3000] PEP 367: New Super References: <003001c795f8$d5275060$0201a8c0@mshome.net> <20070514165704.4F8D23A4036@sparrow.telecommunity.com> Message-ID: <000b01c79aa7$ba716cc0$0201a8c0@mshome.net> Phillip J. 
Eby wrote: > At 05:23 PM 5/14/2007 +1000, Tim Delaney wrote: >> Determining the class object to use >> ''''''''''''''''''''''''''''''''''' >> >> The exact mechanism for associating the method with the defining >> class is not >> specified in this PEP, and should be chosen for maximum performance. >> For CPython, it is suggested that the class instance be held in a >> C-level variable >> on the function object which is bound to one of ``NULL`` (not part >> of a class), >> ``Py_None`` (static method) or a class object (instance or class >> method). > > Another open issue here: is the decorated class used, or the > undecorated class? Sorry I've taken so long to get back to you about this - had email problems. I'm not sure what you're getting at here - are you referring to the decorators for classes PEP? In that case, the decorator is applied after the class is constructed, so it would be the undecorated class. Are class decorators going to update the MRO? I see nothing about that in PEP 3129, so using the undecorated class would match the current super(cls, self) behaviour. Tim Delaney From tcdelaney at optusnet.com.au Sun May 20 08:44:03 2007 From: tcdelaney at optusnet.com.au (Tim Delaney) Date: Sun, 20 May 2007 16:44:03 +1000 Subject: [Python-Dev] [Python-3000] PEP 367: New Super Message-ID: <009c01c79aaa$441b0dd0$0201a8c0@mshome.net> Tim Delaney wrote: > Phillip J. Eby wrote: >> At 05:23 PM 5/14/2007 +1000, Tim Delaney wrote: >>> Determining the class object to use >>> ''''''''''''''''''''''''''''''''''' >>> >>> The exact mechanism for associating the method with the defining >>> class is not >>> specified in this PEP, and should be chosen for maximum performance. >>> For CPython, it is suggested that the class instance be held in a >>> C-level variable >>> on the function object which is bound to one of ``NULL`` (not part >>> of a class), >>> ``Py_None`` (static method) or a class object (instance or class >>> method). >> >> Another open issue here: is the decorated class used, or the >> undecorated class? > > Sorry I've taken so long to get back to you about this - had email > problems. > I'm not sure what you're getting at here - are you referring to the > decorators for classes PEP? In that case, the decorator is applied > after the class is constructed, so it would be the undecorated class. > > Are class decorators going to update the MRO? I see nothing about > that in PEP 3129, so using the undecorated class would match the > current super(cls, self) behaviour. Duh - I'm an idiot. Of course, the current behaviour uses name lookup, so it would use the decorated class. So the question is, should the method store the class, or the name? Looking up by name could pick up a totally unrelated class, but storing the undecorated class could miss something important in the decoration. Tim Delaney From vinay_sajip at yahoo.co.uk Sun May 20 09:15:34 2007 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 20 May 2007 07:15:34 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: Message-ID: Georg Brandl gmx.net> writes: > > For the impatient: the result can be seen at . > > - the toolchain is pure Python, therefore can easily be shipped > Very nice! As well as looking very attractive and professional, the all-Python toolset should make it easier to build the documentation - I've not been able to get a trouble-free setup of the docs toolchain on Windows. 
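For reference, the syntax-highlighting step of the all-Python toolset praised here is a single call into Pygments (named later in the thread). A minimal sketch, assuming only that Pygments is installed; the CSS class name is arbitrary::

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import HtmlFormatter

    code = 'def greet(name):\n    return "Hello, %s" % name\n'
    formatter = HtmlFormatter(cssclass="highlight")
    print(highlight(code, PythonLexer(), formatter))   # <div class="highlight">...</div>
    print(formatter.get_style_defs(".highlight"))      # the matching stylesheet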
Thanks for this, Vinay Sajip From brett at python.org Sun May 20 09:23:08 2007 From: brett at python.org (Brett Cannon) Date: Sun, 20 May 2007 00:23:08 -0700 Subject: [Python-Dev] [Python-checkins] buildbot failure in x86 W2k trunk In-Reply-To: <20070520071645.BA1C01E4004@bag.python.org> References: <20070520071645.BA1C01E4004@bag.python.org> Message-ID: For removing extension modules from the build process on Windows, do you just delete the File entry from PCbuild/pythoncore.vcproj? -Brett On 5/20/07, buildbot at python.org wrote: > > The Buildbot has detected a new failure of x86 W2k trunk. > Full details are available at: > http://www.python.org/dev/buildbot/all/x86%2520W2k%2520trunk/builds/290 > > Buildbot URL: http://www.python.org/dev/buildbot/all/ > > Build Reason: > Build Source Stamp: [branch trunk] HEAD > Blamelist: brett.cannon > > BUILD FAILED: failed compile > > sincerely, > -The Buildbot > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070520/89881357/attachment.htm From ncoghlan at gmail.com Sun May 20 09:42:22 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2007 17:42:22 +1000 Subject: [Python-Dev] [Python-3000] PEP 367: New Super In-Reply-To: <009c01c79aaa$441b0dd0$0201a8c0@mshome.net> References: <009c01c79aaa$441b0dd0$0201a8c0@mshome.net> Message-ID: <464FFBDE.4000109@gmail.com> Tim Delaney wrote: > So the question is, should the method store the class, or the name? Looking > up by name could pick up a totally unrelated class, but storing the > undecorated class could miss something important in the decoration. Couldn't we provide a mechanism whereby the cell can be adjusted to point to the decorated class? (heck, the interpreter has access to both classes after execution of the class statement - it could probably arrange for this to happen automatically whenever the decorated and undecorated classes are different). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From martin at v.loewis.de Sun May 20 09:59:24 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 20 May 2007 09:59:24 +0200 Subject: [Python-Dev] [Python-checkins] buildbot failure in x86 W2k trunk In-Reply-To: References: <20070520071645.BA1C01E4004@bag.python.org> Message-ID: <464FFFDC.4020600@v.loewis.de> Brett Cannon schrieb: > For removing extension modules from the build process on Windows, do you > just delete the File entry from PCbuild/pythoncore.vcproj? No, you also need to remove the entry from PC/config.c. Regards, Martin From tcdelaney at optusnet.com.au Sun May 20 10:20:37 2007 From: tcdelaney at optusnet.com.au (Tim Delaney) Date: Sun, 20 May 2007 18:20:37 +1000 Subject: [Python-Dev] [Python-3000] PEP 367: New Super References: <009c01c79aaa$441b0dd0$0201a8c0@mshome.net> <464FFBDE.4000109@gmail.com> Message-ID: <00ae01c79ab7$c1ea5100$0201a8c0@mshome.net> Nick Coghlan wrote: > Tim Delaney wrote: >> So the question is, should the method store the class, or the name? >> Looking up by name could pick up a totally unrelated class, but >> storing the undecorated class could miss something important in the >> decoration. 
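The decorated-versus-undecorated question can be made concrete with today's explicit super() spelling. In this sketch add_mixin is a hypothetical class decorator standing in for anything that returns a new class object, and _Child stands in for the class object the class statement originally produced::

    def add_mixin(cls):
        """Return a brand-new class wrapping cls, as some decorators do."""
        return type(cls.__name__, (cls,), {"decorated": True})

    class Base(object):
        def hello(self):
            return "Base.hello"

    class _Child(Base):                  # the undecorated class object
        def hello(self):
            return "Child.hello"

    Child = add_mixin(_Child)            # what @add_mixin above the class would do

    obj = Child()
    # Looking the class up by name sees the decorated class ...
    print(super(Child, obj).hello())     # -> 'Child.hello' (next after Child is _Child)
    # ... while storing the class object from the class statement sees the original.
    print(super(_Child, obj).hello())    # -> 'Base.hello'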
> > Couldn't we provide a mechanism whereby the cell can be adjusted to > point to the decorated class? (heck, the interpreter has access to > both classes after execution of the class statement - it could > probably arrange for this to happen automatically whenever the > decorated and undecorated classes are different). Yep - I thought of that. I think that's probably the right way to go. Tim Delaney From jcarlson at uci.edu Sun May 20 10:32:32 2007 From: jcarlson at uci.edu (Josiah Carlson) Date: Sun, 20 May 2007 01:32:32 -0700 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464FA747.8060704@acm.org> References: <20070519101235.85AE.JCARLSON@uci.edu> <464FA747.8060704@acm.org> Message-ID: <20070520012511.85BC.JCARLSON@uci.edu> Talin wrote: > Josiah Carlson wrote: > > Captchas like this are easily broken using computational methods, or > > even the porn site trick that was already mentioned. Never mind > > Stephen's stated belief, that you quoted, that he believes that even the > > hard captchas are going to be beaten by computational methods soon. Please > > try to pay attention to previous posts. > > I think people are trying too hard here - in other words, they are > putting more of computational science brainpower into the problem than > it really merits. While it is true that there is an arms race between > creators of social software applications and spammers, this arms race is > only waged the largest scales - spammers simply won't spend the effort > to go after individual sites, its not cost effective, especially when > there are much more lucrative targets. My point was that spending time to come up with a "better" captcha in attempt to thwart spammers was ill advised, in particular because others brought up varous reasons why captchas weren't the way to go. - Josiah From g.brandl at gmx.net Sun May 20 10:40:55 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 20 May 2007 10:40:55 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: [warning: bulk answer ahead] First, thanks for all those nice comments! [John Gabriele] > BTW, would like to see a little blurb of your own on that page about > how the docs were converted, rendered, and their new source format. Sure. I've already written part of the new "Documenting Python" docs, which cover this a bit. The "About the documentation" will be rewritten too. [Lea Wiemann] > I'd reeeally like to have a look at the source code (don't worry if it's > not clean!). Can you publish or post it somewhere? If you'd like to > store it in the Docutils sandboxes, just drop me a line and I'll give > you SVN access. By the way, things get a lot easier for me if you place > it in the public domain, because that's the license Docutils uses, and > it's obviously compatible to every other license. The toolset is now in the Docutils sandbox at . > I actually have a Google Summer of Code project, "Documenting Python > Packages with Docutils", which I'll start working on May 28: > . > It has a somewhat different scope, so our projects will complement each > other nicely I believe. (To the point where we may end up with a > complete tool-chain to actually migrate the Python documentation to > reST. Tr?s cool.) Great! Making the new toolset usable for third-party developers is certainly a good option. I saw quite a few using the LaTeX-based tools too.. [Ron Adam] > I've been working on improving pydoc. 
(slowly but steadily) Maybe we can > join efforts as I think the two projects overlap in a number of areas, and > it sounds like we are thinking of some of the same things as far as the > tool chain. So maybe there's some synergy we can take advantage of. Certainly there's plenty of overlap. > It looks like there may be a few areas where we can share code. > > - The html syntax highlighters. (Pydoc can use those) The highlighting is actually done with Pygments, which cannot be included in the stdlib as-is. Perhaps a stripped-down version? > - A shared html style sheet. > - Document locater. [1] > - An HTMLServer for local (dynamic dispatching) html viewing. [2] > - The reST source for viewing topics and keywords in pydoc. > (Instead of scraping html pages. Ick) Yes, that makes sense. If you want to coordinate efforts, feel free to contact me at Jabber . [Ka-Ping Yee] > I agree that interactivity (online commenting and editing) will > be a huge advantage. > I could imagine this heading in a Wiki-like direction -- where a > particular version is tagged as the official revision shipped > with a particular Python release, but everyone can make edits > online to yield new versions, eventually yielding the revision > that will be released with the next Python release. Yes. I think that always only the latest version should be "publicly" interactive. Old archived doc versions should be static only. > I haven't looked at it in depth yet, but I have a question. One > concern from a long thread on Doc-Sig a long time ago, is that ReST > did not at the time possess the ability to nicely markup the objects > as LaTeX macros do. Is your transformation losing markup information > from the original docs? e.g. are you still marking classes as classes > and functions as functions in the ReST source, or is it converting > from qualified markup to "style" markup (e.g., to generic literals > instead of class/function/variable/keyword argument docutils roles, > etc.). If you solved that problem, how did you solve it? Is the > resulting ReST pretty? Do you think we can build a better index? As Steven said, it's solved quite nicely with interpreted text roles. Whether ":class:`Foo`" is nicer than "\class{Foo}" is not entirely clear ;) but you actually get more now, since if a class "Foo" is found in the namespace, it will be cross-linked automatically. About the index: I didn't do anything about it. I just transferred the LaTeX commands into reST directives, the rest is generated completely analogous. > Very nice! As well as looking very attractive and professional, the all-Python > toolset should make it easier to build the documentation - I've not been > able to get a trouble-free setup of the docs toolchain on Windows. Yep. As it is now, you need three packages from the Cheese Shop: Docutils, Pygments (the highlighter) and Jinja (the templating engine). This shouldn't be problematic, though they could also be stripped down and included. Cheers, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. 
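For readers wondering how roles such as :class:`Foo` are wired up, here is a minimal sketch using the public docutils role API; the converter's real role also cross-links into the module index, whereas this one merely emits a styled literal, and it assumes Docutils is installed::

    from docutils import nodes
    from docutils.core import publish_parts
    from docutils.parsers.rst import roles

    def class_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
        # Render the referenced name as a literal carrying a CSS class.
        node = nodes.literal(rawtext, text, classes=["py-class"])
        return [node], []

    roles.register_local_role("class", class_role)

    source = "Use :class:`deque` when you need fast appends on both ends."
    print(publish_parts(source, writer_name="html")["html_body"])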
From g.brandl at gmx.net Sun May 20 10:49:47 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 20 May 2007 10:49:47 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: Vinay Sajip schrieb: > Georg Brandl gmx.net> writes: > >> >> For the impatient: the result can be seen at . >> >> - the toolchain is pure Python, therefore can easily be shipped >> > > Very nice! As well as looking very attractive and professional, [...] BTW, I have to give lots of credit for the looks to Armin Ronacher. I'm not so much of a designer ;) Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out.
From LeWiemann at gmail.com Sun May 20 13:00:31 2007 From: LeWiemann at gmail.com (Lea Wiemann) Date: Sun, 20 May 2007 07:00:31 -0400 Subject: [Python-Dev] [Doc-SIG] The docs, reloaded In-Reply-To: References: Message-ID: <46502A4F.4000004@gmail.com> [Georg Brandl] > The highlighting is actually done with Pygments, which cannot be > included in the stdlib as-is. Perhaps a stripped-down version? No need to; we can just fall back to no syntax highlighting if Pygments is not installed on the user's system. [Gael Varoquaux] >> - The html syntax highlighters. (Pydoc can use those) > > I have a patch on the docutils patch tracker that does this. For everyone's reference, .
Best wishes, Lea From scott+python-dev at scottdial.com Sun May 20 15:12:24 2007 From: scott+python-dev at scottdial.com (Scott Dial) Date: Sun, 20 May 2007 09:12:24 -0400 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: References: <464A29E2.1060207@v.loewis.de> <464A9BC8.5070703@acm.org><20070517050002.E9D5C600045@longblack.object-craft.com.au><464BE57D.9050103@acm.org> <1179388425.6077.58.camel@localhost><464C6F0E.5010908@scottdial.com> <20070517170703.GC9779@gmail.com><464CFC21.8090806@canterbury.ac.nz><20070518024602.GA3268@gmail.com> <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <46504938.3050007@scottdial.com> Terry Reedy wrote: > Why not simply embargo any post with an off-site link? Tho there might > have been some, I can't remember a single example of such at SF. I have often posted links off-site because the SF tracker didn't allow unrelated parties to attach things. I don't know whether the new tracker will allow that, but if it doesn't, you will again see links off-site. -Scott -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From blais at furius.ca Sun May 20 20:08:15 2007 From: blais at furius.ca (Martin Blais) Date: Sun, 20 May 2007 11:08:15 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <1b151690705201108v53f19b39s93e45b915ae80ea7@mail.gmail.com> On 5/20/07, Georg Brandl wrote: > > Very nice! As well as looking very attractive and professional, the all-Python > > toolset should make it easier to build the documentation - I've not been > > able to get a trouble-free setup of the docs toolchain on Windows. > > Yep. As it is now, you need three packages from the Cheese Shop: > Docutils, Pygments (the highlighter) and Jinja (the templating engine). > This shouldn't be problematic, though they could also be stripped down > and included. This is great. IMHO if this is to compete to become the official Python docs, I would argue for even less dependencies, even at the cost of more generic/bland output, for portability reasons and to stimulate greater adoption. If we can make some of those dependencies optional and only rely on docutils, that could make it ubiquitous. Another thing to keep in mind: I don't know if the directives you defined are very generic, but if they are, it would be interesting to consider migrating them up into docutils (if it makes sense), and see if they could support documenting other programming languages. Could this be a language-independent documenting toolkit? Could we document LISP or Ruby code with it? Georg, thanks again! From ndbecker2 at gmail.com Sun May 20 22:52:12 2007 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 20 May 2007 16:52:12 -0400 Subject: [Python-Dev] The docs, reloaded References: Message-ID: Sounds very interesting. I just have one concern/question. I hope that while moving away from latex, we are not precluding the ability to write math as part of the documentation. What would be my choices for add math to the documentation? Hopefully using latex, since there really isn't AFAIK any other competitor for this. From scott+python-dev at scottdial.com Sun May 20 23:05:39 2007 From: scott+python-dev at scottdial.com (Scott Dial) Date: Sun, 20 May 2007 17:05:39 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <4650B823.4050908@scottdial.com> Neal Becker wrote: > Sounds very interesting. I just have one concern/question. 
I hope that > while moving away from latex, we are not precluding the ability to write > math as part of the documentation. What would be my choices for add math > to the documentation? Hopefully using latex, since there really isn't > AFAIK any other competitor for this. > Where in the current documentation is there any math notation /at all/? In all my reading of it, I have not run across anything that appeared like it was being used. Besides that question, is the full power of LaTeX math notation really necessary here? I somehow doubt anything more than simple expressions of runtime performance and container behaviors are appropriate for any documentation we have. -Scott -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From alexandre at peadrop.com Sun May 20 23:28:14 2007 From: alexandre at peadrop.com (Alexandre Vassalotti) Date: Sun, 20 May 2007 17:28:14 -0400 Subject: [Python-Dev] Introduction and request for commit access to the sandbox. Message-ID: Hello, As some of you may already know, I will be working on Python for this year's Google Summer of Code. My project is to merge the modules with a dual C and Python implementation, i.e. cPickle/pickle, cStringIO/StringIO and cProfile/profile [1]. This project is part of the standard library reorganization for Python 3000 [2]. My mentor for this project is Brett Cannon. So first, let me introduce myself. I am currently a student from Quebec, Canada. I plan to make a career as a (hopefully good) programmer. Therefore, I dedicate a lot of my free time to contributing to open source projects, like Ubuntu. I recently became interested in how compilers and interpreters work, so I started reading Python's source code, which is one of the most well-organized and comprehensive code bases I have seen. This motivated me to start contributing to Python. However, since school kept me fairly busy, I haven't had the chance to do anything other than provide support to Python's users in the #python FreeNode IRC channel. This year's Summer of Code will give me the chance to make a significant contribution to Python, and to get started with Python code development as well. With that said, I would like to request svn access to the sandbox for my work. I will use this access only for modifying stuff in the directory I will be assigned to. I would like to use the username "avassalotti" and the attached SSH2 public key for this access. One last thing: if you know of semantic differences (other than the obvious ones) between the C and Python versions of the modules I need to merge, please let me know. This will greatly simplify the merge and reduce the chances of later breakage. Cheers, -- Alexandre .. [1] Abstract of my application, Merge the C and Python implementations of the same interface (http://code.google.com/soc/psf/appinfo.html?csaid=C6768E09BEF7CCE2) .. [2] PEP 3108, Standard Library Reorganization, Cannon (http://www.python.org/dev/peps/pep-3108) -------------- next part -------------- A non-text attachment was scrubbed... Name: id_dsa.pub Type: application/octet-stream Size: 610 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070520/1242e699/attachment.obj From g.brandl at gmx.net Sun May 20 23:30:13 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 20 May 2007 23:30:13 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <4650B823.4050908@scottdial.com> References: <4650B823.4050908@scottdial.com> Message-ID: Scott Dial schrieb: > Neal Becker wrote: >> Sounds very interesting.
I just have one concern/question. I hope that >> while moving away from latex, we are not precluding the ability to write >> math as part of the documentation. What would be my choices for add math >> to the documentation? Hopefully using latex, since there really isn't >> AFAIK any other competitor for this. >> > > Where in the current documentation is there any math notation /at all/? > In all my reading of it, I have not run across anything that appeared > like it was being used. Besides that question, is the full power of > LaTeX math notation really necessary here? I somehow doubt anything more > than simple expressions of runtime performance and container behaviors > are appropriate for any documentation we have. There is exactly one instance of LaTeX math in the whole docs, it's in the description of audioop, AFAIR, an contains a sum over square roots... So, that's not really a concern of mine ;) Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From tjreedy at udel.edu Sun May 20 23:42:49 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 20 May 2007 17:42:49 -0400 Subject: [Python-Dev] The docs, reloaded References: Message-ID: Please add a link to the PEP index (which is also missing from docs.python.org, though not from python.org/doc/. And consider at least some PEPs as part of the corpus indexed (ie, those with info not in the regular docs). tjr From talin at acm.org Mon May 21 00:07:37 2007 From: talin at acm.org (Talin) Date: Sun, 20 May 2007 15:07:37 -0700 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> Message-ID: <4650C6A9.1020809@acm.org> Phillip J. Eby wrote: > I wanted to get this in before the Py3K PEP deadline, since this is a > Python 2.6 PEP that would presumably impact 3.x as well. Feedback welcome. > > > PEP: 365 > Title: Adding the pkg_resources module > Version: $Revision: 55032 $ > Last-Modified: $Date: 2007-04-30 20:24:48 -0400 (Mon, 30 Apr 2007) $ > Author: Phillip J. Eby > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 30-Apr-2007 > Post-History: 30-Apr-2007 > > > Abstract > ======== > > This PEP proposes adding an enhanced version of the ``pkg_resources`` > module to the standard library. > > ``pkg_resources`` is a module used to find and manage Python > package/version dependencies and access bundled files and resources, > including those inside of zipped ``.egg`` files. Currently, > ``pkg_resources`` is only available through installing the entire > ``setuptools`` distribution, but it does not depend on any other part > of setuptools; in effect, it comprises the entire runtime support > library for Python Eggs, and is independently useful. > > In addition, with one feature addition, this module could support > easy bootstrap installation of several Python package management > tools, including ``setuptools``, ``workingenv``, and ``zc.buildout``. > > > Proposal > ======== > > Rather than proposing to include ``setuptools`` in the standard > library, this PEP proposes only that ``pkg_resources`` be added to the > standard library for Python 2.6 and 3.0. 
``pkg_resources`` is > considerably more stable than the rest of setuptools, with virtually > no new features being added in the last 12 months. > > However, this PEP also proposes that a new feature be added to > ``pkg_resources``, before being added to the stdlib. Specifically, it > should be possible to do something like:: > > python -m pkg_resources SomePackage==1.2 > > to request downloading and installation of ``SomePackage`` from PyPI. > This feature would *not* be a replacement for ``easy_install``; > instead, it would rely on ``SomePackage`` having pure-Python ``.egg`` > files listed for download via the PyPI XML-RPC API, and the eggs would > be placed in the ``$PYTHONEGGS`` cache, where they would **not** be > importable by default. (And no scripts would be installed) However, > if the download egg contains installation bootstrap code, it will be > given a chance to run. > > These restrictions would allow the code to be extremely simple, yet > still powerful enough to support users downloading package management > tools such as ``setuptools``, ``workingenv`` and ``zc.buildout``, > simply by supplying the tool's name on the command line. > > > Rationale > ========= > > Many users have requested that ``setuptools`` be included in the > standard library, to save users needing to go through the awkward > process of bootstrapping it. However, most of the bootstrapping > complexity comes from the fact that setuptools-installed code cannot > use the ``pkg_resources`` runtime module unless setuptools is already > installed. Thus, installing setuptools requires (in a sense) that > setuptools already be installed. > > Other Python package management tools, such as ``workingenv`` and > ``zc.buildout``, have similar bootstrapping issues, since they both > make use of setuptools, but also want to provide users with something > approaching a "one-step install". The complexity of creating bootstrap > utilities for these and any other such tools that arise in future, is > greatly reduced if ``pkg_resources`` is already present, and is also > able to download pre-packaged eggs from PyPI. > > (It would also mean that setuptools would not need to be installed > in order to simply *use* eggs, as opposed to building them.) > > Finally, in addition to providing access to eggs built via setuptools > or other packaging tools, it should be noted that since Python 2.5, > the distutils install package metadata (aka ``PKG-INFO``) files that > can be read by ``pkg_resources`` to identify what distributions are > already on ``sys.path``. In environments where Python packages are > installed using system package tools (like RPM), the ``pkg_resources`` > module provides an API for detecting what versions of what packages > are installed, even if those packages were installed via the distutils > instead of setuptools. > > > Implementation and Documentation > ================================ > > The ``pkg_resources`` implementation is maintained in the Python > SVN repository under ``/sandbox/trunk/setuptools/``; see > ``pkg_resources.py`` and ``pkg_resources.txt``. Documentation for the > egg format(s) supported by ``pkg_resources`` can be found in > ``doc/formats.txt``. HTML versions of these documents are available > at: > > * http://peak.telecommunity.com/DevCenter/PkgResources and > > * http://peak.telecommunity.com/DevCenter/EggFormats > > (These HTML versions are for setuptools 0.6; they may not reflect all > of the changes found in the Subversion trunk's ``.txt`` versions.) 
> > > Copyright > ========= > > This document has been placed in the public domain. I'm really surprised that there hasn't been more comment on this. -- Talin From ndbecker2 at gmail.com Mon May 21 01:11:12 2007 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 20 May 2007 19:11:12 -0400 Subject: [Python-Dev] The docs, reloaded References: <4650B823.4050908@scottdial.com> Message-ID: Georg Brandl wrote: > Scott Dial schrieb: >> Neal Becker wrote: >>> Sounds very interesting. I just have one concern/question. I hope that >>> while moving away from latex, we are not precluding the ability to write >>> math as part of the documentation. What would be my choices for add >>> math >>> to the documentation? Hopefully using latex, since there really isn't >>> AFAIK any other competitor for this. >>> >> >> Where in the current documentation is there any math notation /at all/? >> In all my reading of it, I have not run across anything that appeared >> like it was being used. Besides that question, is the full power of >> LaTeX math notation really necessary here? I somehow doubt anything more >> than simple expressions of runtime performance and container behaviors >> are appropriate for any documentation we have. > > There is exactly one instance of LaTeX math in the whole docs, it's in the > description of audioop, AFAIR, an contains a sum over square roots... > > So, that's not really a concern of mine ;) > > Georg > There is an effort as part of numpy to come up with a new system using docstrings. It seems to me it would be unfortunate if these two efforts were not coordinated. From robert.kern at gmail.com Mon May 21 02:00:04 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 20 May 2007 19:00:04 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> Message-ID: Neal Becker wrote: > There is an effort as part of numpy to come up with a new system using > docstrings. It seems to me it would be unfortunate if these two efforts > were not coordinated. I don't think so. The issue with numpy is getting our act together and making parseable docstrings for auto-generated API documentation using existing tools or slightly modified versions thereof. No one is actually contemplating building a new tool. AFAICT, Georg's (excellent) work doesn't address that use. I don't think there is anything to coordinate, here. Provided that Georg's system doesn't place too many restrictions on the reST it handles, we could use the available reST math options if we wanted to use Georg's system. I'd much rather see Georg spend his time working on the docs for the Python language and the feature set it requires. If the numpy project needs to extend that feature set, we'll provide the manpower ourselves. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pje at telecommunity.com Mon May 21 02:07:42 2007 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Sun, 20 May 2007 20:07:42 -0400 Subject: [Python-Dev] [Python-3000] PEP 367: New Super In-Reply-To: <000b01c79aa7$ba716cc0$0201a8c0@mshome.net> References: <003001c795f8$d5275060$0201a8c0@mshome.net> <20070514165704.4F8D23A4036@sparrow.telecommunity.com> <000b01c79aa7$ba716cc0$0201a8c0@mshome.net> Message-ID: <20070521000552.B64C93A4061@sparrow.telecommunity.com> At 04:25 PM 5/20/2007 +1000, Tim Delaney wrote: >I'm not sure what you're getting at here - are you referring to the >decorators for classes PEP? In that case, the decorator is applied >after the class is constructed, so it would be the undecorated class. > >Are class decorators going to update the MRO? I see nothing about >that in PEP 3129, so using the undecorated class would match the >current super(cls, self) behaviour. Class decorators can (and sometimes *do*, in PEAK) return an object that's not the original class object. So that would break super, which is why my inclination is to go with using the decorated result. From pje at telecommunity.com Mon May 21 02:11:24 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 20 May 2007 20:11:24 -0400 Subject: [Python-Dev] [Python-3000] PEP 367: New Super In-Reply-To: <00ae01c79ab7$c1ea5100$0201a8c0@mshome.net> References: <009c01c79aaa$441b0dd0$0201a8c0@mshome.net> <464FFBDE.4000109@gmail.com> <00ae01c79ab7$c1ea5100$0201a8c0@mshome.net> Message-ID: <20070521000933.202083A4061@sparrow.telecommunity.com> At 06:20 PM 5/20/2007 +1000, Tim Delaney wrote: >Nick Coghlan wrote: > > Tim Delaney wrote: > >> So the question is, should the method store the class, or the name? > >> Looking up by name could pick up a totally unrelated class, but > >> storing the undecorated class could miss something important in the > >> decoration. > > > > Couldn't we provide a mechanism whereby the cell can be adjusted to > > point to the decorated class? (heck, the interpreter has access to > > both classes after execution of the class statement - it could > > probably arrange for this to happen automatically whenever the > > decorated and undecorated classes are different). > >Yep - I thought of that. I think that's probably the right way to go. Btw, PEP 3124 needs a way to receive the same class object at more or less the same moment, although in the form of a callback rather than a cell assignment. Guido suggested I co-ordinate with you to design a mechanism for this. From skip at pobox.com Sun May 20 16:16:37 2007 From: skip at pobox.com (skip at pobox.com) Date: Sun, 20 May 2007 09:16:37 -0500 Subject: [Python-Dev] Py2.6 buildouts to the set API In-Reply-To: References: <20070518213434.BJU26447@ms09.lnh.mail.rcn.net> Message-ID: <18000.22597.5993.854803@montanaro.dyndns.org> >> * New method (proposed by Shane Holloway): s1.isdisjoint(s2). Mike> +1. Disjointness verification is one of my main uses for set(), Mike> and though I don't think that the early-out condition would Mike> trigger often in my code, it would increase readability. I think the readbility argument is marginal at best. I use sets frequently and to the greatest extent possible use the builtin operator support because I find that more readable. So for me, I'd be going from if not s1 & s2: to if s1.isdisjoint(s2): I'm not sure that's an improvement. Maybe it's just me, but given two sets I frequently want to operate on s1-s2, s2-s1 and s1&s2 in different ways. I wouldn't find a disjoint operation all that useful. 
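To make the early-out point concrete, a rough pure-Python equivalent of the proposed method (assuming the s1.isdisjoint(s2) spelling discussed above; this is a sketch, not the actual patch) would be:

def isdisjoint(s1, s2):
    # Stop at the first common element instead of building the whole
    # intersection the way "s1 & s2" does.
    smaller, larger = sorted((s1, s2), key=len)
    for item in smaller:
        if item in larger:
            return False
    return True

assert isdisjoint(set('abc'), set('xyz'))
assert not isdisjoint(set('abc'), set('cde'))

Whether the early exit matters in practice depends, as noted above, on how often the sets actually overlap.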
Skip From skip at pobox.com Sun May 20 16:44:33 2007 From: skip at pobox.com (skip at pobox.com) Date: Sun, 20 May 2007 09:44:33 -0500 Subject: [Python-Dev] Summary of Tracker Issues In-Reply-To: <464FA747.8060704@acm.org> References: <87lkfm8sds.fsf@uwakimon.sk.tsukuba.ac.jp> <20070519030535.AB2D91E4004@bag.python.org> <20070519101235.85AE.JCARLSON@uci.edu> <464FA747.8060704@acm.org> Message-ID: <18000.24273.419175.544382@montanaro.dyndns.org> talin> While it is true that there is an arms race between creators of talin> social software applications and spammers, this arms race is only talin> waged the largest scales - spammers simply won't spend the effort talin> to go after individual sites, its not cost effective, especially talin> when there are much more lucrative targets. The advantage of choosing a couple simple topical questions means that in theory, every Roundup installation can create a site-specific set of questions. If each site builds a small database of 10 or so questions then chooses two or three at random for each submission, it seems that would make Roundup, a very challenging system to hack in this regard. It would also likely be tough to use the porn site human proxy idea as well since questions will (or ought to be) topical (what is the power operator?, what does the "E" in R. E. Olds stand for?), not general (what star shines during the day? what day preceeds Monday?) Skip From skip at pobox.com Mon May 21 02:07:45 2007 From: skip at pobox.com (skip at pobox.com) Date: Sun, 20 May 2007 19:07:45 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> Message-ID: <18000.58065.249249.648413@montanaro.dyndns.org> >>> What would be my choices for add math to the documentation? >> Where in the current documentation is there any math notation /at >> all/? Georg> There is exactly one instance of LaTeX math in the whole docs, Georg> it's in the description of audioop, AFAIR, an contains a sum over Georg> square roots... Georg> So, that's not really a concern of mine ;) You must realize that people will use the core tools to create documentation for third party packages which aren't in the core. If you replace LaTeX with something else I think you need to keep math in mind whether it's used in the core documentation or not. Skip From joeedh at gmail.com Thu May 10 13:27:38 2007 From: joeedh at gmail.com (Joe Eagar) Date: Thu, 10 May 2007 04:27:38 -0700 Subject: [Python-Dev] Strange behaviour with PyEval_EvalCode Message-ID: <464301AA.10204@gmail.com> [I'm re-sending this message cause it might've gotten lost; sorry if it ends up posting twice] Hi I'm getting extremely odd behavior. First of all, why isn't PyEval_EvalCode documented anywhere? Anyway, I'm working on blender's python integration (it embeds python, as opposed to python embedding it). I have a function that executes a string buffer of python code, fetches a function from its global dictionary then calls it. When the function code returns a local variable, PyObject_Call() appears to be returning garbage. The initial implementation used the same dictionary for the global and local dicts. I tried using separate dicts, but then the function wasn't being called at all (or at least I tested it by putting a "print "bleh"" in there, and it didn't work). I've tested with both python 2.4 and 2.5. Mostly with 2.4. This bug may be cropping up in other experimental blender python code as well. 
Here's the code in the string buffer: #BPYCONSTRAINT from Blender import * from Blender.Mathutils import * print "d" def doConstraint(inmat, tarmat, prop): a = Matrix() a.identity() a = a * TranslationMatrix(Vector(0, 0, 0)) print "t" a = tarmat return inmat print doConstraint(Matrix(), Matrix(), 0) Here's the code that executes the string buffer: PyObject *RunPython2( Text * text, PyObject * globaldict, PyObject *localdict ) { char *buf = NULL; /* The script text is compiled to Python bytecode and saved at text->compiled * to speed-up execution if the user executes the script multiple times */ if( !text->compiled ) { // if it wasn't already compiled, do it now buf = txt_to_buf( text ); text->compiled = Py_CompileString( buf, GetName( text ), Py_file_input ); MEM_freeN( buf ); if( PyErr_Occurred( ) ) { BPY_free_compiled_text( text ); return NULL; } } return PyEval_EvalCode( text->compiled, globaldict, localdict ); } . . .and heres the (rather long, and somewhat in a working state) function that calls the function in the script's global dictionary: void BPY_pyconstraint_eval(bPythonConstraint *con, float obmat[][4], short ownertype, void *ownerdata, float targetmat[][4]) { PyObject *srcmat, *tarmat, *idprop; PyObject *globals, *locals; PyObject *gkey, *gval; PyObject *retval; MatrixObject *retmat; Py_ssize_t ppos = 0; int row, col; if ( !con->text ) return; globals = CreateGlobalDictionary(); srcmat = newMatrixObject( (float*)obmat, 4, 4, Py_NEW ); tarmat = newMatrixObject( (float*)targetmat, 4, 4, Py_NEW ); idprop = BPy_Wrap_IDProperty( NULL, &con->prop, NULL); /* since I can't remember what the armature weakrefs do, I'll just leave this here commented out. Since this function was based on pydrivers. if( !setup_armature_weakrefs()){ fprintf( stderr, "Oops - weakref dict setup\n"); return result; } */ retval = RunPython2( con->text, globals, globals); if (retval) {Py_XDECREF( retval );} if ( retval == NULL ) { BPY_Err_Handle(con->text->id.name); ReleaseGlobalDictionary( globals ); /*free temp objects*/ Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); return; } /*Now for the fun part! Try and find the functions we need.*/ while ( PyDict_Next(globals, &ppos, &gkey, &gval) ) { if ( PyString_Check(gkey) && strcmp(PyString_AsString(gkey), "doConstraint")==0 ) { if (PyFunction_Check(gval) ) { retval = PyObject_CallObject(gval, Py_BuildValue("OOO", srcmat, tarmat, idprop)); Py_XDECREF( retval ); } else { printf("ERROR: doConstraint is supposed to be a function!\n"); } break; } } if (!retval) { BPY_Err_Handle(con->text->id.name); /*free temp objects*/ ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); return; } if (!PyObject_TypeCheck(retval, &matrix_Type)) { printf("Error in pyconstraint: Wrong return type for a pyconstraint!\n"); ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); Py_XDECREF( retval ); return; } retmat = (MatrixObject*) retval; if (retmat->rowSize != 4 || retmat->colSize != 4) { printf("Error in pyconstraint: Matrix is the wrong size!\n"); ReleaseGlobalDictionary( globals ); Py_XDECREF( idprop ); Py_XDECREF( srcmat ); Py_XDECREF( tarmat ); Py_XDECREF( retval ); return; } //this is the reverse of code taken from newMatrix(). 
for(row = 0; row < 4; row++) { for(col = 0; col < 4; col++) { if (retmat->wrapped) obmat[row][col] = retmat->data.blend_data[row*4+col]; //[row][col]; else obmat[row][col] = retmat->data.py_data[row*4+col]; //[row][col]; } } /*clear globals*/ //ReleaseGlobalDictionary( globals ); /*free temp objects*/ //Py_XDECREF( idprop ); //Py_XDECREF( srcmat ); //Py_XDECREF( tarmat ); //Py_XDECREF( retval ); //PyDict_Clear(locals); //Py_XDECREF(locals); } Joe From jmg3000 at gmail.com Sat May 19 21:05:23 2007 From: jmg3000 at gmail.com (John Gabriele) Date: Sat, 19 May 2007 15:05:23 -0400 Subject: [Python-Dev] [Doc-SIG] The docs, reloaded In-Reply-To: References: Message-ID: <65e0bb520705191205p222a5f2by9b04e5793b3cb3e7@mail.gmail.com> On 5/19/07, Georg Brandl wrote: > > [snip] > > Waiting for comments! Awesome, Georg! Wow. Nice work. Seems like this has been a long time comin', and I bet others have been working away "in secret" on similar projects. I hope you keep running with it until it gets hijacked into being the "official" versions. :) I'm bookmarking it as "python docs" in my browser. BTW, would like to see a little blurb of your own on that page about how the docs were converted, rendered, and their new source format. Thanks much, ---John P.S. -- funny sig, btw. :) From lewiemann at vassar.edu Sun May 20 10:04:24 2007 From: lewiemann at vassar.edu (Lea Wiemann) Date: Sun, 20 May 2007 04:04:24 -0400 Subject: [Python-Dev] [Doc-SIG] The docs, reloaded In-Reply-To: <1b151690705191846o45604bc6jcdbc3159513feda1@mail.gmail.com> References: <1b151690705191846o45604bc6jcdbc3159513feda1@mail.gmail.com> Message-ID: <46500108.7030503@vassar.edu> Martin Blais wrote: > e.g. are you still marking classes as classes > and functions as functions in the ReST source It seems so (modulo XXX's and TODO's in Georg's implementation, probably ^_^) -- all of the pages have "show source" links, so you can see for yourself. I'm not an expert with the documentation system, but the markup on looks pretty complete to me. > (Somewhat related, but another idea from back then, which was never > implemented IMO, was to find a way to automatically pull and convert > the docstrings from the source code into the documentation, thus > unifying all the information in one place.) While it's probably not possible to simply generate the documentation from the docstrings, it would certainly seem interesting to get have some means (like a directive) to pull docstrings into the documentation. I think however that while migrating the docs do reStructuredText is comparatively straightforward [1]_, pulling documentation from the docstrings will require quite a bit of design and discussion work. So I'd suggest we postpone this idea until we have a working documentation system in reStructuredText, so we don't clutter the discussion. .. [1] I'm sure there will still be quite a few issues to sort out that I'm simply not seeing right now. Best wishes, Lea From gael.varoquaux at normalesup.org Sun May 20 11:11:59 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 20 May 2007 11:11:59 +0200 Subject: [Python-Dev] [Doc-SIG] The docs, reloaded In-Reply-To: <464F5EBF.4030209@ronadam.com> References: <464F5EBF.4030209@ronadam.com> Message-ID: <20070520091159.GB14894@clipper.ens.fr> On Sat, May 19, 2007 at 03:31:59PM -0500, Ron Adam wrote: > - The html syntax highlighters. (Pydoc can use those) I have a patch on the docutils patch tracker that does this. Code is probably of a rather bad quality, but it outputs LaTeX and HTML.
If we can work together to improve this patch and get it in docutils it will avoid having different syntaxes and behavior depending on the front-end to docutils being used (I am thinking of rest2web, trac, and I am probably forgetting some others). The patch has been sitting there for almost 6 months without review, but I have that if people other than me work on it and ask for review it will both improve, and get reviewed, and eventually get in ! Sorry for the shameless plug, but I really do think we need a unifying approach to this. Ga?l From blais at furius.ca Mon May 21 04:26:46 2007 From: blais at furius.ca (Martin Blais) Date: Sun, 20 May 2007 19:26:46 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18000.58065.249249.648413@montanaro.dyndns.org> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> Message-ID: <1b151690705201926g750e1e78qc1a71629986e5f22@mail.gmail.com> On 5/20/07, skip at pobox.com wrote: > > Georg> There is exactly one instance of LaTeX math in the whole docs, > Georg> it's in the description of audioop, AFAIR, an contains a sum over > Georg> square roots... > > Georg> So, that's not really a concern of mine ;) > > You must realize that people will use the core tools to create documentation > for third party packages which aren't in the core. If you replace LaTeX > with something else I think you need to keep math in mind whether it's used > in the core documentation or not. IMHO the question of math support in ReST is one that should be best answered at the level of docutils, instead of Georg. A number of discussions on that topic have already taken place. From mhammond at skippinet.com.au Mon May 21 04:42:37 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Mon, 21 May 2007 12:42:37 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows Message-ID: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> Hi all, I hope the cross-post is appropriate. I've started playing with getting the pywin32 extensions building under the AMD64 architecture. I started building with Visual Studio 8 (it was what I had handy) and I struck a few issues relating to the compiler version that I thought worth sharing. * In trying to build x64 from a 32bit VS7 (ie, cross-compiling via the PCBuild directory), the python.exe project fails with: pythoncore fatal error LNK1112: module machine type 'X86' conflicts with target machine type 'AMD64' is this a known issue, or am I doing something wrong? * The PCBuild8 project files appear to work without modification (I only tried native compilation here though, not a cross-compile) - however, unlike the PCBuild directory, they place all binaries in a 'PCBuild8/x64' directory. While this means that its possible to build for multiple architectures from the same source tree, it makes life harder for tools like 'distutils' - eg, distutils already knows about the 'PCBuild' directory, but it knows nothing about either PCBuild8 or PCBuild8/x64. A number of other build processes also know to look inside a PCBuild directory (eg, Mozilla), so instead of formalizing PCBuild8, I think we should merge PCBuild8 into PCBuild. This could mean PCBuild/vs7 and PCBuild/vs8 directories with the "project" files, but binaries would still be generated in the 'PCBuild' (or PCBuild/x64) directory. This would mean the same tree isn't capable of hosting 2 builds from different VS compilers, but I think that is reasonable (if it's a problem, just have multiple source directories). 
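For illustration, the kind of lookup that downstream build scripts end up doing looks roughly like the following (a hypothetical helper, not distutils' actual code; the candidate list is exactly the problem being discussed):

import os
import sys

def find_python_libs_dir():
    # Probe the known layouts for the directory containing the
    # import libraries (pythonXY.lib and friends).
    candidates = [
        'PCBuild',                        # layout existing tools already know
        os.path.join('PCBuild8', 'x64'),  # where the VS8 projects put x64 output
        'libs',                           # an installed Python
    ]
    for name in candidates:
        path = os.path.join(sys.exec_prefix, name)
        if os.path.isdir(path):
            return path
    raise RuntimeError('cannot locate the Python import libraries')

Every additional output layout is one more branch that every such tool has to learn about, which is the argument for keeping the binaries in (or under) PCBuild.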
I understand that PCBuild8 is not "official", but in the assumption that future versions of Python will use a compiler later than VS7, it makes sense to me to clean this up now - what are others opinions on this? * Re the x64 directory used by the PCBuild8 process. IMO, it makes sense to generate x64 binaries to their own directory - my expectation is that cross-compiling between platforms is a reasonable use-case, and we should support multiple achitectures for the same compiler version. This would mean formalizing the x64 directory in both 'PCBuild' and distutils, and leaving other external build processes to update as they support x64 builds. Does this make sense? Would this fatally break other scripts used for packaging (eg, the MSI framework)? * Wide characters in VS8: PC/pyconfig.h defines PY_UNICODE_TYPE as 'unsigned short', which corresponds with both 'WCHAR' and 'wchar' in previous compiler versions. VS8 defines this as wchar_t, which I'm struggling to find a formal definition for beyond being 2 bytes. My problem is that code which assumes a 'Py_UNICODE *' could be used in place of a 'WCHAR *' now fails. I believe the intent on Windows has always been "PyUNICODE == 'native unicode'" - should PC/pyconfig.h reflect this (ie, should pyconfig.h grow a version specific definition of PyUNICODE as wchar_t)? * Finally, as something from left-field which may well take 12 months or more to pull off - but would there be any interest to moving the Windows build process to a cygwin environment based on the existing autoconf scripts? I know a couple of projects are doing this successfully, including Mozilla, so it has precendent. It does impose a greater burden on people trying to build on Windows, but I'd suggest that in recent times, many people who are likely to want to build Python on Windows are already likely to have a cygwin environment. Simpler mingw builds and nuking MSVC specific build stuff are among the advantages this would bring. It is not worth adding this as "yet another windows build option" - so IMO it is only worth progressing with if it became the "blessed" build process for windows - if there is support for this, I'll work on it as the opportunity presents itself... I'm (obviously) only suggesting we do this on the trunk and am happy to make all agreed changes - but I welcome all suggestions or critisisms of this endeavour... Cheers, Mark From martin at v.loewis.de Mon May 21 06:28:46 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 06:28:46 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18000.58065.249249.648413@montanaro.dyndns.org> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> Message-ID: <46511FFE.4020803@v.loewis.de> > Georg> So, that's not really a concern of mine ;) > > You must realize that people will use the core tools to create documentation > for third party packages which aren't in the core. If you replace LaTeX > with something else I think you need to keep math in mind whether it's used > in the core documentation or not. I disagree. The documentation infrastructure of Python should only consider the needs of Python itself. If other people can use that infrastructure for other purposes, fine - if they find that it does not meet their needs, they have to look elsewhere. We are developing a programming language here, not a typesetting system. 
Regards, Martin From martin at v.loewis.de Mon May 21 06:44:25 2007 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 06:44:25 +0200 Subject: [Python-Dev] Introduction and request for commit access to the sandbox. In-Reply-To: References: Message-ID: <465123A9.8090500@v.loewis.de> > With that said, I would to request svn access to the sandbox for my > work. I will use this access only for modifying stuff in the directory > I will be assigned to. I would like to use the username "avassalotti" > and the attached SSH2 public key for this access. I have added your key. As we have a strict first.last account policy, I named it alexandre.vassalotti; please correct me if I misspelled it. > One last thing, if you know semantic differences (other than the > obvious ones) between the C and Python versions of the modules I need > to merge, please let know. This will greatly simplify the merge and > reduce the chances of later breaking. Somebody noticed on c.l.p that, for cPickle, a) cPickle will start memo keys at 1; pickle at 0 b) cPickle will not put things into the memo if their refcount is 1, whereas pickle puts everything into the memo. Not sure what you'd consider obvious, but I'll mention that cStringIO "obviously" is constrained in what data types you can write (namely, byte strings only), whereas StringIO allows Unicode strings as well. Less obviously, StringIO also allows py> s = StringIO(0) py> s.write(10) py> s.write(20) py> s.getvalue() '1020' Regards, Martin From martin at v.loewis.de Mon May 21 07:15:53 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 07:15:53 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> Message-ID: <46512B09.1080304@v.loewis.de> > * In trying to build x64 from a 32bit VS7 (ie, cross-compiling via the > PCBuild directory), the python.exe project fails with: > > pythoncore fatal error LNK1112: module machine type 'X86' conflicts with > target machine type 'AMD64' > > is this a known issue, or am I doing something wrong? You are likely doing something wrong: a) I assume it's VS 7.1 (i.e. VS.NET 2003); VS 2002 is not supported at all b) you probably didn't install vsextcomp, but you should. In fact, you don't need all of it, but you do need the cl.exe and link.exe wrappers it comes with - they dispatch to the proper tools from the SDK c) in case it isn't clear: you also need an AMD64 compiler, e.g. from the platform SDK. Unfortunately, Microsoft keeps changing the registry settings for the SDK, so vsextcomp only knows about some selected SDKs. If that causes a problem, please let me know. > * The PCBuild8 project files appear to work without modification (I only > tried native compilation here though, not a cross-compile) - however, unlike > the PCBuild directory, they place all binaries in a 'PCBuild8/x64' > directory. While this means that its possible to build for multiple > architectures from the same source tree, it makes life harder for tools like > 'distutils' - eg, distutils already knows about the 'PCBuild' directory, but > it knows nothing about either PCBuild8 or PCBuild8/x64. This is an issue to be discussed for Python 2.6. I'm personally hesitant to have the "official" build infrastructure deviate from the layout that has been in-use for so many years, as a lot of things depend on it. 
I don't find the need to have separate object directories convincing: For building the Win32/Win64 binaries, I have separate checkouts *anyway*, since all the add-on libraries would have to support multi-arch builds, but I think they don't. > A number of other build processes also know to look inside a PCBuild > directory (eg, Mozilla), so instead of formalizing PCBuild8, I think we > should merge PCBuild8 into PCBuild. Right - PCbuild8 should not get formalized. It probably should continue to be maintained. For 2.6, the first question to answer is: what compiler should it use? I would personally like to see Python "skip" VS 2005 altogether, as it will be soon superceded by Orcas. Unfortunately, it's unclear how long Microsoft will need to release Orcas (and also, when Python 2.6 will be released), so I would like to defer that question by a few months. > I understand that PCBuild8 is not "official", but in the > assumption that future versions of Python will use a compiler later than > VS7, it makes sense to me to clean this up now - what are others opinions on > this? Not "official" really only means "not used to build the official binaries" - just like PC/VC6. It's still (somewhat) maintained. As for cleaning it up - see above. I would *really* like to skip VS 2005 altogether, as I expect that soon after we decide to use VS 2005, Microsoft will replace it with the next release, stop supporting VS 2005, take the free compiler off the next, and so on (just like they did for VS 2003, soon after we decided to use it for 2.5). > * Re the x64 directory used by the PCBuild8 process. IMO, it makes sense to > generate x64 binaries to their own directory - my expectation is that > cross-compiling between platforms is a reasonable use-case, and we should > support multiple achitectures for the same compiler version. See above; I disagree. First, "multiple architectures" only means x86, AMD64, and Itanium, and I would like to drop "official" Itanium binaries from 2.6 (even though they could continue to be supported in the build process). Then, even if the Python build itself support multiple simultaneous architectures, the extension modules don't all (correct me if I'm wrong). > This would > mean formalizing the x64 directory in both 'PCBuild' and distutils, and > leaving other external build processes to update as they support x64 builds. > Does this make sense? Would this fatally break other scripts used for > packaging (eg, the MSI framework)? The MSI packaging would need to be changed, certainly. It currently detects the architecture it needs to package by looking at the file type of python.exe; that would have to be changed to give it an explicit parameter what architecture to package, or have it package all architectures it can find. > * Wide characters in VS8: PC/pyconfig.h defines PY_UNICODE_TYPE as 'unsigned > short', which corresponds with both 'WCHAR' and 'wchar' in previous compiler > versions. VS8 defines this as wchar_t, which I'm struggling to find a > formal definition for beyond being 2 bytes. In C or in C++? In C++, wchar_t is a builtin type, just like short, int, long. So there is no further formal definition. In C (including C99), wchar_t ought to be defined in stddef.h. > My problem is that code which > assumes a 'Py_UNICODE *' could be used in place of a 'WCHAR *' now fails. I > believe the intent on Windows has always been "PyUNICODE == 'native > unicode'" - should PC/pyconfig.h reflect this (ie, should pyconfig.h grow a > version specific definition of PyUNICODE as wchar_t)? 
I'd rather make it a platform-specific definition (for platform=Windows API). Correct me if I'm wrong, but isn't wchar_t also available in VS 2003 (and even in VC6?). And doesn't it have the "right" definition in all these compilers? So +1 for setting Py_UNICODE to wchar_t on Windows. > * Finally, as something from left-field which may well take 12 months or > more to pull off - but would there be any interest to moving the Windows > build process to a cygwin environment based on the existing autoconf > scripts? What compiler would you use there? I very much like using the VS debugger when developing on Windows, so that capability should not go away. Regards, Martin From brett at python.org Mon May 21 07:24:59 2007 From: brett at python.org (Brett Cannon) Date: Sun, 20 May 2007 22:24:59 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46511FFE.4020803@v.loewis.de> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> Message-ID: On 5/20/07, "Martin v. L?wis" wrote: > > > Georg> So, that's not really a concern of mine ;) > > > > You must realize that people will use the core tools to create > documentation > > for third party packages which aren't in the core. If you replace LaTeX > > with something else I think you need to keep math in mind whether it's > used > > in the core documentation or not. > > I disagree. The documentation infrastructure of Python should only > consider the needs of Python itself. If other people can use that > infrastructure for other purposes, fine - if they find that it does > not meet their needs, they have to look elsewhere. Martin beat me to my comment. =) Python's needs should come first, period. If Georg wants to add math support, fine. But honestly I would rather he spend his time on Python-specific stuff then get bogged down to support possible third parties. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070520/1fdf9305/attachment.html From mhammond at skippinet.com.au Mon May 21 08:13:43 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Mon, 21 May 2007 16:13:43 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46512B09.1080304@v.loewis.de> Message-ID: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> Hi Martin, > You are likely doing something wrong: > a) I assume it's VS 7.1 (i.e. VS.NET 2003); VS 2002 is not supported > at all > b) you probably didn't install vsextcomp, but you should. > In fact, you don't need all of it, but you do need the cl.exe and > link.exe wrappers it comes with - they dispatch to the proper > tools from the SDK > c) in case it isn't clear: you also need an AMD64 compiler, e.g. > from the platform SDK. > Unfortunately, Microsoft keeps changing the registry settings for > the SDK, so vsextcomp only knows about some selected SDKs. If > that causes a problem, please let me know. I'm using the full-blown VS.NET 2003, as given to a number of python-dev people by Microsoft a number of years ago. This appears to come with the SDK and a 64bit compiler. I'm guessing vsextcomp doesn't use the Visual Studio 'ReleaseAMD64' configuration - would it be OK for me to check in changes to the PCBuild projects for this configuration? > This is an issue to be discussed for Python 2.6. 
I'm > personally hesitant > to have the "official" build infrastructure deviate from the > layout that > has been in-use for so many years, as a lot of things depend on it. Yes, I agree - although I consider x64 new enough that an opportunity exists to set a new 'standard'. However, if most 'external' build processes will not otherwise need to change for a 64bit environment, then I agree that nothing should change in Python's layout. > Right - PCbuild8 should not get formalized. It probably > should continue to be maintained. So is there something we can do to make distutils play better with binaries built from PCBuild8, even though it is considered temporary? It seems the best thing might be to modify the PCBuild8 build process so the output binaries are in the ../PCBuild' directory - this way distutils and others continue to work fine. Does that sound reasonable? > For 2.6, the first question to answer is: what compiler should it use? > > I would personally like to see Python "skip" VS 2005 altogether, > as it will be soon superceded by Orcas. Unfortunately, it's unclear > how long Microsoft will need to release Orcas (and also, when Python > 2.6 will be released), so I would like to defer that question by > a few months. I've no objection to that - but I'd like to help keep the pain to a minimum for people who find themselves trying to build 64bit extensions in the meantime. Anecdotally, VS8 is the compiler most people start trying to use for this (quite possibly because that is what they already have handy). > See above; I disagree. First, "multiple architectures" only means x86, > AMD64, and Itanium, and I would like to drop "official" > Itanium binaries > from 2.6 (even though they could continue to be supported in the build > process). Then, even if the Python build itself support multiple > simultaneous architectures, the extension modules don't all (correct > me if I'm wrong). Yes, I agree that it is unlikely to work in practice - at least for a number of years as the external libs and extensions catch up. > > * Wide characters in VS8: PC/pyconfig.h defines > PY_UNICODE_TYPE as 'unsigned > > short', which corresponds with both 'WCHAR' and 'wchar' in > previous compiler > > versions. VS8 defines this as wchar_t, which I'm > struggling to find a > > formal definition for beyond being 2 bytes. > > In C or in C++? In C++, wchar_t is a builtin type, just like > short, int, > long. So there is no further formal definition. This was in C++, but the problem was really WCHAR, as used by much of the win32 API. > I'd rather make it a platform-specific definition (for > platform=Windows > API). Correct me if I'm wrong, but isn't wchar_t also available in VS > 2003 (and even in VC6?). And doesn't it have the "right" definition in > all these compilers? hrm - as above, I'm more concerned with the definition of WCHAR - which means my problem is related more to the Platform SDK version rather than the compiler. This is unfortunate - on one hand we do consider 'platform=Windows API', and WCHAR is very much an API concept. I'll need to dig some more into this, but at least I know I'm not wasting my time :) > > * Finally, as something from left-field which may well take > > 12 months or > > more to pull off - but would there be any interest to > > moving the Windows > > build process to a cygwin environment based on the existing autoconf > > scripts? > > What compiler would you use there? I very much like using the VS > debugger when developing on Windows, so that capability should not > go away. 
You would use whatever compiler the autoconf toolset found. Recent versions know enough about MSVC for simple projects. Many people would need to take care that their environment pointed at the correct compiler - especially the person building releases. But assuming MSVC was found and had the appropriate switches passed, there would be no impact on the ability to use Visual Studio as a debugging environment. Thanks, Mark From lgautier at gmail.com Mon May 21 11:23:03 2007 From: lgautier at gmail.com (Laurent Gautier) Date: Mon, 21 May 2007 17:23:03 +0800 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> Message-ID: <27d1e6020705210223g7f6c3c15g193cc1987d938c82@mail.gmail.com> I would agree with the point that python core should be considered first, but I would also only see beneficial to leave the door open to the need of other packages. I have (briefly but intensely) worked on a revamp of pydoc earlier on this year, and while collecting requirements from a number of places having maths expressions or else appeared important for a number of cases (and a very reasonable request in a case) . That particular point leads to something that I see important for what a new/better documentation system should provide: a good and modular interface to access the documentation, process it, and navigate it. When looking at the particular example discussed here, it could be implemented by allowing a "pluggable" processing components for docstrings (and let a given package developer the possibility to use as much as the default documentation machinery as possible and implement the processing mathml, latex, whatever, as wanted). One can consider the possibility to have the "custom" processing of the docstring embedded in the package itself. Laurent 2007/5/21, Brett Cannon : > > > On 5/20/07, "Martin v. L?wis" wrote: > > > Georg> So, that's not really a concern of mine ;) > > > > > > You must realize that people will use the core tools to create > documentation > > > for third party packages which aren't in the core. If you replace LaTeX > > > with something else I think you need to keep math in mind whether it's > used > > > in the core documentation or not. > > > > I disagree. The documentation infrastructure of Python should only > > consider the needs of Python itself. If other people can use that > > infrastructure for other purposes, fine - if they find that it does > > not meet their needs, they have to look elsewhere. > > > Martin beat me to my comment. =) Python's needs should come first, period. > If Georg wants to add math support, fine. But honestly I would rather he > spend his time on Python-specific stuff then get bogged down to support > possible third parties. > > -Brett > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/lgautier%40gmail.com > > From nick at craig-wood.com Mon May 21 11:43:41 2007 From: nick at craig-wood.com (Nick Craig-Wood) Date: Mon, 21 May 2007 10:43:41 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> Georg Brandl wrote: > over the last few weeks I've hacked on a new approach to Python's documentation. 
> As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html > toolchain. Good idea! Latex is a barrier for contribution to the docs. I imagine most people would be much better at contributing to the docs in reST. (Me included: I learnt latex at university a couple of decades ago and have now forgotten it completely!) > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would > redirect you to the matching location. Being a seasoned unix user, I tend to reach for pydoc as my first stab at finding some documentation rather than (after excavating the mouse from under a pile of paper) use a web browser. If you've ever used pydoc you'll know it reads docstrings and for some modules they are great and for others they are sorely lacking. If pydoc could show all this documentation as well I'd be very happy! Maybe your quick dispatch feature could be added to pydoc too? > Concluding, a small caveat: The conversion/build tools are, of course, not > complete yet. There are a number of XXX comments in the text, most of them > indicate that the converter could not handle a situation -- that would have > to be corrected once after conversion is done. It is missing conversion of ``comment'' at the moment as I'm sure you know... You will need to make your conversion perfect before you convince the people who wrote most of that documentation I suspect! -- Nick Craig-Wood -- http://www.craig-wood.com/nick From kristjan at ccpgames.com Mon May 21 12:30:24 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 21 May 2007 10:30:24 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46512B09.1080304@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> First of all, I have put some work into pcbuild8 recently and it works well. I am trying to drum up momentum for work on PCBuild8 next europython. See http://wiki.python.org/moin/EuroPython2007Sprints > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org > > I don't find the need to have separate object directories convincing: > For building the Win32/Win64 binaries, I have separate checkouts > *anyway*, since all the add-on libraries would have to support > multi-arch builds, but I think they don't. No they don't, but that doesn't mean that you need different checkouts for python, only the others. Anyway, this is indeed something I'd like to see addressed. I don't think we should ditch cross-compilation. It should simplify a lot of stuff, including buildbot setup and so on (if we get the buildbot infrastructure to support it). It is also very cumbersome, if you are working on a project, to have the binaries all end up in the same place. Doing interactive work on python, I frequently compile both the 32 bit and 64 bit versions for testing and it would be downright silly to have to rebuild everything every time. > I would personally like to see Python "skip" VS 2005 altogether, > as it will be soon superceded by Orcas. Unfortunately, it's unclear > how long Microsoft will need to release Orcas (and also, when Python > 2.6 will be released), so I would like to defer that question by > a few months. I think this is a bit unrealistic. 
Here we are in the middle of 2007, VS2005 has just got SP1, and VS2003 is still the "official" compiler. PCBuild8 is ready, it just needs a little bit of extra love and buildbots to make us able to release PGO versions of x86 and x64. Given the delay for getting even this far, waiting for Orcas and then someone to create PCBuild9, and then getting it up and running and so on will mean waiting another two years. > The MSI packaging would need to be changed, certainly. It currently > detects the architecture it needs to package by looking at the file > type of python.exe; that would have to be changed to give it an > explicit parameter what architecture to package, or have it package > all architectures it can find. I am not familiar with the msi packaging process at all. But here is something we should start to consider: VISTA support. This could mean some of: 1) supplying python.dll as a Side By Side assembly 2) Changing python install locations 3) Supporting shadow libraries, where .pyc files end up in a different hierarchy from the .py files. (useful for many things beside VISTA) 4) Signing the python dlls and executables 5) Providing user level manifests. Vista adoption is going very fast. We see 10% of our users have moved to vista and rising. > I'd rather make it a platform-specific definition (for platform=Windows > API). Correct me if I'm wrong, but isn't wchar_t also available in VS > 2003 (and even in VC6?). And doesn't it have the "right" definition in > all these compilers? > > So +1 for setting Py_UNICODE to wchar_t on Windows. Yes. Btw, in previous visual studio versions, wchar_t was not treated as a builtin type by default, but rather as synonymous with unsighed short. Now the default is that it is, and this causes some semantic differences and incompatibilities of the type seen. Kristjan From mal at egenix.com Mon May 21 13:28:04 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 21 May 2007 13:28:04 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> Message-ID: <46518244.9050207@egenix.com> On 2007-05-21 12:30, Kristj?n Valur J?nsson wrote: >> >> [Py_UNICODE being #defined as "unsigned short" on Windows] >> >> I'd rather make it a platform-specific definition (for platform=Windows >> API). Correct me if I'm wrong, but isn't wchar_t also available in VS >> 2003 (and even in VC6?). And doesn't it have the "right" definition in >> all these compilers? >> >> So +1 for setting Py_UNICODE to wchar_t on Windows. > > Yes. Btw, in previous visual studio versions, wchar_t was not treated > as a builtin type by default, but rather as synonymous with unsighed short. > Now the default is that it is, and this causes some semantic differences > and incompatibilities of the type seen. +1 from me. If think this is simply a bug introduced with the UCS4 patches in Python 2.2. unicodeobject.h already has this code: #ifndef PY_UNICODE_TYPE /* Windows has a usable wchar_t type (unless we're using UCS-4) */ # if defined(MS_WIN32) && Py_UNICODE_SIZE == 2 # define HAVE_USABLE_WCHAR_T # define PY_UNICODE_TYPE wchar_t # endif # if defined(Py_UNICODE_WIDE) # define PY_UNICODE_TYPE Py_UCS4 # endif #endif But for some reason, pyconfig.h defines: /* Define as the integral type used for Unicode representation. 
*/ #define PY_UNICODE_TYPE unsigned short /* Define as the size of the unicode type. */ #define Py_UNICODE_SIZE SIZEOF_SHORT /* Define if you have a useable wchar_t type defined in wchar.h; useable means wchar_t must be 16-bit unsigned type. (see Include/unicodeobject.h). */ #if Py_UNICODE_SIZE == 2 #define HAVE_USABLE_WCHAR_T #endif disabling the default settings in the unicodeobject.h. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 21 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From kristjan at ccpgames.com Mon May 21 12:39:05 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 21 May 2007 10:39:05 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> References: <46512B09.1080304@v.loewis.de> <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEE4BF@exchis.ccp.ad.local> > -----Original Message----- > This was in C++, but the problem was really WCHAR, as used by much of > the > win32 API. > > > I'd rather make it a platform-specific definition (for > > platform=Windows > > API). Correct me if I'm wrong, but isn't wchar_t also available in VS > > 2003 (and even in VC6?). And doesn't it have the "right" definition > in > > all these compilers? > > hrm - as above, I'm more concerned with the definition of WCHAR - which > means my problem is related more to the Platform SDK version rather > than the > compiler. This is unfortunate - on one hand we do consider > 'platform=Windows API', and WCHAR is very much an API concept. I'll > need to > dig some more into this, but at least I know I'm not wasting my time :) Mark, your problem may be related to a setting in the "c/c++ -> language" tab in the settings, where "treat wchar_t as a builtin type" default has changed. I recommend that we do treat it as a builtin, but the VS2003 default was "no" and the 2005 is "yes". Could this be contributing to your problem? Kristjan From mal at egenix.com Mon May 21 13:43:43 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 21 May 2007 13:43:43 +0200 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <4650C6A9.1020809@acm.org> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> Message-ID: <465185EF.90608@egenix.com> On 2007-05-21 00:07, Talin wrote: > Phillip J. Eby wrote: >> I wanted to get this in before the Py3K PEP deadline, since this is a >> Python 2.6 PEP that would presumably impact 3.x as well. Feedback welcome. >> >> >> PEP: 365 >> Title: Adding the pkg_resources module > > I'm really surprised that there hasn't been more comment on this. True.... both ways, I guess: I'm still waiting for a reply to my comments. I'd also like to see more discussion about adding e.g.: * support for user packages (ie. having site.py add a well-defined user home directory based Python path entry to sys.path, e.g. 
~/.python/user-packages, much like what MacPython already does now) * support for having the import mechanism play nice with namespace packages (ie. packages that may live in different places on the disk, but appear to be in the same Python package as seen by the import mechanism) I think those two features would go a long way in reducing the number of hacks setuptools currently applies to get this functionality working with code in .pth files, monkey-patching site.py, etc. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 21 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From mhammond at skippinet.com.au Mon May 21 13:46:53 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Mon, 21 May 2007 21:46:53 +1000 Subject: [Python-Dev] wchar_t (was Adventures with x64, VS7 and VS8) on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE4BF@exchis.ccp.ad.local> Message-ID: <00d301c79b9d$baae8a00$1f0a0a0a@enfoldsystems.local> Kristj?n Valur J?nsson quoting me: > > hrm - as above, I'm more concerned with the definition of > > WCHAR - which > > means my problem is related more to the Platform SDK version rather > > than the > > compiler. This is unfortunate - on one hand we do consider > > 'platform=Windows API', and WCHAR is very much an API concept. I'll > > need to > > dig some more into this, but at least I know I'm not > > wasting my time :) > > Mark, your problem may be related to a setting in the "c/c++ > -> language" > tab in the settings, where "treat wchar_t as a builtin type" > default has > changed. I recommend that we do treat it as a builtin, but the VS2003 > default was "no" and the 2005 is "yes". Could this be contributing to > your problem? Thanks for the suggestion and for introducing me to that option - but it made no difference. I'm guessing its related to C++ - code such as the following: static PyObject *TestIBuild() { // obviously nonsense - the point is to test if it compiles. WCHAR *wval = PyUnicode_AS_UNICODE(Py_None); return PyUnicode_FromUnicode(wval, wcslen(wval)); } works everywhere - except in a pywin32 .cpp file built on x64:) That code results in: win32/src/win32apimodule.cpp(81) : error C2440: 'initializing' : cannot convert from 'Py_UNICODE *' to 'WCHAR *' Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast win32/src/win32apimodule.cpp(82) : error C2664: 'PyUnicodeUCS2_FromUnicode' : cannot convert parameter 1 from 'WCHAR *' to 'const Py_UNICODE *' Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast Unfortunately I don't have pywin32 building under vc8 on x32, but expect it to happen there too. pywin32 uses distutils, but I inspected the options passed and can't find anything to make a difference. /Zc:wchar_t and/or /Zc:wchar_t- seem to be the command-line settings for this flag and it also makes no difference. 
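For reference, a minimal sketch of the cast-based workaround that the error message itself suggests, assuming the same headers as the snippet above; it only illustrates the mismatch under VS8's distinct wchar_t, and is not the fix proposed below:

/* Illustration only, not the proposed fix.  With VS8 treating wchar_t as a
   distinct built-in type, WCHAR (== wchar_t) no longer matches Py_UNICODE
   (== unsigned short), so explicit casts are needed at every boundary. */
static PyObject *TestItBuildsWithCasts(PyObject *obj)
{
    WCHAR *wval = (WCHAR *)PyUnicode_AS_UNICODE(obj);   /* Py_UNICODE* -> WCHAR* */
    return PyUnicode_FromUnicode((Py_UNICODE *)wval,    /* WCHAR* -> Py_UNICODE* */
                                 wcslen(wval));
}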
I'm out of time to confirm it is simply "c++ with vs8", but did confirm that the patch below appears to solve the problem, and given Martin's previous +1, I decided to stop there. I failed in a quick attempt at replacing the literal 2 with something involving sizeof. Does this look reasonable? Cheers, Mark

Index: pyconfig.h
===================================================================
--- pyconfig.h (revision 55487)
+++ pyconfig.h (working copy)
@@ -492,10 +492,10 @@
 #define Py_USING_UNICODE

 /* Define as the integral type used for Unicode representation. */
-#define PY_UNICODE_TYPE unsigned short
+#define PY_UNICODE_TYPE wchar_t

 /* Define as the size of the unicode type. */
-#define Py_UNICODE_SIZE SIZEOF_SHORT
+#define Py_UNICODE_SIZE 2

 /* Define if you have a useable wchar_t type defined in wchar.h; useable
    means wchar_t must be 16-bit unsigned type. (see

-------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3260 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070521/12f28bf0/attachment.bin From kristjan at ccpgames.com Mon May 21 13:56:24 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 21 May 2007 11:56:24 +0000 Subject: [Python-Dev] wchar_t (was Adventures with x64, VS7 and VS8) on Windows In-Reply-To: <00d301c79b9d$baae8a00$1f0a0a0a@enfoldsystems.local> References: <4E9372E6B2234D4F859320D896059A9508DCBEE4BF@exchis.ccp.ad.local> <00d301c79b9d$baae8a00$1f0a0a0a@enfoldsystems.local> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEE502@exchis.ccp.ad.local> > -----Original Message----- > below appears to solve the problem, and given Martin's previous +1, I > decided to stop there. I failed in a quick attempt at replacing the > literal 2 with something involving sizeof. Does this look reasonable? > +1. I should add that we have this local mod in our own patched python. It does seem like a mistake that needs fixing. Kristjan From skip at pobox.com Mon May 21 13:01:20 2007 From: skip at pobox.com (skip at pobox.com) Date: Mon, 21 May 2007 06:01:20 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> Message-ID: <18001.31744.271598.871237@montanaro.dyndns.org> Brett> Martin beat me to my comment. =) Python's needs should come Brett> first, period. If Georg wants to add math support, fine. But Brett> honestly I would rather he spend his time on Python-specific Brett> stuff then get bogged down to support possible third parties. I think the people who have responded to my comment read too much into it. Nowhere do I think I asked Georg to write an equation typesetter to include in the Python documentation toolchain. I asked that math capability be considered. I have no idea what tools he used to build his new documentation set. I only briefly glanced at a couple of the output pages. I think what he has done is marvelous. However, I don't think the door should be shut on equation display. Is there a route to it based on the tools Georg is using? If not, then I think some accommodation should be made. I'm being vague here on purpose because I'm unfamiliar with the available tools. The one thing I do know is that LaTeX provides that today and by removing it from the toolchain you have removed a significant piece of functionality.
Skip From skip at pobox.com Mon May 21 12:54:36 2007 From: skip at pobox.com (skip at pobox.com) Date: Mon, 21 May 2007 05:54:36 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46511FFE.4020803@v.loewis.de> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> Message-ID: <18001.31340.881646.672542@montanaro.dyndns.org> >> You must realize that people will use the core tools to create >> documentation for third party packages which aren't in the core. If >> you replace LaTeX with something else I think you need to keep math >> in mind whether it's used in the core documentation or not. Martin> I disagree. The documentation infrastructure of Python should Martin> only consider the needs of Python itself. If other people can Martin> use that infrastructure for other purposes, fine - if they find Martin> that it does not meet their needs, they have to look elsewhere. Then I submit that you are probably removing some significant piece of functionality from the provided documentation toolchain which some people probably rely on. After all, that's what LaTeX excels at. They will be able to continue to use the old tools, but where will they get them if they are no longer part of Python? Skip From mhammond at skippinet.com.au Mon May 21 14:33:27 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Mon, 21 May 2007 22:33:27 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> Message-ID: <00e501c79ba4$3b9fd1e0$1f0a0a0a@enfoldsystems.local> Hi Kristj?n, > First of all, I have put some work into pcbuild8 recently and it works > well. It does! There are a few issues though, notably with distutils (and as mentioned before, any other tools what may assume PCBuild - see below) You quoting Martin: > > I don't find the need to have separate object directories > > convincing: > > For building the Win32/Win64 binaries, I have separate checkouts > > *anyway*, since all the add-on libraries would have to support > > multi-arch builds, but I think they don't. > > No they don't, but that doesn't mean that you need different checkouts > for python, only the others. Anyway, this is indeed > something I'd like to see addressed. I don't think we should ditch > cross-compilation. While I agree with you, Martin's point about the dependant libraries and tools is valid, and may defeat this goal. In the short term, we should research how some of these other projects are approaching x64 - we may also find that this helps with any autoconf work we choose to do. Ultimately, the person releasing the official binaries gets to say how this works (based on how they actually get it to work) > > I would personally like to see Python "skip" VS 2005 altogether, > > as it will be soon superceded by Orcas. Unfortunately, it's unclear > > how long Microsoft will need to release Orcas (and also, when Python > > 2.6 will be released), so I would like to defer that question by > > a few months. > I think this is a bit unrealistic. Here we are in the middle of 2007, > VS2005 has just got SP1, and VS2003 is still the "official" compiler. > PCBuild8 is ready, it just needs a little bit of extra love and > buildbots to make us able to release PGO versions of x86 and x64. 
> > Given the delay for getting even this far, waiting for Orcas and then > someone to create PCBuild9, and then getting it up and > running and so on > will mean waiting another two years. I don't believe there was any suggestion that Python 2.6 would wait for a compiler release from Microsoft. Before we talk about Vista and while I have your attention , some final questions relating to PCBuild8. Regardless of the ultimate layout for x64, what do you think about having PCBuid8 put the binaries into the PCBuild directory, and thus (theoretically) letting such a directory work with distutils and otherwise as a fully functional Python installation? > I am not familiar with the msi packaging process at all. But here is > something we should start to consider: VISTA support. This > could mean > some of: > 1) supplying python.dll as a Side By Side assembly Yes, this is something pywin32 is going to face. The hack of copying python*.dll into the 'system' directory - necessary for COM - is (sensibly!) no longer working. One thing at a time though... > 2) Changing python install locations > 3) Supporting shadow libraries, where .pyc files end up in a different > hierarchy from the .py files. (useful for many things > beside VISTA) > 4) Signing the python dlls and executables > 5) Providing user level manifests. And dragging distutils back op topic, having bdist_wininst supply a manifest that indicates escalation is required appears necessary. > Vista adoption is going very fast. We see 10% of our users have moved > to vista and rising. ack - I'm yet to try a 32 bit version, but my Vista-x64 box isn't proving very reliable yet. It *is* very pretty and cute though :) I'm surprised to see 10% unless your users are skewed towards early-adopters though, but I'm in no position to refute it! Cheers, Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3744 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070521/d9e6d8c2/attachment.bin From fdrake at acm.org Mon May 21 15:23:47 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 21 May 2007 09:23:47 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18001.31340.881646.672542@montanaro.dyndns.org> References: <46511FFE.4020803@v.loewis.de> <18001.31340.881646.672542@montanaro.dyndns.org> Message-ID: <200705210923.47610.fdrake@acm.org> On Monday 21 May 2007, skip at pobox.com wrote: > Then I submit that you are probably removing some significant piece of > functionality from the provided documentation toolchain which some people > probably rely on. After all, that's what LaTeX excels at. They will be > able to continue to use the old tools, but where will they get them if > they are no longer part of Python? I'll be happy to pull the existing tools out into a separate distribution if we move to something else for Python. There are too many users of the existing tools to abandon. -Fred -- Fred L. Drake, Jr. From pje at telecommunity.com Mon May 21 16:05:18 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 21 May 2007 10:05:18 -0400 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <465185EF.90608@egenix.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> Message-ID: <20070521140332.B8B033A40F3@sparrow.telecommunity.com> At 01:43 PM 5/21/2007 +0200, M.-A. 
Lemburg wrote: >On 2007-05-21 00:07, Talin wrote: > > Phillip J. Eby wrote: > >> I wanted to get this in before the Py3K PEP deadline, since this is a > >> Python 2.6 PEP that would presumably impact 3.x as > well. Feedback welcome. > >> > >> > >> PEP: 365 > >> Title: Adding the pkg_resources module > > > > I'm really surprised that there hasn't been more comment on this. > >True.... both ways, I guess: I'm still waiting for a reply to my >comments. What comments are you talking about? I must've missed them. >I'd also like to see more discussion about adding e.g.: > > * support for user packages > > (ie. having site.py add a well-defined user home directory > based Python path entry to sys.path, e.g. > ~/.python/user-packages, much like what MacPython already does > now) > > * support for having the import mechanism play nice > with namespace packages > > (ie. packages that may live in different places on the disk, > but appear to be in the same Python package as seen by the > import mechanism) > >I think those two features would go a long way in reducing the >number of hacks setuptools currently applies to get this >functionality working with code in .pth files, monkey-patching >site.py, etc. These items aren't directly related to the PEP, however. pkg_resources doesn't monkeypatch anything or touch any .pth files. It only changes sys.path at runtime if you explicitly ask it to locate and activate packages for you. As for namespace packages, pkg_resources provides a more PEP 302-compatible alternative to pkgutil.extend_path(). pkgutil doesn't support anything but existing filesystem directories, but the pkg_resources version supports zipfiles and has hooks to allow namespace package support to be registered for any PEP 302 importer. See: http://peak.telecommunity.com/DevCenter/PkgResources#supporting-custom-importers (specifically, the register_namespace_handler() function.) From mal at egenix.com Mon May 21 18:28:30 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 21 May 2007 18:28:30 +0200 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <20070521140332.B8B033A40F3@sparrow.telecommunity.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> <20070521140332.B8B033A40F3@sparrow.telecommunity.com> Message-ID: <4651C8AE.5040407@egenix.com> On 2007-05-21 16:05, Phillip J. Eby wrote: > At 01:43 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >> On 2007-05-21 00:07, Talin wrote: >>> Phillip J. Eby wrote: >>>> I wanted to get this in before the Py3K PEP deadline, since this is a >>>> Python 2.6 PEP that would presumably impact 3.x as >> well. Feedback welcome. >>>> >>>> PEP: 365 >>>> Title: Adding the pkg_resources module >>> I'm really surprised that there hasn't been more comment on this. >> True.... both ways, I guess: I'm still waiting for a reply to my >> comments. > > What comments are you talking about? I must've missed them. I've attached the email. Please see below. >> I'd also like to see more discussion about adding e.g.: >> >> * support for user packages >> >> (ie. having site.py add a well-defined user home directory >> based Python path entry to sys.path, e.g. >> ~/.python/user-packages, much like what MacPython already does >> now) >> >> * support for having the import mechanism play nice >> with namespace packages >> >> (ie. 
packages that may live in different places on the disk, >> but appear to be in the same Python package as seen by the >> import mechanism) >> >> I think those two features would go a long way in reducing the >> number of hacks setuptools currently applies to get this >> functionality working with code in .pth files, monkey-patching >> site.py, etc. > > These items aren't directly related to the PEP, > however. Right. I wasn't referring to this PEP. I think we should have two more PEPs covering the above points, since they offer benefits for all users, not just setuptools users. > pkg_resources doesn't monkeypatch anything or touch any > .pth files. It only changes sys.path at runtime if you explicitly > ask it to locate and activate packages for you. > > As for namespace packages, pkg_resources provides a more PEP > 302-compatible alternative to pkgutil.extend_path(). pkgutil doesn't > support anything but existing filesystem directories, but the > pkg_resources version supports zipfiles and has hooks to allow > namespace package support to be registered for any PEP 302 importer. See: > > http://peak.telecommunity.com/DevCenter/PkgResources#supporting-custom-importers > > (specifically, the register_namespace_handler() function.) Looking at the code it appears as if you've already formalized an implementation for this. However, since this is not egg-specific it should probably be moved to pkgutil and get a separate PEP with detailed documentation (the link you provided doesn't really explain the concepts, reading the code helped a bit). What I don't understand about your approach is why importers would have to register with the namespace implementation. This doesn't seem necessary, since the package __path__ attribute already provides all functionality needed for redirecting lookups to different paths. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 21 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 -------------- next part -------------- An embedded message was scrubbed... From: "M.-A. Lemburg" Subject: Re: [Python-Dev] PEP 0365: Adding the pkg_resources module Date: Fri, 04 May 2007 16:06:54 +0200 Size: 10406 Url: http://mail.python.org/pipermail/python-dev/attachments/20070521/73a3d884/attachment-0001.mht From janssen at parc.com Mon May 21 19:48:13 2007 From: janssen at parc.com (Bill Janssen) Date: Mon, 21 May 2007 10:48:13 PDT Subject: [Python-Dev] The docs, reloaded In-Reply-To: <1b151690705201108v53f19b39s93e45b915ae80ea7@mail.gmail.com> References: <1b151690705201108v53f19b39s93e45b915ae80ea7@mail.gmail.com> Message-ID: <07May21.104819pdt."57996"@synergy1.parc.xerox.com> > Could this be a language-independent documenting toolkit? Could > we document LISP or Ruby code with it? 
Might want to look at "noweb", http://www.eecs.harvard.edu/~nr/noweb/: ``...noweb works ``out of the box'' with any programming language, and supports TeX, latex, HTML, and troff back ends.'' Bill From janssen at parc.com Mon May 21 19:48:45 2007 From: janssen at parc.com (Bill Janssen) Date: Mon, 21 May 2007 10:48:45 PDT Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46511FFE.4020803@v.loewis.de> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> Message-ID: <07May21.104847pdt."57996"@synergy1.parc.xerox.com> > We are developing a programming language here, not a typesetting > system. Good point, Martin. Are you implying that the documentation should be kept in LaTeX, a widely-accepted widely-disseminated stable documentation language, which someone else maintains, rather than ReST, which elements of the the Python community maintain? Bill From pje at telecommunity.com Mon May 21 20:01:52 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 21 May 2007 14:01:52 -0400 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <4651C8AE.5040407@egenix.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> <20070521140332.B8B033A40F3@sparrow.telecommunity.com> <4651C8AE.5040407@egenix.com> Message-ID: <20070521180001.9C7D53A409D@sparrow.telecommunity.com> At 06:28 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >However, since this is not egg-specific it should probably be >moved to pkgutil and get a separate PEP with detailed documentation >(the link you provided doesn't really explain the concepts, reading >the code helped a bit). That doesn't really make sense in the context of the current PEP, though, which isn't to provide a general-purpose namespace package API; it's specifically about adding an existing piece of code to the stdlib, with its API intact. >What I don't understand about your approach is why importers >would have to register with the namespace implementation. > >This doesn't seem necessary, since the package __path__ attribute >already provides all functionality needed for redirecting >lookups to different paths. The registration is so that when new paths are *added* to sys.path at runtime (e.g. by activating a plugin), the necessary __path__ lists are automatically updated. Similarly, when new namespace packages are declared, they need their __path__ updated for everything that's currently on sys.path. Finally, there is no requirement that PEP 302 importer strings use filesystem-path syntax; a handler has to be registered so that the necessary string transformation can be done according to that particular importer's string format. It just happens that zipfiles and regular files have a common syntax. But a URL-based importer, LDAP-DN based importer, SQL importer, or other exotica might require entirely different string transformations. All that PEP 302 guarantees is that sys.path and __path__ lists contain strings, not what the format of those strings is. >Could you add a section that explains the side effects of >importing pkg_resources ? The short answer is, there are no side effects, unless __main__.__requires__ exists. If that variable exists, pkg_resources attempts to adjust sys.path to contain the requested package versions, even if it requires re-ordering the current sys.path contents to prevent conflicting versions from being imported. 
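To make that concrete, here is a minimal, entirely hypothetical stub of the kind of script that triggers it -- 'ExampleTool' is a placeholder name, and the wrapper scripts setuptools actually generates look a little different:

# Hypothetical stub; 'ExampleTool' is a placeholder, not a real distribution.
# Because __main__.__requires__ is set before pkg_resources is imported, the
# import adjusts sys.path to put a matching ExampleTool egg (and its
# dependencies) on the path, or complains if none can be found.
__requires__ = 'ExampleTool==1.2'

import pkg_resources

print pkg_resources.get_distribution('ExampleTool').version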
>The documentation of the module doesn't mention any, but the >code suggests that you are installing (some form of) import >hooks. There are no import hooks, just a registry for registering handlers to support other importer types (as seen at the doc link I gave previously). >Some other comments: > >* Wouldn't it be better to factor out all the meta-data access > code that's not related to eggs into pkgutil ?! > >* How about then renaming the remaining module to egglib ?! These changes in particular would negate the primary purpose of the PEP: to provide a migration path for existing packages using the pkg_resources API, including setuptools, workingenv, zc.buildout, etc. >* The get_*_platform() should probably use the platform module > which is a lot more flexible than distutils' get_platform() > (which should probably use the platform module as well in the > long run) Please feel free to propose specific improvements to the distutils-sig. But keep in mind that the platform information is specifically for supporting .egg filename platform tags. Where the information comes from is less relevant than defining a framework for determining compatibility. I first tried to get some kind of traction on this issue in 2004: http://mail.python.org/pipermail/distutils-sig/2004-December/004355.html And so far, the only platform for which something better has emerged is Mac OS X, due largely to Bob Ippolito stepping up and submitting some code. >* The module needs some reorganization: imports, globals and constants > at the top, maybe a few comments delimiting the various sections, I'm not sure I follow you. Globals such as registries are usually defined close to the functions that provide the API for manipulating them. And the rest of the globals (such as working_set), can't be defined until the class they're implemented with has been defined. From mal at egenix.com Mon May 21 20:56:49 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 21 May 2007 20:56:49 +0200 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <20070521180001.9C7D53A409D@sparrow.telecommunity.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> <20070521140332.B8B033A40F3@sparrow.telecommunity.com> <4651C8AE.5040407@egenix.com> <20070521180001.9C7D53A409D@sparrow.telecommunity.com> Message-ID: <4651EB71.5050004@egenix.com> On 2007-05-21 20:01, Phillip J. Eby wrote: > At 06:28 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >> However, since this is not egg-specific it should probably be >> moved to pkgutil and get a separate PEP with detailed documentation >> (the link you provided doesn't really explain the concepts, reading >> the code helped a bit). > > That doesn't really make sense in the context of the current PEP, > though, which isn't to provide a general-purpose namespace package API; > it's specifically about adding an existing piece of code to the stdlib, > with its API intact. You seem to indicate that you're not up to discussing the concepts implemented by the module and *integrating* them with the Python stdlib. Please correct me if I'm wrong, but if the whole point of the PEP is a take it or leave it decision, then I don't see the point of discussing it. I'm -1 on adding the module in its current state; I'd be +1 on integrating the concepts with the Python stdlib. 
Hope I'm wrong, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 21 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From g.brandl at gmx.net Mon May 21 21:01:38 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2007 21:01:38 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> Message-ID: Nick Craig-Wood schrieb: > Georg Brandl wrote: >> over the last few weeks I've hacked on a new approach to Python's documentation. >> As Python already has an excellent documentation framework, the docutils, with a >> readable yet extendable markup format, reST, I thought that it should be >> possible to use those instead of the current LaTeX->latex2html >> toolchain. > > Good idea! > > Latex is a barrier for contribution to the docs. I imagine most > people would be much better at contributing to the docs in reST. (Me > included: I learnt latex at university a couple of decades ago and > have now forgotten it completely!) > >> - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would >> redirect you to the matching location. > > Being a seasoned unix user, I tend to reach for pydoc as my first stab > at finding some documentation rather than (after excavating the mouse > from under a pile of paper) use a web browser. > > If you've ever used pydoc you'll know it reads docstrings and for some > modules they are great and for others they are sorely lacking. > > If pydoc could show all this documentation as well I'd be very happy! > > Maybe your quick dispatch feature could be added to pydoc too? It is my intention to work together with Ron Adam to make the pydoc <-> documentation integration as seamless as possible. >> Concluding, a small caveat: The conversion/build tools are, of course, not >> complete yet. There are a number of XXX comments in the text, most of them >> indicate that the converter could not handle a situation -- that would have >> to be corrected once after conversion is done. > > It is missing conversion of ``comment'' at the moment as I'm sure you > know... Sorry, what did you mean? > You will need to make your conversion perfect before you convince the > people who wrote most of that documentation I suspect! It already is as good as it gets, barring a few bugs here and there. Which I'd like to hear about, when you find them! cheers, Georg From g.brandl at gmx.net Mon May 21 21:09:24 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2007 21:09:24 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> Message-ID: Robert Kern schrieb: > Neal Becker wrote: > >> There is an effort as part of numpy to come up with a new system using >> docstrings. It seems to me it would be unfortunate if these two efforts >> were not coordinated. > > I don't think so. 
The issue with numpy is getting our act together and making > parseable docstrings for auto-generated API documentation using existing tools > or slightly modified versions thereof. No one is actually contemplating building > a new tool. AFAICT, Georg's (excellent) work doesn't address that use. Indeed, I don't intend to do anything about docstrings. IMO, docs automatically generated from docstrings can work, but only if there's a single consistent style applied, and the whole thing is written in a markup language, of course, not text only. This is not the case for the Python standard library, so converting it is not an option; in any case, putting all information that is available in the docs into the docstrings would make many modules much less readable. > I don't > think there is anything to coordinate, here. Provided that Georg's system > doesn't place too many restrictions on the reST it handles, we could use the > available reST math options if we wanted to use Georg's system. Of course, for numpy math is much more of importance than for the core. I'm sure the docutils developers will be supportive in case someone volunteers to create/improve reST math capabilities. cheers, Georg From barry at python.org Mon May 21 21:29:41 2007 From: barry at python.org (Barry Warsaw) Date: Mon, 21 May 2007 15:29:41 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46511FFE.4020803@v.loewis.de> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 21, 2007, at 12:28 AM, Martin v. L?wis wrote: > I disagree. The documentation infrastructure of Python should only > consider the needs of Python itself. If other people can use that > infrastructure for other purposes, fine - if they find that it does > not meet their needs, they have to look elsewhere. I would take a fairly broad view of the term "for Python" though. Specifically, third party modules and applications written for and in Python should be explicitly supported. I think we'd like to see one (preferred) documentation tool chain for core Python, plus the vast number of third party modules and apps. I don't see any reason why Georg's reST-based system couldn't provide that (80/20 rule perhaps?). I'd point to for example the howto templates, which can be easily used for third party applications. Mailman uses howtos for example. BTW, Georg excellent work. I'm a big latex fan and long-time user, but I do think that using reST will open the door to a lot more contributions. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRlHzJXEjvBPtnXfVAQLNXQP/QUuZ2gc/DpoidI9jYt7mr66Z+JYHsslv fe4CvFSjd9OxwA3eOynd9dSOSkO6QHQPDVomW8axEkJJSHWNosnr9gmDZcC75nAD JTt4rGImqwkcVIAzE91pZ3fmce/ltp9p1Ru3B1dDRmXbNgHxZ9njaz60MPFszC/H 19jSBo5sqSU= =hlhi -----END PGP SIGNATURE----- From g.brandl at gmx.net Mon May 21 21:36:44 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2007 21:36:44 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18001.31744.271598.871237@montanaro.dyndns.org> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> <18001.31744.271598.871237@montanaro.dyndns.org> Message-ID: skip at pobox.com schrieb: > Brett> Martin beat me to my comment. =) Python's needs should come > Brett> first, period. If Georg wants to add math support, fine. 
But > Brett> honestly I would rather he spend his time on Python-specific > Brett> stuff then get bogged down to support possible third parties. > > I think the people who have responded to my comment read too much into it. > Nowhere do I think I asked Georg to write an equation typesetter to include > in the Python documentation toolchain. I asked that math capability be > considered. And that is reasonable, of course. > I have no idea what tools he used to build his new > documentation set. I only briefly glanced at a couple of the output pages. > I think what he has done is marvelous. However, I don't think the door > should be shut on equation display. Is there a route to it based on the > tools Georg is using? In the end, it all depends on what kind of support basic reST can deliver. IMO, you still get the best math output from LaTeX, but I don't really know many other things. That is also something I want to convey: I'm very fond of LaTeX, and use it regularly for all my University work. For the Python docs, however, I can see many advantages of the docutils approach. > If not, then I think some accommodation should be > made. I'm being vague here on purpose because I'm unfamiliar with the > available tools. The one thing I do know is that LaTeX provides that today > and by removing it from the toolchain you have removed a significant piece > of functionality. That's the point I see differently: for the Python core docs, it's not significant, and my efforts are primarily limited to that area. cheers, Georg From g.brandl at gmx.net Mon May 21 21:55:38 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2007 21:55:38 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <200705210923.47610.fdrake@acm.org> References: <46511FFE.4020803@v.loewis.de> <18001.31340.881646.672542@montanaro.dyndns.org> <200705210923.47610.fdrake@acm.org> Message-ID: Fred L. Drake, Jr. schrieb: > On Monday 21 May 2007, skip at pobox.com wrote: > > Then I submit that you are probably removing some significant piece of > > functionality from the provided documentation toolchain which some people > > probably rely on. After all, that's what LaTeX excels at. They will be > > able to continue to use the old tools, but where will they get them if > > they are no longer part of Python? > > I'll be happy to pull the existing tools out into a separate distribution if > we move to something else for Python. There are too many users of the > existing tools to abandon. That is a good idea! The converter is not likely to work with other projects out of the box (it's been finetuned for the Doc/ sources), and it's not clear whether they would want to switch in any case. Many of the features that the new system would be able to provide aren't needed for them anyway, and I, as a maintainer, would be very reluctant to put extra work in that too... cheers, Georg From jon+python-dev at unequivocal.co.uk Mon May 21 22:44:10 2007 From: jon+python-dev at unequivocal.co.uk (Jon Ribbens) Date: Mon, 21 May 2007 21:44:10 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <20070521204410.GN11171@snowy.squish.net> On Sat, May 19, 2007 at 07:14:09PM +0200, Georg Brandl wrote: > For the impatient: the result can be seen at . I think that looks great. One comment I have, I don't know if it's relevant - it perhaps depends on whether the "Global Module Index" is auto-generated or not. This is the page I visit the most out of all the Python documentation, and it's far too large and unwieldy. 
IMHO it would be much better if only the top-level modules were shown here - having the single package 'distutils', for example, take up nearly 50 entries in the list is almost certainly hindering a lot more people than it helps. It would perhaps be better if such packages show up as one entry, which shows the sub-modules when clicked on. From pje at telecommunity.com Mon May 21 22:48:39 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 21 May 2007 16:48:39 -0400 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <4651EB71.5050004@egenix.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> <20070521140332.B8B033A40F3@sparrow.telecommunity.com> <4651C8AE.5040407@egenix.com> <20070521180001.9C7D53A409D@sparrow.telecommunity.com> <4651EB71.5050004@egenix.com> Message-ID: <20070521204648.B09C23A409D@sparrow.telecommunity.com> At 08:56 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >On 2007-05-21 20:01, Phillip J. Eby wrote: > > At 06:28 PM 5/21/2007 +0200, M.-A. Lemburg wrote: > >> However, since this is not egg-specific it should probably be > >> moved to pkgutil and get a separate PEP with detailed documentation > >> (the link you provided doesn't really explain the concepts, reading > >> the code helped a bit). > > > > That doesn't really make sense in the context of the current PEP, > > though, which isn't to provide a general-purpose namespace package API; > > it's specifically about adding an existing piece of code to the stdlib, > > with its API intact. > >You seem to indicate that you're not up to discussing the concepts >implemented by the module and *integrating* them with the Python >stdlib. No, I'm saying something else. I'm saying it: 1. has nothing to do with the PEP, 2. isn't something I'm volunteering to do, and 3. would only make sense to do as part of Python 3 stdlib reorganization, if it were done at all. Now, the code is certainly under an open license, and the concepts are entirely free for anyone to use. If somebody wishes to do what you're describing, they're certainly welcome to take on that thankless task. But I personally don't see the point, since by definition that new API would have *no current users*. And the purpose of the PEP is to serve the (rather large) audience that would like to take advantage of existing software that uses the API. Thus, any proposal to alter that API faces a high entry barrier to show how the proposed changes would provide a signficant practical benefit to users. That's not even remotely similar to "take it or leave it". It might *seem* that way, of course, simply because in any proposal to change the API, there's an implicit question of why nobody proposed the change via the Distutils-SIG, sometime during the last 2+ years of discussions around that API. I remain open-minded and curious as to the possibility that someone *could* propose a meaningful change, but am also rationally skeptical that someone actually *will* come up with something that would outweigh the user benefit of keeping the already published, already discussed, already field-tested, already in-use API. For that matter, I remain open-minded and curious as to the possibility of whether someone could propose a reasonable justification for *not* including the module in the stdlib. 
After all, last year Fredrik Lundh surprised me with a convincing rationale for *not* including setuptools in the stdlib, which is why I backed off on doing so in 2.5, and am now proffering a much-reduced-in-scope proposal for 2.6. So, I'm perfectly willing and able to change my mind, given convincing reasons to do so. So far, though, your change suggestions haven't even explained why *you* want them, let alone why anybody else should agree. We can hardly discuss what you haven't yet said. From g.brandl at gmx.net Mon May 21 23:02:59 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2007 23:02:59 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070521204410.GN11171@snowy.squish.net> References: <20070521204410.GN11171@snowy.squish.net> Message-ID: Jon Ribbens schrieb: > On Sat, May 19, 2007 at 07:14:09PM +0200, Georg Brandl wrote: >> For the impatient: the result can be seen at . > > I think that looks great. > > One comment I have, I don't know if it's relevant - it perhaps depends > on whether the "Global Module Index" is auto-generated or not. This is > the page I visit the most out of all the Python documentation, and > it's far too large and unwieldy. IMHO it would be much better if only > the top-level modules were shown here - having the single package > 'distutils', for example, take up nearly 50 entries in the list is > almost certainly hindering a lot more people than it helps. It would > perhaps be better if such packages show up as one entry, which shows > the sub-modules when clicked on. Sure, that is certainly possible. Georg From amk at amk.ca Mon May 21 23:04:10 2007 From: amk at amk.ca (A.M. Kuchling) Date: Mon, 21 May 2007 17:04:10 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <200705210923.47610.fdrake@acm.org> References: <46511FFE.4020803@v.loewis.de> <18001.31340.881646.672542@montanaro.dyndns.org> <200705210923.47610.fdrake@acm.org> Message-ID: <20070521210410.GA14297@localhost.localdomain> On Mon, May 21, 2007 at 09:23:47AM -0400, Fred L. Drake, Jr. wrote: > I'll be happy to pull the existing tools out into a separate distribution if > we move to something else for Python. There are too many users of the > existing tools to abandon. That seems like a straightforward task. The big stumbling block in switching away from LaTeX has always been the effort of making a good conversion; if Georg's work does 80% of the job, we should definitely take advantage of the opportunity and try to switch. Advantages of reST: * The required tool chain shrinks (at least if you're not making printed output, which will probably still go through LaTeX). * Tool chain is now more easily scriptable, and it'll be easier to make use of the docs from Python code. * We can produce XML output through the rst2xml script. Disadvantages: * reST markup isn't much simpler than LaTeX. --amk From martin at v.loewis.de Mon May 21 23:29:51 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 23:29:51 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> Message-ID: <46520F4F.5010502@v.loewis.de> > I'm using the full-blown VS.NET 2003, as given to a number of python-dev > people by Microsoft a number of years ago. This appears to come with the > SDK and a 64bit compiler. Not sure what it makes it appear to you that way - it doesn't. 
VS.NET 2003 is x86 only > I'm guessing vsextcomp doesn't use the Visual > Studio 'ReleaseAMD64' configuration - would it be OK for me to check in > changes to the PCBuild projects for this configuration? Please don't. It exclusively relies on vsextcomp, and is only useful if you have that infrastructure installed. See for yourself: it uses the /USE_CL:MS_OPTERON command line switch, which isn't a Microsoft invention (but instead invented by Peter Tr?ger). > So is there something we can do to make distutils play better with binaries > built from PCBuild8, even though it is considered temporary? In what way better? It already supports them just fine, AFAICT. I guess you are referring to the support for building extensions on top of a source tree "installation". I doubt that is used that often (but I understand you are using it). > It seems the > best thing might be to modify the PCBuild8 build process so the output > binaries are in the ../PCBuild' directory - this way distutils and others > continue to work fine. Does that sound reasonable? I think Kristjan will have to say a word here: I think he just likes it the way it is right now. That would rather suggest that build_ext needs to be changed. > I've no objection to that - but I'd like to help keep the pain to a minimum > for people who find themselves trying to build 64bit extensions in the > meantime. I recommend that those people install the official binaries. Why do you need to build the binaries from source, if all you want is to build extensions? >> In C or in C++? In C++, wchar_t is a builtin type, just like >> short, int, >> long. So there is no further formal definition. > > This was in C++, but the problem was really WCHAR, as used by much of the > win32 API. But in C, WCHAR should not be a problem (and I would like to see explicit source code and explicit compiler error message to be proven wrong). >>> * Finally, as something from left-field which may well take >>> 12 months or >>> more to pull off - but would there be any interest to >>> moving the Windows >>> build process to a cygwin environment based on the existing autoconf >>> scripts? >> What compiler would you use there? I very much like using the VS >> debugger when developing on Windows, so that capability should not >> go away. > > You would use whatever compiler the autoconf toolset found. Recent versions > know enough about MSVC for simple projects. Many people would need to take > care that their environment pointed at the correct compiler - especially the > person building releases. > > But assuming MSVC was found and had the appropriate switches passed, there > would be no impact on the ability to use Visual Studio as a debugging > environment. I doubt that could work in practice. You will have to rewrite everything to make it pass the correct command line switches. 
Regards, Martin From martin at v.loewis.de Mon May 21 23:43:39 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 23:43:39 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> Message-ID: <4652128B.3000301@v.loewis.de> >> I don't find the need to have separate object directories convincing: >> For building the Win32/Win64 binaries, I have separate checkouts >> *anyway*, since all the add-on libraries would have to support >> multi-arch builds, but I think they don't. > > No they don't, but that doesn't mean that you need different checkouts > for python, only the others. Anyway, this is indeed something I'd like > to see addressed. I don't think we should ditch cross-compilation. Nobody proposed to ditch cross-compilation. I very much like cross-compilation, I do all Itanium and AMD64 in cross-compilation. I just want the *file structure* of the output to be the very same for all architectures, meaning that they can't coexist in a single source directory. > It > should simplify a lot of stuff, including buildbot setup and so on (if > we get the buildbot infrastructure to support it). It is also very > cumbersome, if you are working on a project, to have the binaries all > end up in the same place. Doing interactive work on python, I frequently > compile both the 32 bit and 64 bit versions for testing and it would > be downright silly to have to rebuild everything every time. No, you use two checkouts, of course. > I think this is a bit unrealistic. Here we are in the middle of 2007, > VS2005 has just got SP1, and VS2003 is still the "official" compiler. > PCBuild8 is ready, it just needs a little bit of extra love and > buildbots to make us able to release PGO versions of x86 and x64. No matter what the next compiler is (VS 2005 or VS 2007/2008), it's still *a lot* of work until the VS 2005 build can be used for releasing Python. For example, there is no support for the SxS installation of msvcr8.dll, using the MSI merge module. > Given the delay for getting even this far, waiting for Orcas and then > someone to create PCBuild9, and then getting it up and running and so on > will mean waiting another two years. No. I would expect that either the PCbuild or PCbuild8 project files can be used with little changes to build using VS9. I just tried, and it works reasonably well. > I am not familiar with the msi packaging process at all. But here is > something we should start to consider: VISTA support. This could mean > some of: Not sure whether anything really is needed. Python works fine on Vista. > 1) supplying python.dll as a Side By Side assembly What would that improve? > 2) Changing python install locations Why? > 3) Supporting shadow libraries, where .pyc files end up in a different > hierarchy from the .py files. (useful for many things beside VISTA) Yes, and people had written PEPs for it which failed due to lack of follow up. > 4) Signing the python dlls and executables Hmm. > 5) Providing user level manifests. > > Vista adoption is going very fast. We see 10% of our users have moved > to vista and rising. Sure, and have they reported problems with Python on Vista (problems specific to Vista?) > Yes. 
Btw, in previous visual studio versions, wchar_t was not treated > as a builtin type by default, but rather as synonymous with unsighed short. > Now the default is that it is, and this causes some semantic differences > and incompatibilities of the type seen. C or C++? According to the standards, it ought to be a builtin, primitive type in C++, and a typedef in C. Regards, Martin From mal at egenix.com Mon May 21 23:44:11 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 21 May 2007 23:44:11 +0200 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <20070521204648.B09C23A409D@sparrow.telecommunity.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> <20070521140332.B8B033A40F3@sparrow.telecommunity.com> <4651C8AE.5040407@egenix.com> <20070521180001.9C7D53A409D@sparrow.telecommunity.com> <4651EB71.5050004@egenix.com> <20070521204648.B09C23A409D@sparrow.telecommunity.com> Message-ID: <465212AB.9040802@egenix.com> On 2007-05-21 22:48, Phillip J. Eby wrote: > At 08:56 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >> On 2007-05-21 20:01, Phillip J. Eby wrote: >> > At 06:28 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >> >> However, since this is not egg-specific it should probably be >> >> moved to pkgutil and get a separate PEP with detailed documentation >> >> (the link you provided doesn't really explain the concepts, reading >> >> the code helped a bit). >> > >> > That doesn't really make sense in the context of the current PEP, >> > though, which isn't to provide a general-purpose namespace package API; >> > it's specifically about adding an existing piece of code to the stdlib, >> > with its API intact. >> >> You seem to indicate that you're not up to discussing the concepts >> implemented by the module and *integrating* them with the Python >> stdlib. > > No, I'm saying something else. I'm saying it: > > 1. has nothing to do with the PEP, > 2. isn't something I'm volunteering to do, and > 3. would only make sense to do as part of Python 3 stdlib > reorganization, if it were done at all. I don't understand that last part: how can adding a new module or set of modules require waiting for reorganization of the stdlib ? All I'm suggesting is to reorganize the code in pkg_resources.py a bit and move the relevant bits into pkgutil.py and into a new eggutil.py. > Now, the code is certainly under an open license, and the concepts are > entirely free for anyone to use. If somebody wishes to do what you're > describing, they're certainly welcome to take on that thankless task. > > But I personally don't see the point, since by definition that new API > would have *no current users*. And the purpose of the PEP is to serve > the (rather large) audience that would like to take advantage of > existing software that uses the API. > > Thus, any proposal to alter that API faces a high entry barrier to show > how the proposed changes would provide a signficant practical benefit to > users. Why is that ? You can easily provide a pkg_resource.py module with your old API that interfaces to the new reorganized code in the stdlib. > That's not even remotely similar to "take it or leave it". It might > *seem* that way, of course, simply because in any proposal to change the > API, there's an implicit question of why nobody proposed the change via > the Distutils-SIG, sometime during the last 2+ years of discussions > around that API. This doesn't have anything to do with distutils. 
It's entirely about the egg distribution format. > I remain open-minded and curious as to the possibility that someone > *could* propose a meaningful change, but am also rationally skeptical > that someone actually *will* come up with something that would outweigh > the user benefit of keeping the already published, already discussed, > already field-tested, already in-use API. > > For that matter, I remain open-minded and curious as to the possibility > of whether someone could propose a reasonable justification for *not* > including the module in the stdlib. After all, last year Fredrik Lundh > surprised me with a convincing rationale for *not* including setuptools > in the stdlib, which is why I backed off on doing so in 2.5, and am now > proffering a much-reduced-in-scope proposal for 2.6. > > So, I'm perfectly willing and able to change my mind, given convincing > reasons to do so. So far, though, your change suggestions haven't even > explained why *you* want them, let alone why anybody else should agree. > We can hardly discuss what you haven't yet said. I'm not sure what you want to hear from me. You asked for comments, I wrote back and gave you comments. I also made it clear why I think that breaking up the addition into different PEPs makes a lot of sense and why separating the code into different modules for the same reason makes a lot of sense as well. I also tried to stir up some discussion to make life easier for setuptools by suggesting a user-package directory on sys.path and adding support for namespace packages as general Python feature instead of hiding it away in pkg_resources.py. You should see this as chance to introduce new concepts to Python. Instead you seem to feel offended every time someone suggests a change in your design. That's also the reason why I stopped discussing things with you on the distutils list. There was simply no way of getting through to you. Perhaps we should just meet up for a beer in London sometime and sort things out ;-) Cheers, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 21 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From martin at v.loewis.de Mon May 21 23:57:45 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 23:57:45 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18001.31744.271598.871237@montanaro.dyndns.org> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> <18001.31744.271598.871237@montanaro.dyndns.org> Message-ID: <465215D9.9090700@v.loewis.de> > I think the people who have responded to my comment read too much into it. > Nowhere do I think I asked Georg to write an equation typesetter to include > in the Python documentation toolchain. I asked that math capability be > considered. I have no idea what tools he used to build his new > documentation set. I only briefly glanced at a couple of the output pages. > I think what he has done is marvelous. 
However, I don't think the door > should be shut on equation display. Is there a route to it based on the > tools Georg is using? I don't think anything in the world can replace TeX for math typesetting. So if math typesetting was a requirement (which it should not be, for that very reason), then we could not consider anything but TeX. Regards, Martin From martin at v.loewis.de Mon May 21 23:59:15 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 21 May 2007 23:59:15 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18001.31340.881646.672542@montanaro.dyndns.org> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> <18001.31340.881646.672542@montanaro.dyndns.org> Message-ID: <46521633.3060609@v.loewis.de> > Martin> I disagree. The documentation infrastructure of Python should > Martin> only consider the needs of Python itself. If other people can > Martin> use that infrastructure for other purposes, fine - if they find > Martin> that it does not meet their needs, they have to look elsewhere. > > Then I submit that you are probably removing some significant piece of > functionality from the provided documentation toolchain which some people > probably rely on. After all, that's what LaTeX excels at. It may be significant for other people, but it is not significant for the Python documentation. > They will be > able to continue to use the old tools, but where will they get them if they > are no longer part of Python? We are not going to remove old releases from the net. Regards, Martin From martin at v.loewis.de Tue May 22 00:04:01 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 00:04:01 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <07May21.104847pdt."57996"@synergy1.parc.xerox.com> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> <07May21.104847pdt."57996"@synergy1.parc.xerox.com> Message-ID: <46521751.4040106@v.loewis.de> Bill Janssen schrieb: >> We are developing a programming language here, not a typesetting >> system. > > Good point, Martin. Are you implying that the documentation should be > kept in LaTeX, a widely-accepted widely-disseminated stable > documentation language, which someone else maintains, rather than > ReST, which elements of the the Python community maintain? No - I have no particular preference wrt. to the markup language. I can personally live with all of them, and I like none of them. I hear that contributors complain about having to use TeX, and I hear other people say that they were more happy if they could use ReST instead of TeX. Making contributors happy is really what the objective should be (if the quality of the typesetting output is adequate - and most people use the HTML output these days, where latex2html may not have adequate quality). That docutils happens to be written in Python should make little difference - it's *not* part of the Python language project, and is just a tool for us, very much like latex and latex2html. 
Regards, Martin From skip at pobox.com Mon May 21 23:40:20 2007 From: skip at pobox.com (skip at pobox.com) Date: Mon, 21 May 2007 16:40:20 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <20070521204410.GN11171@snowy.squish.net> Message-ID: <18002.4548.772439.879692@montanaro.dyndns.org> >> One comment I have, I don't know if it's relevant - it perhaps >> depends on whether the "Global Module Index" is auto-generated or >> not. This is the page I visit the most out of all the Python >> documentation, and it's far too large and unwieldy. IMHO it would be >> much better if only the top-level modules were shown here - having >> the single package 'distutils', for example, take up nearly 50 >> entries in the list is almost certainly hindering a lot more people >> than it helps. It would perhaps be better if such packages show up as >> one entry, which shows the sub-modules when clicked on. Georg> Sure, that is certainly possible. Take a look at . It records request counts for the various pages and presents the most frequently requested pages in a section at the top of the page. I can make the script available if anyone wants it (it uses Myghty - Mason in Python.) Skip From pje at telecommunity.com Tue May 22 01:01:06 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 21 May 2007 19:01:06 -0400 Subject: [Python-Dev] PEP 0365: Adding the pkg_resources module In-Reply-To: <465212AB.9040802@egenix.com> References: <5.1.1.6.0.20070430202700.04cb8128@sparrow.telecommunity.com> <4650C6A9.1020809@acm.org> <465185EF.90608@egenix.com> <20070521140332.B8B033A40F3@sparrow.telecommunity.com> <4651C8AE.5040407@egenix.com> <20070521180001.9C7D53A409D@sparrow.telecommunity.com> <4651EB71.5050004@egenix.com> <20070521204648.B09C23A409D@sparrow.telecommunity.com> <465212AB.9040802@egenix.com> Message-ID: <20070521230039.390C43A40A8@sparrow.telecommunity.com> At 11:44 PM 5/21/2007 +0200, M.-A. Lemburg wrote: >On 2007-05-21 22:48, Phillip J. Eby wrote: > > At 08:56 PM 5/21/2007 +0200, M.-A. Lemburg wrote: > >> On 2007-05-21 20:01, Phillip J. Eby wrote: > >> > At 06:28 PM 5/21/2007 +0200, M.-A. Lemburg wrote: > >> >> However, since this is not egg-specific it should probably be > >> >> moved to pkgutil and get a separate PEP with detailed documentation > >> >> (the link you provided doesn't really explain the concepts, reading > >> >> the code helped a bit). > >> > > >> > That doesn't really make sense in the context of the current PEP, > >> > though, which isn't to provide a general-purpose namespace package API; > >> > it's specifically about adding an existing piece of code to the stdlib, > >> > with its API intact. > >> > >> You seem to indicate that you're not up to discussing the concepts > >> implemented by the module and *integrating* them with the Python > >> stdlib. > > > > No, I'm saying something else. I'm saying it: > > > > 1. has nothing to do with the PEP, > > 2. isn't something I'm volunteering to do, and > > 3. would only make sense to do as part of Python 3 stdlib > > reorganization, if it were done at all. > >I don't understand that last part: how can adding a new module >or set of modules require waiting for reorganization of the >stdlib ? I'm saying that an API reorganization that split up the pkg_resources API might be appropriate at that time, in much the same way that say, merging "dl" and "ctypes" (or dropping "dl") might take place during such a reorganization. 
(And would thus go along with the 2to3 conversion or whatever other mechanism will be used for porting Python 2 code to Python 3 code.) >All I'm suggesting is to reorganize the code in pkg_resources.py >a bit and move the relevant bits into pkgutil.py and into a new >eggutil.py. ... >You can easily provide a pkg_resource.py module with your old API >that interfaces to the new reorganized code in the stdlib. ...and are you proposing that this other module *also* be included in the stdlib? If so, that was not clear from your previous messages. > > That's not even remotely similar to "take it or leave it". It might > > *seem* that way, of course, simply because in any proposal to change the > > API, there's an implicit question of why nobody proposed the change via > > the Distutils-SIG, sometime during the last 2+ years of discussions > > around that API. > >This doesn't have anything to do with distutils. It's entirely >about the egg distribution format. Huh? I'm saying that the pkg_resources API has been being discussed on the Distutils-SIG for a good 2 years now. If there was something that users *wanted* to change in the API, surely it would've been discussed by now? >I'm not sure what you want to hear from me. A good place to start would be the rationale for your proposed API changes. >You asked for comments, I wrote back and gave you comments. Yep, and I explained my take on them. You then brought up this whole "take it or leave it" thing, in response to which I attempted to provide further clarification of my reasoning -- none of which involves any "take it or leave it"-ness, from my perspective. > I also >made it clear why I think that breaking up the addition into different >PEPs makes a lot of sense and why separating the code into different >modules for the same reason makes a lot of sense as well. Actually, no, you didn't. You simply asserted that certain things would be "better" (without any mention of *how* they would be better), and that other things "should probably" be done (without any explanation of *why*). So, I simply responded with information of why I did *not* see these changes to be useful in any self-evident way, and why I'd want to see some justification that would weigh against the PEP's raison d'etre -- supporting the existing user base and making it easier for more people to join that user base. >I also tried to stir up some discussion to make life easier >for setuptools by suggesting a user-package directory on >sys.path and adding support for namespace packages as >general Python feature instead of hiding it away in >pkg_resources.py. The first is a great discussion topic, but unrelated to the PEP. I'll be happy to participate in that discussion, if I have any input to offer. The second, I don't see as "hiding away"; I would actually suggest that pkgutil.extend_path simply be deprecated in favor of the pkg_resources API for namespace packages. I'm not aware of there being a particularly large user base for the pkgutil.extend_path, so I don't see a need to change an API that's used a lot, to match one that's not used as much. AFAIK, extend_path was originally created for Zope Corp., and also AFAIK, they are now using pkg_resources instead. 
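For anyone who hasn't used the two APIs, a namespace package's __init__.py looks roughly like this under each of them. This is only a minimal illustrative sketch for a hypothetical package split across several sys.path entries, not code taken from either project:

# Older stdlib idiom: extend __path__ with every sys.path portion
# of this package that pkgutil can find.
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)

# pkg_resources idiom for the same job (a real __init__.py would
# contain only one of the two forms, not both):
import pkg_resources
pkg_resources.declare_namespace(__name__)

Both forms stitch the portions of the package found on different sys.path entries into one logical package; a given __init__.py would use only one of them.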
See also this previous distutils-sig discussion (as I said, the API *does* get discussed there): http://mail.python.org/pipermail/distutils-sig/2006-August/006608.html You'll notice that in my response, I also left the door open to supporting .pkg files (an extend_path feature not supported by pkg_resources), if somebody could tell me what they're used for. Nobody has responded to that, so far. >You should see this as chance to introduce new concepts to Python. >Instead you seem to feel offended every time someone suggests a >change in your design. I'm not offended. I've simply commented on your comments, and suggested that some of them are off-topic in relation to the PEP, or shown why they would be counter to the expressed purposes of the PEP. As for introducing new concepts to Python, IIRC, Guido and Jim Fulton co-invented namespace packages and pkgutil.extend_path() quite a few years ago, so I'm not really the introducer there, nor is it a particularly new concept. So I'm not sure what new concepts you're talking about. >That's also the reason why I stopped >discussing things with you on the distutils list. There was simply >no way of getting through to you. We simply have different goals. In the case of the distutils-sig, my perception is that your proposals were aimed at preserving the investment of a small number of expert distutils users, at the expense of the broader public who knew next to nothing about the distutils. Those proposals "got through to me" just fine; I just didn't (and don't) agree with them as goals for setuptools, in any place where they conflicted with setuptools' primary goals (which included adoption by *new*, distutils-ignorant and/or distutils-disliking users). From fdrake at acm.org Tue May 22 01:32:12 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 21 May 2007 19:32:12 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070521210410.GA14297@localhost.localdomain> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> Message-ID: <200705211932.13295.fdrake@acm.org> On Monday 21 May 2007, A.M. Kuchling wrote: > Disadvantages: > > * reST markup isn't much simpler than LaTeX. * reST doesn't support nested markup, which is used in the current documentation. -Fred -- Fred L. Drake, Jr. From fdrake at acm.org Tue May 22 01:35:02 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 21 May 2007 19:35:02 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18002.4548.772439.879692@montanaro.dyndns.org> References: <18002.4548.772439.879692@montanaro.dyndns.org> Message-ID: <200705211935.02582.fdrake@acm.org> On Monday 21 May 2007, skip at pobox.com wrote: > Take a look at . It records request > counts for the various pages and presents the most frequently requested > pages in a section at the top of the page. I can make the script > available if anyone wants it (it uses Myghty - Mason in Python.) This is very cool. ;-) -Fred -- Fred L. Drake, Jr. From ndbecker2 at gmail.com Tue May 22 01:41:48 2007 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 21 May 2007 19:41:48 -0400 Subject: [Python-Dev] The docs, reloaded References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> Message-ID: skip at pobox.com wrote: > > >>> What would be my choices for add math to the documentation? > > >> Where in the current documentation is there any math notation /at > >> all/? 
> > Georg> There is exactly one instance of LaTeX math in the whole docs, > Georg> it's in the description of audioop, AFAIR, an contains a sum > over Georg> square roots... > > Georg> So, that's not really a concern of mine ;) > > You must realize that people will use the core tools to create > documentation > for third party packages which aren't in the core. If you replace LaTeX > with something else I think you need to keep math in mind whether it's > used in the core documentation or not. > Perhaps my comment was misunderstood. I have no objection to a new system, and it does not have to be based on latex. I just hope there will be some escape mechanism that allows math. It happens that for math markup, there isn't really anything better (or more familiar) than latex. From alan.mcintyre at gmail.com Tue May 22 02:17:02 2007 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Mon, 21 May 2007 20:17:02 -0400 Subject: [Python-Dev] Module cleanup improvement Message-ID: <1d36917a0705211717q450e43cemd23d5bca1c3a2f73@mail.gmail.com> Hi all, Bug #1717900 has an example of a script that causes a (cryptic, IMO) error during module cleanup since instances of a class just happen to get destroyed after their class is destroyed, and the __del__ method manipulates a class attribute. As I understand it this is expected under the behavior outlined here: http://www.python.org/doc/essays/cleanup/ Adding a step C1.5 which removes only objects that return true for PyInstance_Check seems to prevent the problem exhibited by this bug (I tried it out locally on the trunk and it doesn't cause any problems with the regression test suite). Is there any reason that adding such a step to module cleanup would be a bad idea? Alan From aahz at pythoncraft.com Tue May 22 03:05:59 2007 From: aahz at pythoncraft.com (Aahz) Date: Mon, 21 May 2007 18:05:59 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070521204410.GN11171@snowy.squish.net> References: <20070521204410.GN11171@snowy.squish.net> Message-ID: <20070522010559.GB21664@panix.com> On Mon, May 21, 2007, Jon Ribbens wrote: > > One comment I have, I don't know if it's relevant - it perhaps depends > on whether the "Global Module Index" is auto-generated or not. This is > the page I visit the most out of all the Python documentation, and > it's far too large and unwieldy. IMHO it would be much better if only > the top-level modules were shown here - having the single package > 'distutils', for example, take up nearly 50 entries in the list is > almost certainly hindering a lot more people than it helps. It would > perhaps be better if such packages show up as one entry, which shows > the sub-modules when clicked on. That's a good point in general, but I think we want to manually label some submodules as having entries in the global module index (notably os.path). -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From skip at pobox.com Tue May 22 03:20:08 2007 From: skip at pobox.com (skip at pobox.com) Date: Mon, 21 May 2007 20:20:08 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> Message-ID: <18002.17736.138574.686866@montanaro.dyndns.org> Neal> It happens that for math markup, there isn't really anything Neal> better (or more familiar) than latex. True enough. 
There is MathML and its offspring, ASCIIMathML, which are probably worth looking at. http://www.w3.org/Math/ http://www1.chapman.edu/~jipsen/asciimath.html I have no idea if either one has backend support for presentation outside the web, but if people are interested in this (probably within the docutils scope) they probably should be considered. ASCIIMathML in particular is probably worth using now within even if you can't convert it to any other format. It's about as readable as LaTeX source. Skip From steven.bethard at gmail.com Tue May 22 03:21:02 2007 From: steven.bethard at gmail.com (Steven Bethard) Date: Mon, 21 May 2007 19:21:02 -0600 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18002.4548.772439.879692@montanaro.dyndns.org> References: <20070521204410.GN11171@snowy.squish.net> <18002.4548.772439.879692@montanaro.dyndns.org> Message-ID: On 5/21/07, skip at pobox.com wrote: > > >> One comment I have, I don't know if it's relevant - it perhaps > >> depends on whether the "Global Module Index" is auto-generated or > >> not. This is the page I visit the most out of all the Python > >> documentation, and it's far too large and unwieldy. IMHO it would be > >> much better if only the top-level modules were shown here - having > >> the single package 'distutils', for example, take up nearly 50 > >> entries in the list is almost certainly hindering a lot more people > >> than it helps. It would perhaps be better if such packages show up as > >> one entry, which shows the sub-modules when clicked on. > > Georg> Sure, that is certainly possible. > > Take a look at . It records request > counts for the various pages and presents the most frequently requested > pages in a section at the top of the page. I can make the script available > if anyone wants it (it uses Myghty - Mason in Python.) +1 for integrating this with the official docs. I loved this the last time you posted it too. STeVe -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy From bob at redivi.com Tue May 22 03:43:07 2007 From: bob at redivi.com (Bob Ippolito) Date: Mon, 21 May 2007 18:43:07 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <465215D9.9090700@v.loewis.de> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <46511FFE.4020803@v.loewis.de> <18001.31744.271598.871237@montanaro.dyndns.org> <465215D9.9090700@v.loewis.de> Message-ID: <6a36e7290705211843v63204c0as3d93d97296e2e9ec@mail.gmail.com> On 5/21/07, "Martin v. L?wis" wrote: > > I think the people who have responded to my comment read too much into it. > > Nowhere do I think I asked Georg to write an equation typesetter to include > > in the Python documentation toolchain. I asked that math capability be > > considered. I have no idea what tools he used to build his new > > documentation set. I only briefly glanced at a couple of the output pages. > > I think what he has done is marvelous. However, I don't think the door > > should be shut on equation display. Is there a route to it based on the > > tools Georg is using? > > I don't think anything in the world can replace TeX for math > typesetting. So if math typesetting was a requirement (which it > should not be, for that very reason), then we could not consider > anything but TeX. You can use docutils to generate LaTeX output from reST, and you can put raw LaTeX into the output with ".. raw:: latex". I would imagine this is sufficient for now. 
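As a rough sketch of that route (assuming docutils is installed; the reST fragment and the formula below are made up purely for illustration):

# Run a reST fragment containing a ".. raw:: latex" block through
# docutils' LaTeX writer; the raw block is emitted verbatim in the
# generated LaTeX document.
from docutils.core import publish_string

source = r"""
The quadratic formula:

.. raw:: latex

   \[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
"""

latex_doc = publish_string(source=source, writer_name='latex2e')
print latex_doc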
-bob From mhammond at skippinet.com.au Tue May 22 04:58:45 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Tue, 22 May 2007 12:58:45 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46518244.9050207@egenix.com> Message-ID: <016c01c79c1d$1db9d0d0$1f0a0a0a@enfoldsystems.local> Hi Marc-Andre, > +1 from me. > > If think this is simply a bug introduced with the UCS4 patches in > Python 2.2. > > unicodeobject.h already has this code: > > #ifndef PY_UNICODE_TYPE > > /* Windows has a usable wchar_t type (unless we're using UCS-4) */ > # if defined(MS_WIN32) && Py_UNICODE_SIZE == 2 > # define HAVE_USABLE_WCHAR_T > # define PY_UNICODE_TYPE wchar_t > # endif > > # if defined(Py_UNICODE_WIDE) > # define PY_UNICODE_TYPE Py_UCS4 > # endif > > #endif > > But for some reason, pyconfig.h defines: > > /* Define as the integral type used for Unicode representation. */ > #define PY_UNICODE_TYPE unsigned short > > /* Define as the size of the unicode type. */ > #define Py_UNICODE_SIZE SIZEOF_SHORT > > /* Define if you have a useable wchar_t type defined in > wchar.h; useable > means wchar_t must be 16-bit unsigned type. (see > Include/unicodeobject.h). */ > #if Py_UNICODE_SIZE == 2 > #define HAVE_USABLE_WCHAR_T > #endif > > disabling the default settings in the unicodeobject.h. Yes, that does appear strange. The following patch works for me, keeps Python building and appears to solve my problem. Any objections? Mark Index: pyconfig.h =================================================================== --- pyconfig.h (revision 55487) +++ pyconfig.h (working copy) @@ -491,22 +491,13 @@ /* Define if you want to have a Unicode type. */ #define Py_USING_UNICODE -/* Define as the integral type used for Unicode representation. */ -#define PY_UNICODE_TYPE unsigned short - /* Define as the size of the unicode type. */ -#define Py_UNICODE_SIZE SIZEOF_SHORT +/* This is enough for unicodeobject.h to do the "right thing" on Windows. */ +#define Py_UNICODE_SIZE 2 -/* Define if you have a useable wchar_t type defined in wchar.h; useable - means wchar_t must be 16-bit unsigned type. (see - Include/unicodeobject.h). */ -#if Py_UNICODE_SIZE == 2 -#define HAVE_USABLE_WCHAR_T - /* Define to indicate that the Python Unicode representation can be passed as-is to Win32 Wide API. */ #define Py_WIN_WIDE_FILENAMES -#endif /* Use Python's own small-block memory-allocator. */ #define WITH_PYMALLOC 1 From mhammond at skippinet.com.au Tue May 22 05:25:35 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Tue, 22 May 2007 13:25:35 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46520F4F.5010502@v.loewis.de> Message-ID: <017001c79c20$dd347e80$1f0a0a0a@enfoldsystems.local> Hi Martin, > > I'm using the full-blown VS.NET 2003, as given to a number > of python-dev > > people by Microsoft a number of years ago. This appears to > come with the > > SDK and a 64bit compiler. > > Not sure what it makes it appear to you that way - it doesn't. VS.NET > 2003 is x86 only Yes - I was confused by finding an x64 configuration option, and this looking very similar to VC8. My apologies for the confusion. > > So is there something we can do to make distutils play > better with binaries > > built from PCBuild8, even though it is considered temporary? > > In what way better? It already supports them just fine, AFAICT. > > I guess you are referring to the support for building extensions on > top of a source tree "installation". 
I doubt that is used that often > (but I understand you are using it). Yes, that is correct. I agree it will be rarely used by Python user's, but believe it is a common scenario for people who maintain extensions or libraries, particularly those who want debugging builds. > > It seems the > > best thing might be to modify the PCBuild8 build process so the output > > binaries are in the ../PCBuild' directory - this way distutils and others > > continue to work fine. Does that sound reasonable? > I think Kristjan will have to say a word here: I think he just likes > it the way it is right now. That would rather suggest that build_ext > needs to be changed. So this means PCBuild8 does indeed get formalized into distutils? I'm happy to live with it if it lets me work, but it seems contrary to our emails yesterday. It would also mean the PCBuild8 environment will not work with external build processes that assume the standard layout, but that really isn't something I'm willing to run the pydev gauntlet for at the current time. > > I've no objection to that - but I'd like to help keep the > pain to a minimum > > for people who find themselves trying to build 64bit > extensions in the > > meantime. > > I recommend that those people install the official binaries. > Why do you > need to build the binaries from source, if all you want is to build > extensions? Let's assume that people have a valid reason to build from source - it really is quite common in the open source world. The meta-question then becomes "is it reasonable to expect people be able to build extensions from a tree built with VC8, in the same way they can with VS7?". I think you are suggesting we do want to support this, but I want to be clear. > > >> In C or in C++? In C++, wchar_t is a builtin type, just like > >> short, int, > >> long. So there is no further formal definition. > > > > This was in C++, but the problem was really WCHAR, as used > by much of the > > win32 API. > > But in C, WCHAR should not be a problem (and I would like to see > explicit source code and explicit compiler error message to be > proven wrong). Please see my other mail to Kristjan - at this stage I can only reproduce it in C++ on VC8. C++ on VC7 and C on VC8 both work fine. Please let me know if thde code snippet I pasted or the compiler error aren't suitable. I've stopped further investigation since there seems support for a change that makes the problem go away. > >>> * Finally, as something from left-field which may well take > >>> 12 months or > >>> more to pull off - but would there be any interest to > >>> moving the Windows > >>> build process to a cygwin environment based on the > existing autoconf > >>> scripts? > >> What compiler would you use there? I very much like using the VS > >> debugger when developing on Windows, so that capability should not > >> go away. > > > > You would use whatever compiler the autoconf toolset found. > Recent versions > > know enough about MSVC for simple projects. Many people > would need to take > > care that their environment pointed at the correct compiler > - especially the > > person building releases. > > > > But assuming MSVC was found and had the appropriate > switches passed, there > > would be no impact on the ability to use Visual Studio as a > debugging > > environment. > > I doubt that could work in practice. You will have to rewrite > everything > to make it pass the correct command line switches. 
'have to rewrite everything' sounds a little pessimistic, but that's fine with me - consider this idea dropped. Cheers, Mark From steve at holdenweb.com Tue May 22 07:01:29 2007 From: steve at holdenweb.com (Steve Holden) Date: Tue, 22 May 2007 01:01:29 -0400 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> Message-ID: <46527929.8040600@holdenweb.com> Kristj?n Valur J?nsson wrote: > First of all, I have put some work into pcbuild8 recently and it works > well. I am trying to drum up momentum for work on PCBuild8 > next europython. See http://wiki.python.org/moin/EuroPython2007Sprints > > >> -----Original Message----- >> From: python-dev-bounces+kristjan=ccpgames.com at python.org >> >> I don't find the need to have separate object directories convincing: >> For building the Win32/Win64 binaries, I have separate checkouts >> *anyway*, since all the add-on libraries would have to support >> multi-arch builds, but I think they don't. > > No they don't, but that doesn't mean that you need different checkouts > for python, only the others. Anyway, this is indeed something I'd like > to see addressed. I don't think we should ditch cross-compilation. It > should simplify a lot of stuff, including buildbot setup and so on (if > we get the buildbot infrastructure to support it). It is also very > cumbersome, if you are working on a project, to have the binaries all > end up in the same place. Doing interactive work on python, I frequently > compile both the 32 bit and 64 bit versions for testing and it would > be downright silly to have to rebuild everything every time. > >> I would personally like to see Python "skip" VS 2005 altogether, >> as it will be soon superceded by Orcas. Unfortunately, it's unclear >> how long Microsoft will need to release Orcas (and also, when Python >> 2.6 will be released), so I would like to defer that question by >> a few months. > I think this is a bit unrealistic. Here we are in the middle of 2007, > VS2005 has just got SP1, and VS2003 is still the "official" compiler. > PCBuild8 is ready, it just needs a little bit of extra love and > buildbots to make us able to release PGO versions of x86 and x64. > > Given the delay for getting even this far, waiting for Orcas and then > someone to create PCBuild9, and then getting it up and running and so on > will mean waiting another two years. > Addressing only the issues of PCBuild8 and 64-bit architectures, I have tried to establish "free" buildbot support for the 64-bit architectures without any real success. Should the PSF be considering paying for infrastructure that will support 64-bit build reporting? > [...] regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! 
-------------- Thank You for Reading ---------------- From martin at v.loewis.de Tue May 22 07:20:40 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 07:20:40 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <017001c79c20$dd347e80$1f0a0a0a@enfoldsystems.local> References: <017001c79c20$dd347e80$1f0a0a0a@enfoldsystems.local> Message-ID: <46527DA8.1030701@v.loewis.de> > Yes, that is correct. I agree it will be rarely used by Python user's, but > believe it is a common scenario for people who maintain extensions or > libraries, particularly those who want debugging builds. Ah, debugging builds. It's true that PCbuild does not support them for AMD64, and it's also true that you need such a build if you want the debug CRT. However, I think people ask much too often for a debugging build; in many cases, they could work happily with a release build that supports debugging. Depending on the problem you try to solve, you may or may not need debug information for pythonxy.dll as well, or just for your extension module. I'd like to repeat an offer that I have made several times in the past: if somebody contributes a patch to msi.py which packs all PDB files (and whatever you else you need) into a ZIP file (or whatever else works) from the release build, then I could happily release PDB files along with the actual installer files. (I would not like to release the PDB files *in* the installer files, as I expect they would blow up the msi size significantly). >> I think Kristjan will have to say a word here: I think he just likes >> it the way it is right now. That would rather suggest that build_ext >> needs to be changed. > > So this means PCBuild8 does indeed get formalized into distutils? I'm happy > to live with it if it lets me work, but it seems contrary to our emails > yesterday. It would mean that - I'm willing to compromise to make Kristjan happy (he contributed PCbuild8, so he has to decide what changes to it are acceptable and which aren't). A middle ground might be an addition build step (e.g. as a .bat file) which copies all result files also into PCbuild. > Let's assume that people have a valid reason to build from source - it > really is quite common in the open source world. The meta-question then > becomes "is it reasonable to expect people be able to build extensions from > a tree built with VC8, in the same way they can with VS7?". I think you are > suggesting we do want to support this, but I want to be clear. As long as people are willing to maintain it - why not? It's not "official" in the sense that if it breaks, nobody might fix it, but it doesn't hurt to have it (AFAICT). Also, it not being official might mean that we are not obliged to provide backwards compatibility for it, so it can go away without notice (e.g. when/if PCbuild8 is dropped). The same could be done for PC/VC6, if anybody cared to contribute and maintain it. I think there are several other cases in distutils to support special cases (e.g. the DISTUTILS_USE_SDK environment variable); if people want to see their setup supported, AND ARE WILLING TO CONTRIBUTE: more power to them. > Please see my other mail to Kristjan - at this stage I can only reproduce it > in C++ on VC8. C++ on VC7 and C on VC8 both work fine. Please let me know > if thde code snippet I pasted or the compiler error aren't suitable. It's fine. I readily believe that it causes problems in C++. 
Regards, Martin From stephen at xemacs.org Tue May 22 07:43:05 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 22 May 2007 14:43:05 +0900 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> Message-ID: <877ir176s6.fsf@uwakimon.sk.tsukuba.ac.jp> Neal Becker writes: > Perhaps my comment was misunderstood. I have no objection to a new system, > and it does not have to be based on latex. I just hope there will be some > escape mechanism that allows math. Docutils already provides the "raw" directive. I don't know if the latex backend supports it, but adding support shouldn't be hard. I don't think even thinking about that is Georg's responsibility, of course. From martin at v.loewis.de Tue May 22 07:32:42 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 07:32:42 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46527929.8040600@holdenweb.com> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <46527929.8040600@holdenweb.com> Message-ID: <4652807A.2030707@v.loewis.de> > Addressing only the issues of PCBuild8 and 64-bit architectures, I > have tried to establish "free" buildbot support for the 64-bit > architectures without any real success. > > Should the PSF be considering paying for infrastructure that will > support 64-bit build reporting? You can bring it up to the board, of course. However, given that all other buildbot machines were contributed by volunteers, the fact that nobody volunteers to contribute buildbot machines for PCbuild8 indicates that there is not a lot of interest in that build infrastructure. Note that there *are* 64-bit buildbot slaves, e.g. for AMD64 (contributed by Neal Norwitz), Alpha (contributed by Neal Norwitz), Itanium (contributed by Matthias Klose, offering a machine from the Debian project), and PPC64 (likewise). These machines all run Linux or Tru64, and (to my understanding) serve other purposes as well, making the buildbot slave just a minor detail. It's an unfortunate fact of life that Microsoft Windows does not support multi-tasking multi-user workloads that well, so the Windows buildbot slave are (to my knowledge) typically dedicated machines (except for Tim's machine, which is just disconnected from the master when he doesn't feel like running buildbot jobs). Regards, Martin From titus at caltech.edu Tue May 22 07:52:57 2007 From: titus at caltech.edu (Titus Brown) Date: Mon, 21 May 2007 22:52:57 -0700 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4652807A.2030707@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <46527929.8040600@holdenweb.com> <4652807A.2030707@v.loewis.de> Message-ID: <20070522055257.GA13609@caltech.edu> On Tue, May 22, 2007 at 07:32:42AM +0200, "Martin v. L?wis" wrote: -> > Addressing only the issues of PCBuild8 and 64-bit architectures, I -> > have tried to establish "free" buildbot support for the 64-bit -> > architectures without any real success. -> > -> > Should the PSF be considering paying for infrastructure that will -> > support 64-bit build reporting? -> -> You can bring it up to the board, of course. 
However, given that -> all other buildbot machines were contributed by volunteers, the -> fact that nobody volunteers to contribute buildbot machines for -> PCbuild8 indicates that there is not a lot of interest in that -> build infrastructure. -> -> Note that there *are* 64-bit buildbot slaves, e.g. for AMD64 -> (contributed by Neal Norwitz), Alpha (contributed by Neal -> Norwitz), Itanium (contributed by Matthias Klose, offering -> a machine from the Debian project), and PPC64 (likewise). -> -> These machines all run Linux or Tru64, and (to my understanding) -> serve other purposes as well, making the buildbot slave -> just a minor detail. -> -> It's an unfortunate fact of life that Microsoft Windows does not -> support multi-tasking multi-user workloads that well, so the -> Windows buildbot slave are (to my knowledge) typically dedicated -> machines (except for Tim's machine, which is just disconnected -> from the master when he doesn't feel like running buildbot -> jobs). I haven't really been listening to this conversation, so forgive me if this isn't relevant, but: I'd be happy to install Windows and the latest VisualStudio on a 64-bit VMware image. I just can't be responsible for day-to-day administration of the buildslave; Windows requires too much attention for me to do that. cheers, --titus From mhammond at skippinet.com.au Tue May 22 08:29:36 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Tue, 22 May 2007 16:29:36 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46527DA8.1030701@v.loewis.de> Message-ID: <019101c79c3a$91d06b60$1f0a0a0a@enfoldsystems.local> > However, I think people ask much too often for a debugging build; > in many cases, they could work happily with a release build that > supports debugging. Depending on the problem you try to solve, you > may or may not need debug information for pythonxy.dll as well, > or just for your extension module. While that is true in theory, I often find it is not the case in practice, generally due to the optimizer. Depending on what magic the compiler has done, it can be very difficult to set breakpoints (conditional or otherwise), inspect variable values, etc. It is useful in some cases, but very often I find myself falling back to a debug build after attempting to debug a local release build. > I think there are several other cases in distutils to support > special cases (e.g. the DISTUTILS_USE_SDK environment variable); > if people want to see their setup supported, AND ARE WILLING TO > CONTRIBUTE: more power to them. Yes, but I believe its also important to solicit feedback on the best way to achieve their aims. In this particular case, I believe it would have been misguided for me to simply check in whatever was necessary to have distutils work from the PCBuild8 directory. I hope it is clear that I am willing to contribute the outcome of these discussions... Cheers, Mark From armin.ronacher at active-4.com Tue May 22 10:32:26 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Tue, 22 May 2007 08:32:26 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> Message-ID: Hoi, Fred L. Drake, Jr. acm.org> writes: > > On Monday 21 May 2007, A.M. Kuchling wrote: > > Disadvantages: > > > > * reST markup isn't much simpler than LaTeX. > > * reST doesn't support nested markup, which is used in the current > documentation. 
For a lightweight markup language that is human readable (which rst certainly is) the syntax is surprisingly powerful. You can nest any block tag and I'm not sure how often you have to nest roles and stuff like that. The goal of the new docs is a less complex syntax and currently nothing beats reStructuredText in terms of readability and possibilities. rst is simpler than latex: LaTeX: \item[\code{*?}, \code{+?}, \code{??}] The \character{*}, \character{+}, and \character{?} qualifiers are all \dfn{greedy}; they match as much text as possible. Sometimes this behaviour isn't desired; if the RE \regexp{<.*>} is matched against \code{'
<H1>title</H1>'}, it will match the entire string, and not just \code{'<H1>'}. Adding \character{?} after the qualifier makes it perform the match in \dfn{non-greedy} or \dfn{minimal} fashion; as \emph{few} characters as possible will be matched. Using \regexp{.*?} in the previous expression will match only \code{'<H1>'}. Here the same in rst: ``*?``, ``+?``, ``??`` The ``'\*'``, ``'+'``, and ``'?'`` qualifiers are all :dfn:`greedy`; they match as much text as possible. Sometimes this behaviour isn't desired; if the RE :regexp:`<.\*>` is matched against ``'<H1>title</H1>'``, it will match the entire string, and not just ``'<H1>'``. Adding ``'?'`` after the qualifier makes it perform the match in :dfn:`non-greedy` or :dfn:`minimal` fashion; as *few* characters as possible will be matched. Using :regexp:`.\*?` in the previous expression will match only ``'<H1>
'``. Regards, Armin From g.brandl at gmx.net Tue May 22 10:39:22 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 10:39:22 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> Message-ID: Armin Ronacher schrieb: > Hoi, > > Fred L. Drake, Jr. acm.org> writes: > >> >> On Monday 21 May 2007, A.M. Kuchling wrote: >> > Disadvantages: >> > >> > * reST markup isn't much simpler than LaTeX. >> >> * reST doesn't support nested markup, which is used in the current >> documentation. > > For a lightweight markup language that is human readable (which rst certainly > is) the syntax is surprisingly powerful. You can nest any block tag and I'm not > sure how often you have to nest roles and stuff like that. The goal of the new > docs is a less complex syntax and currently nothing beats reStructuredText in > terms of readability and possibilities. Also, I believe there are efforts within docutils to make a limited amount of nested inline markup possible. Lea will probably be able to provide details. cheers, Georg From g.brandl at gmx.net Tue May 22 10:41:51 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 10:41:51 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <200705211935.02582.fdrake@acm.org> References: <18002.4548.772439.879692@montanaro.dyndns.org> <200705211935.02582.fdrake@acm.org> Message-ID: Fred L. Drake, Jr. schrieb: > On Monday 21 May 2007, skip at pobox.com wrote: > > Take a look at . It records request > > counts for the various pages and presents the most frequently requested > > pages in a section at the top of the page. I can make the script > > available if anyone wants it (it uses Myghty - Mason in Python.) > > This is very cool. ;-) Indeed, it would be a great addition! Georg From mal at egenix.com Tue May 22 10:57:22 2007 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 22 May 2007 10:57:22 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <016c01c79c1d$1db9d0d0$1f0a0a0a@enfoldsystems.local> References: <016c01c79c1d$1db9d0d0$1f0a0a0a@enfoldsystems.local> Message-ID: <4652B072.2040002@egenix.com> Hi Mark, >> +1 from me. >> >> I think this is simply a bug introduced with the UCS4 patches in >> Python 2.2. >> >> unicodeobject.h already has this code: >> >> #ifndef PY_UNICODE_TYPE >> >> /* Windows has a usable wchar_t type (unless we're using UCS-4) */ >> # if defined(MS_WIN32) && Py_UNICODE_SIZE == 2 >> # define HAVE_USABLE_WCHAR_T >> # define PY_UNICODE_TYPE wchar_t >> # endif >> >> # if defined(Py_UNICODE_WIDE) >> # define PY_UNICODE_TYPE Py_UCS4 >> # endif >> >> #endif >> >> But for some reason, pyconfig.h defines: >> >> /* Define as the integral type used for Unicode representation. */ >> #define PY_UNICODE_TYPE unsigned short >> >> /* Define as the size of the unicode type. */ >> #define Py_UNICODE_SIZE SIZEOF_SHORT >> >> /* Define if you have a useable wchar_t type defined in >> wchar.h; useable >> means wchar_t must be 16-bit unsigned type. (see >> Include/unicodeobject.h). */ >> #if Py_UNICODE_SIZE == 2 >> #define HAVE_USABLE_WCHAR_T >> #endif >> >> disabling the default settings in the unicodeobject.h. > > Yes, that does appear strange. The following patch works for me, keeps > Python building and appears to solve my problem. Any objections? Looks fine to me. 
> Mark > > > Index: pyconfig.h > =================================================================== > --- pyconfig.h (revision 55487) > +++ pyconfig.h (working copy) > @@ -491,22 +491,13 @@ > /* Define if you want to have a Unicode type. */ > #define Py_USING_UNICODE > > -/* Define as the integral type used for Unicode representation. */ > -#define PY_UNICODE_TYPE unsigned short > - > /* Define as the size of the unicode type. */ > -#define Py_UNICODE_SIZE SIZEOF_SHORT > +/* This is enough for unicodeobject.h to do the "right thing" on Windows. > */ > +#define Py_UNICODE_SIZE 2 > > -/* Define if you have a useable wchar_t type defined in wchar.h; useable > - means wchar_t must be 16-bit unsigned type. (see > - Include/unicodeobject.h). */ > -#if Py_UNICODE_SIZE == 2 > -#define HAVE_USABLE_WCHAR_T > - > /* Define to indicate that the Python Unicode representation can be passed > as-is to Win32 Wide API. */ > #define Py_WIN_WIDE_FILENAMES > -#endif > > /* Use Python's own small-block memory-allocator. */ > #define WITH_PYMALLOC 1 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/mal%40egenix.com -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 22 2007) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ :::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 From kristjan at ccpgames.com Tue May 22 12:02:17 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 22 May 2007 10:02:17 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4652807A.2030707@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <46527929.8040600@holdenweb.com> <4652807A.2030707@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEE6D6@exchis.ccp.ad.local> > -----Original Message----- > From: "Martin v. L?wis" [mailto:martin at v.loewis.de] > Sent: Tuesday, May 22, 2007 05:33 > To: Steve Holden > Cc: Kristj?n Valur J?nsson; Mark Hammond; distutils-sig at python.org; > python-dev at python.org > Subject: Re: Adventures with x64, VS7 and VS8 on Windows > > > Addressing only the issues of PCBuild8 and 64-bit architectures, I > > have tried to establish "free" buildbot support for the 64-bit > > architectures without any real success. > > > > Should the PSF be considering paying for infrastructure that will > > support 64-bit build reporting? > > You can bring it up to the board, of course. However, given that > all other buildbot machines were contributed by volunteers, the > fact that nobody volunteers to contribute buildbot machines for > PCbuild8 indicates that there is not a lot of interest in that > build infrastructure. Most people just install whatever it is that they are offered to download. For me, the most compelling reason to provide the new builds is the 15% performance benefit. 
If there are no technical and corporate difficulties, such as firewalls and security, I am sure that CCP can provide a VisualStudio2005 buildbot for our needs. Wasn't there some issue that each buildbot can only provide a single build? Here is a place where cross-compilation comes into its own again. Kristjan From kristjan at ccpgames.com Tue May 22 12:31:05 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 22 May 2007 10:31:05 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4652128B.3000301@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> > -----Original Message----- > Nobody proposed to ditch cross-compilation. I very much like > cross-compilation, I do all Itanium and AMD64 in cross-compilation. > > I just want the *file structure* of the output to be the very same > for all architectures, meaning that they can't coexist in a single > source directory. > Surely there are differences between architectures? PC uses MSI after all. Why can't linux be under trunk/linux and pc 86 under trunk/pcbuild8/win32PGO and 64 under trunk/pcbuild8/x64pgo? We shouldn't let bad tools keep us from new ways of doing things, rather we should fix and update the tools. > No, you use two checkouts, of course. That?s just silly. And two visual studios open, and edit the file in two places too? I say let's just admit that tools can compile for more than one target. Let's adapt to it and we will all be happier. > No matter what the next compiler is (VS 2005 or VS 2007/2008), it's > still *a lot* of work until the VS 2005 build can be used for releasing > Python. For example, there is no support for the SxS installation of > msvcr8.dll, using the MSI merge module. Well, there is some work, which is why I am proposing the Europython sprint to do it. But we are almost there. And that work will be useful for Orcas, when that comes along. And btw, there is no need to install the msvcr8.dll. We can distribute them as a private assembly. then they (and the manifest) exist in the same directory as python2x.dll. This is a supported distribution mode and how we distribute the runtime with EVE. > Not sure whether anything really is needed. Python works fine on Vista. If you are an administrator. A limited user will have problems installing it and then running it. > > > 1) supplying python.dll as a Side By Side assembly > > What would that improve? Well, it should reduce dll-hell problems of applications that ship with python2x.dll. You ship with and link to your own and tested dll. We have some concerns here, for example, now that we are moving away from embedding python in our blue.dll and using python25.dll directly, that this exposes a vulnerability to the integrity of the software. > > > 2) Changing python install locations To conform with Windows rules and get a "Vista approved" logo. Install in the ProgramFiles folder. Just as C does. Ah, and this also means that we could install both 32 bit and 64 bit versions, another plus. > > 3) Supporting shadow libraries, where .pyc files end up in a > different > > hierarchy from the .py files. (useful for many things beside > VISTA) > > Yes, and people had written PEPs for it which failed due to lack of > follow up. Interesting. We are definitely interested in that. 
You see, Someone installs a game or accounting software using vista. He then runs as a limited user. Python insists on saving its .pyc files in the installation folder, but this is not something that is permitted on Vista. > > > 4) Signing the python dlls and executables > Hmm. Again, easing the installation experience for vista users. So that they don't get a red box about an unknown application requiring administrator privileges. > > Sure, and have they reported problems with Python on Vista (problems > specific to Vista?) Certainly. We are working on them, of course. Chiefly, we have had to change where we save all kinds of temporary files. And we have to be careful that all our .py files ship as .pyc files. And we are versioning, and signing all executables, providing user level manifests and fixing up the install processes to be more compliant. I am just saying that this is something that a standard python distro might want to do too. Kristjan From stephen at xemacs.org Tue May 22 13:27:16 2007 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 22 May 2007 20:27:16 +0900 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> Message-ID: <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> Armin Ronacher writes: > rst is simpler than latex: > > LaTeX: > > \item[\code{*?}, \code{+?}, \code{??}] The \character{*}, > \character{+}, and \character{?} qualifiers are all \dfn{greedy}; they > match as much text as possible. Sometimes this behaviour isn't > desired; if the RE \regexp{<.*>} is matched against > \code{'
<H1>title</H1>'}, it will match the entire string, and not just > \code{'<H1>'}. Adding \character{?} after the qualifier makes it > perform the match in \dfn{non-greedy} or \dfn{minimal} fashion; as > \emph{few} characters as possible will be matched. Using \regexp{.*?} > in the previous expression will match only \code{'<H1>'}. > > Here the same in rst: > > ``*?``, ``+?``, ``??`` > The ``'\*'``, ``'+'``, and ``'?'`` qualifiers are all :dfn:`greedy`; > they match as much text as possible. Sometimes this behaviour isn't > desired; if the RE :regexp:`<.\*>` is matched against > ``'<H1>title</H1>'``, it will match the entire string, and not just ``'<H1>'``. Adding ``'?'`` after the qualifier makes it perform the > match in :dfn:`non-greedy` or :dfn:`minimal` fashion; as *few* > characters as possible will be matched. Using :regexp:`.\*?` in the > previous expression will match only ``'<H1>
'``. IMO that pair of examples shows clearly that, in this application, reST is not an improvement over LaTeX in terms of readability/ writability of source. It's probably not worse, although I can't help muttering "EIBTI". In particular I find the "``'...'``" construct horribly unreadable because it makes it hard to find the Python syntax in all the reST. I don't think that's an argument against switching to reST, though. Georg's site speaks for itself. Kudos! From kristjan at ccpgames.com Tue May 22 13:38:23 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 22 May 2007 11:38:23 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <017001c79c20$dd347e80$1f0a0a0a@enfoldsystems.local> References: <46520F4F.5010502@v.loewis.de> <017001c79c20$dd347e80$1f0a0a0a@enfoldsystems.local> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEE754@exchis.ccp.ad.local> > -----Original Message----- > > > It seems the > > > best thing might be to modify the PCBuild8 build process so the > output > > > binaries are in the ../PCBuild' directory - this way distutils and > others > > > continue to work fine. Does that sound reasonable? > > > I think Kristjan will have to say a word here: I think he just likes > > it the way it is right now. That would rather suggest that build_ext > > needs to be changed. > Someone mentioned the idea to have a bat file do it. I like that idea. There is already a build.bat file which will build whatever configuration you choose (platform and configuration). Extending it to subsequently copy the resulting binaries up one level is easy. Perhaps this is an acceptable compromise? Kristjan From armin.ronacher at active-4.com Tue May 22 13:41:19 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Tue, 22 May 2007 11:41:19 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: Hoi, Stephen J. Turnbull xemacs.org> writes: > > IMO that pair of examples shows clearly that, in this application, > reST is not an improvement over LaTeX in terms of readability/ > writability of source. It's probably not worse, although I can't help > muttering "EIBTI". In particular I find the "``'...'``" construct > horribly unreadable because it makes it hard to find the Python syntax > in all the reST. Well. That was a bad example. But if you look at the converted sources and open the source file you can see that rst is a lot cleaner that latex for this type of documentation. Regards, Armin From skip at pobox.com Tue May 22 14:16:26 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 22 May 2007 07:16:26 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <18002.57114.680004.707344@montanaro.dyndns.org> It would appear that while we slept Jens Mortensen was busy at work on his rst2{latex,latexmath,mathml}.py scripts: http://docutils.sourceforge.net/sandbox/jensj/latex_math/ Note the date on the files. It seems to work pretty well, and as others have pointed out, LaTeX notation is probably more familiar to people interested in math display than anything else. 
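Until those sandbox scripts land somewhere official, plain docutils already has a blunt escape hatch for math: a "raw" latex block is passed straight through to the latex writer (no MathML, no HTML fallback). A minimal sketch, assuming only that docutils itself is installed -- the equation is just a placeholder:

from docutils.core import publish_string

source = """
Some surrounding prose, then an equation the latex writer copies verbatim:

.. raw:: latex

   \\begin{equation} \\phi^2 = \\phi + 1 \\end{equation}
"""
# The result is a complete LaTeX document with the raw block untouched;
# writers for other formats (e.g. HTML) simply drop it.
print(publish_string(source, writer_name="latex"))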
Skip From ndbecker2 at gmail.com Tue May 22 14:40:41 2007 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 22 May 2007 08:40:41 -0400 Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <18002.57114.680004.707344@montanaro.dyndns.org> Message-ID: skip at pobox.com wrote: > > It would appear that while we slept Jens Mortensen was busy at work on his > rst2{latex,latexmath,mathml}.py scripts: > > http://docutils.sourceforge.net/sandbox/jensj/latex_math/ > > Note the date on the files. It seems to work pretty well, and as others > have pointed out, LaTeX notation is probably more familiar to people > interested in math display than anything else. > I know almost nothing about docutils internals. How do I 'install' the above? From arigo at tunes.org Tue May 22 15:29:33 2007 From: arigo at tunes.org (Armin Rigo) Date: Tue, 22 May 2007 15:29:33 +0200 Subject: [Python-Dev] Module cleanup improvement In-Reply-To: <1d36917a0705211717q450e43cemd23d5bca1c3a2f73@mail.gmail.com> References: <1d36917a0705211717q450e43cemd23d5bca1c3a2f73@mail.gmail.com> Message-ID: <20070522132933.GA4863@code0.codespeak.net> Hi Alan, On Mon, May 21, 2007 at 08:17:02PM -0400, Alan McIntyre wrote: > Adding a step C1.5 which removes only objects that return true for > PyInstance_Check seems to prevent the problem exhibited by this bug (I > tried it out locally on the trunk and it doesn't cause any problems > with the regression test suite). Is there any reason that adding such > a step to module cleanup would be a bad idea? On another level, would there be interest here for me to revive my old attempt at throwing away this messy procedure, which only made sense in a world where reference cycles couldn't be broken? Nowadays the fact that global variables suddenly become None when the interpreter shuts down is a known recipe for getting obscure messages from still-running threads, for example. This has one consequence that I can think about: if we consider a CPython in which the cycle GC has been disabled by the user, then many __del__'s would not be called any more at interpreter shutdown. Do we care? A bientot, Armin From mhammond at skippinet.com.au Tue May 22 15:30:57 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Tue, 22 May 2007 23:30:57 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE754@exchis.ccp.ad.local> Message-ID: <01d401c79c75$6fd20510$1f0a0a0a@enfoldsystems.local> > Someone mentioned the idea to have a bat file do it. I like > that idea. There is already a build.bat file which will > build whatever configuration you choose (platform and > configuration). Extending it to subsequently copy the > resulting binaries up one level is easy. Perhaps this is an > acceptable compromise? Sure - that will work for me. I'll check this out and contact you by private mail for further guidance. Cheers, Mark -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 1784 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070522/aba45765/attachment-0001.bin From ncoghlan at gmail.com Tue May 22 15:59:25 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 22 May 2007 23:59:25 +1000 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4652F73D.7060002@gmail.com> Stephen J. Turnbull wrote: > IMO that pair of examples shows clearly that, in this application, > reST is not an improvement over LaTeX in terms of readability/ > writability of source. It's probably not worse, although I can't help > muttering "EIBTI". In particular I find the "``'...'``" construct > horribly unreadable because it makes it hard to find the Python syntax > in all the reST. It's interesting how perceptions can differ - I find heavily marked up latex tends to blur into a huge wall of text when I try to read it because of all of the {} and \ characters everywhere. With reST, on the other hand, I find the use of the relatively 'light' backquote and colon characters to delineate the markup breaks things up sufficiently that I can easily ignore the markup in order to read what has actually been written. So in Armin's example, I found the reST version *much* easier to read. Whether that difference in perception is due simply to my relative lack of experience in using LaTeX, or to something else, I have no idea. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Tue May 22 16:04:12 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2007 00:04:12 +1000 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18002.17736.138574.686866@montanaro.dyndns.org> References: <4650B823.4050908@scottdial.com> <18000.58065.249249.648413@montanaro.dyndns.org> <18002.17736.138574.686866@montanaro.dyndns.org> Message-ID: <4652F85C.4030904@gmail.com> skip at pobox.com wrote: > True enough. There is MathML and its offspring, ASCIIMathML, which are > probably worth looking at. > > http://www.w3.org/Math/ > http://www1.chapman.edu/~jipsen/asciimath.html > > I have no idea if either one has backend support for presentation outside > the web, MathML is the language used for equations in Open Document Format (aka ISO 26300). I don't know what extra typesetting tricks (if any) they wrap around it, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From blais at furius.ca Tue May 22 16:37:58 2007 From: blais at furius.ca (Martin Blais) Date: Tue, 22 May 2007 07:37:58 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> Message-ID: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> On 5/22/07, Armin Ronacher wrote: > > > > > > * reST markup isn't much simpler than LaTeX. > > > > * reST doesn't support nested markup, which is used in the current > > documentation. 
> > For a lightweight markup language that is human readable (which rst certainly > is) the syntax is surprisingly powerful. You can nest any block tag and I'm not > sure how often you have to nest roles and stuff like that. The goal of the new > docs is a less complex syntax and currently nothing beats reStructuredText in > terms of readability and possibilities. About possibilities: I'm sorry but that is simply not true. LaTeX provides the full power of macro expansion, while ReST is a fixed (almost, roles are an exception) syntax which has its own set of problems and ambiguities. It is far from perfect, and definitely does not have the flexibility that LaTeX provides. Some of the syntax cannot be nested (try to combine ** with literals). > rst is simpler than latex: That, and the ability to already parse it from Python and more easily convert to other formats (one of LaTeX's weaknesses), are the only benefits that I can see to switching away from LaTeX. I have to admit I'm afraid we would be moving to a lesser technology, and the driver for that seems to be people's reluctance to work with the more powerful, more complex tool. Not saying it is invalid (it's about people, in the end), but I still don't see what's the big problem with LaTeX. From blais at furius.ca Tue May 22 16:46:41 2007 From: blais at furius.ca (Martin Blais) Date: Tue, 22 May 2007 07:46:41 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <4652F73D.7060002@gmail.com> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> Message-ID: <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> On 5/22/07, Nick Coghlan wrote: > > So in Armin's example, I found the reST version *much* easier to read. > Whether that difference in perception is due simply to my relative lack > of experience in using LaTeX, or to something else, I have no idea. - If you make a mistake in LaTeX, you will get a cryptic error which is usually a little difficult to figure out (if you're not used to it). You can an error though. - If you make a mistake in ReST, you will often get no warning nor error, and not the desired output. If you were to use the amount of markup in that example, you would have to check your text with rst2xml frequently to make sure it groks what you're trying to say. (And I've been there: I wrote an entire project who relies specifically on this, on precise structures generated by docutils (http://furius.ca/nabu/). It's *very* easy to make subtle errors that generate something else than what you want.) ReST works well only when there is little markup. Writing code documentation generally requires a lot of markup, you want to make variables, classes, functions, parameters, constants, etc.. (A better avenue IMHO would be to augment docutils with some code to automatically figure out the syntax of functions, parameters, classes, etc., i.e., less markup, and if we do this in Python we may be able to use introspection. This is a challenge, however, I don't know if it can be done at all.) 
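For what it's worth, the frequent-checking step can at least be scripted rather than done by eye: docutils will hand back its system messages programmatically. A rough sketch, assuming nothing beyond an installed docutils (the two sample snippets are invented for illustration):

from docutils import nodes
from docutils.core import publish_doctree

# Two made-up snippets: the first earns a warning node from the parser,
# the second silently renders as something other than what was meant.
samples = [
    "An *unclosed emphasis at least gets flagged.",
    "Mixing **strong with ``literal** spans`` does not nest as intended.",
]
for snippet in samples:
    tree = publish_doctree(snippet)
    messages = [m.astext() for m in tree.traverse(nodes.system_message)]
    print(snippet)
    print(messages or "no complaints from docutils")

Anything the parser does not object to -- like the second snippet -- still comes out looking plausible, which is exactly the silent failure mode described above.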
From barry at python.org Tue May 22 16:49:17 2007 From: barry at python.org (Barry Warsaw) Date: Tue, 22 May 2007 10:49:17 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 22, 2007, at 10:37 AM, Martin Blais wrote: > That, and the ability to already parse it from Python and more easily > convert to other formats (one of LaTeX's weaknesses), are the only > benefits that I can see to switching away from LaTeX. I have to admit > I'm afraid we would be moving to a lesser technology, and the driver > for that seems to be people's reluctance to work with the more > powerful, more complex tool. Not saying it is invalid (it's about > people, in the end), but I still don't see what's the big problem with > LaTeX. I'm a fan of LaTeX (and latex, er, oops :) too, but what appeals to me most about moving to reST is that the tool chain simplifies considerably. Even with a nice distro packaging system it can be a PITA to get all the tools you need to build the documentation properly installed. A pure-Python solution, even a lesser one, would be a win if we can still produce top quality online and written documentation from one source. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRlMC7XEjvBPtnXfVAQJoFQQAjvYsXamif459t34X4Bn00G0S1b3qeM1Y PhwdAC5cuCpMoopVl+9vtjjcP4Np9P0buY09H+mLwv0nAZRNF7HT3xDr/U65FiX+ Aa7B9+3jVqRGg1+R6oYRKuPUmcLrBFESy6thKkw9audVsT5jgpBM9m9Y405QSIEU MvK7hYrYBqQ= =Jdbt -----END PGP SIGNATURE----- From fdrake at acm.org Tue May 22 16:57:01 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Tue, 22 May 2007 10:57:01 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> Message-ID: <200705221057.01714.fdrake@acm.org> On Tuesday 22 May 2007, Barry Warsaw wrote: > considerably. Even with a nice distro packaging system it can be a > PITA to get all the tools you need to build the documentation > properly installed. A pure-Python solution, even a lesser one, would > be a win if we can still produce top quality online and written > documentation from one source. The biggest potential wins I see for a new system are: - more contributions - platform-independent processing I remain sceptical on being able to achieve the first, but there some hope for it. The later should make things easier for people who are willing to put the work into contribution, which is valuable in its own right. -Fred -- Fred L. Drake, Jr. From skip at pobox.com Tue May 22 17:00:27 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 22 May 2007 10:00:27 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <18002.57114.680004.707344@montanaro.dyndns.org> Message-ID: <18003.1419.544035.800966@montanaro.dyndns.org> Neal> I know almost nothing about docutils internals. How do I Neal> 'install' the above? Me either. 
Here's what I did: * download and expand the latest docutils snapshot * replicate Jens's work in a directory called "math" at the top level of the docutils directory * edited setup.py to get them installed: diff -u setup.py.~1~ setup.py --- setup.py.~1~ 2007-03-21 18:45:38.000000000 -0500 +++ setup.py 2007-05-22 07:07:25.000000000 -0500 @@ -115,6 +115,9 @@ 'tools/rst2xml.py', 'tools/rst2pseudoxml.py', 'tools/rstpep2html.py', + 'math/tools/rst2latex.py', + 'math/tools/rst2latexmath.py', + 'math/tools/rst2mathml.py', ],} """Distutils setup parameters.""" * ran "python setup.py install" That's probably more than necessary, but with the math subdir I can easily move the whole thing to a new snapshot and the setup.py change lets me install them transparently. Skip From armin.ronacher at active-4.com Tue May 22 17:02:57 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Tue, 22 May 2007 15:02:57 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> Message-ID: Hoi, Martin Blais furius.ca> writes: > About possibilities: I'm sorry but that is simply not true. LaTeX > provides the full power of macro expansion, while ReST is a fixed > (almost, roles are an exception) syntax which has its own set of > problems and ambiguities. I was speaking of rst in comparison with other lightweight markup languages. (textile, markdown, wiki syntax). > That, and the ability to already parse it from Python and more easily > convert to other formats (one of LaTeX's weaknesses), are the only > benefits that I can see to switching away from LaTeX. I have to admit > I'm afraid we would be moving to a lesser technology, and the driver > for that seems to be people's reluctance to work with the more > powerful, more complex tool. Not saying it is invalid (it's about > people, in the end), but I still don't see what's the big problem with > LaTeX. The problem with latex is that it requires more knowledge thus the amount of people that can contribute is a lot lower. It's a lot harder to generate a searchindex, do interlinking, generate a dynamic documentation with comments etc. Don't get me wrong, LaTeX is a powerful tool and I use it for every bigger document i type. I just think it's not the best choice for documenting scripting languages. Regards, Armin From skip at pobox.com Tue May 22 17:06:04 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 22 May 2007 10:06:04 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <200705221057.01714.fdrake@acm.org> References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> Message-ID: <18003.1756.554922.172523@montanaro.dyndns.org> Fred> The biggest potential wins I see for a new system are: Fred> - more contributions Fred> - platform-independent processing Fred> I remain sceptical on being able to achieve the first, but there Fred> some hope for it. You at least take away a common excuse for lack of contributions. True whiners will just come up with new ones (e.g., "the documentation isn't available in Sanskrit yet" or "the dog ate my changes before I could type them into the computer"). 
;-) Skip From python at rcn.com Tue May 22 17:07:31 2007 From: python at rcn.com (Raymond Hettinger) Date: Tue, 22 May 2007 08:07:31 -0700 Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org><20070521210410.GA14297@localhost.localdomain><200705211932.13295.fdrake@acm.org><874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp><4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: <002201c79c83$cd72a540$f001a8c0@RaymondLaptop1> > - If you make a mistake in LaTeX, you will get a cryptic error which > is usually a little difficult to figure out (if you're not used to > it). You can an error though. FWIW, the pure Python program in Tools/scripts/texchecker.py does a pretty good job of catching typical LaTeX mistakes and giving high-quality error reporting. Raymond From g.brandl at gmx.net Tue May 22 17:14:11 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 17:14:11 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> Message-ID: Armin Ronacher schrieb: >> That, and the ability to already parse it from Python and more easily >> convert to other formats (one of LaTeX's weaknesses), are the only >> benefits that I can see to switching away from LaTeX. I have to admit >> I'm afraid we would be moving to a lesser technology, and the driver >> for that seems to be people's reluctance to work with the more >> powerful, more complex tool. Not saying it is invalid (it's about >> people, in the end), but I still don't see what's the big problem with >> LaTeX. > The problem with latex is that it requires more knowledge thus the amount of > people that can contribute is a lot lower. It's a lot harder to generate a > searchindex, do interlinking, generate a dynamic documentation with comments etc. This could of course be done with LaTeX as well, but the current toolchain uses latex2html which is written and customized in Perl -- I'd never try to do anything with it. > Don't get me wrong, LaTeX is a powerful tool and I use it for every bigger > document i type. I just think it's not the best choice for documenting scripting > languages. Who's documenting a scripting language? Georg From armin.ronacher at active-4.com Tue May 22 17:19:32 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Tue, 22 May 2007 15:19:32 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> Message-ID: Hoi, Georg Brandl gmx.net> writes: > Who's documenting a scripting language? 
Wanted to say agile :D Regards, Armin From g.brandl at gmx.net Tue May 22 17:12:01 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 17:12:01 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18003.1756.554922.172523@montanaro.dyndns.org> References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> Message-ID: skip at pobox.com schrieb: > Fred> The biggest potential wins I see for a new system are: > > Fred> - more contributions > > Fred> - platform-independent processing > > Fred> I remain sceptical on being able to achieve the first, but there > Fred> some hope for it. > > You at least take away a common excuse for lack of contributions. True > whiners will just come up with new ones (e.g., "the documentation isn't > available in Sanskrit yet" or "the dog ate my changes before I could type > them into the computer"). ;-) But that's at least funnier than before :) Georg From steve at holdenweb.com Tue May 22 17:22:31 2007 From: steve at holdenweb.com (Steve Holden) Date: Tue, 22 May 2007 11:22:31 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18003.1756.554922.172523@montanaro.dyndns.org> References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> Message-ID: skip at pobox.com wrote: > Fred> The biggest potential wins I see for a new system are: > > Fred> - more contributions > > Fred> - platform-independent processing > > Fred> I remain sceptical on being able to achieve the first, but there > Fred> some hope for it. > > You at least take away a common excuse for lack of contributions. True > whiners will just come up with new ones (e.g., "the documentation isn't > available in Sanskrit yet" or "the dog ate my changes before I could type > them into the computer"). ;-) > But doesn't *everyone* now know that documentation contributions don't have to be marked up? It's certainly been said enough. Maybe that fact should be more prominent in the documentation? Then we'll just have to worry about getting people to read it ... regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From python at rcn.com Tue May 22 17:22:43 2007 From: python at rcn.com (Raymond Hettinger) Date: Tue, 22 May 2007 08:22:43 -0700 Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org><20070521210410.GA14297@localhost.localdomain><200705211932.13295.fdrake@acm.org><874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp><4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: <002701c79c85$0c08df80$f001a8c0@RaymondLaptop1> > - If you make a mistake in LaTeX, you will get a cryptic error which > is usually a little difficult to figure out (if you're not used to > it). You can an error though. FWIW, the pure Python program in Tools/scripts/texchecker.py does a pretty good job of catching typical LaTeX mistakes and giving high-quality error reporting. 
With that tool, I've been making doc contributions for years and not needed my own LaTeX build. Also, I did not need to learn LaTeX itself. It was sufficient to read a little of Documenting Python and then model the markup from existing docs. In contrast, whenever I've tried to build a complex ReST document, it was *always* a struggle. Copying from existing docs doesn't help much there because the cues are more subtle. As Martin pointed out, most errors slide-by because the mis-markup is typically read as valid, unmarked-up text. I find myself having to continously build and view and html file as I write. I like ResT for light-weight work but think it is not ready for prime-time with respect to more complex requirements. Fred is also correct in that we don't seem to have people rushing to contribute docs (more than a line or two). For a long-time, we've always said that it is okay to submit plain text doc contributions and that another person downstream would do the mark-up. We've had few takers. Raymond From python at rcn.com Tue May 22 17:22:48 2007 From: python at rcn.com (Raymond Hettinger) Date: Tue, 22 May 2007 08:22:48 -0700 Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org><20070521210410.GA14297@localhost.localdomain><200705211932.13295.fdrake@acm.org><874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp><4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: <002801c79c85$0fb1ea50$f001a8c0@RaymondLaptop1> > - If you make a mistake in LaTeX, you will get a cryptic error which > is usually a little difficult to figure out (if you're not used to > it). You can an error though. FWIW, the pure Python program in Tools/scripts/texchecker.py does a pretty good job of catching typical LaTeX mistakes and giving high-quality error reporting. With that tool, I've been making doc contributions for years and not needed my own LaTeX build. Also, I did not need to learn LaTeX itself. It was sufficient to read a little of Documenting Python and then model the markup from existing docs. In contrast, whenever I've tried to build a complex ReST document, it was *always* a struggle. Copying from existing docs doesn't help much there because the cues are more subtle. As Martin pointed out, most errors slide-by because the mis-markup is typically read as valid, unmarked-up text. I find myself having to continously build and view and html file as I write. I like ResT for light-weight work but think it is not ready for prime-time with respect to more complex requirements. Fred is also correct in that we don't seem to have people rushing to contribute docs (more than a line or two). For a long-time, we've always said that it is okay to submit plain text doc contributions and that another person downstream would do the mark-up. We've had few takers. Raymond From python at rcn.com Tue May 22 17:23:07 2007 From: python at rcn.com (Raymond Hettinger) Date: Tue, 22 May 2007 08:23:07 -0700 Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org><20070521210410.GA14297@localhost.localdomain><200705211932.13295.fdrake@acm.org><874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp><4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: <002b01c79c85$1a190820$f001a8c0@RaymondLaptop1> > - If you make a mistake in LaTeX, you will get a cryptic error which > is usually a little difficult to figure out (if you're not used to > it). You can an error though. 
FWIW, the pure Python program in Tools/scripts/texchecker.py does a pretty good job of catching typical LaTeX mistakes and giving high-quality error reporting. With that tool, I've been making doc contributions for years and not needed my own LaTeX build. Also, I did not need to learn LaTeX itself. It was sufficient to read a little of Documenting Python and then model the markup from existing docs. In contrast, whenever I've tried to build a complex ReST document, it was *always* a struggle. Copying from existing docs doesn't help much there because the cues are more subtle. As Martin pointed out, most errors slide-by because the mis-markup is typically read as valid, unmarked-up text. I find myself having to continously build and view and html file as I write. I like ResT for light-weight work but think it is not ready for prime-time with respect to more complex requirements. Fred is also correct in that we don't seem to have people rushing to contribute docs (more than a line or two). For a long-time, we've always said that it is okay to submit plain text doc contributions and that another person downstream would do the mark-up. We've had few takers. Raymond From python at rcn.com Tue May 22 17:23:26 2007 From: python at rcn.com (Raymond Hettinger) Date: Tue, 22 May 2007 08:23:26 -0700 Subject: [Python-Dev] The docs, reloaded References: <200705210923.47610.fdrake@acm.org><20070521210410.GA14297@localhost.localdomain><200705211932.13295.fdrake@acm.org><874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp><4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: <003201c79c85$25aee1f0$f001a8c0@RaymondLaptop1> > - If you make a mistake in LaTeX, you will get a cryptic error which > is usually a little difficult to figure out (if you're not used to > it). You can an error though. FWIW, the pure Python program in Tools/scripts/texchecker.py does a pretty good job of catching typical LaTeX mistakes and giving high-quality error reporting. With that tool, I've been making doc contributions for years and not needed my own LaTeX build. Also, I did not need to learn LaTeX itself. It was sufficient to read a little of Documenting Python and then model the markup from existing docs. In contrast, whenever I've tried to build a complex ReST document, it was *always* a struggle. Copying from existing docs doesn't help much there because the cues are more subtle. As Martin pointed out, most errors slide-by because the mis-markup is typically read as valid, unmarked-up text. I find myself having to continously build and view and html file as I write. I like ResT for light-weight work but think it is not ready for prime-time with respect to more complex requirements. Fred is also correct in that we don't seem to have people rushing to contribute docs (more than a line or two). For a long-time, we've always said that it is okay to submit plain text doc contributions and that another person downstream would do the mark-up. We've had few takers. 
Raymond From g.brandl at gmx.net Tue May 22 17:11:18 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 17:11:18 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: Martin Blais schrieb: > On 5/22/07, Nick Coghlan wrote: >> >> So in Armin's example, I found the reST version *much* easier to read. >> Whether that difference in perception is due simply to my relative lack >> of experience in using LaTeX, or to something else, I have no idea. > > - If you make a mistake in LaTeX, you will get a cryptic error which > is usually a little difficult to figure out (if you're not used to > it). You can an error though. > > - If you make a mistake in ReST, you will often get no warning nor > error, and not the desired output. If you were to use the amount of > markup in that example, you would have to check your text with rst2xml > frequently to make sure it groks what you're trying to say. (And I've > been there: I wrote an entire project who relies specifically on this, > on precise structures generated by docutils (http://furius.ca/nabu/). > It's *very* easy to make subtle errors that generate something else > than what you want.) That is correct, but can be helped with nice preview features. > ReST works well only when there is little markup. Writing code > documentation generally requires a lot of markup, you want to make > variables, classes, functions, parameters, constants, etc.. (A better > avenue IMHO would be to augment docutils with some code to > automatically figure out the syntax of functions, parameters, classes, > etc., i.e., less markup, and if we do this in Python we may be able to > use introspection. This is a challenge, however, I don't know if it > can be done at all.) While writing the converter, I stumbled about a few locations where the LaTeX markup cannot be completely converted into reST, and a few locations where invalid reST was generated and not warned about. However, both of those problems occurred far less often than I'd anticipated. Georg From fdrake at acm.org Tue May 22 17:34:13 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Tue, 22 May 2007 11:34:13 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <18003.1756.554922.172523@montanaro.dyndns.org> Message-ID: <200705221134.13728.fdrake@acm.org> On Tuesday 22 May 2007, Georg Brandl wrote: > But that's at least funnier than before :) It's not our job to make whiner-babies sound funny. -Fred -- Fred L. Drake, Jr. From g.brandl at gmx.net Tue May 22 18:13:36 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 18:13:36 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <002701c79c85$0c08df80$f001a8c0@RaymondLaptop1> References: <200705210923.47610.fdrake@acm.org><20070521210410.GA14297@localhost.localdomain><200705211932.13295.fdrake@acm.org><874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp><4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <002701c79c85$0c08df80$f001a8c0@RaymondLaptop1> Message-ID: Raymond Hettinger schrieb: >> - If you make a mistake in LaTeX, you will get a cryptic error which >> is usually a little difficult to figure out (if you're not used to >> it). 
You can an error though. > > FWIW, the pure Python program in Tools/scripts/texchecker.py does a > pretty good job of catching typical LaTeX mistakes and giving high-quality > error reporting. With that tool, I've been making doc contributions for years > and not needed my own LaTeX build. I'm not saying that LaTeX is hard for most of us, I say that it is *perceived* to be hard by many. And as soon as you dig into the deep support infrastructure, it gets very confusing. Just look at this bug fix I made some time ago: http://mail.python.org/pipermail/python-checkins/2007-April/059637.html > Also, I did not need to learn LaTeX itself. It was sufficient to read a little > of Documenting Python and then model the markup from existing docs. ISTM that this is possible with the new markup too. I wrote a great part of the new "Documenting Python" document, and after reading that one should be prepared enough to write reST just as well. > In contrast, whenever I've tried to build a complex ReST document, > it was *always* a struggle. Copying from existing docs doesn't help > much there because the cues are more subtle. I can't see many differences. If I can translate a \begin{classdesc}{...} environment, I can also translate a ".. class::" directive for my new item. > As Martin pointed out, > most errors slide-by because the mis-markup is typically read as > valid, unmarked-up text. I find myself having to continously build and > view and html file as I write. I like ResT for light-weight work but think > it is not ready for prime-time with respect to more complex requirements. Are the docs really that complex? I mean, look at the typical source of a converted page. The most common things are "information units", i.e. .. class:: directives, code snippets and plain old text. Cross-references work flawlessly. You may also ask Thomas Heller about documenting Python modules in reST. AFAIR the ctypes docs were written with it and converted to LaTeX afterwards. > Fred is also correct in that we don't seem to have people rushing to > contribute docs (more than a line or two). For a long-time, we've > always said that it is okay to submit plain text doc contributions and > that another person downstream would do the mark-up. We've had > few takers. Sometimes it's the way you present the ability to change things that affects how many people actually do it. Finding the location that tells you how to suggest changes, and opening a new bug in the infamous SF tracker is not really something people do happily. A "click here to suggest a change" link that leads to a pseudo- edit-form, complete with preview facility, might prove more effective. cheers, Georg From jon+python-dev at unequivocal.co.uk Tue May 22 18:26:45 2007 From: jon+python-dev at unequivocal.co.uk (Jon Ribbens) Date: Tue, 22 May 2007 17:26:45 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <002701c79c85$0c08df80$f001a8c0@RaymondLaptop1> Message-ID: <20070522162645.GW11171@snowy.squish.net> On Tue, May 22, 2007 at 06:13:36PM +0200, Georg Brandl wrote: > Finding the location that tells you how to suggest changes, and opening > a new bug in the infamous SF tracker is not really something people do > happily. A "click here to suggest a change" link that leads to a pseudo- > edit-form, complete with preview facility, might prove more effective. Indeed. I know my instinctive reaction to the Python docs is "oh, this is not something which the public can contribute to". 
Something more like the PHP-style "public annotations" might be good - with an appropriate moderation / voting system on the annotations it could possibly be very good. From skip at pobox.com Tue May 22 18:45:04 2007 From: skip at pobox.com (skip at pobox.com) Date: Tue, 22 May 2007 11:45:04 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> Message-ID: <18003.7696.691918.693580@montanaro.dyndns.org> >> You at least take away a common excuse for lack of contributions. >> True whiners will just come up with new ones (e.g., "the >> documentation isn't available in Sanskrit yet" or "the dog ate my >> changes before I could type them into the computer"). ;-) Steve> But doesn't *everyone* now know that documentation contributions Steve> don't have to be marked up? It's certainly been said Steve> enough. Sure, but that doesn't stop the true whiners. ;-) Skip From titus at caltech.edu Tue May 22 18:51:39 2007 From: titus at caltech.edu (Titus Brown) Date: Tue, 22 May 2007 09:51:39 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18003.7696.691918.693580@montanaro.dyndns.org> References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> <18003.7696.691918.693580@montanaro.dyndns.org> Message-ID: <20070522165139.GF9867@caltech.edu> On Tue, May 22, 2007 at 11:45:04AM -0500, skip at pobox.com wrote: -> -> >> You at least take away a common excuse for lack of contributions. -> >> True whiners will just come up with new ones (e.g., "the -> >> documentation isn't available in Sanskrit yet" or "the dog ate my -> >> changes before I could type them into the computer"). ;-) -> -> Steve> But doesn't *everyone* now know that documentation contributions -> Steve> don't have to be marked up? It's certainly been said -> Steve> enough. -> -> Sure, but that doesn't stop the true whiners. ;-) Nothing stops the true whiners ;). I think new and exciting ways of viewing, searching, annotating, linking to/from, and indexing the docs are more important than new formats for (not) writing the docs. For example, this rocks! :: http://pydoc.gbrandl.de/search.html?q=os.path&area=default There have been (good) efforts to wikify the docs in the past. IMO what would make them really work would be to have docs.python.org, the "official" Python docs location, start hosting these efforts. As long as that location is static, I think the majority of users will ignore other efforts. cheers, --titus p.s. Are there good directions for installing the toolchain for current docs building anywhere? I've tried once or twice, but despite a lot of LaTeX experience I could never get everything hooked up right. From steve at holdenweb.com Tue May 22 19:19:36 2007 From: steve at holdenweb.com (Steve Holden) Date: Tue, 22 May 2007 13:19:36 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070522165139.GF9867@caltech.edu> References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> <18003.7696.691918.693580@montanaro.dyndns.org> <20070522165139.GF9867@caltech.edu> Message-ID: Titus Brown wrote: > On Tue, May 22, 2007 at 11:45:04AM -0500, skip at pobox.com wrote: > -> > -> >> You at least take away a common excuse for lack of contributions. 
> -> >> True whiners will just come up with new ones (e.g., "the > -> >> documentation isn't available in Sanskrit yet" or "the dog ate my > -> >> changes before I could type them into the computer"). ;-) > -> > -> Steve> But doesn't *everyone* now know that documentation contributions > -> Steve> don't have to be marked up? It's certainly been said > -> Steve> enough. > -> > -> Sure, but that doesn't stop the true whiners. ;-) > > Nothing stops the true whiners ;). > > I think new and exciting ways of viewing, searching, annotating, linking > to/from, and indexing the docs are more important than new formats for > (not) writing the docs. > > For example, this rocks! :: > > http://pydoc.gbrandl.de/search.html?q=os.path&area=default > It would be more impressive if the search string returned hits ... regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC/Ltd http://www.holdenweb.com Skype: holdenweb http://del.icio.us/steve.holden ------------------ Asciimercial --------------------- Get on the web: Blog, lens and tag your way to fame!! holdenweb.blogspot.com squidoo.com/pythonology tagged items: del.icio.us/steve.holden/python All these services currently offer free registration! -------------- Thank You for Reading ---------------- From jon+python-dev at unequivocal.co.uk Tue May 22 19:32:53 2007 From: jon+python-dev at unequivocal.co.uk (Jon Ribbens) Date: Tue, 22 May 2007 18:32:53 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> <18003.7696.691918.693580@montanaro.dyndns.org> <20070522165139.GF9867@caltech.edu> Message-ID: <20070522173253.GX11171@snowy.squish.net> On Tue, May 22, 2007 at 01:19:36PM -0400, Steve Holden wrote: > > For example, this rocks! :: > > > > http://pydoc.gbrandl.de/search.html?q=os.path&area=default > > It would be more impressive if the search string returned hits ... Also if it was not completely reliant on JavaScript... (Maybe it's not finished yet?) From bjourne at gmail.com Tue May 22 19:52:16 2007 From: bjourne at gmail.com (=?ISO-8859-1?Q?BJ=F6rn_Lindqvist?=) Date: Tue, 22 May 2007 19:52:16 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <740c3aec0705221052p6f7fb3d1i6d5cd3a635481d15@mail.gmail.com> > > IMO that pair of examples shows clearly that, in this application, > > reST is not an improvement over LaTeX in terms of readability/ > > writability of source. It's probably not worse, although I can't help > > muttering "EIBTI". In particular I find the "``'...'``" construct > > horribly unreadable because it makes it hard to find the Python syntax > > in all the reST. > > Well. That was a bad example. But if you look at the converted sources and open > the source file you can see that rst is a lot cleaner that latex for this type > of documentation. In your examples, I think the ReST version can be cleaned up quite a bit. First by using the .. default-role:: literal directive so that you can type `foo()` instead of using double back quotes and then you can remove the redundant semantic markup. Like this: `\*?`, `+?`, `??` The "`*`", "`+`" and "`?`" qualifiers are all *greedy*; they match as much text as possible. 
Sometimes this behaviour isn't desired; if the RE `<.*>` is matched against `'<H1>title</H1>'`, it will match the entire string, and not just `'<H1>'`. Adding "`?`" after the qualifier makes it perform the match in *non-greedy* or *minimal* fashion; as *few* characters as possible will be matched. Using `.*?` in the previous expression will match only `'<H1>
'`. The above is the most readable version. For example, semantic markup like :regexp:`<.\*>` doesn't serve any useful purpose. The end result is that the text is typesetted with a fixed-width font, no matter if you prepend :regexp: to it or not. -- mvh Bj?rn From g.brandl at gmx.net Tue May 22 21:34:18 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 21:34:18 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> <18003.7696.691918.693580@montanaro.dyndns.org> <20070522165139.GF9867@caltech.edu> Message-ID: Steve Holden schrieb: > Titus Brown wrote: >> On Tue, May 22, 2007 at 11:45:04AM -0500, skip at pobox.com wrote: >> -> >> -> >> You at least take away a common excuse for lack of contributions. >> -> >> True whiners will just come up with new ones (e.g., "the >> -> >> documentation isn't available in Sanskrit yet" or "the dog ate my >> -> >> changes before I could type them into the computer"). ;-) >> -> >> -> Steve> But doesn't *everyone* now know that documentation contributions >> -> Steve> don't have to be marked up? It's certainly been said >> -> Steve> enough. >> -> >> -> Sure, but that doesn't stop the true whiners. ;-) >> >> Nothing stops the true whiners ;). >> >> I think new and exciting ways of viewing, searching, annotating, linking >> to/from, and indexing the docs are more important than new formats for >> (not) writing the docs. >> >> For example, this rocks! :: >> >> http://pydoc.gbrandl.de/search.html?q=os.path&area=default >> > It would be more impressive if the search string returned hits ... This is a JavaScript based search, which will only be (optionally) integrated in the offline version. The online version will get a more sophisticated search. We've just finished to implement the quick dispatcher in the repository. A request to http://docs.python.org/os.path would then redirect to the appropriate module page, as well as http://docs.python.org/?q=os.path Something like http://docs.python.org/?q=os.paht (note the misspelling) would lead to a page with close matching results, os.path being the first of them. (The web app is based on wsgiref, with a few wrappers around it...) cheers, Georg From g.brandl at gmx.net Tue May 22 21:36:37 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 22 May 2007 21:36:37 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <740c3aec0705221052p6f7fb3d1i6d5cd3a635481d15@mail.gmail.com> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <740c3aec0705221052p6f7fb3d1i6d5cd3a635481d15@mail.gmail.com> Message-ID: BJ?rn Lindqvist schrieb: >> > IMO that pair of examples shows clearly that, in this application, >> > reST is not an improvement over LaTeX in terms of readability/ >> > writability of source. It's probably not worse, although I can't help >> > muttering "EIBTI". In particular I find the "``'...'``" construct >> > horribly unreadable because it makes it hard to find the Python syntax >> > in all the reST. >> >> Well. That was a bad example. But if you look at the converted sources and open >> the source file you can see that rst is a lot cleaner that latex for this type >> of documentation. > > In your examples, I think the ReST version can be cleaned up quite a > bit. First by using the .. 
default-role:: literal directive so that > you can type `foo()` instead of using double back quotes and then you > can remove the redundant semantic markup. Like this: I've already assigned the default role to `var` since it's used most frequently. Having two ways of spelling literal code is wasting markup, for me. > The above is the most readable version. For example, semantic markup > like :regexp:`<.\*>` doesn't serve any useful purpose. The end result > is that the text is typesetted with a fixed-width font, no matter if > you prepend :regexp: to it or not. Yes, there are a few semantic roles that may be obsolete. Georg From alexandre at peadrop.com Tue May 22 22:35:36 2007 From: alexandre at peadrop.com (Alexandre Vassalotti) Date: Tue, 22 May 2007 16:35:36 -0400 Subject: [Python-Dev] Introduction and request for commit access to the sandbox. In-Reply-To: <465123A9.8090500@v.loewis.de> References: <465123A9.8090500@v.loewis.de> Message-ID: On 5/21/07, "Martin v. L?wis" wrote: > > With that said, I would to request svn access to the sandbox for my > > work. I will use this access only for modifying stuff in the directory > > I will be assigned to. I would like to use the username "avassalotti" > > and the attached SSH2 public key for this access. > > I have added your key. As we have a strict first.last account policy, > I named it alexandre.vassalotti; please correct me if I misspelled it. Thanks! > > One last thing, if you know semantic differences (other than the > > obvious ones) between the C and Python versions of the modules I need > > to merge, please let know. This will greatly simplify the merge and > > reduce the chances of later breaking. > > Somebody noticed on c.l.p that, for cPickle, > a) cPickle will start memo keys at 1; pickle at 0 > b) cPickle will not put things into the memo if their refcount is > 1, whereas pickle puts everything into the memo. Noted. I think I found the thread on c.l.p about it: http://groups.google.com/group/comp.lang.python/browse_thread/thread/68c72a5066e4c9bb/b2bc78f7d8d50320 > Not sure what you'd consider obvious, but I'll mention that cStringIO > "obviously" is constrained in what data types you can write (namely, > byte strings only), whereas StringIO allows Unicode strings as well. Yes. I was already aware of this. I just hope this problem will go away with the string unification in Python 3000. However, I will need to deal with this, sooner or later, if I want to port the merge to 2.x. > Less obviously, StringIO also allows > > py> s = StringIO(0) > py> s.write(10) > py> s.write(20) > py> s.getvalue() > '1020' That is probably due to the design of cStringIO, which is separated into two subparts StringI and StringO. So when the constructor of cStringIO is given a string, it builds an output object, otherwise it builds an input object: static PyObject * IO_StringIO(PyObject *self, PyObject *args) { PyObject *s=0; if (!PyArg_UnpackTuple(args, "StringIO", 0, 1, &s)) return NULL; if (s) return newIobject(s); return newOobject(128); } As you see, cStringIO's code also needs a good cleanup to make it, at least, conforms to PEP-7. 
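For reference, a minimal sketch of the behavioural differences mentioned above, as observed with the Python 2.5-era modules (the exact TypeError wording may differ between versions):

    import StringIO, cStringIO

    # StringIO.StringIO.write() coerces non-string arguments with str() ...
    s = StringIO.StringIO()
    s.write(10)
    s.write(20)
    print repr(s.getvalue())      # '1020'

    # ... while cStringIO only accepts byte strings:
    c = cStringIO.StringIO()
    try:
        c.write(10)
    except TypeError, err:
        print 'cStringIO:', err

    # And a cStringIO object built *from* a string is the read-only
    # StringI variant, which has no write() method at all:
    r = cStringIO.StringIO('abc')
    print hasattr(r, 'write')     # False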
-- Alexandre From martin at v.loewis.de Tue May 22 23:30:21 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 23:30:21 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <20070522055257.GA13609@caltech.edu> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <46527929.8040600@holdenweb.com> <4652807A.2030707@v.loewis.de> <20070522055257.GA13609@caltech.edu> Message-ID: <465360ED.2090000@v.loewis.de> > I'd be happy to install Windows and the latest VisualStudio on a 64-bit > VMware image. I just can't be responsible for day-to-day administration > of the buildslave; Windows requires too much attention for me to do > that. Thanks for the offer. Perhaps Kristjan is interested in setting up a buildslave on such an installation. Regards, Martin From jimjjewett at gmail.com Tue May 22 23:30:53 2007 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 22 May 2007 17:30:53 -0400 Subject: [Python-Dev] The docs, reloaded Message-ID: Martin v. L?wis schrieb: > That docutils happens to be written in Python should make little > difference - it's *not* part of the Python language project, > and is just a tool for us, very much like latex and latex2html. Not entirely. When I first started looking at python, I read a lot of documentation. Now I don't read it so much; the time when I could easily suggest doc changes without explicitly setting time aside has passed. At that time, the barriers to submitting were fairly large; these are the ones I remember: (1) Not realizing that I *could* submit changes, and they would be welcome. (2) Not understanding it well enough to document it correctly. (3) Not having easy access to the source -- I didn't want to to retype it, or to edit html only to find out it was maintained in some other format. Even once I found the cvs repository, the docs weren't in the main area. (4) Not having an easy way to submit the changes quickly. (5) Wanting to check my work, in case I was wrong. I have no idea how to fix (1) and (2). Putting them on a wiki improves the situation with (3) and (4). (5) is at least helped by keeping the formatting requirements as simple as possible (not sure if ReST does this or not) and by letting me verify them before I submit. Getting docutils is already a barrier; I would like to see a stdlib module (not script hidden off to the side) for verification and conversion. I don't think I installed docutils myself until I started to write a PEP. But once I did download and install and figure out how to call it ... at least it generally worked, and ran with something (python) I was already using. Getting a 3rd party tool that ends up requiring fourth party tools (such as LaTex, but then I need to a viewer, or the old toolchain that required me to install Perl) ... took longer than my attention span. This was despite the fact that I had already used all the needed tools in previous years; they just weren't installed on the machines I had at the time ... and installing them on windows was something that would *probably* work *eventually*. If I had been new to programming, it would have been even more intimidating. 
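A minimal sketch of the kind of verification helper meant here, built on the docutils API that already exists today (the function name check_rest is only illustrative, not an existing or proposed stdlib name):

    from docutils.core import publish_string
    from docutils.utils import SystemMessage

    def check_rest(text):
        """Return None if `text` parses cleanly as reST, else the first problem."""
        try:
            publish_string(text, writer_name='null',
                           settings_overrides={'halt_level': 2,     # abort on warnings
                                               'traceback': True})  # raise, don't sys.exit
        except SystemMessage, err:
            return str(err)
        return None

    print check_rest('A *perfectly* fine paragraph.')   # None
    print check_rest('`unclosed interpreted text')      # describes the warning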
-jJ From martin at v.loewis.de Tue May 22 23:37:20 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 23:37:20 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <019101c79c3a$91d06b60$1f0a0a0a@enfoldsystems.local> References: <019101c79c3a$91d06b60$1f0a0a0a@enfoldsystems.local> Message-ID: <46536290.8050102@v.loewis.de> > While that is true in theory, I often find it is not the case in practice, > generally due to the optimizer. Depending on what magic the compiler has > done, it can be very difficult to set breakpoints (conditional or > otherwise), inspect variable values, etc. It is useful in some cases, but > very often I find myself falling back to a debug build after attempting to > debug a local release build. What I typically do is to disable optimization in a release build. It then essentially becomes a debug build, except that _DEBUG is not defined, so that it doesn't incorporate the debug CRT. This, in turn, means that you can mix such a "debug" build readily with non-debug binaries. >> I think there are several other cases in distutils to support >> special cases (e.g. the DISTUTILS_USE_SDK environment variable); >> if people want to see their setup supported, AND ARE WILLING TO >> CONTRIBUTE: more power to them. > > Yes, but I believe its also important to solicit feedback on the best way to > achieve their aims. In this particular case, I believe it would have been > misguided for me to simply check in whatever was necessary to have distutils > work from the PCBuild8 directory. I hope it is clear that I am willing to > contribute the outcome of these discussions... Sure - all understood, and all fine. It's certainly good to build consensus first, but in cases of "should we support that borderline case of build setup as long as I'm willing to maintain it", the answer should always be "yes" (*), irrespective of the merits of the specific proposal. Heck, we would even accept patches to support HP-UX better :-) Regards, Martin (*) assuming it doesn't break anything P.S. I think there have been cases where I haven't been that supportive wrt. build issues. However, in this case I really feel "go ahead". From martin at v.loewis.de Tue May 22 23:51:26 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 23:51:26 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> Message-ID: <465365DE.1090306@v.loewis.de> > Surely there are differences between architectures? PC uses MSI after all. > Why can't linux be under trunk/linux and pc 86 under trunk/pcbuild8/win32PGO > and 64 under trunk/pcbuild8/x64pgo? That couldn't work for me. I try avoid building on a network drive, and with local drives, I just can't have a Windows build and a Linux build on the same checkout - they live on separate file systems, after all (Linux on ext3, Windows on NTFS, with multi-boot switching between them). > That?s just silly. And two visual studios open, and edit the file > in two places too? I have about 10 checkouts of Python, on different machines, with no problems. I don't feel silly doing so. 
I don't *use* them simultaneously, of course - I cannot work on two architectures simultaneously, anyway. > I say let's just admit that tools can compile for > more than one target. Let's adapt to it and we will all be happier. You might be; I will be sad. It comes for a price, and with little benefit. Disk space is cheaper than my time to fight build processes. > And btw, there is no need to install the msvcr8.dll. We can distribute > them as a private assembly. then they (and the manifest) exist in the same > directory as python2x.dll. Yes, but then python2x.dll goes into system32, and so will msvcr8.dll, no? >> Not sure whether anything really is needed. Python works fine on Vista. > If you are an administrator. A limited user will have problems installing > it and then running it. Is there a bug report for that? >>> 1) supplying python.dll as a Side By Side assembly >> What would that improve? > Well, it should reduce dll-hell problems of applications that ship with > python2x.dll. You ship with and link to your own and tested dll. We > have some concerns here, for example, now that we are moving away from > embedding python in our blue.dll and using python25.dll directly, that > this exposes a vulnerability to the integrity of the software. Why should there be versioning problems with python25.dll? Are there any past issues with incompatibilities with any python2x.dll release? >>> 2) Changing python install locations > To conform with Windows rules and get a "Vista approved" logo. > Install in the ProgramFiles folder. Only over my dead body. *This* is silly. > Just as C does. Ah, and > this also means that we could install both 32 bit and 64 bit > versions, another plus. What about the registry? > Interesting. We are definitely interested in that. You see, Someone > installs a game or accounting software using vista. He then runs as a > limited user. Python insists on saving its .pyc files in the installation > folder, but this is not something that is permitted on Vista. But that's not a problem, is it? Writing silently "fails", i.e. it just won't save the pyc files. Happens all the time on Unix. >> Sure, and have they reported problems with Python on Vista (problems >> specific to Vista?) > Certainly. We are working on them, of course. But, of course, they have not been reported. Regards, Martin From martin at v.loewis.de Tue May 22 23:58:31 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 23:58:31 +0200 Subject: [Python-Dev] [Distutils] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> Message-ID: <46536787.7040500@v.loewis.de> > I have a set of extensions that use SWIG to wrap my own C++ library. > This library, on a day-to-day basis is built against VS8 since the rest > of our product suite is. Right now I have no way to work with this code > using VS8 since the standard distribution is built against VS7 which > uses a different CRT. This is an absolute nightmare in practice since > I currently have to maintain VS7 projects in parallel with the standard > VS8 ones simply so that I can run and test my python code. If you know well enough what you are doing, and avoid using unsupported cases, you *can* mix different CRTs. > Building using the current projects I > seem to get everything in the PCBuild8 / PCBuild dirs. How can I work > with what is build? Python works correctly out of its build directory. 
You can just run it from there. > Is there a shell script to build a final distribution tree? Not sure what you mean by "tree". Distribution is in a single MSI file, not in a tree, and the tree that gets created on the target machine (of the MSI file) never exists on the source machine. > If not, is > there a simple way to build an MSI similar to the one found on the > Python.org site for the official releases but using the PCBuild8 stuff? No, but it should be possible to adopt Tools/msi to package pcbuild8 instead. Just make sure you are using different GUIDs as the ProductCode. Regards, Martin From martin at v.loewis.de Tue May 22 23:39:18 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 22 May 2007 23:39:18 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEE6D6@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <46527929.8040600@holdenweb.com> <4652807A.2030707@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6D6@exchis.ccp.ad.local> Message-ID: <46536306.7080108@v.loewis.de> > If there are no technical and corporate difficulties, such as > firewalls and security, I am sure that CCP can provide a > VisualStudio2005 buildbot for our needs. Wasn't there some issue that > each buildbot can only provide a single build? Yes, but you can have multiple buildbot slaves on a single machine. They all create separate checkouts. Regards, Martin From python-dev at zesty.ca Wed May 23 00:12:08 2007 From: python-dev at zesty.ca (Ka-Ping Yee) Date: Tue, 22 May 2007 17:12:08 -0500 (CDT) Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> <200705221057.01714.fdrake@acm.org> <18003.1756.554922.172523@montanaro.dyndns.org> Message-ID: On Tue, 22 May 2007, Steve Holden wrote: > But doesn't *everyone* now know that documentation contributions don't > have to be marked up? It's certainly been said enough. Maybe that fact > should be more prominent in the documentation? Then we'll just have to > worry about getting people to read it ... I think the issue is instant gratification. You don't get the satisfaction of seeing your changes unless you're willing to write them in LaTeX, and that's a pretty big barrier -- a lot of what motivates open source volunteers is the sense of fulfillment. (Hence, by the same nature, Wiki-like editing with instant changes visible online will probably greatly increase contributions.) -- ?!ng From bjourne at gmail.com Wed May 23 00:15:17 2007 From: bjourne at gmail.com (=?ISO-8859-1?Q?BJ=F6rn_Lindqvist?=) Date: Wed, 23 May 2007 00:15:17 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <740c3aec0705221052p6f7fb3d1i6d5cd3a635481d15@mail.gmail.com> Message-ID: <740c3aec0705221515y5d31a4efwe197fb336676400e@mail.gmail.com> > > In your examples, I think the ReST version can be cleaned up quite a > > bit. First by using the .. default-role:: literal directive so that > > you can type `foo()` instead of using double back quotes and then you > > can remove the redundant semantic markup. Like this: > > I've already assigned the default role to `var` since it's used most > frequently. 
> > Having two ways of spelling literal code is wasting markup, for me. Reassign it then. :) `var` makes italic text but you can use *var* for that instead. Minor point I know, but it would make reading ReST source just a tiny bit easier. -- mvh Bj?rn From martin at v.loewis.de Wed May 23 00:20:00 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 23 May 2007 00:20:00 +0200 Subject: [Python-Dev] Module cleanup improvement In-Reply-To: <20070522132933.GA4863@code0.codespeak.net> References: <1d36917a0705211717q450e43cemd23d5bca1c3a2f73@mail.gmail.com> <20070522132933.GA4863@code0.codespeak.net> Message-ID: <46536C90.7030604@v.loewis.de> > On another level, would there be interest here for me to revive my old > attempt at throwing away this messy procedure, which only made sense in > a world where reference cycles couldn't be broken? I definitely think Py3k, at least, should use such an approach - especially with PEP 3121, which should allow to incorporate variables stored in extension module's "globals" to participate in GC. Regards, Martin From armin.ronacher at active-4.com Wed May 23 00:38:45 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Tue, 22 May 2007 22:38:45 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: Message-ID: Hoi, Additionally to the offline docs that Georg published some days ago there is also a web version which currently looks and works pretty much like the offline version. There are however some differences that are worth knowing: - Cleaner URLs. You can actually guess them because we took the idea the PHP people had and check for similar pages if a page does not exist. We do however redirect if there was a match so that the URL stays unique. - The search doesn't require JavaScript (but is currently disabled due to a buggy stemmer and indexer) That's it for now, you can try it online at http://pydoc.gbrandl.de:3000/ Regards, Armin From tim.peters at gmail.com Wed May 23 00:41:30 2007 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 22 May 2007 18:41:30 -0400 Subject: [Python-Dev] Module cleanup improvement In-Reply-To: <20070522132933.GA4863@code0.codespeak.net> References: <1d36917a0705211717q450e43cemd23d5bca1c3a2f73@mail.gmail.com> <20070522132933.GA4863@code0.codespeak.net> Message-ID: <1f7befae0705221541q3a32e3aft86443ffb43ab6b71@mail.gmail.com> [Armin Rigo] > On another level, would there be interest here for me to revive my old > attempt at throwing away this messy procedure, which only made sense in > a world where reference cycles couldn't be broken? Definitely. > Nowadays the fact that global variables suddenly become None when the > interpreter shuts down is a known recipe for getting obscure messages from > still-running threads, for example. > > This has one consequence that I can think about: if we consider a > CPython in which the cycle GC has been disabled by the user, then many > __del__'s would not be called any more at interpreter shutdown. Do we > care? I don't believe this is a potential issue in CPython. The user-exposed gc.enable() / gc.disable() only affect "automatic" cyclic gc -- they flip a flag that has no bearing on whether an /explicit/ call to gc.collect() will try to collect trash (it will, unless a collection is already in progress, & regardless of the state of the "enabled" flag). Py_Finalize() calls the C spelling of gc.collect() (PyGC_Collect), and I don't believe that can be user-disabled. 
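To make that concrete, a small sketch using nothing but the stdlib gc module (Python 2.5-style; the Node class is just for illustration):

    import gc

    gc.disable()                 # flips the "automatic collection" flag only

    class Node(object):
        pass

    a, b = Node(), Node()
    a.other, b.other = b, a      # create a reference cycle
    del a, b                     # unreachable now, but refcounting can't free it

    print gc.isenabled()         # False
    print gc.collect()           # explicit collection still runs: positive count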
From kbk at shore.net Wed May 23 01:07:29 2007 From: kbk at shore.net (Kurt B. Kaiser) Date: Tue, 22 May 2007 19:07:29 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200705222307.l4MN7Tld032667@hampton.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 364 open ( +2) / 3769 closed ( +3) / 4133 total ( +5) Bugs : 986 open (+18) / 6701 closed ( +9) / 7687 total (+27) RFE : 258 open ( +2) / 287 closed ( +1) / 545 total ( +3) New / Reopened Patches ______________________ syslog syscall support for SysLogLogger (2007-05-02) http://python.org/sf/1711603 reopened by luke-jr Remove backslash escapes from tokanize.c. (2007-05-16) http://python.org/sf/1720390 opened by Ron Adam Allow T_BOOL in PyMemberDef definitions (2007-05-17) http://python.org/sf/1720595 opened by Angelo Mottola fix 1668596: copy datafiles properly when package_dir is ' ' (2007-05-17) http://python.org/sf/1720897 opened by Raghuram Devarakonda Build on QNX (2007-05-20) http://python.org/sf/1722225 opened by Matt Kraai Curses Menu (2007-05-21) http://python.org/sf/1723038 opened by Fabian Frederick Patches Closed ______________ Removal of Tuple Parameter Unpacking [PEP3113] (2007-03-10) http://python.org/sf/1678060 closed by bcannon Class Decorators (2007-02-28) http://python.org/sf/1671208 closed by jackdied Allow any mapping after ** in calls (2007-03-23) http://python.org/sf/1686487 closed by gbrandl New / Reopened Bugs ___________________ build_clib --build-clib/--build-temp option bugs (2007-05-14) http://python.org/sf/1718574 opened by Pearu Peterson glibc error: corrupted double linked list (CPython 2.5.1) (2007-05-14) http://python.org/sf/1718942 opened by Yang Zhang new functools (2007-05-15) http://python.org/sf/1719222 opened by Aaron Brady Python package support not properly documented (2007-05-15) http://python.org/sf/1719423 opened by Michael Abbott tarfile stops expanding with long filenames (2007-05-16) http://python.org/sf/1719898 opened by Christian Zagrodnick No docs for PyEval_EvalCode and related functions (2007-05-16) http://python.org/sf/1719933 opened by Joseph Eagar sets module documentation: example uses deprecated method (2007-05-16) CLOSED http://python.org/sf/1719995 opened by Jens Quade Compiler is not thread safe? (2007-05-16) http://python.org/sf/1720241 opened by ??PC?? PyGILState_Ensure does not acquires GIL (2007-05-16) http://python.org/sf/1720250 opened by Kuno Ospald Tkinter + thread + urllib => crashes? 
(2007-05-17) http://python.org/sf/1720705 opened by Hirokazu Yamamoto docu enhancement for logging.handlers.SysLogHandler (2007-05-17) http://python.org/sf/1720726 opened by rhunger Please make sqlite3.Row iterable (2007-05-17) CLOSED http://python.org/sf/1720959 opened by phil automatic imports (2007-05-17) http://python.org/sf/1720992 opened by Juan Manuel Borges Ca?o ERROR - Microsoft Visual C++ Runtime Library (2007-05-18) http://python.org/sf/1721161 reopened by dariounipd ERROR - Microsoft Visual C++ Runtime Library (2007-05-18) http://python.org/sf/1721161 opened by darioUniPD code that writes the PKG-INFO file doesnt handle unicode (2007-05-18) http://python.org/sf/1721241 opened by Matthias Klose make testall shows many glibc detected malloc corruptions (2007-05-18) http://python.org/sf/1721309 reopened by gbrandl make testall shows many glibc detected malloc corruptions (2007-05-18) http://python.org/sf/1721309 opened by David Favor test_bsddb3 malloc corruption (2007-05-18) CLOSED http://python.org/sf/1721313 opened by David Favor emphasize iteration volatility for dict (2007-05-18) CLOSED http://python.org/sf/1721368 opened by Alan emphasize iteration volatility for set (2007-05-18) CLOSED http://python.org/sf/1721372 opened by Alan Small case which hangs (2007-05-18) http://python.org/sf/1721518 opened by Julian Todd A subclass of set doesn't always have __init__ called. (2007-05-19) http://python.org/sf/1721812 opened by David Benbennick email.FeedParser.BufferedSubFile improperly handles "\r\n" (2007-05-19) http://python.org/sf/1721862 opened by Sye van der Veen IDLE hangs in popup method completion (2007-05-19) http://python.org/sf/1721890 opened by Andy Harrington NamedTuple security issue (2007-05-20) CLOSED http://python.org/sf/1722239 reopened by tiran NamedTuple security issue (2007-05-20) CLOSED http://python.org/sf/1722239 opened by Christian Heimes Thread shutdown exception in Thread.notify() (2007-05-20) http://python.org/sf/1722344 opened by Yang Zhang urlparse.urlunparse forms file urls incorrectly (2007-05-20) http://python.org/sf/1722348 opened by Thomas Folz-Donahue Option -OO doesn't remove docstrings (2007-05-21) http://python.org/sf/1722485 opened by Grzegorz Adam Hankiewicz x = [[]]*2; x[0].append doesn't work (2007-05-21) CLOSED http://python.org/sf/1722956 opened by Jeff Britton Crash in ctypes callproc function with unicode string arg (2007-05-22) http://python.org/sf/1723338 opened by Colin Laplace Bugs Closed ___________ sets module documentation: example uses deprecated method (2007-05-16) http://python.org/sf/1719995 closed by gbrandl Please make sqlite3.Row iterable (2007-05-17) http://python.org/sf/1720959 closed by ghaering make testall shows many glibc detected malloc corruptions (2007-05-18) http://python.org/sf/1721309 closed by nnorwitz test_bsddb3 malloc corruption (2007-05-18) http://python.org/sf/1721313 closed by gbrandl emphasize iteration volatility for dict (2007-05-18) http://python.org/sf/1721368 closed by rhettinger emphasize iteration volatility for set (2007-05-18) http://python.org/sf/1721372 closed by rhettinger __getslice__ changes integer arguments (2007-05-03) http://python.org/sf/1712236 closed by rhettinger Docstring for site.addpackage() is incorrect (2007-04-09) http://python.org/sf/1697215 closed by gbrandl yield+break stops tracing (2006-10-24) http://python.org/sf/1583862 closed by luks NamedTuple security issue (2007-05-20) http://python.org/sf/1722239 closed by rhettinger NamedTuple security issue (2007-05-20) 
http://python.org/sf/1722239 closed by rhettinger x = [[]]*2; x[0].append doesn't work (2007-05-21) http://python.org/sf/1722956 closed by gbrandl New / Reopened RFE __________________ Add File - Reload (2007-05-17) http://python.org/sf/1721083 opened by Raymond Hettinger RFE Closed __________ Cannot use dict with unicode keys as keyword arguments (2007-05-03) http://python.org/sf/1712419 closed by gbrandl From blais at furius.ca Wed May 23 02:21:48 2007 From: blais at furius.ca (Martin Blais) Date: Tue, 22 May 2007 17:21:48 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <1b151690705220737m29bf2114g3cc4b47ba70e3@mail.gmail.com> Message-ID: <1b151690705221721h37e59dd1tdb1225f8d0fb9fe6@mail.gmail.com> On 5/22/07, Georg Brandl wrote: > > > Don't get me wrong, LaTeX is a powerful tool and I use it for every bigger > > document i type. I just think it's not the best choice for documenting scripting > > languages. > > Who's documenting a scripting language? Hehe I can't believe I just wrote that... From blais at furius.ca Wed May 23 02:30:17 2007 From: blais at furius.ca (Martin Blais) Date: Tue, 22 May 2007 17:30:17 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> Message-ID: <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> On 5/22/07, Martin Blais wrote: > > ReST works well only when there is little markup. Writing code > documentation generally requires a lot of markup, you want to make > variables, classes, functions, parameters, constants, etc.. (A better > avenue IMHO would be to augment docutils with some code to > automatically figure out the syntax of functions, parameters, classes, > etc., i.e., less markup, and if we do this in Python we may be able to > use introspection. This is a challenge, however, I don't know if it > can be done at all.) Just to follow-up on that idea: I don't think it would be very difficult to write a very small modification to docutils that interprets the default role with more "smarts", for example, you can all guess what the types of these are about: `class Foo` (this is a class Foo) `bar(a, b, c) -> str` (this is a function "bar" which returns a string) `name (attribute)` (this is an attribute) ...so why couldn't the computer solve that problem for you? I'm sure we could make it happen. Essentially, what is missing from ReST is "less markup for documenting programs". By restricting the problem-set to Python programs, we can go a long way towards making much of this automatic, even without resorting to introspecting the source code that is being documented. 
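A rough sketch of the kind of guessing this would involve (the helper name guess_role and the exact patterns are made up for illustration; this is not existing docutils behaviour):

    import re

    _CLASS_RE = re.compile(r'^class\s+\w+$')
    _ATTR_RE  = re.compile(r'^[\w.]+\s+\(attribute\)$')
    _FUNC_RE  = re.compile(r'^[\w.]+\(.*\)(\s*->\s*\w+)?$')

    def guess_role(text):
        """Guess which semantic role a default-role `...` span describes."""
        if _CLASS_RE.match(text):
            return 'class'
        if _ATTR_RE.match(text):
            return 'attribute'
        if _FUNC_RE.match(text):
            return 'function'
        return 'literal'          # fall back to plain literal markup

    print guess_role('class Foo')            # class
    print guess_role('bar(a, b, c) -> str')  # function
    print guess_role('name (attribute)')     # attribute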
From greg.ewing at canterbury.ac.nz Wed May 23 02:42:27 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 23 May 2007 12:42:27 +1200 Subject: [Python-Dev] Module cleanup improvement In-Reply-To: <20070522132933.GA4863@code0.codespeak.net> References: <1d36917a0705211717q450e43cemd23d5bca1c3a2f73@mail.gmail.com> <20070522132933.GA4863@code0.codespeak.net> Message-ID: <46538DF3.8070705@canterbury.ac.nz> Armin Rigo wrote: > if we consider a > CPython in which the cycle GC has been disabled by the user, then many > __del__'s would not be called any more at interpreter shutdown. That can happen now anyway. Module clearing only cleans up cycles that go through the module dict. +1 from me on getting rid of module clearing at shutdown. -- Greg From anthony at interlink.com.au Wed May 23 04:41:47 2007 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 23 May 2007 12:41:47 +1000 Subject: [Python-Dev] Need Survey Answers from Core Developers In-Reply-To: <17997.62192.311458.672697@montanaro.dyndns.org> References: <464DC502.5000700@taupro.com> <17997.62192.311458.672697@montanaro.dyndns.org> Message-ID: <200705231241.51338.anthony@interlink.com.au> On Saturday 19 May 2007, skip at pobox.com wrote: > Jeff> 1) How is the project governed? How does the community > make Jeff> decisions on what goes into a release? > Consensus (most of the time) and GvR pronouncements for > significant changes. There are situations where Guido has simply > pronounced when the community seemed unable to settle on one > solution. Decorators come to mind. Plus of course there's the minor detail of features needing to be implemented. If no-one steps up to complete something, it can just get deferred. See PEP 356's list of deferred features. > Jeff> 2) Does the language have a formal defined release > plan? > > Jeff> I know Zope 3's release plan, every six months, but > not that of Jeff> Python. Is there a requirement to push a > release out the door Jeff> every N months, as some projects > do, or is each release Jeff> separately negotiated with > developers around a planned set Jeff> of features? > > PEP 6? PEP 101? PEP 102? > > There is no hard-and-fast time schedule. I believe minor > releases leave the station approximately every 18-24 months, > micro releases roughly every six months. The goal is to have a major release (I consider 2.5, 2.6 &c to be "major", and 2.5.1, 2.5.2 &c "minor" - this is how it's always been, afaik) "when they're done". Typically this is around 18-24 months. There's not (yet?) a formal release plan for the minor/bugfix releases, but they've been every 6 months since late 2003. Obviously, if a major bug is found then a release happens sooner. > Jeff> 3) Some crude idea of how many new major and minor > features were > Jeff> added in the last release? Yes, I know > this is difficult -- the > Jeff> idea it so get some measure of > the evolution/stability of cPython > Jeff> re features. Jython > and IronPython are probably changing rapidly > Jeff> -- cPython, > not such much. We don't break down "major" or "minor" features, but according to the What's New In Python 2.5 doc: > A search through the > SVN change logs finds there were 353 patches applied and 458 bugs > fixed between Python 2.4 and 2.5. (Both figures are likely to be > underestimates.) The distinction between major and minor feature is pretty arbitrary, obviously. -- Anthony Baxter It's never too late to have a happy childhood. 
From nnorwitz at gmail.com Wed May 23 06:43:17 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 22 May 2007 21:43:17 -0700 Subject: [Python-Dev] [Python-3000] Introduction and request for commit access to the sandbox. In-Reply-To: References: <465123A9.8090500@v.loewis.de> Message-ID: On 5/22/07, Alexandre Vassalotti wrote: > > As you see, cStringIO's code also needs a good cleanup to make it, > at least, conforms to PEP-7. Alexandre, It would be great if you could break up unrelated changes into separate patches. Some of these can go in sooner rather than later. I don't know all the things that need to be done, but I could imagine a separate patch for each of: * whitespace normalization * function name modification * other formatting changes * bug fixes * changes to make consistent with StringIO I don't know if all those items in the list need to change, but that's the general idea. Separate patches will make it much easier to review and get benefits from your work earlier. I look forward to seeing your work! n From g.brandl at gmx.net Wed May 23 08:30:17 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 23 May 2007 08:30:17 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: Armin Ronacher schrieb: > Hoi, > > Additionally to the offline docs that Georg published some days ago there is > also a web version which currently looks and works pretty much like the offline > version. There are however some differences that are worth knowing: > > - Cleaner URLs. You can actually guess them because we took the idea the PHP > people had and check for similar pages if a page does not exist. We do however > redirect if there was a match so that the URL stays unique. > > - The search doesn't require JavaScript (but is currently disabled due to a > buggy stemmer and indexer) Also, try http://pydoc.gbrandl.de:3000/os.path.exists and From g.brandl at gmx.net Wed May 23 08:32:40 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 23 May 2007 08:32:40 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: Georg Brandl schrieb: > Armin Ronacher schrieb: >> Hoi, >> >> Additionally to the offline docs that Georg published some days ago there is >> also a web version which currently looks and works pretty much like the offline >> version. There are however some differences that are worth knowing: >> >> - Cleaner URLs. You can actually guess them because we took the idea the PHP >> people had and check for similar pages if a page does not exist. We do however >> redirect if there was a match so that the URL stays unique. >> >> - The search doesn't require JavaScript (but is currently disabled due to a >> buggy stemmer and indexer) > > Also, try > > http://pydoc.gbrandl.de:3000/os.path.exists > > and http://pydoc.gbrandl.de:3000/os.paht.exists Georg From steven.bethard at gmail.com Wed May 23 09:24:39 2007 From: steven.bethard at gmail.com (Steven Bethard) Date: Wed, 23 May 2007 01:24:39 -0600 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: On 5/23/07, Georg Brandl wrote: > Also, try > > http://pydoc.gbrandl.de:3000/os.path.exists Beautiful! STeVe -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. 
--- Bucky Katt, Get Fuzzy From nick at craig-wood.com Wed May 23 10:53:00 2007 From: nick at craig-wood.com (Nick Craig-Wood) Date: Wed, 23 May 2007 09:53:00 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> Message-ID: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Georg Brandl wrote: > Nick Craig-Wood schrieb: > > Being a seasoned unix user, I tend to reach for pydoc as my first stab > > at finding some documentation rather than (after excavating the mouse > > from under a pile of paper) use a web browser. > > > > If you've ever used pydoc you'll know it reads docstrings and for some > > modules they are great and for others they are sorely lacking. > > > > If pydoc could show all this documentation as well I'd be very happy! > > > > Maybe your quick dispatch feature could be added to pydoc too? > > It is my intention to work together with Ron Adam to make the pydoc <-> > documentation integration as seamless as possible. So I'll be able to read the main docs for a module in a terminal without reaching for the web browser (or info)? That would be great! How would pydoc decide which bit of docs it is going to show? If I type "pydoc re" is it going to give me the rather sparse __doc__ from the re module or the nice reST docs? Or maybe both, one after the other? Or will I have to use a flag to dis-ambiguate? If you type "pydoc re" at the moment then it says in it MODULE DOCS http://www.python.org/doc/current/lib/module-re.html which is pretty much useless to me when ssh-ed in to a linux box half way around the world... > > It is missing conversion of ``comment'' at the moment as I'm sure you > > know... > > Sorry, what did you mean? ``comment'' produces smart quotes in latex if I remember correctly. You probably want to convert it somehow because it looks a bit odd on the web page as it stands. I'm not sure what the reST replacement might be, but converting it just to "comment" would probably be OK. Likewise with `comment' to 'comment'. For an example see the first paragraph here: http://pydoc.gbrandl.de/reference/index.html -- Nick Craig-Wood -- http://www.craig-wood.com/nick From Dennis.Benzinger at gmx.net Wed May 23 11:58:34 2007 From: Dennis.Benzinger at gmx.net (Dennis Benzinger) Date: Wed, 23 May 2007 11:58:34 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <20070523115834.7d5b1dd7@dennis-laptop> Am Wed, 23 May 2007 08:30:17 +0200 schrieb Georg Brandl : > [...] > Also, try > > http://pydoc.gbrandl.de:3000/os.path.exists > [...] Looks good. But should the source pages really use syntax highlighting? I think if somebody is interested in the source then they should get the real source without any highlighting. If you decide to keep the syntax highlighting then the highlighting of multiline ReST strings should be fixed. For example see the source for splitext(). Thanks for the work, Dennis Benzinger From armin.ronacher at active-4.com Wed May 23 12:10:15 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Wed, 23 May 2007 10:10:15 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: <20070523115834.7d5b1dd7@dennis-laptop> Message-ID: Hoi, Dennis Benzinger gmx.net> writes: > Looks good. But should the source pages really use syntax highlighting? > I think if somebody is interested in the source then they should get > the real source without any highlighting. 
If you decide to keep the > syntax highlighting then the highlighting of multiline ReST strings > should be fixed. For example see the source for splitext(). Yeah. Georg said the same yesterday. I'll revert that change so that it displays the plain sources as text/plain in the online version too. Regards, Armin From python-dev at xhaus.com Wed May 23 12:27:56 2007 From: python-dev at xhaus.com (Alan Kennedy) Date: Wed, 23 May 2007 11:27:56 +0100 Subject: [Python-Dev] Return value from socket.fileno() Message-ID: <4654172C.4010904@xhaus.com> Dear all, I am writing to seek information about the socket.fileno() method, and opinions on how best to implement it on jython. On cpython, socket.fileno() returns a file descriptor, i.e. an integer index into an array of file objects. This integer is then passed to select.select and select.poll, to register interest in specified events on the socket, i.e. readability, writability, connectedness, etc. Importantly, it is possible to select and poll on a socket's file descriptor immediately after the socket is created, e.g. before it is connected and even before a non-blocking connect process has been started. This is problematic on java, because java has different classes for client and server sockets. Therefore, on jython, creation of the actual socket implementation is deferred until the user has called a method which commits the socket to being a client or server socket, e.g. connect, listen, etc. This means that there is no meaningful descriptor/channel that can be returned from fileno() until the nature of the socket is determined. Also, file descriptors have no meaning on java. Instead, java Selectors select on java.nio.channels.SelectableChannel objects. But, ideally, this should not matter: AFAICT, the return value from fileno should be simply an opaque handle which has no purpose other than to be handed to select and poll, to indicate interest in events on the associated socket. I have been thinking that the simplest way to implement socket.fileno() on jython is to return the actual socket object, i.e. return self. When this object is passed to select/poll as a parameter, the select/poll implementation would know to retrieve the underlying java SelectableChannel, if it exists yet. If it does not yet exist, because the socket is not yet commited to being a server or client socket, then it is simply excluded from the select/poll operation. The only problem with it is returning a non-integer from the fileno() method; instead a socket object would be returned. So the question I'm asking is: Does anyone know of existing cpython code which relies on the return value from socket.fileno() being an integer? Or code that would break if it were returned a socket instance instead of an integer? Thanks, Alan. P.S. If anyone is interested, a non-blocking sockets and select (and soon asyncore) implementation is currently residing in the jython sandbox. It is hoped that it will be included in jython 2.2rc1. 
More here http://wiki.python.org/jython/NewSocketModule From skip at pobox.com Wed May 23 12:39:38 2007 From: skip at pobox.com (skip at pobox.com) Date: Wed, 23 May 2007 05:39:38 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: <18004.6634.254718.603879@montanaro.dyndns.org> Nick> If you type "pydoc re" at the moment then it says in it Nick> MODULE DOCS Nick> http://www.python.org/doc/current/lib/module-re.html Nick> which is pretty much useless to me when ssh-ed in to a linux box Nick> half way around the world... I get quite a bit of information about re (I've never known /F to be a documentation slouch). Only one bit of that information is a reference to the page in the library reference manual. And if I happen to be ssh'd into a machine halfway round the world through a Gnome terminal I can right mouse over that URL and pop the page up in my default local browser. If you set the PYTHONDOCS environment variable you can point it to a local (or at least different) copy of the libref manual. A flag could be added to pydoc to show that content instead, however being html it probably would be difficult to read unless pumped through lynx -dump or something similar. Skip From lgautier at gmail.com Wed May 23 12:52:43 2007 From: lgautier at gmail.com (Laurent Gautier) Date: Wed, 23 May 2007 18:52:43 +0800 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: <27d1e6020705230352h27642fdbgf5277be33df9e7e2@mail.gmail.com> 2007/5/23, Nick Craig-Wood : > Georg Brandl wrote: > > Nick Craig-Wood schrieb: > > > Being a seasoned unix user, I tend to reach for pydoc as my first stab > > > at finding some documentation rather than (after excavating the mouse > > > from under a pile of paper) use a web browser. > > > > > > If you've ever used pydoc you'll know it reads docstrings and for some > > > modules they are great and for others they are sorely lacking. > > > > > > If pydoc could show all this documentation as well I'd be very happy! > > > > > > Maybe your quick dispatch feature could be added to pydoc too? > > > > It is my intention to work together with Ron Adam to make the pydoc <-> > > documentation integration as seamless as possible. > > So I'll be able to read the main docs for a module in a terminal > without reaching for the web browser (or info)? That would be great! One option is to use a text-mode browser (lynx, links, or the likes). The other is to develop a terminal mode application (currently in pydoc, I believe) > How would pydoc decide which bit of docs it is going to show? > > If I type "pydoc re" is it going to give me the rather sparse __doc__ > from the re module or the nice reST docs? Or maybe both, one after > the other? Or will I have to use a flag to dis-ambiguate? I really think that making pydoc a solid library to extract/search/navigate the documentation offers a lot of interesting perspectives. When one think beyond the application discussed here, there are a lot of tools (ipython, or any IDE for example) that could make great use of the facility. 
[note: Ron and I seemed to disagree on what (and how) pydoc should be, and that in particular, but I keep a keen interest in having such a library.] > If you type "pydoc re" at the moment then it says in it > > MODULE DOCS > http://www.python.org/doc/current/lib/module-re.html > > which is pretty much useless to me when ssh-ed in to a linux box half > way around the world... > > > > It is missing conversion of ``comment'' at the moment as I'm sure you > > > know... > > > > Sorry, what did you mean? > > ``comment'' produces smart quotes in latex if I remember correctly. > You probably want to convert it somehow because it looks a bit odd on > the web page as it stands. I'm not sure what the reST replacement > might be, but converting it just to "comment" would probably be OK. > Likewise with `comment' to 'comment'. > > For an example see the first paragraph here: > > http://pydoc.gbrandl.de/reference/index.html > > -- > Nick Craig-Wood -- http://www.craig-wood.com/nick > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/lgautier%40gmail.com > From ronaldoussoren at mac.com Wed May 23 13:10:30 2007 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 23 May 2007 04:10:30 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18004.6634.254718.603879@montanaro.dyndns.org> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <18004.6634.254718.603879@montanaro.dyndns.org> Message-ID: <06E73797-0112-1000-FE54-55D0E08C7821-Webmail-10014@mac.com> On Wednesday, May 23, 2007, at 12:40PM, wrote: > > Nick> If you type "pydoc re" at the moment then it says in it > > Nick> MODULE DOCS > Nick> http://www.python.org/doc/current/lib/module-re.html > > Nick> which is pretty much useless to me when ssh-ed in to a linux box > Nick> half way around the world... > >I get quite a bit of information about re (I've never known /F to be a >documentation slouch). Only one bit of that information is a reference to >the page in the library reference manual. And if I happen to be ssh'd into >a machine halfway round the world through a Gnome terminal I can right mouse >over that URL and pop the page up in my default local browser. If you set >the PYTHONDOCS environment variable you can point it to a local (or at least >different) copy of the libref manual. A flag could be added to pydoc to >show that content instead, however being html it probably would be difficult >to read unless pumped through lynx -dump or something similar. pydoc can already do this for the language reference (try 'pydoc import' on a system with a local install of the python documentation). 
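Roughly, in Python terms (this assumes a local copy of the HTML docs is installed and PYTHONDOCS points at it; otherwise the topic lookup just explains that the docs could not be found):

    import pydoc

    # The same plain text that "pydoc re" prints, usable from a terminal or an IDE:
    print pydoc.render_doc('re').splitlines()[0]

    # Keyword/topic help, rendered from the locally installed HTML documentation:
    pydoc.help('import')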
Ronald > >Skip >_______________________________________________ >Python-Dev mailing list >Python-Dev at python.org >http://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com > > From g.brandl at gmx.net Wed May 23 15:21:46 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 23 May 2007 15:21:46 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: Nick Craig-Wood schrieb: >> > It is missing conversion of ``comment'' at the moment as I'm sure you >> > know... >> >> Sorry, what did you mean? > > ``comment'' produces smart quotes in latex if I remember correctly. > You probably want to convert it somehow because it looks a bit odd on > the web page as it stands. I'm not sure what the reST replacement > might be, but converting it just to "comment" would probably be OK. > Likewise with `comment' to 'comment'. Ahh, now the dime has fallen ;) (sorry, German phrase) Yes, it should probably use Unicode equivalents of these quotes, as it does with en- and em-dashes. There are also nifty "post-processor" filters which operate on complete HTML pages and replace normal quotes by "smart" ones, perhaps that is the way to go. Georg From g.brandl at gmx.net Wed May 23 15:54:06 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 23 May 2007 15:54:06 +0200 Subject: [Python-Dev] -OO and Docstrings Message-ID: Bug #1722485 reports that Py 2.5+ doesn't ignore docstrings anymore if used with -OO. Attached patch should fix this. Georg -------------- next part -------------- A non-text attachment was scrubbed... Name: docstrings.diff Type: text/x-patch Size: 493 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070523/eadcac08/attachment.bin From alexandre at peadrop.com Wed May 23 16:01:11 2007 From: alexandre at peadrop.com (Alexandre Vassalotti) Date: Wed, 23 May 2007 10:01:11 -0400 Subject: [Python-Dev] [Python-3000] Introduction and request for commit access to the sandbox. In-Reply-To: References: <465123A9.8090500@v.loewis.de> Message-ID: On 5/23/07, Neal Norwitz wrote: > On 5/22/07, Alexandre Vassalotti wrote: > > > > As you see, cStringIO's code also needs a good cleanup to make it, > > at least, conforms to PEP-7. > > Alexandre, > > It would be great if you could break up unrelated changes into > separate patches. Some of these can go in sooner rather than later. > I don't know all the things that need to be done, but I could imagine > a separate patch for each of: > > * whitespace normalization > * function name modification > * other formatting changes > * bug fixes > * changes to make consistent with StringIO > > I don't know if all those items in the list need to change, but that's > the general idea. Separate patches will make it much easier to review > and get benefits from your work earlier. I totally agree, and that was already my current idea. > I look forward to seeing your work! Thanks! 
-- Alexandre From scott+python-dev at scottdial.com Wed May 23 16:37:03 2007 From: scott+python-dev at scottdial.com (Scott Dial) Date: Wed, 23 May 2007 10:37:03 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: <4654518F.9080605@scottdial.com> Nick Craig-Wood wrote: > ``comment'' produces smart quotes in latex if I remember correctly. > You probably want to convert it somehow because it looks a bit odd on > the web page as it stands. I'm not sure what the reST replacement > might be, but converting it just to "comment" would probably be OK. > Likewise with `comment' to 'comment'. > > For an example see the first paragraph here: > > http://pydoc.gbrandl.de/reference/index.html > In fairness to Georg, latex2html also misses the smart quotes. See the same paragraph here: http://docs.python.org/ref/front.html -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From fdrake at acm.org Wed May 23 17:00:27 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 23 May 2007 11:00:27 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <4654518F.9080605@scottdial.com> References: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <4654518F.9080605@scottdial.com> Message-ID: <200705231100.28207.fdrake@acm.org> Nick Craig-Wood wrote: > ``comment'' produces smart quotes in latex if I remember correctly. > You probably want to convert it somehow because it looks a bit odd on > the web page as it stands. I'm not sure what the reST replacement > might be, but converting it just to "comment" would probably be OK. > Likewise with `comment' to 'comment'. > > For an example see the first paragraph here: > > http://pydoc.gbrandl.de/reference/index.html What latex does here for typeset output is nice, but it's also a bit of a hack job. The ` and ' characters aren't smart, the fonts just have curved glyphs for them. `` and '' are mapped to additional glyphs using ligatures, again part of the font information. The result, of course, is really nice. :-) Scott Dial wrote: > In fairness to Georg, latex2html also misses the smart quotes. See the > same paragraph here: > > http://docs.python.org/ref/front.html There's a way to make latex2html do "the right thing" for these, except... it then happily does so even to ` and '' (and `` and '') in code samples, since there's no equivalent to the font information used to handle this in latex. -Fred -- Fred L. Drake, Jr. From talin at acm.org Wed May 23 18:25:46 2007 From: talin at acm.org (Talin) Date: Wed, 23 May 2007 09:25:46 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> Message-ID: <46546B0A.7000502@acm.org> Martin Blais wrote: > On 5/22/07, Martin Blais wrote: >> ReST works well only when there is little markup. Writing code >> documentation generally requires a lot of markup, you want to make >> variables, classes, functions, parameters, constants, etc.. 
(A better >> avenue IMHO would be to augment docutils with some code to >> automatically figure out the syntax of functions, parameters, classes, >> etc., i.e., less markup, and if we do this in Python we may be able to >> use introspection. This is a challenge, however, I don't know if it >> can be done at all.) > > Just to follow-up on that idea: I don't think it would be very > difficult to write a very small modification to docutils that > interprets the default role with more "smarts", for example, you can > all guess what the types of these are about: > > `class Foo` (this is a class Foo) > `bar(a, b, c) -> str` (this is a function "bar" which returns a string) > `name (attribute)` (this is an attribute) > > ...so why couldn't the computer solve that problem for you? I'm sure > we could make it happen. Essentially, what is missing from ReST is > "less markup for documenting programs". By restricting the > problem-set to Python programs, we can go a long way towards making > much of this automatic, even without resorting to introspecting the > source code that is being documented. I was going to suggest something similar. Ideally, any markup language ought to have a kind of "Huffman Coding" of complexity - in other words, the markup symbols that are used the most frequently are the ones that should be the shortest and easiest to type. Just as in real Huffman Coding, the popularity of a given element is going to depend on context. This would imply that there should be customizations of the markup language for different knowledge domains. While there are some benefits to having a 'standard' markup, any truly universal markup is going to be much heavier and more cumbersome than one that is specialized for the task. I would advocate a system in which the author inserts minimalistic 'hints' into the text, and the document processor uses those hints along with some automatic reasoning to determine the final markup. As in the above example, the use of backticks can be signal to the document processor that the enclosed text should be examined for identifiers and other Python syntax. I would also suggest that one test for evaluating the quality of markup syntax is whether or not it can be learned by example - can a user follow the pattern of some other part of the docs, without having to learn the syntax in a formal way? -- Talin From kristjan at ccpgames.com Wed May 23 18:38:06 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 23 May 2007 16:38:06 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <465365DE.1090306@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> > -----Original Message----- > From: "Martin v. L?wis" [mailto:martin at v.loewis.de] > > That couldn't work for me. I try avoid building on a network drive, and > with local drives, I just can't have a Windows build and a Linux build > on the same checkout - they live on separate file systems, after all > (Linux on ext3, Windows on NTFS, with multi-boot switching between > them). 
Very well, leaving linux aside, I don't see why this: /win32mount/trunk/PCbuild/ /x64mount/trunk/PCbuild/ Is any different from /winmount/trunk/PCBuild/win32 /winmount/trunk/PCBuild/x64 I don't understand this extraordinary reluctance to add a single extra directory. The windows build process is different from any other build process, so even If all the other platforms live one directory higher, why must windows? > I don't *use* them simultaneously, of course - I cannot work > on two architectures simultaneously, anyway. I do so on a daily basis. I designed PCBuild8 to be easy for interactive work using VisualStudio, using property sheets and such to group common settings for easy editing. During the course of my work, I will edit a file (which is checked out from Perforce), compile debug for two platforms, test, and repeat. > > > I say let's just admit that tools can compile for > > more than one target. Let's adapt to it and we will all be happier. > > You might be; I will be sad. It comes for a price, I well understand the benefits, I use it all the time, but the price still eludes me. how can a different name for the output folder for a different platform be such a big problem? > > And btw, there is no need to install the msvcr8.dll. We can > distribute > > them as a private assembly. then they (and the manifest) exist in > the same > > directory as python2x.dll. > > Yes, but then python2x.dll goes into system32, and so will msvcr8.dll, > no? Yes, that is correct. Well, there is a CRT .exe redist if you want to deploy this into the SxS cache, it just has to be run as part of the install process. But that may or may not be problematic, I don't know. > > >> Not sure whether anything really is needed. Python works fine on > Vista. > > If you are an administrator. A limited user will have problems > installing > > it and then running it. > > Is there a bug report for that? I don't know. At any rate, I think any vista issues is a completely separate thing, something that needs to be handled as a whole, rather than responding to a particular problem reported in a bug report. > > >>> 1) supplying python.dll as a Side By Side assembly > >> What would that improve? > > Well, it should reduce dll-hell problems of applications that ship > with > > python2x.dll. You ship with and link to your own and tested dll. We > > have some concerns here, for example, now that we are moving away > from > > embedding python in our blue.dll and using python25.dll directly, > that > > this exposes a vulnerability to the integrity of the software. > > Why should there be versioning problems with python25.dll? Are there > any past issues with incompatibilities with any python2x.dll release? Someone could replace the python25.dll that we ship with their own patched version, thereby gaining backdoor access to the software. The way windows searches for old style dlls makes this easy. Using the SxS signed loading scheme, you can protect yourself up to a point from such attacks. Of course, this doesn't have to be a problem for vanilla python, we can do this on our own for the patched python25 that we employ, but it still might be something others could find useful. > > Install in the ProgramFiles folder. > > Only over my dead body. *This* is silly. Bill doesn't think so. And he gets to decide. I mean we do want to play nice, don't we? Nothing installs itself in the root anymore, not since windows 3.1 > > > Just as C does. 
Ah, and > > this also means that we could install both 32 bit and 64 bit > > versions, another plus. > > What about the registry? I don't know about the registry, what is it used for? 64 bit windows already ships with dual versions of some apps, notably explorer.exe so that shouldn't be a big problem. > > > Interesting. We are definitely interested in that. You see, Someone > > installs a game or accounting software using vista. He then runs as > a > > limited user. Python insists on saving its .pyc files in the > installation > > folder, but this is not something that is permitted on Vista. > > But that's not a problem, is it? Writing silently "fails", i.e. it just > won't save the pyc files. Happens all the time on Unix. It may not silently fail, depending on your user status. An admin might get a confirmation window, for example. > > >> Sure, and have they reported problems with Python on Vista (problems > >> specific to Vista?) > > Certainly. We are working on them, of course. > > But, of course, they have not been reported. These are not python errors as such, but rather EVE errors. We ship the .py files precompiled and therefore avoid the aforementioned problems. But we have had to move all sorts of temporary files out of "program files" and into "documents and settings/user/local settings/Application Data/CCP/Eve Kristjan From trentm at activestate.com Wed May 23 19:40:15 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 10:40:15 -0700 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <464FFFDC.4020600@v.loewis.de> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> Message-ID: <46547C7F.7040908@activestate.com> http://www.python.org/dev/buildbot/all/x86%20W2k%20trunk Is my buildbot the only reliable Windows buildbot machine? It is possible that within a couple of weeks or so I'll have to take this one offline. Are there others that can provide a Windows buildbot? It would probably be good to have two -- and a WinXP one would be good. Trent -- Trent Mick trentm at activestate.com From rrr at ronadam.com Wed May 23 19:46:50 2007 From: rrr at ronadam.com (Ron Adam) Date: Wed, 23 May 2007 12:46:50 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: <46547E0A.6090409@ronadam.com> Nick Craig-Wood wrote: > Georg Brandl wrote: >> Nick Craig-Wood schrieb: >>> Being a seasoned unix user, I tend to reach for pydoc as my first stab >>> at finding some documentation rather than (after excavating the mouse >>> from under a pile of paper) use a web browser. >>> >>> If you've ever used pydoc you'll know it reads docstrings and for some >>> modules they are great and for others they are sorely lacking. >>> >>> If pydoc could show all this documentation as well I'd be very happy! >>> >>> Maybe your quick dispatch feature could be added to pydoc too? >> It is my intention to work together with Ron Adam to make the pydoc <-> >> documentation integration as seamless as possible. > > So I'll be able to read the main docs for a module in a terminal > without reaching for the web browser (or info)? That would be great! > > How would pydoc decide which bit of docs it is going to show? Pydoc currently gets topic info for some items by scraping the text from document 'local' web pages. 
This is kind of messy for a couple of reasons. - The documents may not be installed locally. - It can be problematic locating the docs even if they are installed. - They are not formatted well after they are retrieved. I think this is an area for improvement. This feature is also limited to a small list where the word being searched for is a keyword, or a very common topic reference, *and* they are not likely to clash with other module, class, or function names. The introspection help parts of pydoc are completely separate from topic help parts. So replacing this part can be done without much trouble. What the best behavior is and how it should work would need to be discussed. Keep in mind doc strings are meant to be more of a quick reference to an item, and Pydoc is the interface for that. > If I type "pydoc re" is it going to give me the rather sparse __doc__ > from the re module or the nice reST docs? Or maybe both, one after > the other? Or will I have to use a flag to dis-ambiguate? If retrieval from the full docs is desired, then it will probably need to be disambiguated in some way or be a separate interface. help('re') # Quick reference on 're'. helpdocs('re') # Full documentation for 're'. > If you type "pydoc re" at the moment then it says in it > > MODULE DOCS > http://www.python.org/doc/current/lib/module-re.html > > which is pretty much useless to me when ssh-ed in to a linux box half > way around the world... > >>> It is missing conversion of ``comment'' at the moment as I'm sure you >>> know... >> Sorry, what did you mean? > > ``comment'' produces smart quotes in latex if I remember correctly. > You probably want to convert it somehow because it looks a bit odd on > the web page as it stands. I'm not sure what the reST replacement > might be, but converting it just to "comment" would probably be OK. > Likewise with `comment' to 'comment'. > > For an example see the first paragraph here: > > http://pydoc.gbrandl.de/reference/index.html > From rrr at ronadam.com Wed May 23 20:12:03 2007 From: rrr at ronadam.com (Ron Adam) Date: Wed, 23 May 2007 13:12:03 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <27d1e6020705230352h27642fdbgf5277be33df9e7e2@mail.gmail.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <27d1e6020705230352h27642fdbgf5277be33df9e7e2@mail.gmail.com> Message-ID: <465483F3.5000505@ronadam.com> Laurent Gautier wrote: > 2007/5/23, Nick Craig-Wood : >> Georg Brandl wrote: >>> Nick Craig-Wood schrieb: >>>> Being a seasoned unix user, I tend to reach for pydoc as my first stab >>>> at finding some documentation rather than (after excavating the mouse >>>> from under a pile of paper) use a web browser. >>>> >>>> If you've ever used pydoc you'll know it reads docstrings and for some >>>> modules they are great and for others they are sorely lacking. >>>> >>>> If pydoc could show all this documentation as well I'd be very happy! >>>> >>>> Maybe your quick dispatch feature could be added to pydoc too? >>> It is my intention to work together with Ron Adam to make the pydoc <-> >>> documentation integration as seamless as possible. >> So I'll be able to read the main docs for a module in a terminal >> without reaching for the web browser (or info)? That would be great! > > One option is to use a text-mode browser (lynx, links, or the likes). 
> > The other is to develop a terminal mode application (currently in pydoc, > I believe) > >> How would pydoc decide which bit of docs it is going to show? >> >> If I type "pydoc re" is it going to give me the rather sparse __doc__ >> from the re module or the nice reST docs? Or maybe both, one after >> the other? Or will I have to use a flag to dis-ambiguate? > > I really think that making pydoc a solid library to extract/search/navigate > the documentation offers a lot of interesting perspectives. > When one think beyond the application discussed here, there are a > lot of tools (ipython, or any IDE for example) that could make great use > of the facility. > > [note: Ron and I seemed to disagree on what (and how) pydoc should be, > and that in particular, but I keep a keen interest in having such a library.] But we agree on making it a useful library module or package. ;-) And I don't see anything above I disagree with. Cheers, Ron >> If you type "pydoc re" at the moment then it says in it >> >> MODULE DOCS >> http://www.python.org/doc/current/lib/module-re.html >> >> which is pretty much useless to me when ssh-ed in to a linux box half >> way around the world... >> >>>> It is missing conversion of ``comment'' at the moment as I'm sure you >>>> know... >>> Sorry, what did you mean? >> ``comment'' produces smart quotes in latex if I remember correctly. >> You probably want to convert it somehow because it looks a bit odd on >> the web page as it stands. I'm not sure what the reST replacement >> might be, but converting it just to "comment" would probably be OK. >> Likewise with `comment' to 'comment'. >> >> For an example see the first paragraph here: >> >> http://pydoc.gbrandl.de/reference/index.html >> >> -- >> Nick Craig-Wood -- http://www.craig-wood.com/nick >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/lgautier%40gmail.com >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/rrr%40ronadam.com > > From theller at ctypes.org Wed May 23 20:43:05 2007 From: theller at ctypes.org (Thomas Heller) Date: Wed, 23 May 2007 20:43:05 +0200 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <46547C7F.7040908@activestate.com> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> Message-ID: Trent Mick schrieb: > http://www.python.org/dev/buildbot/all/x86%20W2k%20trunk > > Is my buildbot the only reliable Windows buildbot machine? > It is possible that within a couple of weeks or so I'll have to take > this one offline. > > Are there others that can provide a Windows buildbot? It would probably > be good to have two -- and a WinXP one would be good. How much work is it to set one up, and to maintain it? Maybe I can offer an XP VMWare image. 
Thomas From trentm at activestate.com Wed May 23 21:08:52 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 12:08:52 -0700 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> Message-ID: <46549144.6050901@activestate.com> Thomas Heller wrote: >> Are there others that can provide a Windows buildbot? It would probably >> be good to have two -- and a WinXP one would be good. > > How much work is it to set one up, and to maintain it? Maybe I can offer an XP VMWare image. It has been a while since I set it up. Tim did so at about the same time and wrote down his steps to setup... but I can't find the reference to those instructions right now. I've found maintenance to be low -- just have to have to start a cmd shell and run the buildbot slave command. However, that may be because the box on which it is running isn't one I use regularly, so I don't have to worry about accidentally killing the process, frequent reboots or anything like that. I'll try to dig around and see what I can find for setup instructions. Trent -- Trent Mick trentm at activestate.com From titus at caltech.edu Wed May 23 21:17:55 2007 From: titus at caltech.edu (Titus Brown) Date: Wed, 23 May 2007 12:17:55 -0700 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <46549144.6050901@activestate.com> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <46549144.6050901@activestate.com> Message-ID: <20070523191755.GA29353@caltech.edu> On Wed, May 23, 2007 at 12:08:52PM -0700, Trent Mick wrote: -> Thomas Heller wrote: -> >> Are there others that can provide a Windows buildbot? It would probably -> >> be good to have two -- and a WinXP one would be good. -> > -> > How much work is it to set one up, and to maintain it? Maybe I can offer an XP VMWare image. -> -> It has been a while since I set it up. Tim did so at about the same time -> and wrote down his steps to setup... but I can't find the reference to -> those instructions right now. -> -> I've found maintenance to be low -- just have to have to start a cmd -> shell and run the buildbot slave command. However, that may be because -> the box on which it is running isn't one I use regularly, so I don't -> have to worry about accidentally killing the process, frequent reboots -> or anything like that. -> -> I'll try to dig around and see what I can find for setup instructions. It's mildly tricky to install, but very easy to set up a slave process; I have several. I can offer an image, but what kills me is the maintenance. It's rarely a big deal -- reboot after some updates, install some startup scripts, etc. -- but when it *does* require an effort I hate it off because I dislike Windows so intensely ;). Not a good reason, I know, but... 
cheers, --titus From trentm at activestate.com Wed May 23 21:27:48 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 12:27:48 -0700 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46520F4F.5010502@v.loewis.de> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> Message-ID: <465495B4.1040400@activestate.com> [MarkH] >> I'm guessing vsextcomp doesn't use the Visual >> Studio 'ReleaseAMD64' configuration - would it be OK for me to check in >> changes to the PCBuild projects for this configuration? > [Martin v. L?wis] > Please don't. It exclusively relies on vsextcomp, and is only useful > if you have that infrastructure installed. See for yourself: it uses > the /USE_CL:MS_OPTERON command line switch, which isn't a Microsoft > invention (but instead invented by Peter Tr?ger). Aside: it isn't my experience that vsextcomp is necessary to cross-compile for AMD64 and IA64. My cross-build process basically equates to: - Run the appropriate environment setup for the correct compiler. E.g., for the Platform SDK AMD64 compiler and with the current Platform SDK this is: C:\Program Files\Microsoft Platform SDK\SetEnv.Cmd /X64 /RETAIL - Run the solution file with "devenv.com" (IIRC, devenv.exe doesn't take command-line args) and be sure to pass ing "/useenv" to pick up the environment changes. (*) set DEVENV_COM=path/to/devenv.com %DEVENV_COM% PCbuild\pcbuild.sln /useenv /build ReleaseAMD64 (*) Note that for a cross-build the "make_versioninfo" and "make_buildinfo" projects need to be built natively first: "C:\Program Files\Microsoft Visual Studio .NET 2003\Vc7\bin\vcvars32.bat" %DEVENV_COM% PCbuild\pcbuild.sln /useenv /build ReleaseAMD64 /project make_versioninfo %DEVENV_COM% PCbuild\pcbuild.sln /useenv /build ReleaseAMD64 /project make_buildinfo This is all VS7.1, though. I don't yet know if VS8 throws a spanner into the works. For VS6 I use "msdev" instead of "devenv.com" and "PC\VC6\pcbuild.dsw" instead of "PCbuild\pcbuild.sln". I haven't looked into what vsextcomp does, so apologies if this is ignorant. Trent -- Trent Mick trentm at activestate.com From trentm at activestate.com Wed May 23 21:32:41 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 12:32:41 -0700 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <46549144.6050901@activestate.com> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <46549144.6050901@activestate.com> Message-ID: <465496D9.9040503@activestate.com> Trent Mick wrote: > It has been a while since I set it up. Tim did so at about the same time > and wrote down his steps to setup... but I can't find the reference to > those instructions right now. http://wiki.python.org/moin/BuildbotOnWindows If you run into problems setting it up, feel free to ping me. chat: (Gtalk/Jabber) trentm at gmail.com > > I've found maintenance to be low -- just have to have to start a cmd > shell and run the buildbot slave command. I believe MarkH posted some notes on getting a buildbot Windows slave running as a service as well, but I didn't try that. Getting that working easily could help cut down on the maintenance. 
Trent -- Trent Mick trentm at activestate.com From trentm at activestate.com Wed May 23 21:37:54 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 12:37:54 -0700 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46520F4F.5010502@v.loewis.de> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> Message-ID: <46549812.9080400@activestate.com> >> It seems the >> best thing might be to modify the PCBuild8 build process so the output >> binaries are in the ../PCBuild' directory - this way distutils and others >> continue to work fine. Does that sound reasonable? > > I think Kristjan will have to say a word here: I think he just likes > it the way it is right now. That would rather suggest that build_ext > needs to be changed. I use this patch in ActivePython to get distutils to find the correct PCbuild dir (see attached). Trent -- Trent Mick trentm at activestate.com -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: distutils_pcbuild_lib_dir.patch Url: http://mail.python.org/pipermail/python-dev/attachments/20070523/41c69cef/attachment.pot From snaury at gmail.com Wed May 23 22:36:21 2007 From: snaury at gmail.com (Alexey Borzenkov) Date: Thu, 24 May 2007 00:36:21 +0400 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> Message-ID: On 5/23/07, Kristj?n Valur J?nsson wrote: > > > Install in the ProgramFiles folder. > > Only over my dead body. *This* is silly. > Bill doesn't think so. And he gets to decide. I mean we do want > to play nice, don't we? Nothing installs itself in the root anymore, > not since windows 3.1 Maybe installing in the root is not good, but installing to "Program Files" is just asking for trouble. All sorts of development tools might suddenly break because of that space in the middle of the path and requirement to use quotes around it. I thus usually install things to :\Programs. I'm not sure if any packages/programs will break because of that space, but what if some will? From martin at v.loewis.de Thu May 24 00:34:07 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 May 2007 00:34:07 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> Message-ID: <4654C15F.2040906@v.loewis.de> > Very well, leaving linux aside, I don't see why this: > /win32mount/trunk/PCbuild/ > /x64mount/trunk/PCbuild/ > > Is any different from > /winmount/trunk/PCBuild/win32 > /winmount/trunk/PCBuild/x64 > > I don't understand this extraordinary reluctance to add a single extra directory. 
> The windows build process is different from any other build process, so even > If all the other platforms live one directory higher, why must windows? It doesn't need to, and reluctance is not wrt. the proposed new layout, but wrt. changing the current one. Tons of infrastructure depends on the files having exactly the names that they have now, and being located in exactly the locations where they are currently located. Any change to that, however minor, will cause problems for some people. So for the change to be made, the advantages must clearly outweigh the incompatibility of the change in the first place. >>> I say let's just admit that tools can compile for >>> more than one target. Let's adapt to it and we will all be happier. >> You might be; I will be sad. It comes for a price, > I well understand the benefits, I use it all the time, but the price > still eludes me. how can a different name for the output folder > for a different platform be such a big problem? When running Python now, I type (after having changed to the source directory in the cmd.exe window) PCbpyt or some such. To navigate to a subdirectory, I need many more keystrokes. I find it very painful to invoke python.exe from the PCbuild8 directory. I don't want that pain to be the default. > Yes, that is correct. Well, there is a CRT .exe redist if you want to deploy > this into the SxS cache, it just has to be run as part of the install process. > But that may or may not be problematic, I don't know. Microsoft recommends using the merge module (.msm), and I think this is what should be done (if feasible). >> Why should there be versioning problems with python25.dll? Are there >> any past issues with incompatibilities with any python2x.dll release? > Someone could replace the python25.dll that we ship with their own patched > version, thereby gaining backdoor access to the software. The way > windows searches for old style dlls makes this easy. Using the SxS > signed loading scheme, you can protect yourself up to a point from such > attacks. Of course, this doesn't have to be a problem for vanilla > python, we can do this on our own for the patched python25 that we > employ, but it still might be something others could find useful. I personally think that if hostile users can replace DLLs on your system, you have bigger problems than SxS installation of pythonxy.dll, but perhaps that's just me. >>> Install in the ProgramFiles folder. >> Only over my dead body. *This* is silly. > Bill doesn't think so. And he gets to decide. He can decide not to give Python the "Vista ready" logo. If so, I don't want it. He cannot decide to make the Python installer install into a directory with a space in its name by default. > I mean we do want to play nice, don't we? Nice to users of Python, sure. I would not have said a word if the standard directory had been, say, "\usr\bin". However, they happened to choose "Program Files", making it language-dependent, and putting a space in the name. That space has caused numerous problems for Python scripts, and I expect changing the default would cause a lot of problems for end users. >> What about the registry? > I don't know about the registry, what is it used for? For two things, with different importance to different users: 1. File extensions are registered there, e.g. .py and .pyc. With two binaries installed, they will stomp over each other's file associations; only one of them can win. 2.
Python installs keys under HK{LM|CU}\Software\Python\PythonCore\, namely InstallPath, InstallGroup, PythonPath, Documentation, and Modules. For some of these, add-on libraries and applications may modify these keys, and the interpreter will pick up the changes. Again, there can be only one installation on the system "owning" these settings; two simultaneous installations will stomp on each other's settings. > 64 bit windows already ships with dual versions of some apps, notably > explorer.exe so that shouldn't be a big problem. The two versions of MSIE actually *are* a big problem, that's why MS only runs the 32-bit IE, even on Win64 (otherwise, ActiveX controls downloaded from the net wouldn't work). Also, while they are both shipped, you can't run the two versions of explorer.exe simultaneously (without trickery), so it's far from simple. >> But that's not a problem, is it? Writing silently "fails", i.e. it just >> won't save the pyc files. Happens all the time on Unix. > It may not silently fail, depending on your user status. An admin might > get a confirmation window, for example. Can you describe the precise scenario which makes that happen? To my knowledge, Vista will *not* open a confirmation window when Python attempts to write a .pyc file. Regards, Martin From martin at v.loewis.de Thu May 24 00:38:56 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 May 2007 00:38:56 +0200 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <46547C7F.7040908@activestate.com> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> Message-ID: <4654C280.1080802@v.loewis.de> Trent Mick schrieb: > http://www.python.org/dev/buildbot/all/x86%20W2k%20trunk > > Is my buildbot the only reliable Windows buildbot machine? Tim Peters' machine comes and goes, depending on whether he starts the buildbot. Alan McIntyre's machine should be mostly reliable, but nobody really notices if it goes away. > It is possible that within a couple of weeks or so I'll have to take > this one offline. Don't worry about that. It's a volunteer service, so if nobody volunteers, regular building on Windows just won't happen. > Are there others that can provide a Windows buildbot? It would probably > be good to have two -- and a WinXP one would be good. It certainly would be good. Unfortunately, Windows users are not that much engaged in the open source culture, so few of them volunteer (plus it's more painful, with Windows not being a true multi-user system). Regards, Martin From martin at v.loewis.de Thu May 24 00:45:33 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 May 2007 00:45:33 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46549812.9080400@activestate.com> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> <46549812.9080400@activestate.com> Message-ID: <4654C40D.1090803@v.loewis.de> > I use this patch in ActivePython to get distutils to find the correct > PCbuild dir (see attached). Would you like to commit this to 2.6? (or perhaps 2.5 even?)
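For anyone reading along without the attachment: I assume the shape is roughly the following -- this is only my guess at an illustration, not the actual diff, and the candidate directory names are just the ones discussed in this thread:

    import os
    import sys

    def build_lib_dir(srcdir):
        """Pick the build output directory that actually contains the
        import library for this interpreter (illustrative only)."""
        libname = 'python%d%d.lib' % sys.version_info[:2]
        for name in ('PCBuild8', 'PCBuild', os.path.join('PC', 'VC6')):
            candidate = os.path.join(srcdir, name)
            if os.path.isfile(os.path.join(candidate, libname)):
                return candidate
        return os.path.join(srcdir, 'PCBuild')   # the historical default

Whatever the real patch does, something along these lines would keep the probing inside distutils, so callers never have to care which compiler produced the tree.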
Regards, Martin From martin at v.loewis.de Thu May 24 00:48:49 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 May 2007 00:48:49 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <465495B4.1040400@activestate.com> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> <465495B4.1040400@activestate.com> Message-ID: <4654C4D1.3010303@v.loewis.de> > - Run the appropriate environment setup for the correct compiler. E.g., > for the Platform SDK AMD64 compiler and with the current Platform SDK > this is: > > C:\Program Files\Microsoft Platform SDK\SetEnv.Cmd /X64 /RETAIL > > - Run the solution file with "devenv.com" (IIRC, devenv.exe doesn't take > command-line args) and be sure to pass ing "/useenv" to pick up the > environment changes. (*) > > set DEVENV_COM=path/to/devenv.com > %DEVENV_COM% PCbuild\pcbuild.sln /useenv /build ReleaseAMD64 Yes, that should work equally fine. > I haven't looked into what vsextcomp does, so apologies if this is ignorant. It spares you having to set up the environment; it provides a cl.exe wrapper that locates the SDK from the registry, and then invokes the cl.exe in the SDK if necessary. As a consequence, you can still just double-click the solution file, without having to run devenv.exe/com manually. Regards, Martin From martin at v.loewis.de Thu May 24 00:52:14 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 May 2007 00:52:14 +0200 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> Message-ID: <4654C59E.9080404@v.loewis.de> >> Are there others that can provide a Windows buildbot? It would probably >> be good to have two -- and a WinXP one would be good. > > How much work is it to set one up, and to maintain it? Maybe I can offer an XP VMWare image. Setting it up essentially requires putting all the software into place; see the wiki. Maintaining it requires attention in case it suddenly hangs, which it does more often on Windows than it does on Unix. In particular, when a process fails to terminate, subsequent builds may fail to remove or modify files, and then it breaks completely until the process is killed. A weekly reboot would likely fix the majority of the maintenance problems. Regards, Martin From mhammond at skippinet.com.au Thu May 24 00:53:06 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Thu, 24 May 2007 08:53:06 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> Message-ID: <031801c79d8d$2107b010$1f0a0a0a@enfoldsystems.local> > Very well, leaving linux aside, I don't see why this: > /win32mount/trunk/PCbuild/ > /x64mount/trunk/PCbuild/ > > Is any different from > /winmount/trunk/PCBuild/win32 > /winmount/trunk/PCBuild/x64 In the former case, assuming python is running from the 'trunk' directory, all architectures know how to locate the binary directory - it is always named 'PCBuild'. In the latter case, the directory name depends on the architecture. A number of existing tools already know about the 'PCBuild' directory, so these tools will not need to be taught anything new for x64. > I don't understand this extraordinary reluctance to add a > single extra directory.
I think that this thread has enumerated the concerns fairly well. You may not agree with them, but if you don't understand them it might be worth re-reading Martin's responses. Note that I also understand your concerns and goals - I certainly see where you are coming from - but we have 2 competing goals - "work with as many existing, external build tools as possible" versus "take this opportunity to create a new cross-compile-capable x64 build environment for Windows and let those external tools deal with the breakage". As I mentioned in a previous email, my personal opinion would be swayed by looking externally. Specifically, if we could determine the likelihood of external build processes (eg, mozilla) working unchanged if we stick with 'PCBuild', and if we could determine the cross-compilation strategy being adopted by the external libs we use (zlib, bsddb, etc), I think we could make an informed decision. > > You might be; I will be sad. It comes for a price, > I well understand the benefits, I use it all the time, but the price > still eludes me. how can a different name for the output folder > for a different platform be such a big problem? Please see above - its not a problem if you think of the PCBuild8 process as the "last step" in a build process - but often it is not. External tools that use Python (ie, things you try and build after the Python build has completed) are impacted. I understand that you might not use such tools, but they do exist. > > Why should there be versioning problems with python25.dll? Are there > > any past issues with incompatibilities with any > python2x.dll release? > Someone could replace the python25.dll that we ship with > their own patched > version, thereby gaining backdoor access to the software. The way > windows searches for old style dlls makes this easy. Using the SxS > signed loading scheme, you can protect yourself up to a point > from such attacks. I'm not sure I buy this. If someone has enough access to your machine to change pythonxx.dll, you are pretty screwed already. > > What about the registry? > I don't know about the registry, what is it used for? Please see PC/getpathp.c in Python source tree. However, I agree that there are a number of things we could do to help Python play nicely on Vista. It might help if we can enumerate the specific problems and potential solutions in a more formal way (eg, a Python bug) Cheers, Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3360 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070524/c25f579d/attachment.bin From trentm at activestate.com Thu May 24 00:53:53 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 15:53:53 -0700 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4654C40D.1090803@v.loewis.de> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> <46549812.9080400@activestate.com> <4654C40D.1090803@v.loewis.de> Message-ID: <4654C601.7070000@activestate.com> Martin v. L?wis wrote: >> I use this patch in ActivePython to get distutils to find the correct >> PCbuild dir (see attached). > > Would you like to commit this to 2.6? (or perhaps 2.5 even?) Sure, if others think it is a good thing. Will do tomorrow unless I hear a -1 before then. 
Trent -- Trent Mick trentm at activestate.com From mhammond at skippinet.com.au Thu May 24 01:12:40 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Thu, 24 May 2007 09:12:40 +1000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4654C601.7070000@activestate.com> Message-ID: <032601c79d8f$dc7c8260$1f0a0a0a@enfoldsystems.local> > Martin v. L?wis wrote: > >> I use this patch in ActivePython to get distutils to find > the correct > >> PCbuild dir (see attached). > > > > Would you like to commit this to 2.6? (or perhaps 2.5 even?) > > Sure, if others think it is a good thing. Will do tomorrow > unless I hear > a -1 before then. I'm not quite a '-1', but am a little confused about where this would leave us. To some extent, this would formalize PCBuild8 and VC6 directories. External tools would then slowly start growing support for these additional directories and the previous benefits of "PCBuild is the canonical location" appear to vanish. Further, I expect that such a patch would confuse any attempts to manually copy from PCBuild8 into PCBuild, for example (ie, some tools knowing about PCBuild8 while others assume PCBuild is likely to get confusing.) Trent: I assume you use the same source tree for multiple platforms and compilers, meaning that changing these "optional" build processes to copy from PCBuild8/VC6 into PCBuild would cause pain? If not, do you think that would be a reasonable solution? Cheers, Mark From trentm at activestate.com Thu May 24 01:29:20 2007 From: trentm at activestate.com (Trent Mick) Date: Wed, 23 May 2007 16:29:20 -0700 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <032601c79d8f$dc7c8260$1f0a0a0a@enfoldsystems.local> References: <032601c79d8f$dc7c8260$1f0a0a0a@enfoldsystems.local> Message-ID: <4654CE50.2020004@activestate.com> > I'm not quite a '-1', but am a little confused about where this would leave > us. To some extent, this would formalize PCBuild8 and VC6 directories. > External tools would then slowly start growing support for these additional > directories and the previous benefits of "PCBuild is the canonical location" > appear to vanish. Further, I expect that such a patch would confuse any > attempts to manually copy from PCBuild8 into PCBuild, for example (ie, some > tools knowing about PCBuild8 while others assume PCBuild is likely to get > confusing.) > > Trent: I assume you use the same source tree for multiple platforms and > compilers, meaning that changing these "optional" build processes to copy > from PCBuild8/VC6 into PCBuild would cause pain? If not, do you think that > would be a reasonable solution? Changing to have bits always in PCbuild would work for me -- i.e. I *don't* build for multiple compilers/platforms in the same tree. Perhaps that is a better solution -- in the long run, anyway. Having the "bits" always in one dir for whatever the configuration is more akin to the Unix-y configure/make system. Is this something that could work for Python 2.5? Or just 2.6? Long term/aside: Moving to a configure/make build system on Windows, as you proposed in your first email, would be interesting. With MSYS though, not cygwin (a la bsmedberg's new MozillaBuild stuff). I just wish there were an autoconf alternative that wasn't as painful as autoconf. I have a few attempts for my purposes that are written in Python (an obvious bootstrapping problem for building Python itself :). 
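To give a flavour of what I mean, here is a toy configure-style probe -- not one of my actual attempts, and the compiler command is deliberately a parameter rather than anything toolchain-specific:

    import os
    import subprocess
    import tempfile

    def have_header(header, compile_cmd=('gcc', '-c')):
        """Return True if '#include <header>' compiles with the given
        compiler command (e.g. ('cl', '/nologo', '/c') for MSVC)."""
        fd, src = tempfile.mkstemp(suffix='.c')
        try:
            f = os.fdopen(fd, 'w')
            f.write('#include <%s>\nint main(void) { return 0; }\n' % header)
            f.close()
            # Object-file droppings land in the temp dir; a real tool
            # would clean those up too.
            proc = subprocess.Popen(list(compile_cmd) + [src],
                                    cwd=tempfile.gettempdir(),
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            proc.communicate()
            return proc.returncode == 0
        finally:
            os.remove(src)

    # e.g. have_header('zlib.h') before deciding to build the zlib module

Checks like that are pleasant to write in Python; the unpleasant part is that you need a working Python before you can run them, which is the bootstrapping problem I mentioned.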
Trent -- Trent Mick trentm at activestate.com From blais at furius.ca Thu May 24 02:08:56 2007 From: blais at furius.ca (Martin Blais) Date: Wed, 23 May 2007 17:08:56 -0700 Subject: [Python-Dev] nodef Message-ID: <1b151690705231708v19ab6f5mb647007d6bcd1cea@mail.gmail.com> Hi I often have the need for a generic object to use as the default value for a function parameter, where 'None' is a valid value for the parameter. For example: _sentinel = object() def first(iterable, default=_sentinel): """Return the first element of the iterable, otherwise the default value (if specified). """ for elem in iterable: # thx to rhettinger for optim. return elem if default is _sentinel: raise StopIteration return default Here, 'default' could legally accept None, so I cannot use that as the default value, nor anything else as far as I'm concerned (I don't know what lives in the iterable, so why should I make assumptions?). Sometimes in the past I've create generic objects, declared a class, e.g.: class NoDef: pass def foo(default=NoDef): ... and lately I've even started using the names of builtin functions (it bothers me a little bit though, that I do that). I think Python needs a builtin for this very purpose. I propose 'nodef', a unique object whose sole purpose is to serve as a default value. It should be unique, in the same sense that 'None' is unique. Comments or alternative solutions? From barry at python.org Thu May 24 02:40:35 2007 From: barry at python.org (Barry Warsaw) Date: Wed, 23 May 2007 20:40:35 -0400 Subject: [Python-Dev] nodef In-Reply-To: <1b151690705231708v19ab6f5mb647007d6bcd1cea@mail.gmail.com> References: <1b151690705231708v19ab6f5mb647007d6bcd1cea@mail.gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 23, 2007, at 8:08 PM, Martin Blais wrote: > I often have the need for a generic object to use as the default value > for a function parameter, where 'None' is a valid value for the > parameter. I do the same thing for 'get' calls, where None is a legal return value. I often call this unique object 'missing', e.g. missing = object() if some_dict.get('foo', missing) is missing: # It's missing I like the way that reads. Still, I'm -1 on adding something like this to built-ins because any built-in could potentially be a value in a mapping or a default argument. Have some super-secret module global instantiated just for the purpose prevents that. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRlTfBHEjvBPtnXfVAQI3kQP6AzYa1VNUIgaqY4aQAW3dUX2sxicEikts NT6NIo/F676b1P7XYCrN7RA9JYWoyJmMhqrz7EN3SL2dkzd4mcY/XZF/zbY9ph8d W1SEWo00ImFitSRwngIlUmhlcZimpIQ0Of0hCdm9uK0Cpyk03FXbUelY1LvunJ2T z8tCQzd8hOw= =R8bc -----END PGP SIGNATURE----- From nnorwitz at gmail.com Thu May 24 03:25:59 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 23 May 2007 18:25:59 -0700 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <4654C280.1080802@v.loewis.de> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <4654C280.1080802@v.loewis.de> Message-ID: On 5/23/07, "Martin v. L?wis" wrote: > Trent Mick schrieb: > > http://www.python.org/dev/buildbot/all/x86%20W2k%20trunk > > > > Is my buildbot the only reliable Windows buildbot machine? > > Tim Peter's machine comes and goes, depending on whether he starts > the buildbot. Alan McIntyre's machien should be mostly he reliable, > but nobody really notices if it goes away. 
I ping buildbot owners from time to time if their bot is unavailable or otherwise having problems. I know I've talked to Alan recently, but in this case I think he contacted me. n From greg.ewing at canterbury.ac.nz Thu May 24 04:35:05 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 24 May 2007 14:35:05 +1200 Subject: [Python-Dev] Return value from socket.fileno() In-Reply-To: <4654172C.4010904@xhaus.com> References: <4654172C.4010904@xhaus.com> Message-ID: <4654F9D9.1030106@canterbury.ac.nz> Alan Kennedy wrote: > I am writing to seek information about the socket.fileno() method, and > opinions on how best to implement it on jython. I would hope that the new i/o system will make it unnecessary to use fileno() in portable code. It's really a unix-specific thing. > So the question I'm asking is: Does anyone know of existing cpython code > which relies on the return value from socket.fileno() being an integer? > Or code that would break if it were returned a socket instance instead > of an integer? If you only pass it to other things supported on that platform that use filenos, probably not. BTW, you can pass socket objects directly to select() anyway. I'd regard this as the current portable way to use sockets and select. The man page says that this works via the fileno() method, but it doesn't have to be implemented that way -- select() could be taught to recognise socket objects natively. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiem! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From guido at python.org Thu May 24 04:52:09 2007 From: guido at python.org (Guido van Rossum) Date: Wed, 23 May 2007 19:52:09 -0700 Subject: [Python-Dev] Return value from socket.fileno() In-Reply-To: <4654F9D9.1030106@canterbury.ac.nz> References: <4654172C.4010904@xhaus.com> <4654F9D9.1030106@canterbury.ac.nz> Message-ID: On 5/23/07, Greg Ewing wrote: > Alan Kennedy wrote: > > I am writing to seek information about the socket.fileno() method, and > > opinions on how best to implement it on jython. > > I would hope that the new i/o system will make it > unnecessary to use fileno() in portable code. It's > really a unix-specific thing. > > > So the question I'm asking is: Does anyone know of existing cpython code > > which relies on the return value from socket.fileno() being an integer? > > Or code that would break if it were returned a socket instance instead > > of an integer? > > If you only pass it to other things supported on > that platform that use filenos, probably not. > > BTW, you can pass socket objects directly to > select() anyway. I'd regard this as the > current portable way to use sockets and select. > The man page says that this works via the fileno() > method, but it doesn't have to be implemented that > way -- select() could be taught to recognise > socket objects natively. I want to emphasize this option. Passing the socket to select should be more portable than using fileno(). 
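Concretely, the two spellings side by side -- nothing below goes beyond the standard library, and the second form is the one that quietly assumes fileno() returns a small integer:

    import select
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)

    # Portable: hand select() the socket object itself.
    r, w, x = select.select([listener], [], [], 1.0)

    # Unix-flavored: hand select() the raw descriptor.  This is the
    # spelling that breaks down when fileno() cannot return a real
    # int (the Jython case under discussion).
    r, w, x = select.select([listener.fileno()], [], [], 1.0)

    listener.close()

Code that wants to run everywhere should prefer the first form, or better, avoid reaching for fileno() at all.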
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From greg.ewing at canterbury.ac.nz Thu May 24 05:15:52 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 24 May 2007 15:15:52 +1200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: <46550368.3050800@canterbury.ac.nz> Georg Brandl wrote: > Ahh, now the dime has fallen ;) (sorry, German phrase) In English it's "the penny has dropped", so it's not much different. :-) Although I thought dimes were an American thing, and Germans would be more likely to use a different coin. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiem! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From greg.ewing at canterbury.ac.nz Thu May 24 05:43:48 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 24 May 2007 15:43:48 +1200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46546B0A.7000502@acm.org> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> <46546B0A.7000502@acm.org> Message-ID: <465509F4.5060203@canterbury.ac.nz> Talin wrote: > As in the > above example, the use of backticks can be signal to the document > processor that the enclosed text should be examined for identifiers and > other Python syntax. Does this mean it's time for "pyST" -- Python-structured text?-) -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiem! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From greg.ewing at canterbury.ac.nz Thu May 24 06:17:55 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 24 May 2007 16:17:55 +1200 Subject: [Python-Dev] nodef In-Reply-To: <1b151690705231708v19ab6f5mb647007d6bcd1cea@mail.gmail.com> References: <1b151690705231708v19ab6f5mb647007d6bcd1cea@mail.gmail.com> Message-ID: <465511F3.8060400@canterbury.ac.nz> Martin Blais wrote: > I don't know > what lives in the iterable, so why should I make assumptions? > > I think Python needs a builtin for this very purpose. I propose > 'nodef', a unique object whose sole purpose is to serve as a default > value. If the aforementioned iterable can yield *anything*, then it might yield this 'nodef' value as well. For this reason, there *can't* exist any *standard* guaranteed-unambiguous sentinel value. Each use case needs its own, to ensure it's truly unambiguous in the context of that use case. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiem! | Christchurch, New Zealand | (I'm not a morning person.) 
| greg.ewing at canterbury.ac.nz +--------------------------------------+ From python at rcn.com Thu May 24 06:47:46 2007 From: python at rcn.com (Raymond Hettinger) Date: Wed, 23 May 2007 21:47:46 -0700 Subject: [Python-Dev] nodef References: <1b151690705231708v19ab6f5mb647007d6bcd1cea@mail.gmail.com> <465511F3.8060400@canterbury.ac.nz> Message-ID: <001401c79dbe$ad392920$f701a8c0@RaymondLaptop1> From: "Greg Ewing" > If the aforementioned iterable can yield *anything*, > then it might yield this 'nodef' value as well. > > For this reason, there *can't* exist any *standard* > guaranteed-unambiguous sentinel value. Each use > case needs its own, to ensure it's truly unambiguous > in the context of that use case. Right. That's why Barry and others write: missing = object() v = d.get(k, missing) That is the guaranteed way to get a unique object. Raymond From talin at acm.org Thu May 24 08:20:47 2007 From: talin at acm.org (Talin) Date: Wed, 23 May 2007 23:20:47 -0700 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <465509F4.5060203@canterbury.ac.nz> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> <46546B0A.7000502@acm.org> <465509F4.5060203@canterbury.ac.nz> Message-ID: <46552EBF.2000404@acm.org> Greg Ewing wrote: > Talin wrote: >> As in the above example, the use of backticks can be signal to the >> document processor that the enclosed text should be examined for >> identifiers and other Python syntax. > > Does this mean it's time for "pyST" -- Python-structured > text?-) I wasn't going to say it :) Now, at the risk of going even further out of the mainstream (actually, there's no risk, it's a dead certainty), if I had been clever enough to think that I could write a LaTeX translator, I probably would have made my target language Docbook or some other flavor of XML. Now, you might argue that XML is more cumbersome and harder to author than reST, and that is certainly a valid argument. On the other hand, there are a couple of interesting advantages to using XML: 1) You get an instant WYSIWYG preview capability by publishing a standard CSS stylesheet along with the docs. Anyone would be able to see what the output would look like merely by viewing it in a browser. While there would be some document transformations which would be not be previewable in CSS (such as breaking the document up into hyperlinked chapters), you would at least be able to see enough to be able to do a decent job of editing the text without having to install any special tools. And some of those more difficult transformations would be doable with a suitable XSTL stylesheet, which can be directly executed in most browsers. (As an example, I once wrote an XSLT stylesheet that converted OpenDocument XML into the equivalent HTML - this was part of my Firefox ODFReader plugin [http://www.alcoholicsunanimous.com/odfreader/], that allowed ODF documents to be directly viewed in the browser without having to launch an external helper application.) 2) There are a few WYSIWYG XML editors out there, which allow you to edit the styled text directly in an editor (although I don't know of any open source ones.) 3) The document processing tool could be very minimal, mostly assembled out of standard modules for processing XML. 
4) XML has a well-specified method of escaping into other (XML-based) languages, which is XML namespaces. So for those who want equations in their docs, they could simply insert a block of MathML inside their Docbook XML. Similarly, illustrations could be embedded using bitmap images or SVG as appropriate. 5) Having XML-based docs would make it easy to write other kinds of processors that operate on the docs in different ways, such as building a keyword index or doing various kinds of analysis. Now, this suggestion of using XML isn't really a serious one. But I think that the various advantages that I have listed ought to be considered when thinking about how the tool chain for python documentations should operate. I think that there is a big advantage to making the document processing tools simple and hosted entirely in Python. People who contribute to the docs are likely to know quite a bit about Python, but it is far from certain what else they might know. And tools written in Python are automatically able to run in diverse environments, which may not be the case for tools written in other languages. This means that tools that are in Python are more likely to be used, and further, they are more likely to be improved or specialized to the task by those who use them. In terms of authoring, the convenience of the markup language is only one factor; A bigger factor I think is having a short feedback cycle between edit and test, where 'test' means seeing what your written text would look like in the finished product. The quicker you can make that feedback loop, the more likely people will be to work on the docs. -- Talin From bjourne at gmail.com Thu May 24 10:55:06 2007 From: bjourne at gmail.com (=?ISO-8859-1?Q?BJ=F6rn_Lindqvist?=) Date: Thu, 24 May 2007 10:55:06 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <465509F4.5060203@canterbury.ac.nz> References: <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> <46546B0A.7000502@acm.org> <465509F4.5060203@canterbury.ac.nz> Message-ID: <740c3aec0705240155n49baccf8wdd87c6bb8c6ce86c@mail.gmail.com> On 5/24/07, Greg Ewing wrote: > Talin wrote: > > As in the > > above example, the use of backticks can be signal to the document > > processor that the enclosed text should be examined for identifiers and > > other Python syntax. > > Does this mean it's time for "pyST" -- Python-structured > text?-) Not before someone writes it. Georg Brandl's awesome ReST based system has the nice property that it actually exists and works. For a great number of reasons it is superior to the existing LaTeX based system, and I hope and think that they are strong enough to replace LaTeX with it. 
-- mvh Bj?rn From nick at craig-wood.com Thu May 24 11:46:08 2007 From: nick at craig-wood.com (Nick Craig-Wood) Date: Thu, 24 May 2007 10:46:08 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <18004.6634.254718.603879@montanaro.dyndns.org> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <18004.6634.254718.603879@montanaro.dyndns.org> Message-ID: <20070524094607.GA24417@craig-wood.com> On Wed, May 23, 2007 at 05:39:38AM -0500, skip at pobox.com wrote: > Nick> If you type "pydoc re" at the moment then it says in it > > Nick> MODULE DOCS > Nick> http://www.python.org/doc/current/lib/module-re.html > > Nick> which is pretty much useless to me when ssh-ed in to a linux box > Nick> half way around the world... > > I get quite a bit of information about re (I've never known /F to be a > documentation slouch). Yes it is certainly better than no docs. It doesn't for instance have any regexp info, and I can never remember all the special non matching brackets (eg (?:...) so I have to read for the full docs for that. > Only one bit of that information is a reference to the page in the > library reference manual. And if I happen to be ssh'd into a > machine halfway round the world through a Gnome terminal I can right > mouse over that URL and pop the page up in my default local browser. > If you set the PYTHONDOCS environment variable you can point it to a > local (or at least different) copy of the libref manual. I take your point. However the unix tradition is that everything is in the man pages. man pages have expanded over the years to include info pages and you *can* read the full python docs via info, it just isn't quite as convenient as pydoc. I think perl had the right idea with perldoc. You can read all the perl documentation whether it is in module documentation (like docstrings) or general documentation (like the latex docs under discussion). I'd like to see pydoc be a viewer for all the python documentation, not just a subset of it. > A flag could be added to pydoc to show that content instead, however > being html it probably would be difficult to read unless pumped > through lynx -dump or something similar. I'm assuming that we do reST all the python documentation which would make it easier. -- Nick Craig-Wood -- http://www.craig-wood.com/nick From nick at craig-wood.com Thu May 24 11:58:29 2007 From: nick at craig-wood.com (Nick Craig-Wood) Date: Thu, 24 May 2007 10:58:29 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46547E0A.6090409@ronadam.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> Message-ID: <20070524095829.GB24417@craig-wood.com> On Wed, May 23, 2007 at 12:46:50PM -0500, Ron Adam wrote: > Nick Craig-Wood wrote: > >So I'll be able to read the main docs for a module in a terminal > >without reaching for the web browser (or info)? That would be great! > > > >How would pydoc decide which bit of docs it is going to show? > > Pydoc currently gets topic info for some items by scraping the text from > document 'local' web pages. This is kind of messy for a couple of reasons. > - The documents may not be installed locally. > - It can be problematic locating the docs even if they are installed. > - They are not formatted well after they are retrieved. > > I think this is an area for improvement. And it would be improved by converting the docs to reST I imagine. 
> This feature is also limited to a small list where the word being searched > for is a keyword, or a very common topic reference, *and* they are not > likely to clash with other module, class, or function names. > > The introspection help parts of pydoc are completely separate from topic > help parts. So replacing this part can be done without much trouble. What > the best behavior is and how it should work would need to be discussed. > > Keep in mind doc strings are meant to be more of a quick reference to an > item, and Pydoc is the interface for that. I think that if reST was an acceptable form for the documentation, and it could be auto included in the main docs from docstrings then you would find more modules completely documented in __doc__. > >If I type "pydoc re" is it going to give me the rather sparse __doc__ > >from the re module or the nice reST docs? Or maybe both, one after > >the other? Or will I have to use a flag to dis-ambiguate? > > If retrieval from the full docs is desired, then it will probably need to > be disambiguated in some way or be a separate interface. > > help('re') # Quick reference on 're'. > helpdocs('re') # Full documentation for 're'. Actually if it gave both sets of docs quick, then long, one after the other that would suit me fine. -- Nick Craig-Wood -- http://www.craig-wood.com/nick From kristjan at ccpgames.com Thu May 24 12:58:05 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 24 May 2007 10:58:05 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4654CE50.2020004@activestate.com> References: <032601c79d8f$dc7c8260$1f0a0a0a@enfoldsystems.local> <4654CE50.2020004@activestate.com> Message-ID: <4E9372E6B2234D4F859320D896059A9508DCBEEBCD@exchis.ccp.ad.local> > -----Original Message----- > though, not cygwin (a la bsmedberg's new MozillaBuild stuff). I just > wish there were an autoconf alternative that wasn't as painful as > autoconf. I have a few attempts for my purposes that are written in > Python (an obvious bootstrapping problem for building Python itself :). > Only for the theorist. As you, we use build tools for our stackless branch written in python. There exist successfully built python versions back from the nineties, which can be considered "external" tools for all practical purposes. Building python with python is really nifty. Kristjan From g.brandl at gmx.net Thu May 24 15:05:38 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 24 May 2007 15:05:38 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> Message-ID: Nick Craig-Wood schrieb: >> > It is missing conversion of ``comment'' at the moment as I'm sure you >> > know... >> >> Sorry, what did you mean? > > ``comment'' produces smart quotes in latex if I remember correctly. > You probably want to convert it somehow because it looks a bit odd on > the web page as it stands. I'm not sure what the reST replacement > might be, but converting it just to "comment" would probably be OK. > Likewise with `comment' to 'comment'. > > For an example see the first paragraph here: > > http://pydoc.gbrandl.de/reference/index.html Okay, there's now support for SmartyPants in Subversion -- it converts these quotes as well as triple dashes to their pretty equivalents. 
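Roughly the kind of substitution involved, as a simplified sketch (not the actual SmartyPants code that was checked in):

    import re

    def prettify(text):
        # ``quoted'' -> curly double quotes (U+201C / U+201D)
        text = re.sub(r"``(.*?)''", u"\u201c\\1\u201d", text)
        # --- -> em dash (U+2014)
        text = text.replace(u"---", u"\u2014")
        return text

    print repr(prettify("``comment'' --- done"))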
cheers, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From g.brandl at gmx.net Thu May 24 17:08:38 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 24 May 2007 17:08:38 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <46546B0A.7000502@acm.org> References: <200705210923.47610.fdrake@acm.org> <20070521210410.GA14297@localhost.localdomain> <200705211932.13295.fdrake@acm.org> <874pm56quj.fsf@uwakimon.sk.tsukuba.ac.jp> <4652F73D.7060002@gmail.com> <1b151690705220746j49a7759bk28caa94ec4cfb36d@mail.gmail.com> <1b151690705221730h74b09b4em4933888738ee8972@mail.gmail.com> <46546B0A.7000502@acm.org> Message-ID: Talin schrieb: > Martin Blais wrote: >> On 5/22/07, Martin Blais wrote: >>> ReST works well only when there is little markup. Writing code >>> documentation generally requires a lot of markup, you want to make >>> variables, classes, functions, parameters, constants, etc.. (A better >>> avenue IMHO would be to augment docutils with some code to >>> automatically figure out the syntax of functions, parameters, classes, >>> etc., i.e., less markup, and if we do this in Python we may be able to >>> use introspection. This is a challenge, however, I don't know if it >>> can be done at all.) >> >> Just to follow-up on that idea: I don't think it would be very >> difficult to write a very small modification to docutils that >> interprets the default role with more "smarts", for example, you can >> all guess what the types of these are about: >> >> `class Foo` (this is a class Foo) >> `bar(a, b, c) -> str` (this is a function "bar" which returns a string) >> `name (attribute)` (this is an attribute) What's better here than :class:`Foo` or :attr:`name`? You wouldn't want to put an " (attribute)" after all references to it in your text, so this is just an alternative way to spell roles. >> ...so why couldn't the computer solve that problem for you? I'm sure >> we could make it happen. Essentially, what is missing from ReST is >> "less markup for documenting programs". By restricting the >> problem-set to Python programs, we can go a long way towards making >> much of this automatic, even without resorting to introspecting the >> source code that is being documented. > > I was going to suggest something similar. > > Ideally, any markup language ought to have a kind of "Huffman Coding" of > complexity - in other words, the markup symbols that are used the most > frequently are the ones that should be the shortest and easiest to type. > > Just as in real Huffman Coding, the popularity of a given element is > going to depend on context. This would imply that there should be > customizations of the markup language for different knowledge domains. > > While there are some benefits to having a 'standard' markup, any truly > universal markup is going to be much heavier and more cumbersome than > one that is specialized for the task. > > I would advocate a system in which the author inserts minimalistic > 'hints' into the text, and the document processor uses those hints along > with some automatic reasoning to determine the final markup. As in the > above example, the use of backticks can be signal to the document > processor that the enclosed text should be examined for identifiers and > other Python syntax. 
What I could propose is that we could abandon :class:`foo`, :meth:`foo` etc. and just use `foo`. There shouldn't be too many cases where this gets ambiguous crossreferencing. Variables would just use *var*, since they're not marked up speciall anyways. > I would also suggest that one test for evaluating the quality of markup > syntax is whether or not it can be learned by example - can a user > follow the pattern of some other part of the docs, without having to > learn the syntax in a formal way? I think he/she can, given a piece of document that contains most of the needed markup constructs. You'll pretty soon grok that reST uses indentation (and you'll be pleased with it if you like Python, which seems a reasonable assumption ;). You'll also get that :foo:`bar` is the syntax for semantic inline markup. Code examples are always introduced with a "::" (okay, the exact rules are a bit nifty, but very convenient if you know them). What else do you need to know? ".. function::" directives are quite easy to recognize. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From rrr at ronadam.com Thu May 24 19:43:18 2007 From: rrr at ronadam.com (Ron Adam) Date: Thu, 24 May 2007 12:43:18 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <20070524095829.GB24417@craig-wood.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> <20070524095829.GB24417@craig-wood.com> Message-ID: <4655CEB6.2050301@ronadam.com> Nick Craig-Wood wrote: > On Wed, May 23, 2007 at 12:46:50PM -0500, Ron Adam wrote: >> Nick Craig-Wood wrote: >>> So I'll be able to read the main docs for a module in a terminal >>> without reaching for the web browser (or info)? That would be great! >>> >>> How would pydoc decide which bit of docs it is going to show? >> Pydoc currently gets topic info for some items by scraping the text from >> document 'local' web pages. This is kind of messy for a couple of reasons. >> - The documents may not be installed locally. >> - It can be problematic locating the docs even if they are installed. >> - They are not formatted well after they are retrieved. >> >> I think this is an area for improvement. > > And it would be improved by converting the docs to reST I imagine. Yes, this will need a reST to html converter for displaying in the html browser. DocUtils provides that, but it's not part of the library. (?) And for console text output, is the unmodified reST suitable, or would it be desired to modify it in some way? Should a subset of the main documents be included with pydoc to avoid the documents not available messages if they are not installed? Or should the topics retrieval code be moved from pydoc to the main document tools so it's installed with the documents. Then that can be maintianed with the documents instead of being maintained with pydoc. Then pydoc will just looks for it, instead of looking for the html pages. >> This feature is also limited to a small list where the word being searched >> for is a keyword, or a very common topic reference, *and* they are not >> likely to clash with other module, class, or function names. >> >> The introspection help parts of pydoc are completely separate from topic >> help parts. 
So replacing this part can be done without much trouble. What >> the best behavior is and how it should work would need to be discussed. >> >> Keep in mind doc strings are meant to be more of a quick reference to an >> item, and Pydoc is the interface for that. > > I think that if reST was an acceptable form for the documentation, and > it could be auto included in the main docs from docstrings then you > would find more modules completely documented in __doc__. That would be fine for third party modules if they want to do that or if there is not much difference between the two. >>> If I type "pydoc re" is it going to give me the rather sparse __doc__ >> >from the re module or the nice reST docs? Or maybe both, one after >>> the other? Or will I have to use a flag to dis-ambiguate? >> If retrieval from the full docs is desired, then it will probably need to >> be disambiguated in some way or be a separate interface. >> >> help('re') # Quick reference on 're'. >> helpdocs('re') # Full documentation for 're'. > > Actually if it gave both sets of docs quick, then long, one after the > other that would suit me fine. That may work well for the full documentation, but the quick reference wouldn't be a short quick reference any more. I'm attempting to have a pydoc api call that gets a single item or sub-item and format it to a desired output so it can be included in other content. That's makes it possible for the full docs (not necessarily pythons) to embed pydoc output in it if it's desirable. This will need pydoc formatters for the target document type. I hope to include a reST output formatter for pydoc. The help() function is imported from pydoc by site.py when you start python. It may not be difficult to have it as a function that first tries pydoc to get a request, and if the original request is returned unchanged, tries to get information from the full documentation. There could be a way to select one or the other, (or both). But this feature doesn't need to be built into pydoc, or the full documentation. They just need to be able to work together so things like this are possible in an easy to write 4 or 5 line function. (give or take a few lines) So it looks like most of these issues are more a matter of how to organize the interfaces. It turns out that what I've done with pydoc, and what Georg is doing with the main documentation should work together quite nicely. Cheers, Ron From g.brandl at gmx.net Thu May 24 19:47:13 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 24 May 2007 19:47:13 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <4655CEB6.2050301@ronadam.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> <20070524095829.GB24417@craig-wood.com> <4655CEB6.2050301@ronadam.com> Message-ID: Ron Adam schrieb: > Nick Craig-Wood wrote: > > On Wed, May 23, 2007 at 12:46:50PM -0500, Ron Adam wrote: > >> Nick Craig-Wood wrote: > >>> So I'll be able to read the main docs for a module in a terminal > >>> without reaching for the web browser (or info)? That would be great! > >>> > >>> How would pydoc decide which bit of docs it is going to show? > >> Pydoc currently gets topic info for some items by scraping the text from > >> document 'local' web pages. This is kind of messy for a couple of reasons. > >> - The documents may not be installed locally. > >> - It can be problematic locating the docs even if they are installed. 
> >> - They are not formatted well after they are retrieved. > >> > >> I think this is an area for improvement. > > > > And it would be improved by converting the docs to reST I imagine. > > Yes, this will need a reST to html converter for displaying in the html > browser. DocUtils provides that, but it's not part of the library. (?) > > And for console text output, is the unmodified reST suitable, or would it > be desired to modify it in some way? A text writer for docutils should not be hard to write. You'd get something that looks like the reST, but stripped of markup that makes no sense when viewed on a terminal, such as :class:`xyz`. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From fuzzyman at voidspace.org.uk Thu May 24 21:28:50 2007 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 24 May 2007 20:28:50 +0100 Subject: [Python-Dev] The docs, reloaded [PEP?] In-Reply-To: References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> <20070524095829.GB24417@craig-wood.com> <4655CEB6.2050301@ronadam.com> Message-ID: <4655E772.8060604@voidspace.org.uk> This subject is generating a lot of discussion and [almost entirely] positive feedback. It would be a great shame to run out of steam. Does it need a PEP to see a chance of it getting accepted as the formal documentation system? (or a pronouncement that it will never happen...) Michael Foord Georg Brandl wrote: > Ron Adam schrieb: > >> Nick Craig-Wood wrote: >> > On Wed, May 23, 2007 at 12:46:50PM -0500, Ron Adam wrote: >> >> Nick Craig-Wood wrote: >> >>> So I'll be able to read the main docs for a module in a terminal >> >>> without reaching for the web browser (or info)? That would be great! >> >>> >> >>> How would pydoc decide which bit of docs it is going to show? >> >> Pydoc currently gets topic info for some items by scraping the text from >> >> document 'local' web pages. This is kind of messy for a couple of reasons. >> >> - The documents may not be installed locally. >> >> - It can be problematic locating the docs even if they are installed. >> >> - They are not formatted well after they are retrieved. >> >> >> >> I think this is an area for improvement. >> > >> > And it would be improved by converting the docs to reST I imagine. >> >> Yes, this will need a reST to html converter for displaying in the html >> browser. DocUtils provides that, but it's not part of the library. (?) >> >> And for console text output, is the unmodified reST suitable, or would it >> be desired to modify it in some way? >> > > A text writer for docutils should not be hard to write. You'd get something that > looks like the reST, but stripped of markup that makes no sense when viewed on > a terminal, such as :class:`xyz`. > > Georg > > From martin at v.loewis.de Thu May 24 23:09:17 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 May 2007 23:09:17 +0200 Subject: [Python-Dev] The docs, reloaded [PEP?] 
In-Reply-To: <4655E772.8060604@voidspace.org.uk> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> <20070524095829.GB24417@craig-wood.com> <4655CEB6.2050301@ronadam.com> <4655E772.8060604@voidspace.org.uk> Message-ID: <4655FEFD.6000805@v.loewis.de> Michael Foord schrieb: > This subject is generating a lot of discussion and [almost entirely] > positive feedback. It would be a great shame to run out of steam. > > Does it need a PEP to see a chance of it getting accepted as the formal > documentation system? (or a pronouncement that it will never happen...) No. First of all, it needs a dedicated developer (preferably, but not necessarily a committer) who indicates willingness to maintain that for the coming years and releases. It might be that Fred Drake's offer to maintain the documentation would be still valid after such a switch, but we should not assume so without explicit confirmation. It might be that this would be the time to pass one documentation maintenance to somebody else (and I seriously do not have any one particular in mind here). Then, I think a should be made where the documentation is converted. Again, a volunteer would be needed to create the branch, and then eventually merge it back to the trunk. It might be helpful, but isn't strictly necessary, to close all documentation patches before doing so, as they all break with the conversion. For that activity, multiple volunteers would be useful. I don't think a formal document needs to be written, unless there is a hint of disagreement within the community. In that case, a process PEP would be necessary. However, it is much more important that the documentation maintainer explicitly agrees than that nobody explicitly disagrees, or that a pronouncement is made - the pronouncement alone will *not* cause this change to be carried out. Regards, Martin From guido at python.org Fri May 25 02:10:12 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 24 May 2007 17:10:12 -0700 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: References: Message-ID: On 5/18/07, Guido van Rossum wrote: > While reviewing PEPs, I stumbled over PEP 335 ( Overloadable Boolean > Operators) by Greg Ewing. I am of two minds of this -- on the one > hand, it's been a long time without any working code or anything. OTOH > it might be quite useful to e.g. numpy folks. > > It is time to reject it due to lack of interest, or revive it! Last call for discussion! I'm tempted to reject this -- the ability to generate optimized code based on the shortcut semantics of and/or is pretty important to me. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From greg.ewing at canterbury.ac.nz Fri May 25 04:05:35 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 25 May 2007 14:05:35 +1200 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: References: Message-ID: <4656446F.8030802@canterbury.ac.nz> Guido van Rossum wrote: > Last call for discussion! I'm tempted to reject this -- the ability to > generate optimized code based on the shortcut semantics of and/or is > pretty important to me. Please don't be hasty. I've had to think about this issue a bit. The conclusion I've come to is that there may be a small loss in the theoretical amount of optimization opportunity available, but not much. 
Furthermore, if you take into account some other improvements that can be made (which I'll explain below) the result is actually *better* than what 2.5 currently generates. For example, Python 2.5 currently compiles if a and b: into JUMP_IF_FALSE L1 POP_TOP JUMP_IF_FALSE L1 POP_TOP JUMP_FORWARD L2 L1: 15 POP_TOP L2: Under my PEP, without any other changes, this would become LOGICAL_AND_1 L1 LOGICAL_AND_2 L1: JUMP_IF_FALSE L2 POP_TOP JUMP_FORWARD L3 L2: 15 POP_TOP L3: The fastest path through this involves executing one extra bytecode. However, since we're not using JUMP_IF_FALSE to do the short-circuiting any more, there's no need for it to leave its operand on the stack. So let's redefine it and change its name to POP_JUMP_IF_FALSE. This allows us to get rid of all the POP_TOPs, plus the jump at the end of the statement body. Now we have LOGICAL_AND_1 L1 LOGICAL_AND_2 L1: POP_JUMP_IF_FALSE L2 L2: The fastest path through this executes one *less* bytecode than in the current 2.5-generated code. Also, any path that ends up executing the body benefits from the lack of a jump at the end. The same benefits also result when the boolean expression is more complex, e.g. if a or b and c: becomes LOGICAL_OR_1 L1 LOGICAL_AND_1 L2 LOGICAL_AND_2 L2: LOGICAL_OR_2 L1: POP_JUMP_IF_FALSE L3 L3: which contains 3 fewer instructions overall than the corresponding 2.5-generated code. So I contend that optimization is not an argument for rejecting this PEP, and may even be one for accepting it. -- Greg From guido at python.org Fri May 25 04:53:40 2007 From: guido at python.org (Guido van Rossum) Date: Thu, 24 May 2007 19:53:40 -0700 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: <4656446F.8030802@canterbury.ac.nz> References: <4656446F.8030802@canterbury.ac.nz> Message-ID: On 5/24/07, Greg Ewing wrote: > Guido van Rossum wrote: > > > Last call for discussion! I'm tempted to reject this -- the ability to > > generate optimized code based on the shortcut semantics of and/or is > > pretty important to me. > > Please don't be hasty. I've had to think about this issue > a bit. > > The conclusion I've come to is that there may be a small loss > in the theoretical amount of optimization opportunity available, > but not much. Furthermore, if you take into account some other > improvements that can be made (which I'll explain below) the > result is actually *better* than what 2.5 currently generates. > > For example, Python 2.5 currently compiles > > if a and b: > > > into > > > JUMP_IF_FALSE L1 > POP_TOP > > JUMP_IF_FALSE L1 > POP_TOP > > JUMP_FORWARD L2 > L1: > 15 POP_TOP > L2: > > Under my PEP, without any other changes, this would become > > > LOGICAL_AND_1 L1 > > LOGICAL_AND_2 > L1: > JUMP_IF_FALSE L2 > POP_TOP > > JUMP_FORWARD L3 > L2: > 15 POP_TOP > L3: > > The fastest path through this involves executing one extra > bytecode. However, since we're not using JUMP_IF_FALSE to > do the short-circuiting any more, there's no need for it > to leave its operand on the stack. So let's redefine it and > change its name to POP_JUMP_IF_FALSE. This allows us to > get rid of all the POP_TOPs, plus the jump at the end of > the statement body. Now we have > > > LOGICAL_AND_1 L1 > > LOGICAL_AND_2 > L1: > POP_JUMP_IF_FALSE L2 > > L2: > > The fastest path through this executes one *less* bytecode > than in the current 2.5-generated code. Also, any path that > ends up executing the body benefits from the lack of a > jump at the end. 
> > The same benefits also result when the boolean expression is > more complex, e.g. > > if a or b and c: > > > becomes > > > LOGICAL_OR_1 L1 > > LOGICAL_AND_1 L2 > > LOGICAL_AND_2 > L2: > LOGICAL_OR_2 > L1: > POP_JUMP_IF_FALSE L3 > > L3: > > which contains 3 fewer instructions overall than the > corresponding 2.5-generated code. > > So I contend that optimization is not an argument for > rejecting this PEP, and may even be one for accepting > it. Do you have an implementation available to measure this? In most cases the cost is not in the number of bytecode instructions executed but in the total amount of work. Two cheap bytecodes might well be cheaper than one expensive one. However, I'm happy to keep your PEP open until you have code that we can measure. (However, adding additional optimizations elsewhere to make up for the loss wouldn't be fair -- we would have to compare with a 2.5 or trunk (2.6) interpreter with the same additional optimizations added.) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From g.brandl at gmx.net Fri May 25 11:38:57 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 25 May 2007 11:38:57 +0200 Subject: [Python-Dev] The docs, reloaded [PEP?] In-Reply-To: <4655FEFD.6000805@v.loewis.de> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> <20070524095829.GB24417@craig-wood.com> <4655CEB6.2050301@ronadam.com> <4655E772.8060604@voidspace.org.uk> <4655FEFD.6000805@v.loewis.de> Message-ID: Martin v. L?wis schrieb: > Michael Foord schrieb: >> This subject is generating a lot of discussion and [almost entirely] >> positive feedback. It would be a great shame to run out of steam. >> >> Does it need a PEP to see a chance of it getting accepted as the formal >> documentation system? (or a pronouncement that it will never happen...) > > No. First of all, it needs a dedicated developer (preferably, but not > necessarily a committer) who indicates willingness to maintain that > for the coming years and releases. > > It might be that Fred Drake's offer > to maintain the documentation would be still valid after such a switch, > but we should not assume so without explicit confirmation. It might > be that this would be the time to pass one documentation maintenance > to somebody else (and I seriously do not have any one particular > in mind here). Assuming that Fred goes into well-earned retirement from the doc maintainer position (private mail exchange hinted that way), and nobody more qualified steps up, I'd be available to take that post. (If someone else wants to take maintainership of the content, very good, I'd have to be maintainer of the build tools anyway.) I'd then try to form a doc maintaining team, just as the PEP editor team that was created recently, to deal with the (hopefully relatively large ;) ) amount of comments and edit requests. > Then, I think a should be made where the documentation is converted. > Again, a volunteer would be needed to create the branch, and then > eventually merge it back to the trunk. It might be helpful, but isn't > strictly necessary, to close all documentation patches before doing > so, as they all break with the conversion. For that activity, > multiple volunteers would be useful. I agree. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. 
Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From theller at ctypes.org Fri May 25 14:52:05 2007 From: theller at ctypes.org (Thomas Heller) Date: Fri, 25 May 2007 14:52:05 +0200 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <4654C280.1080802@v.loewis.de> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <4654C280.1080802@v.loewis.de> Message-ID: >> Are there others that can provide a Windows buildbot? It would probably >> be good to have two -- and a WinXP one would be good. > > It certainly would be good. Unfortunately, Windows users are not that > much engaged in the open source culture, so few of them volunteer > (plus it's more painful, with Windows not being a true multi-user > system). I'll try to setup a buildbot under WinXP. Whom do I contact to get HOST:PORT and PASSWORD ? Thanks, Thomas From nick at craig-wood.com Fri May 25 15:15:05 2007 From: nick at craig-wood.com (Nick Craig-Wood) Date: Fri, 25 May 2007 14:15:05 +0100 Subject: [Python-Dev] The docs, reloaded In-Reply-To: <4655CEB6.2050301@ronadam.com> References: <20070521094341.C13A414C525@irishsea.home.craig-wood.com> <20070523085300.E3B9014C525@irishsea.home.craig-wood.com> <46547E0A.6090409@ronadam.com> <20070524095829.GB24417@craig-wood.com> <4655CEB6.2050301@ronadam.com> Message-ID: <20070525131505.GA29814@craig-wood.com> On Thu, May 24, 2007 at 12:43:18PM -0500, Ron Adam wrote: > And for console text output, is the unmodified reST suitable, or would it > be desired to modify it in some way? Currently pydoc output looks like a man page under Unix. if it could look like that then that would be great. Otherwise raw reST is fine! > Should a subset of the main documents be included with pydoc to avoid the > documents not available messages if they are not installed? > > Or should the topics retrieval code be moved from pydoc to the main > document tools so it's installed with the documents. Then that can be > maintianed with the documents instead of being maintained with pydoc. Then > pydoc will just looks for it, instead of looking for the html pages. I think the latter proposal sounds like the correct one. In debian for instance, the python docs are a seperate package, and it would seem reasonable that you'd have to have that package installed to get the long docs. > > I think that if reST was an acceptable form for the documentation, and > > it could be auto included in the main docs from docstrings then you > > would find more modules completely documented in __doc__. > > That would be fine for third party modules if they want to do that or if > there is not much difference between the two. If you look at the documentation for subprocess for instance, you'll see that the docstring is pretty much the same as the library reference documentation which seems like needless duplication and opportunity for code/doc skew. Maybe one is auto generated from the other - I don't know! > > Actually if it gave both sets of docs quick, then long, one after the > > other that would suit me fine. > > That may work well for the full documentation, but the quick reference > wouldn't be a short quick reference any more. Well you could stop after reading the short bit! > I'm attempting to have a pydoc api call that gets a single item or sub-item > and format it to a desired output so it can be included in other content. 
> That's makes it possible for the full docs (not necessarily pythons) to > embed pydoc output in it if it's desirable. This will need pydoc > formatters for the target document type. I hope to include a reST output > formatter for pydoc. > > The help() function is imported from pydoc by site.py when you start > python. It may not be difficult to have it as a function that first tries > pydoc to get a request, and if the original request is returned unchanged, > tries to get information from the full documentation. There could be a way > to select one or the other, (or both). > > But this feature doesn't need to be built into pydoc, or the full > documentation. They just need to be able to work together so things like > this are possible in an easy to write 4 or 5 line function. (give or take a > few lines) > > So it looks like most of these issues are more a matter of how to organize > the interfaces. It turns out that what I've done with pydoc, and what > Georg is doing with the main documentation should work together quite > nicely. Sounds good! Nick -- Nick Craig-Wood -- http://www.craig-wood.com/nick From pje at telecommunity.com Fri May 25 17:54:50 2007 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 25 May 2007 11:54:50 -0400 Subject: [Python-Dev] [Python-3000] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: References: <4656446F.8030802@canterbury.ac.nz> Message-ID: <20070525155302.917F23A4061@sparrow.telecommunity.com> At 11:25 AM 5/25/2007 +0200, Neville Grech Neville Grech wrote: > >From a user's POV, I'm +1 on having overloadable boolean > functions. In many cases I had to resort to overload add or neg > instead of and & not, I foresee a lot of cases where the and > overload could be used to join objects which represent constraints. > Overloadable boolean operators could also be used to implement > other types of logic (eg: fuzzy logic). Constraining them to just > primitive binary operations in my view will be delimiting for a > myriad of use cases. > >Sure, in some cases, one could overload the neg operator instead of >the not but semantically they have different meanings. Actually, I think that most of the use cases for this PEP would be better served by being able to "quote" code, i.e. to create AST objects directly from Python syntax. Then, you can do anything you can do in a Python expression (including conditional expressions, generator expressions, yield expressions, lambdas, etc.) without having to introduce new special methods for any of that stuff. In fact, if new features are added to the language later, they automatically become available in the same way. From trentm at activestate.com Fri May 25 18:52:51 2007 From: trentm at activestate.com (Trent Mick) Date: Fri, 25 May 2007 09:52:51 -0700 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <4654C280.1080802@v.loewis.de> Message-ID: <46571463.90701@activestate.com> Thomas Heller wrote: >>> Are there others that can provide a Windows buildbot? It would probably >>> be good to have two -- and a WinXP one would be good. >> It certainly would be good. Unfortunately, Windows users are not that >> much engaged in the open source culture, so few of them volunteer >> (plus it's more painful, with Windows not being a true multi-user >> system). > > I'll try to setup a buildbot under WinXP. 
> Whom do I contact to get HOST:PORT and PASSWORD ? Martin, I believe. Trent -- Trent Mick trentm at activestate.com From jimjjewett at gmail.com Fri May 25 21:33:28 2007 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 25 May 2007 15:33:28 -0400 Subject: [Python-Dev] Wither PEP 335 (Overloadable Boolean Operators)? Message-ID: Greg, If you do update this PEP, please update the __not__ portion as well, at least regarding possible return values. It currently says that __not__ can return NotImplemented, which falls back to the current semantics. (Why? to override an explicit __not__? Then why not just put the current semantics on __object__, and override by calling that directly?) It does not yet say what will happen for objects that return something else outside of {True, False}, such as class AntiBool(object): def __not__(self): return self Is that OK, because "not not X" should now be spelled "bool(x)", and you haven't allowed the overriding of __bool__? (And, if so, how does that work Py3K?) -jJ From brett at python.org Sat May 26 02:06:54 2007 From: brett at python.org (Brett Cannon) Date: Fri, 25 May 2007 17:06:54 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief Message-ID: In my bcannon-objcap branch I am trying to check in a change that involves a soft symlink from Lib/controlled_importlib.py to ../importlib/controlled_importlib.py through ``ln -s ../controlled_importlib.py controlled_importlib.py`` while in the Lib directory. I have done this before in this branch so as to allow for easy importing of code from the svn import of importlib that the branch contains. But svn keeps rejecting the commit saying that Lib/controlled_importlib.py needs to be whitespace normalized. I have run reindent on both the external import in the branch and in the original sandbox copy and both are coming back clean. I even imported reindent manually and followed what Georg posted when the pre-commit hook was added and it still passes. Unfortunately the pre-commit hook does not specify what line a change was made on so I have no clue where it is failing (maybe this should be added?). Can somebody help me figure out what the hell is going on? I am waiting to find out it is something small and stupid, but at this point I am not seeing a solution to this. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070525/0ed8a93d/attachment.html From armin.ronacher at active-4.com Sat May 26 02:29:23 2007 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sat, 26 May 2007 00:29:23 +0000 (UTC) Subject: [Python-Dev] The docs, reloaded References: <20070523115834.7d5b1dd7@dennis-laptop> Message-ID: Hoi, Due to some server issues I had to take the web version down. But expect an updated version in a few days. Regards, Armin From nnorwitz at gmail.com Sat May 26 02:49:29 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 25 May 2007 17:49:29 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: On 5/25/07, Brett Cannon wrote: > In my bcannon-objcap branch I am trying to check in a change that involves a > soft symlink from Lib/controlled_importlib.py to > ../importlib/controlled_importlib.py through ``ln -s > ../controlled_importlib.py controlled_importlib.py`` while in the Lib > directory. 
I have done this before in this branch so as to allow for easy > importing of code from the svn import of importlib that the branch contains. I don't know that we've ever tested the commit hook with a link. Maybe there is some other problem. Have you tried running reindent on the actual file (not the symlink)? > Can somebody help me figure out what the hell is going on? I am waiting to > find out it is something small and stupid, but at this point I am not seeing > a solution to this. Can you check in a smaller part of the change (like leaving out the symlink)? That will at least allow you to make progress. You can send me the file if you want. I can try to look at it and see if there are any problems. n From darrinth at gmail.com Sat May 26 06:45:32 2007 From: darrinth at gmail.com (Darrin Thompson) Date: Sat, 26 May 2007 00:45:32 -0400 Subject: [Python-Dev] Build problems with sqlite on OSX Message-ID: First of all 1000 apologies if this is the wrong list. Please redirect me if necessary. I'm attempting to build python 2.5.1 fat binaries on OSX and statically link to a newer sqlite than what ships with OSX. (3.3.17). I'm getting "Bus Error" early when I run my app. If I turn on a lot of malloc debugging options and run under gdb I get this trace: (gdb) info threads * 1 process 18968 local thread 0x1003 0x900e41d1 in strtol_l () (gdb) bt #0 0x900e41d1 in strtol_l () #1 0x900160a5 in atoi () #2 0x9406fd80 in sqlite3InitCallback () #3 0x0381faf2 in sqlite3_exec (db=0x338d080, zSql=0x331f1e0 "SELECT name, rootpage, sql FROM 'main'.sqlite_master WHERE tbl_name='sqlite_sequence'", xCallback=0x9406fd00 , pArg=0xbfffde14, pzErrMsg=0x0) at ./src/legacy.c:93 #4 0x0384c769 in sqlite3VdbeExec (p=0x1945200) at ./src/vdbe.c:4090 #5 0x03816686 in sqlite3Step (p=0x1945200) at ./src/vdbeapi.c:236 #6 0x03816817 in sqlite3_step (pStmt=0x1945200) at ./src/vdbeapi.c:289 #7 0x0380b9aa in _sqlite_step_with_busyhandler (statement=0x1945200, connection=0x32a77a0) at /Users/pandora/build-toolchain/build/Python-2.5.1/Modules/_sqlite/util.c:33 #8 0x0380850d in cursor_executescript (self=0x32bd4d0, args=0x32a2d10) at /Users/pandora/build-toolchain/build/Python-2.5.1/Modules/_sqlite/cursor.c:788 #9 0x0020e6fc in PyObject_Call (func=0x329ecd8, arg=0x32a2d10, kw=0x0) at Objects/abstract.c:1860 #10 0x00292a36 in PyEval_CallObjectWithKeywords (func=0x329ecd8, arg=0x32a2d10, kw=0x0) at Python/ceval.c:3433 #11 0x0020e6cd in PyObject_CallObject (o=0x329ecd8, a=0x32a2d10) at Objects/abstract.c:1851 #12 0x03806e1c in connection_executescript (self=0x32a77a0, args=0x32a2d10, kwargs=0x0) at /Users/pandora/build-toolchain/build/Python-2.5.1/Modules/_sqlite/connection.c:1001 #13 0x002998ae in PyEval_EvalFrameEx (f=0x338c250, throwflag=0) at Python/ceval.c:3564 Can someone advise as to the correct configure arguments for sqlite or something else I might be missing? Thanks in advance. -- Darrin From ocean at m2.ccsnet.ne.jp Sat May 26 06:50:40 2007 From: ocean at m2.ccsnet.ne.jp (ocean) Date: Sat, 26 May 2007 13:50:40 +0900 Subject: [Python-Dev] python/trunk/Lib/test/test_urllib.py (for ftpwrapper) Message-ID: <004901c79f51$68e6ec00$0300a8c0@whiterabc2znlh> # Sorry, I posted to inapropreate mailing list. (Python-3000) http://mail.python.org/pipermail/python-checkins/2007-May/060507.html Hello. I'm using Windows2000, I tried some investigation for test_ftpwrapper. After I did this change, most errors were gone. 
Index: Lib/urllib.py =================================================================== --- Lib/urllib.py (revision 55584) +++ Lib/urllib.py (working copy) @@ -833,7 +833,7 @@ self.busy = 0 self.ftp = ftplib.FTP() self.ftp.connect(self.host, self.port, self.timeout) - self.ftp.login(self.user, self.passwd) +# self.ftp.login(self.user, self.passwd) for dir in self.dirs: self.ftp.cwd(dir) I don't know, but probably 'login' on Win2000 is problamatic. Remaining error is: File "e:\python-dev\trunk\lib\threading.py", line 460, in __bootstrap self.run() File "e:\python-dev\trunk\lib\threading.py", line 440, in run self.__target(*self.__args, **self.__kwargs) File "test_urllib.py", line 565, in server conn.recv(13) error: (10035, 'The socket operation could not complete without blocking') And after commented out conn.recv block in test_urllib.py, test passed fine. def server(evt): serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) serv.settimeout(3) serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) serv.bind(("", 9093)) serv.listen(5) try: conn, addr = serv.accept() conn.send("1 Hola mundo\n") """ cantdata = 0 while cantdata < 13: data = conn.recv(13-cantdata) cantdata += len(data) time.sleep(.3) """ conn.send("2 No more lines\n") conn.close() except socket.timeout: pass finally: serv.close() evt.set() From tjreedy at udel.edu Sat May 26 07:51:26 2007 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 26 May 2007 01:51:26 -0400 Subject: [Python-Dev] Build problems with sqlite on OSX References: Message-ID: "Darrin Thompson" wrote in message news:a2e649c70705252145u68735a40u236b35b422b085c7 at mail.gmail.com... | First of all 1000 apologies if this is the wrong list. Please redirect | me if necessary. Usage questions should usually be directed first to comp.lang.python / gmane.comp.python.general / python-list (all 3 are interconnected). Try that unless you get answer here fairly quickly. There are *many* more readers. From g.brandl at gmx.net Sat May 26 08:36:03 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 26 May 2007 08:36:03 +0200 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: Neal Norwitz schrieb: > On 5/25/07, Brett Cannon wrote: >> In my bcannon-objcap branch I am trying to check in a change that involves a >> soft symlink from Lib/controlled_importlib.py to >> ../importlib/controlled_importlib.py through ``ln -s >> ../controlled_importlib.py controlled_importlib.py`` while in the Lib >> directory. I have done this before in this branch so as to allow for easy >> importing of code from the svn import of importlib that the branch contains. > > I don't know that we've ever tested the commit hook with a link. > Maybe there is some other problem. The cause: For symlinks, SVN saves a file containing "link /target" and sets the "svn:special" property. Since the special file doesn't end with a newline, reindent adds that, and boom. The solution: add if fs.node_prop(txn_root, path, 'svn:special') == '*': continue in the commit hook's for loop. cheers, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. 
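For context, a rough sketch of where that svn:special test would sit in the hook's loop; `fs`, `txn_root` and `changed_paths` are assumed to already be set up by the real hook script, and the only Subversion binding call used is the one quoted above:

    for path in changed_paths:
        if not path.endswith('.py'):
            continue
        # Symlinks are stored as tiny "link <target>" files carrying the
        # svn:special property; reindent would add the "missing" final
        # newline and reject them, so skip them before normalizing.
        if fs.node_prop(txn_root, path, 'svn:special') == '*':
            continue
        # ... existing whitespace-normalization check of `path` goes here ...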
From kristjan at ccpgames.com Sat May 26 14:01:10 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 26 May 2007 12:01:10 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4654C15F.2040906@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4654C15F.2040906@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B403C@exchis.ccp.ad.local> > -----Original Message----- > From: "Martin v. L?wis" [mailto:martin at v.loewis.de] > > It doesn't need to, and reluctance is not wrt. to the proposed new > layout, but wrt. changing the current one. Tons of infrastructure > depends on the files having exactly the names that they have now, > and being located in exactly the locations where they are currently > located. Any change to that, whatever minor, will cause problems > to some people. Just to be absolutely clear: You are talking about the build environment, right? Because I am not proposing to change any layout of the installed Python (wherever that may be :) I am baffled about why the build environment's layout matters, but once an .msi install can place the binaries in any old place it wants. The build structure doesn't have to reflect the final installed structure at all. Kristj?n From kristjan at ccpgames.com Sat May 26 14:20:59 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 26 May 2007 12:20:59 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> > -----Original Message----- > From: Alexey Borzenkov [mailto:snaury at gmail.com] > Sent: Wednesday, May 23, 2007 20:36 > To: Kristj?n Valur J?nsson > Cc: Martin v. L?wis; Mark Hammond; distutils-sig at python.org; python- > dev at python.org > Subject: Re: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows > > On 5/23/07, Kristj?n Valur J?nsson wrote: > > > > Install in the ProgramFiles folder. > > > Only over my dead body. *This* is silly. > > Bill doesn't think so. And he gets to decide. I mean we do want > > to play nice, don't we? Nothing installs itself in the root anymore, > > not since windows 3.1 > > Maybe installing in the root is not good, but installing to "Program > Files" is just asking for trouble. All sorts of development tools > might suddenly break because of that space in the middle of the path > and requirement to use quotes around it. I thus usually install things > to :\Programs. I'm not sure if any packages/programs will break > because of that space, but what if some will? Development tools used on windows already have to cope with this. Spaces are not going away, so why not bite the bullet and deal with them? Moving forward sometimes means crossing rivers. 
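(A small aside to illustrate the point about spaces, with a made-up install path: tools that build their command lines as argument lists, for instance via subprocess, never have to hand-quote the space in "Program Files".)

    import subprocess

    # Hypothetical install location, used only for illustration.
    python_exe = r"C:\Program Files\Python25\python.exe"

    # Passing a list lets subprocess/CreateProcess quote the space for us.
    subprocess.call([python_exe, "-c", "print 'spaces are fine'"])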
As for Vista issues, I'll gather more data before making any more claims, but I think that it is important that we play by the rules here. Just imagine a school teacher who in good faith wants to introduce his pupils to the wonderful programming language of Python, but when he installs it, all kinds of scary looking warnings drive him off. Vista is, like it or not, going to be very prevalent. If we want Python to be easily accessible to the masses, we mustn't take an elitist attitude or else risk scaring people off. Finally, to add a (mis)quote from Mr. Gorbachev: "Wer zu spät kommt, den bestraft das Leben" Kristján
From kristjan at ccpgames.com Sat May 26 15:31:09 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 26 May 2007 13:31:09 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4654C15F.2040906@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4654C15F.2040906@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B405A@exchis.ccp.ad.local> > -----Original Message----- > > I personally think that if hostile users can replace DLLs on your > system, you have bigger problems than SxS installation of > pythonxy.dll, > but perhaps that's just me. An end user application on an end-user's machine is always vulnerable to reverse engineering. But it helps to make it more difficult. The old-style DLL loader is a particular vulnerability which makes it easy to patch into an application at load time. > >> What about the registry? > > I don't know about the registry, what is it used for? > > For two things, with different importance to different users: > 1. File extensions are registered there, e.g. .py and .pyc. > With two binaries installed, they will stomp over each other's > file associations; only one of them can win. Sure. No argument about this. But as with the explorer and other apps, it is perfectly possible to manually start one or the other; autoclicking on .py files isn't the only option. > 2. Python installs keys under > HK{LM|CU}\Software\Python\PythonCore\, namely > InstallPath > InstallGroup > PythonPath > Documentation > Modules > Funnily enough, Bill has thought of this too. See http://support.microsoft.com/kb/305097/ for info. Co-existence of 32 and 64 bit apps is supported. > > The two versions of MSIE actually *are* a big problem, that's > why MS only runs the 32-bit IE, even on Win64 (otherwise, ActiveX > controls downloaded from the net wouldn't work). > > Also, while they are both shipped, you can't run the two versions of > explorer.exe simultaneously (without trickery), so it's far from > simple. The two versions aren't the problem, it is the backward support for 32 bit active thingies that are, as you point out. There is confusion here: Internet Explorer shipped in both versions. The 32 bit version was the default for the above reason. But explorer.exe (which I was talking about) also had two versions. The 64 bit version ran by default. You may recall that before Tortoise shipped a 64 bit version, one had to kill explorer.exe and restart it (explorer32.exe IIRC) to get Tortoise to work.
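As an aside, the per-version registry keys quoted above can be read from Python with the standard _winreg module. The version string and the HKCU-then-HKLM lookup order in this sketch are illustrative choices, not something specified in the thread.

import _winreg

def python_install_path(version="2.5"):
    """Return the registered InstallPath for a Python version, or None."""
    subkey = r"Software\Python\PythonCore\%s\InstallPath" % version
    # Per-user hive first, then per-machine; the order is just a choice here.
    for root in (_winreg.HKEY_CURRENT_USER, _winreg.HKEY_LOCAL_MACHINE):
        try:
            return _winreg.QueryValue(root, subkey)
        except WindowsError:
            continue
    return None

print python_install_path("2.5")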
Supporting both kinds (country and western) on the same machine might be helpful to people for this very reason. A lot of legacy modules are only avaible in 32 bit mode. But people may want to do contemporary development using the new 64 bit mode. Cheers, Kristj?n From josepharmbruster at gmail.com Sat May 26 17:50:01 2007 From: josepharmbruster at gmail.com (Joseph Armbruster) Date: Sat, 26 May 2007 11:50:01 -0400 Subject: [Python-Dev] vs2005 Project Patch and Configuration Inquiry Message-ID: <46585729.2030305@gmail.com> All, As per the removal of the rgbimgmodule due to deprecation in the trunk: http://svn.python.org/projects/python/trunk ---Begin SVN log--- r55458 | brett.cannon | 2007-05-20 03:09:50 -0400 (Sun, 20 May 2007) | 2 lines Remove the rgbimg module. It has been deprecated since Python 2.5. ---End SVN log--- The PCbuild8 solution needs to be corrected. A patch is attached. In addition, I noticed that under C++/Advanced Properties, all the configurations appear to be set to "Compile as C++ Code" with the /TP argument. Should these be set to "Compile as C Code" with the /TC argument? Joseph Armbruster -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: vsproj.patch Url: http://mail.python.org/pipermail/python-dev/attachments/20070526/39e5c046/attachment.asc From alexandre at peadrop.com Sat May 26 18:38:02 2007 From: alexandre at peadrop.com (Alexandre Vassalotti) Date: Sat, 26 May 2007 12:38:02 -0400 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: On 5/19/07, Georg Brandl wrote: > over the last few weeks I've hacked on a new approach to Python's documentation. > As Python already has an excellent documentation framework, the docutils, with a > readable yet extendable markup format, reST, I thought that it should be > possible to use those instead of the current LaTeX->latex2html toolchain. > > For the impatient: the result can be seen at . > [SNIP] > Waiting for comments! Here a small suggestion, move the sidebar to the right. Moving it to the right makes it much less intrusive. See that by yourself: http://peadrop.com/files/pydoc-sidebar-right.png div.body { background-color:white; margin:0pt 190pt 0pt 0px; } div.sidebar { float:right; margin-left:-100%; width:230px; } Keep up the great work, -- Alexandre From tonynelson at georgeanelson.com Sat May 26 19:54:55 2007 From: tonynelson at georgeanelson.com (Tony Nelson) Date: Sat, 26 May 2007 13:54:55 -0400 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> Message-ID: At 12:20 PM +0000 5/26/07, Kristj?n Valur J?nsson wrote: >> -----Original Message----- >> From: Alexey Borzenkov [mailto:snaury at gmail.com] >> Sent: Wednesday, May 23, 2007 20:36 >> To: Kristj?n Valur J?nsson >> Cc: Martin v. L?wis; Mark Hammond; distutils-sig at python.org; python- >> dev at python.org >> Subject: Re: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows >> >> On 5/23/07, Kristj?n Valur J?nsson wrote: >> > > > Install in the ProgramFiles folder. 
>> > > Only over my dead body. *This* is silly. >> > Bill doesn't think so. And he gets to decide. I mean we do want >> > to play nice, don't we? Nothing installs itself in the root anymore, >> > not since windows 3.1 >> >> Maybe installing in the root is not good, but installing to "Program >> Files" is just asking for trouble. All sorts of development tools >> might suddenly break because of that space in the middle of the path >> and requirement to use quotes around it. I thus usually install things >> to :\Programs. I'm not sure if any packages/programs will break >> because of that space, but what if some will? > >Development tools used on windows already have to cope with this. >Spaces are not going away, so why not bite the bullet and deal >with them? Moving forward sometimes means crossing rivers. ... Microsoft's command line cannot cope with two pathnames that must be quoted, so if the command path itself must be quoted, then no argument to the command can be quoted. There are tricky hacks that can work around this mind-boggling stupidity, but life is simpler if Python itself doesn't use up the one quoted pathname. I don't know if Microsoft has had the good sense to fix this in Vista (which I probably will never use, since an alternative exists), but they didn't in XP. -- ____________________________________________________________________ TonyN.:' ' From kristjan at ccpgames.com Sat May 26 19:57:17 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 26 May 2007 17:57:17 +0000 Subject: [Python-Dev] vs2005 Project Patch and Configuration Inquiry In-Reply-To: <46585729.2030305@gmail.com> References: <46585729.2030305@gmail.com> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org > The PCbuild8 solution needs to be corrected. A patch is attached. > Thanks, I'll apply it. > In addition, I noticed that under C++/Advanced Properties, all the > configurations appear to be set to "Compile as C++ Code" with the /TP > argument. Should these be set to "Compile as C Code" with the /TC > argument? Interesting. I hadn't noticed. I investigated, and this is the default value for all projects. However, if you click a single .c file and check its properties, you will find that it gets the /TC flag in its advanced settings. So each file will be correctly compiled. (you can confirm this by checking the command line). Removing the /TP flag from the project settings also results in the disappearance of the per-file /TC setting. Very curious. In end effect, the C files are compiled as such and there is no need for panic. Cheers, Kristj?n From josepharmbruster at gmail.com Sat May 26 21:13:58 2007 From: josepharmbruster at gmail.com (Joseph Armbruster) Date: Sat, 26 May 2007 15:13:58 -0400 Subject: [Python-Dev] vs2005 Project Patch and Configuration Inquiry In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> Message-ID: <465886F6.4070401@gmail.com> Kristj?n, I started to investigate the WINVER warnings that are scattered throughout the VS 2005 build. This patch eliminates them but I may have overlooked the intentions of the #include ordering. If this invalid, please let me know. Patch attached. 
Joseph Armbruster Kristj?n Valur J?nsson wrote: > >> -----Original Message----- >> From: python-dev-bounces+kristjan=ccpgames.com at python.org >> The PCbuild8 solution needs to be corrected. A patch is attached. >> > Thanks, I'll apply it. > >> In addition, I noticed that under C++/Advanced Properties, all the >> configurations appear to be set to "Compile as C++ Code" with the /TP >> argument. Should these be set to "Compile as C Code" with the /TC >> argument? > > Interesting. I hadn't noticed. I investigated, and this is the default > value for all projects. However, if you click a single .c file and > check its properties, you will find that it gets the /TC flag in its > advanced settings. So each file will be correctly compiled. (you can > confirm this by checking the command line). Removing the /TP flag from > the project settings also results in the disappearance of the per-file > /TC setting. > Very curious. In end effect, the C files are compiled as such and there > is no need for panic. > > Cheers, > Kristj?n > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: warnings.patch Url: http://mail.python.org/pipermail/python-dev/attachments/20070526/f967d431/attachment-0001.asc From brett at python.org Sat May 26 21:22:05 2007 From: brett at python.org (Brett Cannon) Date: Sat, 26 May 2007 12:22:05 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: On 5/25/07, Neal Norwitz wrote: > > On 5/25/07, Brett Cannon wrote: > > In my bcannon-objcap branch I am trying to check in a change that > involves a > > soft symlink from Lib/controlled_importlib.py to > > ../importlib/controlled_importlib.py through ``ln -s > > ../controlled_importlib.py controlled_importlib.py`` while in the Lib > > directory. I have done this before in this branch so as to allow for > easy > > importing of code from the svn import of importlib that the branch > contains. > > I don't know that we've ever tested the commit hook with a link. > Maybe there is some other problem. > > Have you tried running reindent on the actual file (not the symlink)? Yes. > Can somebody help me figure out what the hell is going on? I am waiting > to > > find out it is something small and stupid, but at this point I am not > seeing > > a solution to this. > > Can you check in a smaller part of the change (like leaving out the > symlink)? That will at least allow you to make progress. Already did that. You can send me the file if you want. I can try to look at it and see > if there are any problems. Looks like Georg knows the issue. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070526/278dc727/attachment.htm From brett at python.org Sat May 26 21:23:41 2007 From: brett at python.org (Brett Cannon) Date: Sat, 26 May 2007 12:23:41 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: On 5/25/07, Georg Brandl wrote: > > Neal Norwitz schrieb: > > On 5/25/07, Brett Cannon wrote: > >> In my bcannon-objcap branch I am trying to check in a change that > involves a > >> soft symlink from Lib/controlled_importlib.py to > >> ../importlib/controlled_importlib.py through ``ln -s > >> ../controlled_importlib.py controlled_importlib.py`` while in the Lib > >> directory. 
I have done this before in this branch so as to allow for > easy > >> importing of code from the svn import of importlib that the branch > contains. > > > > I don't know that we've ever tested the commit hook with a link. > > Maybe there is some other problem. > > The cause: For symlinks, SVN saves a file containing "link /target" and > sets > the "svn:special" property. Since the special file doesn't end with a > newline, > reindent adds that, and boom. > > The solution: add > if fs.node_prop(txn_root, path, 'svn:special') == '*': continue > > in the commit hook's for loop. Great! So can someone do this? I don't know where the svn hook code is stored, let alone whether I have access to commit a change. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070526/3f7d62af/attachment.html From josepharmbruster at gmail.com Sat May 26 21:31:46 2007 From: josepharmbruster at gmail.com (Joseph Armbruster) Date: Sat, 26 May 2007 15:31:46 -0400 Subject: [Python-Dev] Minor ConfigParser Change In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> Message-ID: <46588B22.3090808@gmail.com> Kristj?n, While we are on the topic of minor changes... :-) I noticed that one of the parts of ConfigParser was not using "for line in fp" style of readline-ing :-) So, this will reduce the SLOC by 3 lines and improve readability. However, I did a quick grep and this type of practice appears in several other places. There is a possibility of good savings in this department. If you think this is worthwhile, I can create one large patch for them all. I sure hope I am not missing something fundamental with this one... Joseph Armbruster -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: configparser.patch Url: http://mail.python.org/pipermail/python-dev/attachments/20070526/e58ac9cd/attachment.pot From nnorwitz at gmail.com Sat May 26 21:41:11 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 26 May 2007 12:41:11 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: On 5/26/07, Brett Cannon wrote: > > > On 5/25/07, Georg Brandl wrote: > > Neal Norwitz schrieb: > > > On 5/25/07, Brett Cannon wrote: > > >> In my bcannon-objcap branch I am trying to check in a change that > involves a > > >> soft symlink from Lib/controlled_importlib.py to > > >> ../importlib/controlled_importlib.py through ``ln -s > > >> ../controlled_importlib.py controlled_importlib.py`` while in the Lib > > >> directory. I have done this before in this branch so as to allow for > easy > > >> importing of code from the svn import of importlib that the branch > contains. > > > > > > I don't know that we've ever tested the commit hook with a link. > > > Maybe there is some other problem. > > > > The cause: For symlinks, SVN saves a file containing "link /target" and > sets > > the "svn:special" property. Since the special file doesn't end with a > newline, > > reindent adds that, and boom. > > > > The solution: add > > if fs.node_prop(txn_root, path, 'svn:special') == '*': continue > > > > in the commit hook's for loop. > > > Great! So can someone do this? I don't know where the svn hook code is > stored, let alone whether I have access to commit a change. I made the change Georg suggested, give it a try. 
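For the ConfigParser cleanup Joseph describes a little further up, the attached patch is not reproduced here; the sketch below only shows the general shape of a readline-loop to for-line-in-fp rewrite on an invented example.

# Not the attached patch -- just the general shape of the rewrite on a
# made-up example: an explicit readline() loop versus iterating the file.

def read_stripped_old(fp):
    lines = []
    while True:
        line = fp.readline()
        if not line:
            break
        lines.append(line.rstrip("\n"))
    return lines

def read_stripped_new(fp):
    # File objects are iterable, so the bookkeeping disappears.
    return [line.rstrip("\n") for line in fp]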
n From g.brandl at gmx.net Sat May 26 21:44:11 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 26 May 2007 21:44:11 +0200 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: Hi, We managed to get an up to date version of the web version of the docs running on the server. The address is still the same (http://pydoc.gbrandl.de:3000) and it's also still running on top of wsgiref. Changes so far: * comments: each page that is generated from an rst file can have some comments attached to it. Commenting doesn't require registration at the moment. * antispam with optional reverse captcha (captcha for bots, a hidden input field named "homepage" which bots hopefully fill out, dumb as they are) and a regular expression filter rules based on MoinMoin's BadContent file. * administration panel for moderating comments. You can find the admin panel at http://pydoc.gbrandl.de:3000/admin/ -- login credentials are testuser:password) * feeds for comments on a page or the last n comments on the whole site. * source view is text only (again). What still works: * intelligent error pages: if a page does not exist the URL path is used to conduct a fuzzy keyword search (see below). * fuzzy keyword search: "os.path.exists" jumps to the entry, "os.paht.exists" shows some possibilities. What needs to be implemented: * full text search * proposing documentation patches Note that the comment area is really, really dark, that's intentional. This is meant to visually separate comments from the official docs, but if the constrast is deemed to unsettling, another way can be found. Also, we're experimenting with alternate stylesheets, e.g. placing the sidebar on the right of the main text, or a "traditional" style for those liking the original docs' style. In any case, we're waiting for your input! cheers, Georg and Armin -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From josepharmbruster at gmail.com Sat May 26 21:48:40 2007 From: josepharmbruster at gmail.com (Joseph Armbruster) Date: Sat, 26 May 2007 15:48:40 -0400 Subject: [Python-Dev] Minor ConfigParser Change In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> Message-ID: <46588F18.1020509@gmail.com> Kristj?n, Here is a part of the patch that I was referring to. Something to that effect. Joseph Armbruster -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: forline.patch Url: http://mail.python.org/pipermail/python-dev/attachments/20070526/dc4af0d9/attachment-0001.asc From kristjan at ccpgames.com Sat May 26 22:13:26 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Sat, 26 May 2007 20:13:26 +0000 Subject: [Python-Dev] Minor ConfigParser Change In-Reply-To: <46588B22.3090808@gmail.com> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> <46588B22.3090808@gmail.com> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B40EB@exchis.ccp.ad.local> > -----Original Message----- > From: Joseph Armbruster [mailto:josepharmbruster at gmail.com] > > I noticed that one of the parts of ConfigParser was not using "for line > in fp" style of readline-ing I'm afraid my authority is limited to .c stuff having to do with pcbuild8, but I'm sure someone else here would like to comment. Kristj?n From josepharmbruster at gmail.com Sat May 26 22:16:17 2007 From: josepharmbruster at gmail.com (Joseph Armbruster) Date: Sat, 26 May 2007 16:16:17 -0400 Subject: [Python-Dev] Minor ConfigParser Change In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B40EB@exchis.ccp.ad.local> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> <46588B22.3090808@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40EB@exchis.ccp.ad.local> Message-ID: <46589591.6010104@gmail.com> Kristj?n, Whoops! My apologies. In any case, I created the bugs in the sf issue tracker for proper CM practice. Look under the patches section within sf.net. You should go ahead and close out the 2005 build ones then, once applied :-) Thank you again, Joseph Armbruster Kristj?n Valur J?nsson wrote: > >> -----Original Message----- >> From: Joseph Armbruster [mailto:josepharmbruster at gmail.com] >> >> I noticed that one of the parts of ConfigParser was not using "for line >> in fp" style of readline-ing > > I'm afraid my authority is limited to .c stuff having to do with pcbuild8, > but I'm sure someone else here would like to comment. > > Kristj?n > From rrr at ronadam.com Sat May 26 22:59:04 2007 From: rrr at ronadam.com (Ron Adam) Date: Sat, 26 May 2007 15:59:04 -0500 Subject: [Python-Dev] The docs, reloaded In-Reply-To: References: Message-ID: <46589F98.2020409@ronadam.com> Georg Brandl wrote: > Hi, > > We managed to get an up to date version of the web version of the docs running > on the server. The address is still the same (http://pydoc.gbrandl.de:3000) and > it's also still running on top of wsgiref. > > Changes so far: > * comments: each page that is generated from an rst file can have some > comments attached to it. Commenting doesn't require registration at the > moment. > * antispam with optional reverse captcha (captcha for bots, a hidden input > field named "homepage" which bots hopefully fill out, dumb as they are) and > a regular expression filter rules based on MoinMoin's BadContent file. > * administration panel for moderating comments. You can find the admin panel > at http://pydoc.gbrandl.de:3000/admin/ -- login credentials are > testuser:password) > * feeds for comments on a page or the last n comments on the whole site. > * source view is text only (again). > > What still works: > * intelligent error pages: if a page does not exist the URL path is used to > conduct a fuzzy keyword search (see below). > * fuzzy keyword search: "os.path.exists" jumps to the entry, "os.paht.exists" > shows some possibilities. 
> > What needs to be implemented: > * full text search > * proposing documentation patches > > Note that the comment area is really, really dark, that's intentional. This is > meant to visually separate comments from the official docs, but if the contrast > is deemed too unsettling, another way can be found. > > Also, we're experimenting with alternate stylesheets, e.g. placing the sidebar > on the right of the main text, or a "traditional" style for those liking the > original docs' style. > > In any case, we're waiting for your input! > > cheers, > Georg and Armin Yes, the comments are a bit too dark. The separation could be done better by moving it below the footer. Or better yet, duplicate the navigation bar between the document page and the comments.
------------------------------------------------
crumbs navigation
------------------------------------------------
side  |  main page
bar   |
      |
------------------------------------------------
crumbs navigation
------------------------------------------------
User Comment section
------------------------------------------------
copyright
------------------------------------------------
The user comment section could have its own sidebar if that's desirable. Also the Python version information needs to be on every page someplace. Ron
From astrand at cendio.se Sat May 26 23:22:05 2007 From: astrand at cendio.se (=?UTF-8?Q?Peter_=C3=85strand?=) Date: Sat, 26 May 2007 23:22:05 +0200 (CEST) Subject: [Python-Dev] Unable to commit Message-ID: I'm unable to commit tonight: Sending Doc/lib/libsubprocess.tex Sending Lib/subprocess.py Sending Lib/test/test_subprocess.py Transmitting file data ...svn: Commit failed (details follow): svn: 'pre-commit' hook failed with error output: Traceback (most recent call last): File "/data/repos/projects/hooks/checkwhitespace.py", line 50, in ? run_app(main) File "/usr/lib/python2.3/site-packages/svn/core.py", line 33, in run_app return apply(func, (pool,) + args, kw) File "/data/repos/projects/hooks/checkwhitespace.py", line 32, in main if fs.node_prop(txn_root, path, 'svn:special') == '*': continue TypeError: svn_fs_node_prop() takes exactly 4 arguments (3 given) Any ideas? (CC me, I'm not on the list.) Regards, --- Peter Åstrand ThinLinc Chief Developer Cendio AB http://www.cendio.se Wallenbergs gata 4 583 30 Linköping Phone: +46-13-21 46 00
From nnorwitz at gmail.com Sun May 27 00:08:43 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 26 May 2007 15:08:43 -0700 Subject: [Python-Dev] Unable to commit In-Reply-To: References: Message-ID: On 5/26/07, Peter Åstrand wrote: > > I'm unable to commit tonight: > > Sending Doc/lib/libsubprocess.tex > Sending Lib/subprocess.py > Sending Lib/test/test_subprocess.py > Transmitting file data ...svn: Commit failed (details follow): > svn: 'pre-commit' hook failed with error output: > Traceback (most recent call last): > File "/data/repos/projects/hooks/checkwhitespace.py", line 50, in ? > run_app(main) > File "/usr/lib/python2.3/site-packages/svn/core.py", line 33, in run_app > return apply(func, (pool,) + args, kw) > File "/data/repos/projects/hooks/checkwhitespace.py", line 32, in main > if fs.node_prop(txn_root, path, 'svn:special') == '*': continue > TypeError: svn_fs_node_prop() takes exactly 4 arguments (3 given) > > Any ideas? I tried to fix it so you can check in links, but that didn't work. I commented out the change, so you should be able to commit again.
n From g.brandl at gmx.net Sun May 27 01:02:31 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 27 May 2007 01:02:31 +0200 Subject: [Python-Dev] Unable to commit In-Reply-To: References: Message-ID: Neal Norwitz schrieb: > On 5/26/07, Peter ?strand wrote: >> >> I'm unable to commit tonight: >> >> Sending Doc/lib/libsubprocess.tex >> Sending Lib/subprocess.py >> Sending Lib/test/test_subprocess.py >> Transmitting file data ...svn: Commit failed (details follow): >> svn: 'pre-commit' hook failed with error output: >> Traceback (most recent call last): >> File "/data/repos/projects/hooks/checkwhitespace.py", line 50, in ? >> run_app(main) >> File "/usr/lib/python2.3/site-packages/svn/core.py", line 33, in run_app >> return apply(func, (pool,) + args, kw) >> File "/data/repos/projects/hooks/checkwhitespace.py", line 32, in main >> if fs.node_prop(txn_root, path, 'svn:special') == '*': continue >> TypeError: svn_fs_node_prop() takes exactly 4 arguments (3 given) >> >> Any ideas? > > I tried to fix it so you can check in links, but that didn't work. I > commented out the change, so you should be able to commit again. Odd... the call worked here (SVN 1.4.3). Which version is the server using? Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From brett at python.org Sun May 27 01:33:34 2007 From: brett at python.org (Brett Cannon) Date: Sat, 26 May 2007 16:33:34 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: On 5/26/07, Neal Norwitz wrote: > > On 5/26/07, Brett Cannon wrote: > > > > > > On 5/25/07, Georg Brandl wrote: > > > Neal Norwitz schrieb: > > > > On 5/25/07, Brett Cannon wrote: > > > >> In my bcannon-objcap branch I am trying to check in a change that > > involves a > > > >> soft symlink from Lib/controlled_importlib.py to > > > >> ../importlib/controlled_importlib.py through ``ln -s > > > >> ../controlled_importlib.py controlled_importlib.py`` while in the > Lib > > > >> directory. I have done this before in this branch so as to allow > for > > easy > > > >> importing of code from the svn import of importlib that the branch > > contains. > > > > > > > > I don't know that we've ever tested the commit hook with a link. > > > > Maybe there is some other problem. > > > > > > The cause: For symlinks, SVN saves a file containing "link /target" > and > > sets > > > the "svn:special" property. Since the special file doesn't end with a > > newline, > > > reindent adds that, and boom. > > > > > > The solution: add > > > if fs.node_prop(txn_root, path, 'svn:special') == '*': continue > > > > > > in the commit hook's for loop. > > > > > > Great! So can someone do this? I don't know where the svn hook code is > > stored, let alone whether I have access to commit a change. > > I made the change Georg suggested, give it a try. Still failing. I checked the added file for svn:special and it's set with an '*' just like my other symlink. And I double-checked the file by running ``python Tools/scripts/reindent.py -v Lib/controlled_importlib.py`` and it said nothing had changed. -Brett -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20070526/b8e2565b/attachment.html From nnorwitz at gmail.com Sun May 27 01:56:12 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 26 May 2007 16:56:12 -0700 Subject: [Python-Dev] Unable to commit In-Reply-To: References: Message-ID: On 5/26/07, Georg Brandl wrote: > > > > I tried to fix it so you can check in links, but that didn't work. I > > commented out the change, so you should be able to commit again. > > Odd... the call worked here (SVN 1.4.3). Which version is the server using? I don't know. After some Googling, I found that the call might need to be: if fs.node_prop(txn_root, path, SVN_PROP_MIME_TYPE, 'svn:special') == '*': continue That's a pretty blind guess based on the error message and other random code that I could find. The line above is commented out right now. If I remember and feel like it, I'll try to test this out. n From status at bugs.python.org Sun May 27 02:00:51 2007 From: status at bugs.python.org (Tracker) Date: Sun, 27 May 2007 00:00:51 +0000 (UTC) Subject: [Python-Dev] Summary of Tracker Issues Message-ID: <20070527000051.286CF78060@psf.upfronthosting.co.za> ACTIVITY SUMMARY (05/20/07 - 05/27/07) Tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 1649 open ( +0) / 8584 closed ( +0) / 10233 total ( +0) Average duration of open issues: 806 days. Median duration of open issues: 757 days. Open Issues Breakdown open 1649 ( +0) pending 0 ( +0) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070527/a6ada058/attachment.htm From orsenthil at users.sourceforge.net Sun May 27 13:23:56 2007 From: orsenthil at users.sourceforge.net (O.R.Senthil Kumaran) Date: Sun, 27 May 2007 16:53:56 +0530 Subject: [Python-Dev] python/trunk/Lib/test/test_urllib.py (for ftpwrapper) In-Reply-To: <004901c79f51$68e6ec00$0300a8c0@whiterabc2znlh> References: <004901c79f51$68e6ec00$0300a8c0@whiterabc2znlh> Message-ID: <20070527112356.GA9106@gmail.com> * ocean [2007-05-26 13:50:40]: > > http://mail.python.org/pipermail/python-checkins/2007-May/060507.html > > Hello. I'm using Windows2000, I tried some investigation for > test_ftpwrapper. > > After I did this change, most errors were gone. > > Index: Lib/urllib.py > =================================================================== > --- Lib/urllib.py (revision 55584) > +++ Lib/urllib.py (working copy) > @@ -833,7 +833,7 @@ > self.busy = 0 > self.ftp = ftplib.FTP() > self.ftp.connect(self.host, self.port, self.timeout) > - self.ftp.login(self.user, self.passwd) > +# self.ftp.login(self.user, self.passwd) > for dir in self.dirs: > self.ftp.cwd(dir) The init function in urllib is called under cases when ftp retrive has failed with one of ftp related errors. ( non-programmatic) Under those cases, my assumption is we might require the ftp.login to be present. Are you sure, this change does not affect anyother places? -- O.R.Senthil Kumaran http://uthcode.sarovar.org From facundo at taniquetil.com.ar Mon May 28 21:41:37 2007 From: facundo at taniquetil.com.ar (Facundo Batista) Date: Mon, 28 May 2007 19:41:37 +0000 (UTC) Subject: [Python-Dev] python/trunk/Lib/test/test_urllib.py (for ftpwrapper) References: <004901c79f51$68e6ec00$0300a8c0@whiterabc2znlh> Message-ID: ocean wrote: > After I did this change, most errors were gone. 
> > Index: Lib/urllib.py > =================================================================== > --- Lib/urllib.py (revision 55584) > +++ Lib/urllib.py (working copy) > @@ -833,7 +833,7 @@ > self.busy = 0 > self.ftp = ftplib.FTP() > self.ftp.connect(self.host, self.port, self.timeout) > - self.ftp.login(self.user, self.passwd) > +# self.ftp.login(self.user, self.passwd) > for dir in self.dirs: > self.ftp.cwd(dir) > > I don't know, but probably 'login' on Win2000 is problamatic. But you can *not* do this, you're trimming functionality from this class. The test that I left commented out in test_urllib.py just tries to simulate the first two FTP server answers, one saying "hello", and the other asking for the user. This is just to test ftpwrapper in its more basic first steps. The simulation goes ok in linux, but not in windows. But the problem is clearly in the tests, even if I can not find it, not in the urllib module. Regards, -- . Facundo . Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From martin at v.loewis.de Tue May 29 00:09:04 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 00:09:04 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B403C@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4654C15F.2040906@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DE8B403C@exchis.ccp.ad.local> Message-ID: <465B5300.8090901@v.loewis.de> >> It doesn't need to, and reluctance is not wrt. to the proposed new >> layout, but wrt. changing the current one. Tons of infrastructure >> depends on the files having exactly the names that they have now, >> and being located in exactly the locations where they are currently >> located. Any change to that, whatever minor, will cause problems >> to some people. > > Just to be absolutely clear: You are talking about the build environment, > right? Because I am not proposing to change any layout of the > installed Python (wherever that may be :) Correct. > I am baffled about why the build environment's layout matters, > but once an .msi install can place the binaries in any > old place it wants. The build structure doesn't have to > reflect the final installed structure at all. No. But still, people like to be able to "run" Python out of a source check-out. This has been supported for a long time, and more and more stuff was added to support it. For examples within Python itself, see the support in distutils, getpathp.c, PCbuild/rt.bat, and Tools/buildbot/*.bat. Reportedly (by Mark), building Mozilla (the web browser) also "knows" about PCbuild. 
Regards, Martin From martin at v.loewis.de Tue May 29 00:13:03 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 00:13:03 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> Message-ID: <465B53EF.6070108@v.loewis.de> > Development tools used on windows already have to cope with this. > Spaces are not going away, so why not bite the bullet and deal > with them? Moving forward sometimes means crossing rivers. But in a safe path, step by step. People continue to report problems with spaces in file names, even though many of them have been solved also. > As for Vista issues, I'll gather more data before making any more > claims, but I think that it is important that we play by the rules > here. I completely disagree. It is important that it "works", not that we play by the rules. > Just imagine the a school teacher who in good faith wants to introduce > his pupils to the wonderful programming language of Python, but > when he installs it, all kinds of scary looking warnings drive him off. > Vista is, like it or not, going to be very prevalent. If we want python > to be easily accessible to the masses, we mustn't take an elitist > attitude or else risk scaring people off. I'm completely in favor of fixing actual bugs. However, I'm not aware of any (related to Vista). That it is not logo compliant is *not* a bug. Python hasn't been logo compliant for more than a decade now (the "install to Program Files" is not a new requirement, but existed since Win95). Regards, Martin From martin at v.loewis.de Tue May 29 00:16:59 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 00:16:59 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B405A@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4654C15F.2040906@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DE8B405A@exchis.ccp.ad.local> Message-ID: <465B54DB.3040809@v.loewis.de> > Supporting both kinds (country and western) on the same machine might be helpful > to people for this very reason. A lot of legacy modules are only avaible > in 32 bit mode. But people may want to do contemporary development using the > new 64 bit mode. Of course, people who really want that can install both versions simultaneously today. Regards, Martin From martin at v.loewis.de Tue May 29 00:40:33 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 00:40:33 +0200 Subject: [Python-Dev] Unable to commit In-Reply-To: References: Message-ID: <465B5A61.3070409@v.loewis.de> > Odd... the call worked here (SVN 1.4.3). 
Which version is the server using? 1.1. Subversion did a grand renaming at some point. I fixed most of the functions when deploying the script, but apparently missed some. Regards, Martin From mhammond at skippinet.com.au Tue May 29 05:11:33 2007 From: mhammond at skippinet.com.au (Mark Hammond) Date: Tue, 29 May 2007 13:11:33 +1000 Subject: [Python-Dev] [Distutils] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: Message-ID: <03d801c7a19f$1033f120$1f0a0a0a@enfoldsystems.local> > From: Jamie Kirkpatrick [mailto:jkp at kirkconsulting.co.uk] > Sent: Wednesday, 23 May 2007 5:16 AM > I have a set of extensions that use SWIG to wrap my own C++ library. This library, on a > day-to-day basis is built against VS8 since the rest of our product suite is. Right now > I have no way to work with this code using VS8 since the standard distribution is built > against VS7 which uses a different CRT. This is an absolute nightmare in practice since > I currently have to maintain VS7 projects in parallel with the standard VS8 ones simply > so that I can run and test my python code. If you are brave and willing to ensure your module doesn't voilate certain constraints (such as never passing a CRT 'concept' - such as a file handle or memory block to be free'd) across mismatched CRT boundaries, you may find that you can happily load your VS8 built pronect with VS7 - but yes, your general point is valid but beyond the scope of this discussion. A separate discussion on making Python "crt agnostic" is almost certainly worthwhile though, but not directly related to this current discussion. > I've downloaded the Python source and had a look at building up my own distributions for each case > (ideally there would be an easy way to separate out Release / Debug products as well as the > VS8 / VS7 variants, and I guess potentially for those cross-compiling we'd need to go a step > further and do this per arch as well. Anyway, this isn't how it works at the moment, but I'm > still searching for a way to be able to work on the python code in VS8. Building using the > current projects I seem to get everything in the PCBuild8 / PCBuild dirs. How can I work with > what is build? This is *exactly* the point of this thread, and what we are trying to resolve. In the short term, we have agreed a change to PCBuild8\build.bat that copies the build files into PCBuild is a solution that should "work", where "work" is defined as "allow a source tree built with VS8 to operate in the same way, from the POV of building extensions, as one built with VS8." My primary issue with this is solved by the change to the .bat file, but we welcome all feedback from people who believe this is not ideal. I've agreement from Kristjan on the specific change to that .bat file, I'm just yet to check it in (but it literally just copies everything from the PVBuild8 target dir into the PCBuild dir after checking the expected dirs do indeed exist) > Is there a shell script to build a final distribution tree? If not, is there a simple > way to build an MSI similar to the one found on the Python.org site for the official > releases but using the PCBuild8 stuff? I believe not. In most cases, people who build from source on Windows will run directly from that source tree, rather than attempting the intermediate step of creating a .msi and installing it. > If not how do you recommend getting myself to a state where I have at least a feature complete > distribution build against VS8? 
I'm happy with a one time build that I can just install into > my source tree and upload to the SCM. I'd suggest that once I check the .bat change in, you build the PCBuild8 directory via that .bat file, then continue to use the 'PCBuild' directory as it it were a VC7 build. Cheers, Mark From martin at v.loewis.de Tue May 29 06:16:35 2007 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 29 May 2007 06:16:35 +0200 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: Message-ID: <465BA923.4020303@v.loewis.de> Brett Cannon schrieb: > Unfortunately the pre-commit hook > does not specify what line a change was made on so I have no clue where > it is failing (maybe this should be added?). It creates a reindent.Reindenter on the new contents, then invokes .run() on it, and complains if that returns true. If you can come up with a patch to reindent that makes it report more detailed errors, please post it, and I'll try to merge it into the hook script. Regards, Martin From martin at v.loewis.de Tue May 29 06:27:17 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 06:27:17 +0200 Subject: [Python-Dev] [Distutils] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <03d801c7a19f$1033f120$1f0a0a0a@enfoldsystems.local> References: <03d801c7a19f$1033f120$1f0a0a0a@enfoldsystems.local> Message-ID: <465BABA5.5030301@v.loewis.de> >> Is there a shell script to build a final distribution tree? If >> not, is there a simple way to build an MSI similar to the one found >> on the Python.org site for the official releases but using the >> PCBuild8 stuff? > > I believe not. It's actually not that difficult. You just have to run Tools/msi/msi.py, with a python interpreter that support PythonCOM (i.e. has PythonWin installed). Replacing all occurrences of PC[Bb]uild with PCbuild8 should do the trick. This will give you a "snapshot" MSI, i.e. one that you can install along with a regularly-release Python interpreter. If you put a config.py with testpackage=True into the msi directory, it will register the extensions .px, .pxw, .pxo, .pxc (rather than .py*), and overwrite the registry at Software\xPython (rather than Software\Python), so that the package shouldn't interfere at all with a regular installation. Regards, Martin From martin at v.loewis.de Tue May 29 06:34:05 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 06:34:05 +0200 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: References: Message-ID: <465BAD3D.8060008@v.loewis.de> Darrin Thompson schrieb: > First of all 1000 apologies if this is the wrong list. Please redirect > me if necessary. The list is right, but the question is slightly wrong: > Can someone advise as to the correct configure arguments for sqlite or > something else I might be missing? The question for python-dev is "how can I debug that further, and where should I submit a patch" :-) > (gdb) info threads > * 1 process 18968 local thread 0x1003 0x900e41d1 in strtol_l () > (gdb) bt > #0 0x900e41d1 in strtol_l () > #1 0x900160a5 in atoi () > #2 0x9406fd80 in sqlite3InitCallback () Can you figure out what parameter is being passed to atoi() here? Go up (u) a few stack frames, list (l) the source, and print (p) the variables being passed to atoi(). I'm puzzled that it doesn't display source code information - so one possible cause is that you pick up the wrong sqlite3InitCallback (i.e. 
the one that came with OSX, instead of the one you built yourself). Regards, Martin From martin at v.loewis.de Tue May 29 06:36:20 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 06:36:20 +0200 Subject: [Python-Dev] Unable to commit In-Reply-To: References: Message-ID: <465BADC4.2010701@v.loewis.de> > I don't know. After some Googling, I found that the call might need to be: > > if fs.node_prop(txn_root, path, SVN_PROP_MIME_TYPE, > 'svn:special') == '*': continue No. Instead, the missing argument was the apr_pool_t parameter, which was mandatory earlier, and is optional in current releases. The requirement to pass an apr_pool_t is answered by putting the entire logic into a single callback function that expects a pool, and then passing the pool to all API functions as the last parameter. The callback itself is passed to run_app. I have fixed the hook, and tested that it still checks regular .py files, but skips them if they are svn:special. Regards, Martin From nnorwitz at gmail.com Tue May 29 07:30:33 2007 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 28 May 2007 22:30:33 -0700 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: <465BAD3D.8060008@v.loewis.de> References: <465BAD3D.8060008@v.loewis.de> Message-ID: One other thing to check is to ensure that sqlite was compiled with -fno-strict-aliasing. I know there was a strange problem on one of the buildbots due to this flag not being present. I have no idea if that could be your problem here though. n -- On 5/28/07, "Martin v. L?wis" wrote: > Darrin Thompson schrieb: > > First of all 1000 apologies if this is the wrong list. Please redirect > > me if necessary. > > The list is right, but the question is slightly wrong: > > > Can someone advise as to the correct configure arguments for sqlite or > > something else I might be missing? > > The question for python-dev is "how can I debug that further, and where > should I submit a patch" :-) > > > (gdb) info threads > > * 1 process 18968 local thread 0x1003 0x900e41d1 in strtol_l () > > (gdb) bt > > #0 0x900e41d1 in strtol_l () > > #1 0x900160a5 in atoi () > > #2 0x9406fd80 in sqlite3InitCallback () > > Can you figure out what parameter is being passed to atoi() here? > Go up (u) a few stack frames, list (l) the source, and print (p) > the variables being passed to atoi(). I'm puzzled that it doesn't > display source code information - so one possible cause is that > you pick up the wrong sqlite3InitCallback (i.e. the one that > came with OSX, instead of the one you built yourself). > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com > From g.brandl at gmx.net Tue May 29 08:26:08 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 29 May 2007 08:26:08 +0200 Subject: [Python-Dev] Unable to commit In-Reply-To: <465B5A61.3070409@v.loewis.de> References: <465B5A61.3070409@v.loewis.de> Message-ID: Martin v. L?wis schrieb: >> Odd... the call worked here (SVN 1.4.3). Which version is the server using? > > 1.1. Subversion did a grand renaming at some point. I fixed most of the > functions when deploying the script, but apparently missed some. Okay. Is an upgrade planned? I've heard that several actions were sped up significantly with later releases. 
Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From martin at v.loewis.de Tue May 29 09:07:53 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 09:07:53 +0200 Subject: [Python-Dev] Unable to commit In-Reply-To: References: <465B5A61.3070409@v.loewis.de> Message-ID: <465BD149.2000208@v.loewis.de> Georg Brandl schrieb: > Martin v. L?wis schrieb: >>> Odd... the call worked here (SVN 1.4.3). Which version is the server using? >> 1.1. Subversion did a grand renaming at some point. I fixed most of the >> functions when deploying the script, but apparently missed some. > > Okay. Is an upgrade planned? I've heard that several actions were sped up > significantly with later releases. I have plans for that. However, these plans may take many months to execute, as one needs to set aside a lot of time to work on the update of www.python.org continuously, to deal with the aftermath of the Debian upgrade, and you also need to have desaster plans, which are difficult with no physical access to the machine (so I need to learn how to get virtual physical access first). Regards, Martin From scott+python-dev at scottdial.com Tue May 29 09:26:56 2007 From: scott+python-dev at scottdial.com (Scott Dial) Date: Tue, 29 May 2007 03:26:56 -0400 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: <465BA923.4020303@v.loewis.de> References: <465BA923.4020303@v.loewis.de> Message-ID: <465BD5C0.7090406@scottdial.com> Martin v. L?wis wrote: > Brett Cannon schrieb: >> Unfortunately the pre-commit hook >> does not specify what line a change was made on so I have no clue where >> it is failing (maybe this should be added?). > > It creates a reindent.Reindenter on the new contents, then invokes > .run() on it, and complains if that returns true. If you can come > up with a patch to reindent that makes it report more detailed > errors, please post it, and I'll try to merge it into the hook > script. > How about you do: if reindenter.run(): print >>sys.stderr, "file %s is not whitespace-normalized" %path print >>sys.stderr, difflib.unified_diff(reindenter.raw, reindenter.after) bad += 1 Which would provide you a unified diff to give you a clue. -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From g.brandl at gmx.net Tue May 29 09:33:05 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 29 May 2007 09:33:05 +0200 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: <465BD5C0.7090406@scottdial.com> References: <465BA923.4020303@v.loewis.de> <465BD5C0.7090406@scottdial.com> Message-ID: Scott Dial schrieb: > Martin v. L?wis wrote: >> Brett Cannon schrieb: >>> Unfortunately the pre-commit hook >>> does not specify what line a change was made on so I have no clue where >>> it is failing (maybe this should be added?). >> >> It creates a reindent.Reindenter on the new contents, then invokes >> .run() on it, and complains if that returns true. If you can come >> up with a patch to reindent that makes it report more detailed >> errors, please post it, and I'll try to merge it into the hook >> script. 
>> > > How about you do: > > if reindenter.run(): > print >>sys.stderr, "file %s is not whitespace-normalized" %path > print >>sys.stderr, difflib.unified_diff(reindenter.raw, > reindenter.after) > bad += 1 > > Which would provide you a unified diff to give you a clue. As I said before, you don't really need that when you can (and should!) just run reindent.py over the source file yourself, not care about any diffs and just resubmit. In this particular case, the diff wouldn't have helped too, since there was no way to "fix" the SVN-generated file content... Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From martin at v.loewis.de Tue May 29 09:44:24 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 09:44:24 +0200 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: References: <465BA923.4020303@v.loewis.de> <465BD5C0.7090406@scottdial.com> Message-ID: <465BD9D8.2040505@v.loewis.de> > As I said before, you don't really need that when you can (and should!) just run > reindent.py over the source file yourself, not care about any diffs and just > resubmit. Right. So I withdraw my offer to do anything about the hook. Regards, Martin From g.brandl at gmx.net Tue May 29 09:47:02 2007 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 29 May 2007 09:47:02 +0200 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: <465BD9D8.2040505@v.loewis.de> References: <465BA923.4020303@v.loewis.de> <465BD5C0.7090406@scottdial.com> <465BD9D8.2040505@v.loewis.de> Message-ID: Martin v. L?wis schrieb: >> As I said before, you don't really need that when you can (and should!) just run >> reindent.py over the source file yourself, not care about any diffs and just >> resubmit. > > Right. So I withdraw my offer to do anything about the hook. I think printing something like "Please run Tools/scripts/reindent.py with the rejected files" if bad files were found would be a good idea. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From jkp at kirkconsulting.co.uk Tue May 22 21:16:19 2007 From: jkp at kirkconsulting.co.uk (Jamie Kirkpatrick) Date: Tue, 22 May 2007 20:16:19 +0100 Subject: [Python-Dev] [Distutils] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46520F4F.5010502@v.loewis.de> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> Message-ID: > > I recommend that those people install the official binaries. Why do > you > need to build the binaries from source, if all you want is to build > extensions? I've been following this discussion and it seems like an appropriate place to mention such a scenario which I have encountered myself and am still trying to work around: I have a set of extensions that use SWIG to wrap my own C++ library. This library, on a day-to-day basis is built against VS8 since the rest of our product suite is. 
Right now I have no way to work with this code using VS8 since the standard distribution is built against VS7 which uses a different CRT. This is an absolute nightmare in practice since I currently have to maintain VS7 projects in parallel with the standard VS8 ones simply so that I can run and test my python code. I've downloaded the Python source and had a look at building up my own distributions for each case (ideally there would be an easy way to separate out Release / Debug products as well as the VS8 / VS7 variants, and I guess potentially for those cross-compiling we'd need to go a step further and do this per arch as well. Anyway, this isn't how it works at the moment, but I'm still searching for a way to be able to work on the python code in VS8. Building using the current projects I seem to get everything in the PCBuild8 / PCBuild dirs. How can I work with what is build? Is there a shell script to build a final distribution tree? If not, is there a simple way to build an MSI similar to the one found on the Python.org site for the official releases but using the PCBuild8 stuff? If not how do you recommend getting myself to a state where I have at least a feature complete distribution build against VS8? I'm happy with a one time build that I can just install into my source tree and upload to the SCM. Thanks in advance, and I hope that my thoughts proved useful in some way. Jamie -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070522/d5f8d368/attachment.html From nevillegrech at gmail.com Fri May 25 11:25:17 2007 From: nevillegrech at gmail.com (Neville Grech Neville Grech) Date: Fri, 25 May 2007 11:25:17 +0200 Subject: [Python-Dev] [Python-3000] Wither PEP 335 (Overloadable Boolean Operators)? In-Reply-To: References: <4656446F.8030802@canterbury.ac.nz> Message-ID: >From a user's POV, I'm +1 on having overloadable boolean functions. In many cases I had to resort to overload add or neg instead of and & not, I foresee a lot of cases where the and overload could be used to join objects which represent constraints. Overloadable boolean operators could also be used to implement other types of logic (eg: fuzzy logic). Constraining them to just primitive binary operations in my view will be delimiting for a myriad of use cases. Sure, in some cases, one could overload the neg operator instead of the not but semantically they have different meanings. On 5/25/07, Guido van Rossum wrote: > > On 5/24/07, Greg Ewing wrote: > > Guido van Rossum wrote: > > > > > Last call for discussion! I'm tempted to reject this -- the ability to > > > generate optimized code based on the shortcut semantics of and/or is > > > pretty important to me. > > > > Please don't be hasty. I've had to think about this issue > > a bit. > > > > The conclusion I've come to is that there may be a small loss > > in the theoretical amount of optimization opportunity available, > > but not much. Furthermore, if you take into account some other > > improvements that can be made (which I'll explain below) the > > result is actually *better* than what 2.5 currently generates. 
> > > > For example, Python 2.5 currently compiles > > > > if a and b: > > > > > > into > > > > > > JUMP_IF_FALSE L1 > > POP_TOP > > > > JUMP_IF_FALSE L1 > > POP_TOP > > > > JUMP_FORWARD L2 > > L1: > > 15 POP_TOP > > L2: > > > > Under my PEP, without any other changes, this would become > > > > > > LOGICAL_AND_1 L1 > > > > LOGICAL_AND_2 > > L1: > > JUMP_IF_FALSE L2 > > POP_TOP > > > > JUMP_FORWARD L3 > > L2: > > 15 POP_TOP > > L3: > > > > The fastest path through this involves executing one extra > > bytecode. However, since we're not using JUMP_IF_FALSE to > > do the short-circuiting any more, there's no need for it > > to leave its operand on the stack. So let's redefine it and > > change its name to POP_JUMP_IF_FALSE. This allows us to > > get rid of all the POP_TOPs, plus the jump at the end of > > the statement body. Now we have > > > > > > LOGICAL_AND_1 L1 > > > > LOGICAL_AND_2 > > L1: > > POP_JUMP_IF_FALSE L2 > > > > L2: > > > > The fastest path through this executes one *less* bytecode > > than in the current 2.5-generated code. Also, any path that > > ends up executing the body benefits from the lack of a > > jump at the end. > > > > The same benefits also result when the boolean expression is > > more complex, e.g. > > > > if a or b and c: > > > > > > becomes > > > > > > LOGICAL_OR_1 L1 > > > > LOGICAL_AND_1 L2 > > > > LOGICAL_AND_2 > > L2: > > LOGICAL_OR_2 > > L1: > > POP_JUMP_IF_FALSE L3 > > > > L3: > > > > which contains 3 fewer instructions overall than the > > corresponding 2.5-generated code. > > > > So I contend that optimization is not an argument for > > rejecting this PEP, and may even be one for accepting > > it. > > Do you have an implementation available to measure this? In most cases > the cost is not in the number of bytecode instructions executed but in > the total amount of work. Two cheap bytecodes might well be cheaper > than one expensive one. > > However, I'm happy to keep your PEP open until you have code that we > can measure. (However, adding additional optimizations elsewhere to > make up for the loss wouldn't be fair -- we would have to compare with > a 2.5 or trunk (2.6) interpreter with the same additional > optimizations added.) > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-3000 mailing list > Python-3000 at python.org > http://mail.python.org/mailman/listinfo/python-3000 > Unsubscribe: > http://mail.python.org/mailman/options/python-3000/nevillegrech%40gmail.com > -- Regards, Neville Grech -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20070525/8c88dbf1/attachment.htm From kristjan at ccpgames.com Tue May 29 15:14:04 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 29 May 2007 13:14:04 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B4350@exchis.ccp.ad.local> > -----Original Message----- > > Microsoft's command line cannot cope with two pathnames that must be > quoted, so if the command path itself must be quoted, then no argument > to > the command can be quoted. There are tricky hacks that can work around > this mind-boggling stupidity, but life is simpler if Python itself > doesn't > use up the one quoted pathname. I don't know if Microsoft has had the > good > sense to fix this in Vista (which I probably will never use, since an > alternative exists), but they didn't in XP. Do you have any references for this claim? In my command line on XP sp2, this works just fine: C:\Program Files\Microsoft Visual Studio 8\VC>"c:\Program Files\TextPad 4\TextPad.exe" "c:\tmp\f a.txt" "c:\tmp\f b.txt" Both the program, and the two file names are quoted and textpad.exe opens them both. Cheers, Kristj?n From ncoghlan at gmail.com Tue May 29 15:27:36 2007 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 May 2007 23:27:36 +1000 Subject: [Python-Dev] [Distutils] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <46536787.7040500@v.loewis.de> References: <00af01c79b6f$2f863c30$1f0a0a0a@enfoldsystems.local> <46520F4F.5010502@v.loewis.de> <46536787.7040500@v.loewis.de> Message-ID: <465C2A48.3030205@gmail.com> Martin v. L?wis wrote: >> I have a set of extensions that use SWIG to wrap my own C++ library. >> This library, on a day-to-day basis is built against VS8 since the rest >> of our product suite is. Right now I have no way to work with this code >> using VS8 since the standard distribution is built against VS7 which >> uses a different CRT. This is an absolute nightmare in practice since >> I currently have to maintain VS7 projects in parallel with the standard >> VS8 ones simply so that I can run and test my python code. > > If you know well enough what you are doing, and avoid using unsupported > cases, you *can* mix different CRTs. I can attest to this - I have SWIG-wrapped extensions built with VC6 running quite happily against the official VS7 binaries for Python 2.4. Moving from Python 2.2 to Python 2.4 was a simple matter of recompiling and relinking the modules. The important thing was to make sure to never pass memory ownership or standard lib data structures across the boundary. I haven't actually found this to be all that difficult in practice, as I am typically either copying data from standard library data structures into native Python data structures (e.g. between std::string and PyString) or else merely accessing the Python wrappers around my own C++ classes. 
In both cases memory ownership remains entirely within the extension module, and all interaction occurs through the Python C API, and never indirectly through the CRT. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From kristjan at ccpgames.com Tue May 29 15:29:58 2007 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 29 May 2007 13:29:58 +0000 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <465B53EF.6070108@v.loewis.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> <465B53EF.6070108@v.loewis.de> Message-ID: <4E9372E6B2234D4F859320D896059A9508DE8B435C@exchis.ccp.ad.local> > -----Original Message----- > From: "Martin v. L?wis" [mailto:martin at v.loewis.de] > > Just imagine the a school teacher who in good faith wants to > introduce > > his pupils to the wonderful programming language of Python, but > > when he installs it, all kinds of scary looking warnings drive him > off. > > Vista is, like it or not, going to be very prevalent. If we want > python > > to be easily accessible to the masses, we mustn't take an elitist > > attitude or else risk scaring people off. > > I'm completely in favor of fixing actual bugs. However, I'm not aware > of any (related to Vista). That it is not logo compliant is *not* > a bug. Python hasn't been logo compliant for more than a decade > now (the "install to Program Files" is not a new requirement, but > existed since Win95). > I'm not saying that it is a bug, but since this is python-dev, I am discussing it as a desirable "feature". One feature that is easily addable and will certainly make installing python on vista nicer, is to add authenticode signing to the install. Currently the user is faced with a very nasty and off-putting message about an unidentified program requesting access to his computer. See http://msdn2.microsoft.com/en-us/library/bb172338.aspx . I think the PSF should be able to obtain a certificate from MS. cheers, Kristj?n From ronaldoussoren at mac.com Tue May 29 11:05:36 2007 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 29 May 2007 11:05:36 +0200 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: References: Message-ID: On 26 May, 2007, at 6:45, Darrin Thompson wrote: > First of all 1000 apologies if this is the wrong list. Please redirect > me if necessary. > > I'm attempting to build python 2.5.1 fat binaries on OSX and > statically link to a newer sqlite than what ships with OSX. (3.3.17). > > I'm getting "Bus Error" early when I run my app. If I turn on a lot of > malloc debugging options and run under gdb I get this trace: What happens when you use the binary installer at python.org? This is build with a newer version of sqlite as well (because the installer supports OSX 10.3). The script that builds the binary installer is in Mac/BuildScript. Ronald -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3562 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20070529/02f8e52b/attachment.bin From tonynelson at georgeanelson.com Tue May 29 18:36:45 2007 From: tonynelson at georgeanelson.com (Tony Nelson) Date: Tue, 29 May 2007 12:36:45 -0400 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B4350@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4350@exchis.ccp.ad.local> Message-ID: At 1:14 PM +0000 5/29/07, Kristj?n Valur J?nsson wrote: >> -----Original Message----- >> >> Microsoft's command line cannot cope with two pathnames that must be >> quoted, so if the command path itself must be quoted, then no argument >> to >> the command can be quoted. There are tricky hacks that can work around >> this mind-boggling stupidity, but life is simpler if Python itself >> doesn't >> use up the one quoted pathname. I don't know if Microsoft has had the >> good >> sense to fix this in Vista (which I probably will never use, since an >> alternative exists), but they didn't in XP. > >Do you have any references for this claim? >In my command line on XP sp2, this works just fine: > >C:\Program Files\Microsoft Visual Studio 8\VC>"c:\Program Files\TextPad 4\TextPad.exe" "c:\tmp\f a.txt" "c:\tmp\f b.txt" > >Both the program, and the two file names are quoted and textpad.exe opens >them both. I pounded my head against this issue when working on a .bat file a few years back, until I read the help for cmd and saw the quote logic (and switched to VBScript). It's still there, in "help cmd". I had once found references to the same issue for the run command in Microsoft's online help. Perhaps it is fixed in SP2. If so, just change it and don't worry about users with earlier versions of Windows. -- ____________________________________________________________________ TonyN.:' ' From darrinth at gmail.com Tue May 29 18:41:43 2007 From: darrinth at gmail.com (Darrin Thompson) Date: Tue, 29 May 2007 12:41:43 -0400 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: References: Message-ID: On 5/29/07, Ronald Oussoren wrote: > What happens when you use the binary installer at python.org? This is > build with a newer version of sqlite as well (because the installer > supports OSX 10.3). > Hmmm. I hadn't thought of checking the sqlite version in there. I did use the binary installer once but had "other problems" which I'm now suspecting were my misuse of Qt's pyrcc. > The script that builds the binary installer is in Mac/BuildScript. > Sweet! I'll look into this. -- Darrin From brett at python.org Tue May 29 19:38:31 2007 From: brett at python.org (Brett Cannon) Date: Tue, 29 May 2007 10:38:31 -0700 Subject: [Python-Dev] whitespace normalization pre-commit hook is giving me grief In-Reply-To: <465BA923.4020303@v.loewis.de> References: <465BA923.4020303@v.loewis.de> Message-ID: On 5/28/07, "Martin v. 
L?wis" wrote: > > Brett Cannon schrieb: > > Unfortunately the pre-commit hook > > does not specify what line a change was made on so I have no clue where > > it is failing (maybe this should be added?). > > It creates a reindent.Reindenter on the new contents, then invokes > .run() on it, and complains if that returns true. If you can come > up with a patch to reindent that makes it report more detailed > errors, please post it, and I'll try to merge it into the hook > script. The commit worked. Thanks for fixing this, Martin! And thanks to Georg for finding the initial solution. And thanks to Neal for trying Georg's initial solution. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070529/586b3c17/attachment.html From darrinth at gmail.com Tue May 29 19:26:36 2007 From: darrinth at gmail.com (Darrin Thompson) Date: Tue, 29 May 2007 13:26:36 -0400 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: <465BAD3D.8060008@v.loewis.de> References: <465BAD3D.8060008@v.loewis.de> Message-ID: On 5/29/07, "Martin v. L?wis" wrote: > The question for python-dev is "how can I debug that further, and where > should I submit a patch" :-) > I have no problem with that. :-) > > (gdb) info threads > > * 1 process 18968 local thread 0x1003 0x900e41d1 in strtol_l () > > (gdb) bt > > #0 0x900e41d1 in strtol_l () > > #1 0x900160a5 in atoi () > > #2 0x9406fd80 in sqlite3InitCallback () > > Can you figure out what parameter is being passed to atoi() here? > Go up (u) a few stack frames, list (l) the source, and print (p) > the variables being passed to atoi(). Well, duh! #3 0x0395faca in sqlite3_exec (db=0x338d160, zSql=0x338faf0 "SELECT name, rootpage, sql FROM 'main'.sqlite_master WHERE tbl_name='sqlite_sequence'", xCallback=0x9406fd00 , pArg=0xbfffde14, pzErrMsg=0x0) at ./src/legacy.c:93 #4 0x0398c741 in sqlite3VdbeExec (p=0x1943e00) at ./src/vdbe.c:4090 #5 0x0395665e in sqlite3Step (p=0x1943e00) at ./src/vdbeapi.c:236 (gdb) l 88 azVals = &azCols[nCol]; 89 for(i=0; iFrom looking at the source code I know that what is being passed to atoi is supposed to be a root page number. int sqlite3InitCallback(void *pInit, int argc, char **argv, char **azColName){ Specifically, argv[1] is what goes to atoi, and is documented to be a root page number. All kinds of possibilities suggest themselves. > I'm puzzled that it doesn't > display source code information - so one possible cause is that > you pick up the wrong sqlite3InitCallback (i.e. the one that > came with OSX, instead of the one you built yourself). I'm confident it isn't picking up the wrong lib, based on otool -L: $ otool -L /opt/so/Library/Frameworks/Python.framework/Versions/Current/lib/python2.5/lib-dynload/_sqlite3.so /opt/so/Library/Frameworks/Python.framework/Versions/Current/lib/python2.5/lib-dynload/_sqlite3.so: /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 88.3.3) On Linux I might also poke around in /proc to see what files were mapped into memory, but I'm not sure how to do that on OSX yet. -- Darrin From draghuram at gmail.com Tue May 29 21:04:41 2007 From: draghuram at gmail.com (Raghuram Devarakonda) Date: Tue, 29 May 2007 15:04:41 -0400 Subject: [Python-Dev] couple of bug fixes Message-ID: <2c51ecee0705291204r2bc2d38chf98d6b2472a28231@mail.gmail.com> Hi, I uploaded couple of patches to fix bugs. 1) http://www.python.org/sf/1720897 to fix the bug 668596 (distutils chops the first character of filenames). 
2) http://www.python.org/sf/1713041 to fix the bug 1712742 (pprint handles depth argument incorrectly). Both the patches are extremely small (2, 3 lines change) and include test cases. Can I interest some one to review them? Thanks, Raghu. From dinov at exchange.microsoft.com Tue May 29 23:15:07 2007 From: dinov at exchange.microsoft.com (Dino Viehland) Date: Tue, 29 May 2007 14:15:07 -0700 Subject: [Python-Dev] New Super PEP In-Reply-To: <76fd5acf0704291231u79bb85ffrff20e517db517fdd@mail.gmail.com> References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> <43aa6ff70704290906p43b59ccdkaa2292ae615174bd@mail.gmail.com> <76fd5acf0704290947o79cb9722k66c8ba37fa0b6826@mail.gmail.com> <43aa6ff70704291215g1e6be3ffq60a3c3954d88fb19@mail.gmail.com> <76fd5acf0704291231u79bb85ffrff20e517db517fdd@mail.gmail.com> Message-ID: <7AD436E4270DD54A94238001769C222791BCBF64E4@DF-GRTDANE-MSG.exchange.corp.microsoft.com> Just to chime in from the IronPython side (better late than never I suppose): If we need to get access to the frame which is calling super then we can make that happen in IronPython. It just means that super gets treated like we treat eval today and won't work if it's been aliased. -----Original Message----- From: python-dev-bounces+dinov=microsoft.com at python.org [mailto:python-dev-bounces+dinov=microsoft.com at python.org] On Behalf Of Calvin Spealman Sent: Sunday, April 29, 2007 12:31 PM To: Collin Winter Cc: Python Mailing List Subject: Re: [Python-Dev] New Super PEP On 4/29/07, Collin Winter wrote: > On 4/29/07, Calvin Spealman wrote: > > On 4/29/07, Collin Winter wrote: > > > What if the instance isn't called "self"? PEP 3099 states that "self > > > will not become implicit"; it's talking about method signatures, but I > > > think that dictum applies equally well in this case. > > > > I don't use the name self. I use whatever the first argument name is, > > found by this line of python code: > > > > instance_name = calling_frame.f_code.co_varnames[0] > > So I can't use super with anything but the method's invocant? That > seems arbitrary. This will be added to the open issues, but it comes back to the problem with allow the exact same super implementation both operate in the super(Class, Object).foo() form and also the super.__call__() form in the new version. Any suggestions are welcome for how to solve this. > > > Also, it's my understanding that not all Python implementations have > > > an easy analogue to CPython's frames; have you given any thought to > > > whether and how PyPy, IronPython, Jython, etc, will implement this? > > > > I'll bring this up for input from PyPy and IronPython people, but I > > don't know any Jython people. Are we yet letting the alternative > > implementations influence so strongly what we do in CPython? I'm not > > saying "screw them", just pointing out that there is always a way to > > implement anything, and if its some trouble for them, well, 2.6 or 3.0 > > targetting is far down the road for any of them yet. > > It's a smell test: if a given proposal is unduly difficult for > anything but CPython to implement, it's probably a bad idea. The > language shouldn't go down the Perl 5 road, where python (the C > interpreter) becomes the only thing that can implement Python (the > language). Understandable. I still haven't contacted anyone about it on in the PyPy or IronPython worlds, and anyone familiar with Jython who can comment would be appreciated. 
Ditto for PyPy and IronPython, even though I should be able to find some information there myself. -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://ironfroggy-code.blogspot.com/ _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/dinov%40microsoft.com From martin at v.loewis.de Tue May 29 23:37:13 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 29 May 2007 23:37:13 +0200 Subject: [Python-Dev] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <4E9372E6B2234D4F859320D896059A9508DE8B435C@exchis.ccp.ad.local> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> <465B53EF.6070108@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DE8B435C@exchis.ccp.ad.local> Message-ID: <465C9D09.9040401@v.loewis.de> > One feature that is easily addable and will certainly make installing > python on vista nicer, is to add authenticode signing to the install. This I question very much. I experimented with authenticode before 2.4, and found it an unacceptable experience. When the MSI file starts running, installer needs to verify the signature, for which it needs to compute a hash of the entire file. For the Python MSI, this takes many seconds on a slower Pentium 4 machine. During that time, there is no visual feedback, so users are uncertain whether they have actually invoked the MSI file at all. > Currently the user is faced with a very nasty and off-putting message > about an unidentified program requesting access to his computer. Certainly. However, telling them that they have to wait just so that Windows finds out what they know already (that this is the MSI file from the Python Software Foundation, or from Martin v. L?wis) is even more nasty. Regards, Martin From darrinth at gmail.com Wed May 30 01:32:25 2007 From: darrinth at gmail.com (Darrin Thompson) Date: Tue, 29 May 2007 19:32:25 -0400 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: <465C9FE5.1040201@v.loewis.de> References: <465BAD3D.8060008@v.loewis.de> <465C9FE5.1040201@v.loewis.de> Message-ID: On 5/29/07, "Martin v. L?wis" wrote: > p &sqlite3InitCallback > (gdb) p $sqlite3InitCallback $1 = void grrrr. > Try "info shared" in gdb. Not sure whether that works on OSX, > though. > Worked beautifully! The smoking gun: something is hauling in the system provided sqlite3 in addition to my static one. (Weren't you just saying that?) I'm going to investigate that further unless you tell me I'm an idiot. I suspect it's the official Qt binaries from Trolltech doing it indirectly. Plus that's the only way I can rationalize some sqlite functions having source available and some acting like they've been stripped. 
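(A cheap runtime cross-check, assuming the module still imports before the crash: ask the linked library for its version string and compare it with the 3.3.17 that was supposed to be linked statically. A minimal sketch:)

    import sqlite3

    # sqlite_version is taken from the libsqlite3 resolved at load time;
    # if it prints something other than 3.3.17 here, the extension picked
    # up the OS-bundled copy rather than the static one.
    print sqlite3.sqlite_version
    print sqlite3.version        # pysqlite wrapper version, for completeness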
-- Darrin From martin at v.loewis.de Wed May 30 07:14:15 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 30 May 2007 07:14:15 +0200 Subject: [Python-Dev] Build problems with sqlite on OSX In-Reply-To: References: <465BAD3D.8060008@v.loewis.de> <465C9FE5.1040201@v.loewis.de> Message-ID: <465D0827.3020408@v.loewis.de> Darrin Thompson schrieb: > On 5/29/07, "Martin v. L?wis" wrote: >> p &sqlite3InitCallback >> > > (gdb) p $sqlite3InitCallback > $1 = void Please try '&' instead of '$'. It's the address of that function I was after (to then find out whether it is in the address range of the extension module). > The smoking gun: something is hauling in the system provided sqlite3 > in addition to my static one. (Weren't you just saying that?) I'm > going to investigate that further unless you tell me I'm an idiot. I > suspect it's the official Qt binaries from Trolltech doing it > indirectly. > > Plus that's the only way I can rationalize some sqlite functions > having source available and some acting like they've been stripped. Very likely. I think you will have to read up on "two-level namespaces" and stuff like that to resolve this. Regards, Martin From theller at ctypes.org Wed May 30 21:50:42 2007 From: theller at ctypes.org (Thomas Heller) Date: Wed, 30 May 2007 21:50:42 +0200 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <4654C280.1080802@v.loewis.de> Message-ID: Thomas Heller schrieb: >>> Are there others that can provide a Windows buildbot? It would probably >>> be good to have two -- and a WinXP one would be good. >> >> It certainly would be good. Unfortunately, Windows users are not that >> much engaged in the open source culture, so few of them volunteer >> (plus it's more painful, with Windows not being a true multi-user >> system). > > I'll try to setup a buildbot under WinXP. The buildbot is now working. Thomas From martin at v.loewis.de Wed May 30 22:31:52 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 30 May 2007 22:31:52 +0200 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <4654C280.1080802@v.loewis.de> Message-ID: <465DDF38.1040901@v.loewis.de> > The buildbot is now working. Thanks for the effort. If any of the current operators of a Windows buildbot want to shut down theirs in return, please let me know. Regards, Martin From alan.mcintyre at gmail.com Wed May 30 23:03:08 2007 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Wed, 30 May 2007 17:03:08 -0400 Subject: [Python-Dev] Windows buildbot (Was: buildbot failure in x86 W2k trunk) In-Reply-To: <4654C280.1080802@v.loewis.de> References: <20070520071645.BA1C01E4004@bag.python.org> <464FFFDC.4020600@v.loewis.de> <46547C7F.7040908@activestate.com> <4654C280.1080802@v.loewis.de> Message-ID: <1d36917a0705301403o3af6a8a0x9935344b4b23f38b@mail.gmail.com> On 5/23/07, "Martin v. L?wis" wrote: > Tim Peter's machine comes and goes, depending on whether he starts > the buildbot. Alan McIntyre's machien should be mostly he reliable, > but nobody really notices if it goes away. FWIW, my current internet service is less than spectacular, and frequently vanishes for hours at a time. 
I will be moving it within the next 3 weeks to my new residence which--I hope--will have better service. So hopefully that will mean it becomes more reliable. ;-) Alan From fuzzyman at voidspace.org.uk Thu May 31 00:16:36 2007 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 30 May 2007 23:16:36 +0100 Subject: [Python-Dev] New Super PEP In-Reply-To: <7AD436E4270DD54A94238001769C222791BCBF64E4@DF-GRTDANE-MSG.exchange.corp.microsoft.com> References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> <43aa6ff70704290906p43b59ccdkaa2292ae615174bd@mail.gmail.com> <76fd5acf0704290947o79cb9722k66c8ba37fa0b6826@mail.gmail.com> <43aa6ff70704291215g1e6be3ffq60a3c3954d88fb19@mail.gmail.com> <76fd5acf0704291231u79bb85ffrff20e517db517fdd@mail.gmail.com> <7AD436E4270DD54A94238001769C222791BCBF64E4@DF-GRTDANE-MSG.exchange.corp.microsoft.com> Message-ID: <465DF7C4.7010305@voidspace.org.uk> Dino Viehland wrote: > Just to chime in from the IronPython side (better late than never I suppose): > > If we need to get access to the frame which is calling super then we can make that happen in IronPython. It just means that super gets treated like we treat eval today and won't work if it's been aliased. > Being able to access the calling frame from IronPython would be really useful... Michael Foord http://www.voidspace.org.uk/ironpython/index.shtml > -----Original Message----- > From: python-dev-bounces+dinov=microsoft.com at python.org [mailto:python-dev-bounces+dinov=microsoft.com at python.org] On Behalf Of Calvin Spealman > Sent: Sunday, April 29, 2007 12:31 PM > To: Collin Winter > Cc: Python Mailing List > Subject: Re: [Python-Dev] New Super PEP > > On 4/29/07, Collin Winter wrote: > >> On 4/29/07, Calvin Spealman wrote: >> >>> On 4/29/07, Collin Winter wrote: >>> >>>> What if the instance isn't called "self"? PEP 3099 states that "self >>>> will not become implicit"; it's talking about method signatures, but I >>>> think that dictum applies equally well in this case. >>>> >>> I don't use the name self. I use whatever the first argument name is, >>> found by this line of python code: >>> >>> instance_name = calling_frame.f_code.co_varnames[0] >>> >> So I can't use super with anything but the method's invocant? That >> seems arbitrary. >> > > This will be added to the open issues, but it comes back to the > problem with allow the exact same super implementation both operate in > the super(Class, Object).foo() form and also the super.__call__() form > in the new version. > > Any suggestions are welcome for how to solve this. > > >>>> Also, it's my understanding that not all Python implementations have >>>> an easy analogue to CPython's frames; have you given any thought to >>>> whether and how PyPy, IronPython, Jython, etc, will implement this? >>>> >>> I'll bring this up for input from PyPy and IronPython people, but I >>> don't know any Jython people. Are we yet letting the alternative >>> implementations influence so strongly what we do in CPython? I'm not >>> saying "screw them", just pointing out that there is always a way to >>> implement anything, and if its some trouble for them, well, 2.6 or 3.0 >>> targetting is far down the road for any of them yet. >>> >> It's a smell test: if a given proposal is unduly difficult for >> anything but CPython to implement, it's probably a bad idea. The >> language shouldn't go down the Perl 5 road, where python (the C >> interpreter) becomes the only thing that can implement Python (the >> language). >> > > Understandable. 
I still haven't contacted anyone about it on in the > PyPy or IronPython worlds, and anyone familiar with Jython who can > comment would be appreciated. Ditto for PyPy and IronPython, even > though I should be able to find some information there myself. > > -- > Read my blog! I depend on your acceptance of my opinion! I am interesting! > http://ironfroggy-code.blogspot.com/ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/dinov%40microsoft.com > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > > From dinov at exchange.microsoft.com Thu May 31 00:46:28 2007 From: dinov at exchange.microsoft.com (Dino Viehland) Date: Wed, 30 May 2007 15:46:28 -0700 Subject: [Python-Dev] New Super PEP In-Reply-To: <465DF7C4.7010305@voidspace.org.uk> References: <76fd5acf0704281943i6ea9162by4448cb7ce5b646bf@mail.gmail.com> <43aa6ff70704290906p43b59ccdkaa2292ae615174bd@mail.gmail.com> <76fd5acf0704290947o79cb9722k66c8ba37fa0b6826@mail.gmail.com> <43aa6ff70704291215g1e6be3ffq60a3c3954d88fb19@mail.gmail.com> <76fd5acf0704291231u79bb85ffrff20e517db517fdd@mail.gmail.com> <7AD436E4270DD54A94238001769C222791BCBF64E4@DF-GRTDANE-MSG.exchange.corp.microsoft.com> <465DF7C4.7010305@voidspace.org.uk> Message-ID: <7AD436E4270DD54A94238001769C222791BCBF678B@DF-GRTDANE-MSG.exchange.corp.microsoft.com> >>> Being able to access the calling frame from IronPython would be really >>> useful... We do have a -X:Frames option but it's going to hurt your performance, but for example: IronPython 1.0.60816 on .NET 2.0.50727.312 Copyright (c) Microsoft Corporation. All rights reserved. >>> def f(): ... x = locals ... print x() ... >>> f() {'__name__': '__main__', '__builtins__': , '__doc__': None, 'site': , ' f': } >>> ^Z C:\Product\Released\IronPython-1.0>.\ipy.exe -X:Frames IronPython 1.0.60816 on .NET 2.0.50727.312 Copyright (c) Microsoft Corporation. All rights reserved. >>> def f(): ... x = locals ... print x() ... >>> f() {'x': } >>> ^Z But then we'll NEVER use the CLR stack for storing locals, but we can also always get the calling frames. From fdrake at acm.org Thu May 31 06:45:58 2007 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 31 May 2007 00:45:58 -0400 Subject: [Python-Dev] Minor ConfigParser Change In-Reply-To: <46588B22.3090808@gmail.com> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> <46588B22.3090808@gmail.com> Message-ID: <200705310045.58802.fdrake@acm.org> On Saturday 26 May 2007, Joseph Armbruster wrote: > I noticed that one of the parts of ConfigParser was not using "for line > in fp" style of readline-ing :-) So, this will reduce the SLOC by 3 > lines and improve readability. However, I did a quick grep and this > type of practice appears in several other places. Before the current iteration support was part of Python, there was no way to iterate over a the way there is now; the code you've dug up is simply from before the current iteration support. (As I'm sure you know.) Is there motivation for these changes other than a stylistic preference for the newer idioms? Keeping the SLOC count down seems pretty minimal, and unimportant. 
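(For concreteness, the two idioms at issue, reduced to a self-contained sketch rather than the actual ConfigParser code; 'handle' is just a placeholder.)

    from StringIO import StringIO

    fp = StringIO("[section]\noption = value\n")

    def handle(line):
        print line.rstrip()

    # readline loop, as written before file iteration existed
    while True:
        line = fp.readline()
        if not line:
            break
        handle(line)

    fp.seek(0)

    # the newer "for line in fp" idiom the patch switches to
    for line in fp:
        handle(line)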
Making the code more understandable is valuable, but it's not clear how much this really achieves that. In general, we try to avoid making style changes to the code since that can increase the maintenance burden (patches can be harder to produce that can be cleanly applied to multiple versions). Are there motivations we're missing? -Fred -- Fred L. Drake, Jr. From greg.ewing at canterbury.ac.nz Thu May 31 08:30:40 2007 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 31 May 2007 18:30:40 +1200 Subject: [Python-Dev] Quoting netiquette reminder [Re: proposed which.py replacement] In-Reply-To: <20070530223754.GA13084@panix.com> References: <460EC367.8090007@ncee.net> <461131E3.8090305@activestate.com> <46117366.4090005@canterbury.ac.nz> <20070530223754.GA13084@panix.com> Message-ID: <465E6B90.4080002@canterbury.ac.nz> Aahz wrote: > Guido has previously given himself explicit permission to violate > netiquette (including the rule about top-posting). Only in the Python mailing lists, I hope -- unless he's declared himself BDFL of the whole Internet as well. :-) I suppose he could be considered to have a right to do that, but it doesn't stop sloppy quoting practices from being annoying and inefficient. The quoting conventions that emerged in the early days did so for good reasons -- they avoid squandering bandwidth and aid clear communication. To me, it's not so much a matter of politeness, but of common sense. -- Greg From martin at v.loewis.de Thu May 31 09:09:32 2007 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 31 May 2007 09:09:32 +0200 Subject: [Python-Dev] [Distutils] Adventures with x64, VS7 and VS8 on Windows In-Reply-To: <465E08D3.6090309@ibp.de> References: <009f01c79b51$b1da52c0$1f0a0a0a@enfoldsystems.local> <46512B09.1080304@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE4B0@exchis.ccp.ad.local> <4652128B.3000301@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEE6FB@exchis.ccp.ad.local> <465365DE.1090306@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DCBEEACB@exchis.ccp.ad.local> <4E9372E6B2234D4F859320D896059A9508DE8B4046@exchis.ccp.ad.local> <465B53EF.6070108@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DE8B435C@exchis.ccp.ad.local> <465C9D09.9040401@v.loewis.de> <465CA75B.50901@ibp.de> <465D073C.6040304@v.loewis.de> <465D3936.2020700@ibp.de> <465DDD79.2030404@v.loewis.de> <4E9372E6B2234D4F859320D896059A9508DE8B4779@exchis.ccp.ad.local> <465DFCD9.4020906@v.loewis.de> <465E08D3.6090309@ibp.de> Message-ID: <465E74AC.5030706@v.loewis.de> > The practical problem is: how should 'The PSF' get a Microsoft-signed > certificate? If you want to experiment with a signed installer, try http://www.dcl.hpi.uni-potsdam.de/home/loewis/python-2.5.1.msi In order for Windows to verify the signature, you can install http://www.dcl.hpi.uni-potsdam.de/home/loewis/osm.cer first (and remove it from the list of trusted root certificates when you are done, if you are worried about that). While XP displays the certificate just fine (in Properties/Digital Signatures), my installation of Vista fails to recognize that the file has a signature. Not sure what's wrong here (FWIW, signtool would fail to sign the file as well if run on Vista, but signed it just fine when run on XP). 
Regards, Martin From josepharmbruster at gmail.com Thu May 31 14:16:57 2007 From: josepharmbruster at gmail.com (Joseph Armbruster) Date: Thu, 31 May 2007 08:16:57 -0400 Subject: [Python-Dev] Minor ConfigParser Change In-Reply-To: <200705310045.58802.fdrake@acm.org> References: <46585729.2030305@gmail.com> <4E9372E6B2234D4F859320D896059A9508DE8B40B5@exchis.ccp.ad.local> <46588B22.3090808@gmail.com> <200705310045.58802.fdrake@acm.org> Message-ID: <938f42d70705310516h5dc6aabeu7495cad7a32dc6d@mail.gmail.com> Fred, My only motivation was style. As per your comment: "In general, we try to avoid making style changes to the code since that can increase the maintenance burden (patches can be harder to produce that can be cleanly applied to multiple versions)." I will keep this in mind when supplying future patches. Joseph Armbruster On 5/31/07, Fred L. Drake, Jr. wrote: > > On Saturday 26 May 2007, Joseph Armbruster wrote: > > I noticed that one of the parts of ConfigParser was not using "for line > > in fp" style of readline-ing :-) So, this will reduce the SLOC by 3 > > lines and improve readability. However, I did a quick grep and this > > type of practice appears in several other places. > > Before the current iteration support was part of Python, there was no way > to > iterate over a the way there is now; the code you've dug up is simply from > before the current iteration support. (As I'm sure you know.) > > Is there motivation for these changes other than a stylistic preference > for > the newer idioms? Keeping the SLOC count down seems pretty minimal, and > unimportant. Making the code more understandable is valuable, but it's > not > clear how much this really achieves that. > > In general, we try to avoid making style changes to the code since that > can > increase the maintenance burden (patches can be harder to produce that can > be > cleanly applied to multiple versions). No other motivat Are there motivations we're missing? > > > -Fred > > -- > Fred L. Drake, Jr. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20070531/535f320d/attachment.html From brett at python.org Thu May 31 22:09:32 2007 From: brett at python.org (Brett Cannon) Date: Thu, 31 May 2007 13:09:32 -0700 Subject: [Python-Dev] removing use of mimetools, multifile, and rfc822 Message-ID: I just finished going through PEP 4 and adding DeprecationWarnings in 2.6for the various modules that were lacking the warning for some reason or another ... ... except for mimetools, multifile, and rfc822. All three modules are still used by some other module somewhere in the stdlib. The docs say to use the email package to replace these three, but there is no one-to-one mapping. And as I never use any of these three modules, email, or the modules still using the three in question I don't know how to go about ripping them out. In other words this email is to hopefully inspire someone to remove the uses of rfc822, mimetools, and multifile from the stdlib so the DeprecationWarnings can finally go in. -Brett -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20070531/522d0652/attachment.html From barry at python.org Thu May 31 23:42:03 2007 From: barry at python.org (Barry Warsaw) Date: Thu, 31 May 2007 17:42:03 -0400 Subject: [Python-Dev] removing use of mimetools, multifile, and rfc822 In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On May 31, 2007, at 4:09 PM, Brett Cannon wrote: > I just finished going through PEP 4 and adding DeprecationWarnings in > 2.6for the various modules that were lacking the warning for some > reason or > another ... > > ... except for mimetools, multifile, and rfc822. All three modules > are > still used by some other module somewhere in the stdlib. The docs > say to > use the email package to replace these three, but there is no one- > to-one > mapping. And as I never use any of these three modules, email, or the > modules still using the three in question I don't know how to go about > ripping them out. > > In other words this email is to hopefully inspire someone to remove > the uses > of rfc822, mimetools, and multifile from the stdlib so the > DeprecationWarnings can finally go in. +1 for deprecating these. I don't have time to slog through the stdlib and do the work, but I would be happy to help answer questions about alternatives. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (Darwin) iQCVAwUBRl9BLHEjvBPtnXfVAQKwZgP/aib1PVuLab/YLhCZ+XC82i9ow/tJrelk cLQhcM8Qc/iKcUmFKtKzkhdpOF43dZp6apGrq0ej9pMOdydsFk1wU8egRf+NRJac 00z4ZrMkmM4ZbQ/bNLWHTkqWmadkmTnOErNMl8HzCmbUdBOVNmj6/nzJNT14BRAy K3IRsA6RXzg= =QwId -----END PGP SIGNATURE-----
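(For anyone taking up Brett's call: the most common rfc822 pattern, parsing headers from a file-like object, maps onto the email package roughly as sketched below. This is an illustration of the replacement API only, not a drop-in patch for the remaining stdlib callers.)

    import email

    raw = "From: someone at example.com\nSubject: test\n\nbody\n"

    # rfc822 style, as the deprecated callers do it (needs a file-like object):
    #   import rfc822, StringIO
    #   msg = rfc822.Message(StringIO.StringIO(raw))
    # email-package equivalent:
    msg = email.message_from_string(raw)

    print msg["Subject"]       # header lookup is case-insensitive in both APIs
    print msg.get("From")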