From guido@python.org Fri Nov 1 00:25:24 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 31 Oct 2002 19:25:24 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: Your message of "Thu, 31 Oct 2002 20:47:32 -0300." <20021031204732.B32673@ibook.distro.conectiva> References: <20021031204732.B32673@ibook.distro.conectiva> Message-ID: <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> > Now that I'm relieved by sharing with you my feelings, ;-) And thanks for that! I think you're right. I'll let others comment first; I'm about to leave for a 3-day trip, back on Monday. > what's the best path to get python-bz2 module into Python 2.3? > Do you think it's something which should be in the core, or it'd > be better to keep it as an external module? If there are no licensing issues, and if there's a decent bz2 library, it should be welcome in the core. Batteries included. > The code is currently maintained at http://python-bz2.sf.net, if > someone wants to have a look at it. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Fri Nov 1 07:56:14 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 01 Nov 2002 08:56:14 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <20021031204046.A32673@ibook.distro.conectiva> References: <20021031204046.A32673@ibook.distro.conectiva> Message-ID: Gustavo Niemeyer writes: > At the same time, I've seen Guido and others bothered a few times > because of the lack of man power. So the question is: how do I, a > developer which feels capable of helping in python's development, can > get some of the tasks which take your time out of your hands? Or even, > how is it possible to improve some part of the development process by > giving people like me some instructions? To me, the most important aspect I'd like to hand off is the review of patches; to Tim, it is the analysis of bug reports. There is an ever-growing backlog of both of these (atleast for patches, we are in the state where just less than 100 are pending, so it's not that badly growing). It is clearly unsatisfying for submitters of both patches and bug reports if they don't hear anything. While some of these issues are tricky and require expertise not everybody might have, I'd still like to see more people getting involved with that. 
For reviewing of patches, I'd suggest the following guidelines: - start with the oldest patches, and work forward to the more recent ones (the most recent ones are regularly checked by several people, and applied or rejected if a quick decision is possible - but some of these slip through) - for each patch, try to find out if it a) is present (if not, post a notice saying that the upload is missing), b) applies to the Python source code cleanly (if not, either update the patch yourself, or request that the submitter does that), c) does what it says it does (no detailed analysis necessary yet) (if it is not clear what the patch does, or how it does that, request clarification); d) is appropriate for inclusion, by comparison to other features that are already in Python (if not, ask submitter for a rationale why this patch should be included, pointing out your objections), e) has undesirable side effects (if yes, ask the submitter for an evaluation why these side effects are acceptable), f) is complete (new features need documentation, bug fixes need regression test cases if possible), g) is correct: try compiling it to see whether it works, try to come up with boundary cases to see whether it still works, inspect it to see if you find any flaws... - if you find problems with the patch, post a note asking for correction. If you find there is already a note and the submitter hasn't acted for quite some time (10% of the entire life of the patch on SF), set a deadline at which time it will be rejected. If there is already a commentary from the submitter, re-perform the entire analysis. - If you find that you are not qualified to review the patch, still try to perform as many steps as you can. Don't hesitate to ask the submitter to explain things to you that you don't understand. - If you complete evaluation, propose rejection or acceptance. From time to time, post a list of patches that you think we should act upon. If I find your analysis convincing, I'll execute the proposed action quickly. For bug reports, a similar procedure applies. Check: - if the bug report is reproducible, - whether this is a bug at all, or perhaps a misunderstanding on the part of the submitter, and the documentation clearly says otherwise, - if there is a patch included, analyse the patch, - if there is no patch, but you can see how the bug would be fixed, ask the submitter (politely) whether he would like to write a patch, - see whether you can write a patch yourself. If you can't create a patch, dealing with a bug report might be tricky. If you can see a solution, but can't work on it, post your analysis into the bug report. If you think the bug is unfixable, explain this as well. If fixing the bug requires expertise that you don't have, ask the submitter or find out who might have this expertise (especially when it comes to strange systems). > Also, isn't it easy to point out what's wrong in a commit from > someone who is following the development process for a while than > taking the time to review its code in the sourceforge patch system? I'm not sure what you are proposing here? That all patches are just applied, and then backed-out if incorrect? While this may be appropriate sometimes, I think it would cause too many disturbances. It happens from time to time that people reading python-checkins find problems by inspection. More often, they find problems some time afterwards, because of unexpected side effects. This is a good thing, and it is possible only because the patches had been reviewed carefully on SF. 
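For illustration only, here is a rough sketch of the mechanical part of these guidelines - checking that a patch applies cleanly (step b) and that the tree still builds and passes the regression suite (step g). The helper name, the patch file name, and the use of os.system are assumptions made for the sketch; a reviewer would normally run the equivalent shell commands by hand from the top of a clean checkout.

    import os

    def review_build_steps(patch_file="candidate.diff"):
        # Step (b): does the patch apply cleanly to the current source tree?
        if os.system("patch -p0 --dry-run < %s" % patch_file) != 0:
            print("patch does not apply cleanly; ask the submitter to regenerate it")
            return
        os.system("patch -p0 < %s" % patch_file)
        # Step (g): rebuild and run the regression suite to look for breakage.
        if os.system("make") != 0:
            print("build failed with the patch applied")
            return
        os.system("./python Lib/test/regrtest.py")

The remaining steps (appropriateness, side effects, documentation and tests) still need human judgement; a script like this only automates the parts a machine can check.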
> My feeling is that the Python development is currently overly > centralized, and that you might be suffering from that now, by being > unable to handover some of your tasks to someone else. I agree with your first observation, but disagree with the second: there are plenty of tasks that could be handed over. There are just no volunteers to perform these tasks. > I feel that everytime I send a patch, besides being contributing, > I'm also overloading you with more stuff to review and comment and > etc. Perhaps the fallback costs for some wrong commit is too high > now (did I heard someone mentioning subversion?)?! It's not that. Patches *must* be reviewed, or else they would be just incomplete: people might commit the patch, but fail to commit the documentation. Then, for a couple of years, there would be no documentation. Likewise for test cases. That the patch works is alone not good enough. It is time-consuming to produce high-quality software. However, that should not alone be a reason to give up the high standards of Python development. Regards, Martin From mal@lemburg.com Fri Nov 1 10:22:26 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 01 Nov 2002 11:22:26 +0100 Subject: [Python-Dev] Becoming a python contributor References: <20021031204046.A32673@ibook.distro.conectiva> Message-ID: <3DC255E2.7040901@lemburg.com> Martin v. Loewis wrote: > - for each patch, try to find out if it > a) is present (if not, post a notice saying that the upload is missing), > b) applies to the Python source code cleanly (if not, either update the > patch yourself, or request that the submitter does that), > c) does what it says it does (no detailed analysis necessary yet) > (if it is not clear what the patch does, or how it does that, > request clarification); > d) is appropriate for inclusion, by comparison to other features > that are already in Python (if not, ask submitter for a rationale > why this patch should be included, pointing out your objections), > e) has undesirable side effects (if yes, ask the submitter for > an evaluation why these side effects are acceptable), > f) is complete (new features need documentation, bug fixes need > regression test cases if possible), > g) is correct: try compiling it to see whether it works, try to come > up with boundary cases to see whether it still works, inspect > it to see if you find any flaws... I think that the current assignment solution causes much of the delays we are seeing + python-devs are pretty busy these days with other stuff (needed for pizza and beer). I for one wouldn't mind if other developers with some time at hand jump in on already assigned patches and bug reports to help out. Martin does this on a regular basis and I find it really helps. Another strategy would be for developers to take over maintenance of certain parts of the code. We should then probably have a list of maintainers for the various parts on the patch submission list to make the assignment process easier for the submitting parties. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... 
Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From niemeyer@conectiva.com Fri Nov 1 13:42:38 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 1 Nov 2002 10:42:38 -0300 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: References: <20021031204046.A32673@ibook.distro.conectiva> Message-ID: <20021101104238.C12803@ibook.distro.conectiva> > It is clearly unsatisfying for submitters of both patches and bug > reports if they don't hear anything. While some of these issues are Indeed. > tricky and require expertise not everybody might have, I'd still like > to see more people getting involved with that. I see.. > For reviewing of patches, I'd suggest the following guidelines: [...] > - If you complete evaluation, propose rejection or acceptance. From > time to time, post a list of patches that you think we should act > upon. If I find your analysis convincing, I'll execute the proposed > action quickly. [...] > For bug reports, a similar procedure applies. Check: [...] Thank you very much for your detailed descriptions. It will be a great reference for myself and for others which feel in the same situation. I'll try to understand the whole process and follow your instructions. > > Also, isn't it easy to point out what's wrong in a commit from > > someone who is following the development process for a while than > > taking the time to review its code in the sourceforge patch system? > > I'm not sure what you are proposing here? That all patches are just > applied, and then backed-out if incorrect? [...] Not at all. I meant that people who prove to understand the basic development strategy, and also provided correct working and non-breaking patches, could get commit access more easily. I've read somewhere once something which I took for the projects I currently maintain. To decide if you should give someone commit access, it's not important if that person can produce pages of complex code, but if that person usually provide patches which follow the general strategy, and don't break anything. > While this may be appropriate sometimes, I think it would cause to > many disturbances. It happens from time to time that people reading > python-checkins find problems by inspection. More often, they find > problems some time afterwards, because of unexpected side effects. > This is a good thing, and it is possible only because the patches had > been reviewed carefully on SF. I'm not suggesting by any means that the current system should be dropped, as stated above. OTOH, perhaps if we can improve the reviewing process somehow, even that wouldn't be important anymore. > > My feeling is that the Python development is currently overly > > centralized, and that you might be suffering from that now, by being > > unable to handover some of your tasks to someone else. > > I agree with your first observation, but disagree with the second: > there are plenty of tasks that could be handed over. There are just no > volunteers to perform these tasks. Here! Here! :-) Sometimes there's just no obvious open door for people to get in. > > I feel that everytime I send a patch, besides being contributing, > > I'm also overloading you with more stuff to review and comment and > > etc. Perhaps the fallback costs for some wrong commit is too high > > now (did I heard someone mentioning subversion?)?! > > It's not that. 
Patches *must* be reviewed, or else they would be just > incomplete: people might commit the patch, but fail to commit the > documentation. Then, for a couple of years, there would be no > documentation. Likewise for test cases. That the patch works is alone > not good enough. That's a failure in the process, which can easily be reviewed in a post-commit fashion. If somebody fails to provide tests/documentation, drop them a line asking for it before anything else. If they still don't want to provide it, cut them out. > It is time-consuming to produce high-quality software. However, that > should not alone be a reason to give up the high standards of Python > development. Fully agreed. And I don't want to contribute to reducing the standards in any way. Thank you very much! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Fri Nov 1 13:58:42 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 1 Nov 2002 10:58:42 -0300 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <3DC255E2.7040901@lemburg.com> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC255E2.7040901@lemburg.com> Message-ID: <20021101105842.D12803@ibook.distro.conectiva> > Another strategy would be for developers to take over maintenance > of certain parts of the code. We should then probably have a list > of maintainers for the various parts on the patch submission list > to make the assignment process easier for the submitting parties. That looks interesting. For an example of a project using a similar strategy, have a look at Subversion: http://svn.collab.net/repos/svn/trunk/COMMITTERS -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Fri Nov 1 14:13:34 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 1 Nov 2002 11:13:34 -0300 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> References: <20021031204732.B32673@ibook.distro.conectiva> <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021101111333.E12803@ibook.distro.conectiva> > If there are no licensing issues, and if there's a decent bz2 library, > it should be welcome in the core. Batteries included. Great! This module is currently distributed under LGPL, but I have no problems in changing it to Python's license. About being decent, I'd be suspect to evaluate my own code. I accept comments and suggestions about it though. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From jrw@pobox.com Fri Nov 1 16:12:41 2002 From: jrw@pobox.com (John Williams) Date: Fri, 01 Nov 2002 10:12:41 -0600 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <20021031204046.A32673@ibook.distro.conectiva> References: <20021031204046.A32673@ibook.distro.conectiva> Message-ID: <3DC2A7F9.6040902@pobox.com> Martin v. Loewis wrote: > Gustavo Niemeyer writes: > >My feeling is that the Python development is currently overly > >centralized, and that you might be suffering from that now, by being > >unable to handover some of your tasks to someone else. > > I agree with your first observation, but disagree with the second: > there are plenty of tasks that could be handed over. There are just no > volunteers to perform these tasks. There aren't enough volunteers willing to dedicate a lot of time, but I bet there's a large group of people like me who do things like submitting an occasional patch or bug report. 
My interpretation of the problem Gustavo is pointing out is that the larger group really isn't able to help, because everything we do places an additional burden on the core developers. If more of these people were able to contribute directly (i.e. via CVS commit access), you'd get a double benefit, since they'd get their contributions in faster and it would help free up the core developers from a lot of busy work. I think people with CVS access would also be a lot more motivated to contribute, since it removes the uncertainty about whether their work will go into the release or just be wasted. My solution is: 1. Post Martin's guidelines on how to help very prominently. 2. Offer CVS access to developers who submit useful patches. 3. Publicize #2 to promote more patches from people wanting to prove themselves. I'm proposing to set the bar pretty low for CVS access, but I think this is a good strategy overall. As long as people are aware of the standards they're expected to hold up and the trust they're being given, most of them will do their best not to abuse it. The cost of granting commit access to the wrong person is fairly low (just back out their changes and revoke their access), but granting access to the right person could pay off for many years. Ok, I'll quit trying to sound so important now :) jw From padraig.brady@corvil.com Fri Nov 1 17:29:20 2002 From: padraig.brady@corvil.com (Padraig Brady) Date: Fri, 01 Nov 2002 17:29:20 +0000 Subject: [Python-Dev] PEP270 (list.uniq()) Message-ID: <3DC2B9F0.60306@corvil.com> Hi Jason, I was just reading this and it seems to overlap with: http://www.python.org/peps/pep-0218.html I.E. uniq is equivalent to union (with self). wrt the uniq command line tool, if you've 2 files a & b, then: uniq a b = union uniq -u a b = difference uniq -d a b = intersection So uniq really is a set operation and if Python had a set builtin type then I'm not sure a uniq method would be required? Perhaps the list object could support the union operator so to uniqify a list you could do: mylist |= mylist cheers, Pádraig. From jp@demonseed.net Fri Nov 1 17:39:37 2002 From: jp@demonseed.net (jason petrone) Date: Fri, 1 Nov 2002 12:39:37 -0500 Subject: [Python-Dev] Re: PEP270 (list.uniq()) In-Reply-To: <3DC2B9F0.60306@corvil.com>; from padraig.brady@corvil.com on Fri, Nov 01, 2002 at 05:29:20PM +0000 References: <3DC2B9F0.60306@corvil.com> Message-ID: <20021101123937.A17710@demonseed.net> On Fri, Nov 01, 2002 at 05:29:20PM +0000, Padraig Brady wrote: > So uniq really is a set operation and if Python had > a set builtin type then I'm not sure a uniq method > would be required? > could do: Yes, sets are a much better solution to this problem. For that reason, my PEP wasn't accepted. In the meantime, I just use dictionaries with None for values to achieve the same goal. Jason From skip@pobox.com Fri Nov 1 17:47:54 2002 From: skip@pobox.com (Skip Montanaro) Date: Fri, 1 Nov 2002 11:47:54 -0600 Subject: [Python-Dev] PEP270 (list.uniq()) In-Reply-To: <3DC2B9F0.60306@corvil.com> References: <3DC2B9F0.60306@corvil.com> Message-ID: <15810.48714.982505.52191@montanaro.dyndns.org> Padraig> So uniq really is a set operation and if Python had a set Padraig> builtin type then I'm not sure a uniq method would be required? There's a sets.py module in the CVS repository. (new w/ 2.3.) -- Skip Montanaro - skip@pobox.com http://www.mojam.com/ http://www.musi-cal.com/ From martin@v.loewis.de Fri Nov 1 20:53:53 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 01 Nov 2002 21:53:53 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <3DC255E2.7040901@lemburg.com> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC255E2.7040901@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > I for one wouldn't mind if other developers with some time > at hand jump in on already assigned patches and bug reports to > help out. Martin does this on a regular basis and I find > it really helps. I personally don't see assignments as a fixed thing. I think developers who cannot process assigned tracker items should unassign themselves - I personally assign things for myself only if I know I can process them in the near future. I also think that people should not consider assigned items as "solved". If they had been assigned quite some time ago, people should indeed contribute if they can. > Another strategy would be for developers to take over maintenance of > certain parts of the code. We should then probably have a list of > maintainers for the various parts on the patch submission list to > make the assignment process easier for the submitting parties. I don't really think this would work. I quite dislike the idea of somebody being "responsible" for some area of the code, being able to overrule his peers merely by his position - except for the BDFL, of course. Regards, Martin From martin@v.loewis.de Fri Nov 1 21:02:20 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 01 Nov 2002 22:02:20 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <20021101104238.C12803@ibook.distro.conectiva> References: <20021031204046.A32673@ibook.distro.conectiva> <20021101104238.C12803@ibook.distro.conectiva> Message-ID: Gustavo Niemeyer writes: > Not at all. I meant that people who prove to understand the basic > development strategy, and also provided correct working and non-breaking > patches, could get commit access more easily. That procedure is already in place: If you want commit privileges, just step forward and say that you want. In the past, Guido has set a policy that people who's commit privilege is fresh will still have to use SF, but can perform the checkin themselves. > I've read somewhere once something which I took for the projects > I currently maintain. To decide if you should give someone commit > access, it's not important if that person can produce pages of > complex code, but if that person usually provide patches which > follow the general strategy, and don't break anything. Yes, that's the strategy I apply when reviewing patches: If the submitter wants it, and I can't find problems, I'll accept the patch ("problems" being studied widely, of course). > That's a failure in the process, which can easily be reviewed in a > post-commit fashion. If somebody fail to provide tests/documentation, > drop them a line asking for it before anything else. If they still > don't want to provide it, cut them out. I've been contributing to GCC for a while, and I can tell you from first-hand experience that this won't work. If you don't make documentation a commit prerequisite, you never get it. Shutting out the contributor won't work, since they a) may still provide valuable contributions, and b) will tell you that they will get to provide the missing pieces RSN - which then of course never happens. Regards, Martin From martin@v.loewis.de Fri Nov 1 21:09:43 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 01 Nov 2002 22:09:43 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <3DC2A7F9.6040902@pobox.com> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> Message-ID: John Williams writes: > There aren't enough volunteers willing to dedicate a lot of time, but > I bet there's a large group of people like me who do things like > submitting an occasional patch of bug report. My interpretation of > the problem Gustavo is pointing out is that the larger group really > ins't able to help, because everything we do places an additional > burden on the core developers. I don't think this is the case. If you manage to fix a bug per week, and there are ten of you, we can make quite some progress within a few months. If those regular contributors know how to prepare a contribution to fit the formal requirements, and provide rationale and explanations, patches can be applied quite quickly. > If more of these people were able to contribute directly (i.e. via CVS > commit access), you'd get a double benefit, since they'd get their > contributions in faster and it would help free up the core developers > from a lot of busy work. I think people with CVS access would also be > a lot more motivated to contribute, since it removes the uncertaintly > about whether their work will go into the release or just be wasted. It is my impression that all people who want CVS write access already have it (with Gustavo perhaps being one of a few exceptions). > 2. Offer CVS access to developers who submit useful patches. That may not be known - but we already do that. > 3. Publicize #2 to promote more patches from people wanting to prove > themselves. Not sure whether this helps: people should not produce a burst of patches just to get commit privileges. Instead, they should contribute patches steadily (and should have done so in the past), and then get CVS write access as a simplification for the rest of the maintainers. > I'm proposing to set the bar pretty low for CVS access, but I think > this is a good strategy overall. I would expect that this will lead to quite a large committer list, with many people running away after some time. Regards, Martin From martin@v.loewis.de Fri Nov 1 21:16:12 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 01 Nov 2002 22:16:12 +0100 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <20021101111333.E12803@ibook.distro.conectiva> References: <20021031204732.B32673@ibook.distro.conectiva> <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> <20021101111333.E12803@ibook.distro.conectiva> Message-ID: Gustavo Niemeyer writes: > > and if there's a decent bz2 library, > About being decent, I'd be suspect to evaluate my own code. I think Guido was talking about libbz2, not your bz2.c. I think you should create a patch (complete with test suite, documentation, and everything :-). If you can, you should also include Windows build instructions (what to download and how to unpack) - perhaps even with a MSVC project file. 
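For reference, a rough sketch of what exercising the proposed module could look like, assuming it keeps a zlib/gzip-style interface (one-shot compress()/decompress() functions plus a file-like BZ2File class); these names and the temporary path are assumptions made here, since the module's final interface is not shown in this thread.

    import bz2  # the python-bz2 extension, not yet part of the core at this point

    data = "python-dev archive text " * 100
    blob = bz2.compress(data)            # one-shot compression
    assert bz2.decompress(blob) == data  # round-trips back to the original

    # File-like interface, analogous to gzip.GzipFile:
    f = bz2.BZ2File("/tmp/example.bz2", "w")
    f.write(data)
    f.close()

A test suite and documentation patch of the kind Martin asks for would need to cover roughly these entry points.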
Regards, Martin From niemeyer@conectiva.com Fri Nov 1 22:15:12 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 1 Nov 2002 19:15:12 -0300 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: References: <20021031204732.B32673@ibook.distro.conectiva> <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> <20021101111333.E12803@ibook.distro.conectiva> Message-ID: <20021101191512.A20879@ibook.distro.conectiva> > > > and if there's a decent bz2 library, > > > About being decent, I'd be suspect to evaluate my own code. > > I think Guido was talking about libbz2, not your bz2.c. Oh.. I see. Perhaps he wasn't aware that unlike the gzip/zlib scheme, bzip2 is based on its own library? > I think you should create a patch (complete with test suite, > documentation, and everything :-). :-) It already has a test suite, and complete inline documentation. I'll reuse the inline docs to create "external" documentation. > If you can, you should also include Windows build instructions (what > to download and how to unpack) - perhaps even with a MSVC project > file. Unfortunately I can't provide that part. :-( I'm away from the Windows world for many years now, and have no access to a machine where I could create those files. I'll promptly help anyone to solve any issues in that area though. In the URL below one may find information about bzip2, including binaries for many platforms: http://sources.redhat.com/bzip2/ Thank you! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From tim.one@comcast.net Fri Nov 1 23:06:58 2002 From: tim.one@comcast.net (Tim Peters) Date: Fri, 01 Nov 2002 18:06:58 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <20021101191512.A20879@ibook.distro.conectiva> Message-ID: [Martin v. Loewis] >> If you can, you should also include Windows build instructions (what >> to download and how to unpack) - perhaps even with a MSVC project >> file. [Gustavo Niemeyer] > Unfortunately I can't provide that part. :-( Actually, you just did. The link you provided *is* the place with Windows build instructions and an MSVC project file. > I'm away from the Windows world for many years now, and have no > access to a machine where I could create those files. I'll promptly > help anyone to solve any issues in that area though. > > In the URL below one may find information about bzip2, including > binaries for many platforms: > > http://sources.redhat.com/bzip2/ Semi-unfortunately, the author of that has "no idea if it actually works on 95/98/ME/NT/XP", and in the docs for "3.8 Making a Windows DLL" says "I haven't tried any of this stuff myself, but it all looks plausible." That means it will require some real work to build and test this stuff on 6 flavors of Windows. Not a showstopper, but does raise the bar for getting into the PLabs Windows distro. The good news is that it's BSD-licensed, so we don't have to haggle about license issues. From neal@metaslash.com Sat Nov 2 14:57:42 2002 From: neal@metaslash.com (Neal Norwitz) Date: Sat, 02 Nov 2002 09:57:42 -0500 Subject: [Python-Dev] Contributing...current bug status Message-ID: <20021102145742.GF17370@epoch.metaslash.com> I've downloaded and formatted the 325 bugs in SF. 
Here's a breakdown of bugs by category (before I changed some of the Nones to an appropriate category): Build 18 Demos and Tools 8 Distutils 29 Documentation 39 Extension Modules 15 IDLE 1 Installation 8 Macintosh 23 None 19 Parser/Compiler 3 Python Interpreter Core 26 Python Library 78 Regular Expressions 14 Threads 7 Tkinter 11 Type/class unification 8 Unicode 6 Windows 16 XML 5 There's more detail available in html format here: http://www.metaslash.com/py/sf.data.html gnumeric file here: http://www.metaslash.com/py/sf.data.gnumeric text (tab separated) here: http://www.metaslash.com/py/sf.data.txt The info shown is: SF id #, Summary, Date Submitted, Assigned To, Submitted By, Category, Comments, Date of Last Comment. Current sort criteria is by Category, then by Date Submitted. Bugs fall into many categories including: platform specific, duplicates (e.g., RE max recursion), cannot be reproduced, believed to be fixed, but the submitter has not verified the fix, feature requests, and of course plain bugs. I may try to work up another list of the bugs which are almost fixed. Meaning, bugs which need verification or have a proposed fix attached. It would be great if people could go through topics of interest to them and help fix problems in that category. Many of the bugs in the 'Python Library' are web related (ie, cgi, htmllib, urllib[2], etc). Neal From niemeyer@conectiva.com Sat Nov 2 17:44:21 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Sat, 2 Nov 2002 14:44:21 -0300 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: References: <20021031204046.A32673@ibook.distro.conectiva> <20021101104238.C12803@ibook.distro.conectiva> Message-ID: <20021102144421.A24064@ibook.distro.conectiva> > > Not at all. I meant that people who prove to understand the basic > > development strategy, and also provided correct working and > > non-breaking patches, could get commit access more easily. > > That procedure is already in place: If you want commit privileges, > just step forward and say that you want. In the past, Guido has set a > policy that people who's commit privilege is fresh will still have to > use SF, but can perform the checkin themselves. That's ok with me. I think that besides that being the case for people who's just got commit access, any serious developer will submit a patch for review whenever he's not sure about something. > I've been contributing to GCC for a while, and I can tell you from > first-hand experience that this won't work. If you don't make > documentation a commit prerequisite, you never get it. [...] Ouch.. :-( Thanks again Martin! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Sat Nov 2 18:07:53 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Sat, 2 Nov 2002 15:07:53 -0300 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: References: <20021101191512.A20879@ibook.distro.conectiva> Message-ID: <20021102150753.B24064@ibook.distro.conectiva> > > Unfortunately I can't provide that part. :-( > > Actually, you just did . The link you provided *is* the place > with Windows build instructions and an MSVC project file. Great! :-) > Semi-unfortunately, the author of that has > > no idea if it actually works on 95/98/ME/NT/XP I think he was refering specifically to the pre-built binaries for 1.0.2, since older binaries are available for many platforms for quite some time, and bzip2 seems to be very portable. 
> That means it will require some real work to build and test this stuff > on 6 flavors of Windows. Not a showstopper, but does raise the bar > for getting into the PLabs Windows distro. > > The good news is that it's BSD-licensed, so we don't have to haggle > about license issues. Given your comments, I belive such libraries are included directly in the distribution, right? -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From martin@v.loewis.de Sat Nov 2 19:47:49 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 02 Nov 2002 20:47:49 +0100 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <20021102150753.B24064@ibook.distro.conectiva> References: <20021101191512.A20879@ibook.distro.conectiva> <20021102150753.B24064@ibook.distro.conectiva> Message-ID: Gustavo Niemeyer writes: > Given your comments, I belive such libraries are included directly in > the distribution, right? Correct. The Windows distribution does not require any additional software, except for Windows itself: all modules that it includes also come with their supporting libraries. Regards, Martin From neal@metaslash.com Sat Nov 2 21:50:55 2002 From: neal@metaslash.com (Neal Norwitz) Date: Sat, 02 Nov 2002 16:50:55 -0500 Subject: [Python-Dev] Low hanging fruit Message-ID: <20021102215055.GB27413@epoch.metaslash.com> Here's the lowest hanging fruit I could find. Maybe some of these can be fixed/closed: These bugs/patches seem quite easy to close/fix based on last comment: http://python.org/sf/464405 - freeze doesn't like DOS files on Linux http://python.org/sf/534669 - remember to sync trees http://python.org/sf/599836 - Bugfix for urllib2.py this is closed, but should urllib also be fixed? http://python.org/sf/626570 - strptime() always returns 0 in dst field http://python.org/sf/627900 - Bytecode copy bug in freeze http://python.org/sf/618146 - overflow error in calendar module http://python.org/sf/569668 - LINKCC incorrectly set These bugs/patches may be relatively easy fix: http://python.org/sf/527521 - httplib strict mode fails in 2.2.2 http://python.org/sf/570655 - bdist_rpm and the changelog option http://python.org/sf/622537 - dummy_thread.py implementation http://python.org/sf/623464 - tempfile crashes http://python.org/sf/622831 - textwrap fails on unicode using defaults (didn't Greg have the fix on python-dev?) http://python.org/sf/622849 - inconsistent results of leading whitespace in textwrap input http://python.org/sf/630195 - bdist_rpm target breaks with rpm 4.1 http://python.org/sf/594893 - printing email object deletes whitespace From guido@python.org Sat Nov 2 22:40:00 2002 From: guido@python.org (Guido van Rossum) Date: Sat, 02 Nov 2002 17:40:00 -0500 Subject: [Python-Dev] Low hanging fruit In-Reply-To: Your message of "Sat, 02 Nov 2002 16:50:55 EST." <20021102215055.GB27413@epoch.metaslash.com> References: <20021102215055.GB27413@epoch.metaslash.com> Message-ID: <200211022240.gA2Me0H07969@pcp02138704pcs.reston01.va.comcast.net> > Here's the lowest hanging fruit I could find. Maybe some of these > can be fixed/closed: > > These bugs/patches seem quite easy to close/fix based on last comment: > http://python.org/sf/464405 - freeze doesn't like DOS files on Linux > http://python.org/sf/534669 - remember to sync trees > http://python.org/sf/599836 - Bugfix for urllib2.py > this is closed, but should urllib also be fixed? 
> http://python.org/sf/626570 - strptime() always returns 0 in dst field > http://python.org/sf/627900 - Bytecode copy bug in freeze > http://python.org/sf/618146 - overflow error in calendar module > http://python.org/sf/569668 - LINKCC incorrectly set > > These bugs/patches may be relatively easy fix: > http://python.org/sf/527521 - httplib strict mode fails in 2.2.2 > http://python.org/sf/570655 - bdist_rpm and the changelog option > http://python.org/sf/622537 - dummy_thread.py implementation > http://python.org/sf/623464 - tempfile crashes > http://python.org/sf/622831 - textwrap fails on unicode using defaults > (didn't Greg have the fix on python-dev?) > http://python.org/sf/622849 - inconsistent results > of leading whitespace in textwrap input > http://python.org/sf/630195 - bdist_rpm target breaks with rpm 4.1 > http://python.org/sf/594893 - printing email object deletes whitespace Are you asking for permission to fix them? I haven't looked at these specifically, but I can't imagine what would stop you. --Guido van Rossum (home page: http://www.python.org/~guido/) From neal@metaslash.com Sat Nov 2 23:19:27 2002 From: neal@metaslash.com (Neal Norwitz) Date: Sat, 02 Nov 2002 18:19:27 -0500 Subject: [Python-Dev] Low hanging fruit In-Reply-To: <200211022240.gA2Me0H07969@pcp02138704pcs.reston01.va.comcast.net> References: <20021102215055.GB27413@epoch.metaslash.com> <200211022240.gA2Me0H07969@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021102231927.GC27413@epoch.metaslash.com> On Sat, Nov 02, 2002 at 05:40:00PM -0500, Guido van Rossum wrote: > > Here's the lowest hanging fruit I could find. Maybe some of these > > can be fixed/closed: > > Are you asking for permission to fix them? I haven't looked at these > specifically, but I can't imagine what would stop you. No, I was hoping to get some more people involved. If no one steps up, I will probably fix some/most that seem harmless. Mostly, it's another set of eyeballs, so silly mistakes aren't made. Also, it's an attempt to help relieve Martin. Martin does a lot of good work, but if there were more people helping, perhaps he could be more productive. Neal From neal@metaslash.com Sat Nov 2 23:51:01 2002 From: neal@metaslash.com (Neal Norwitz) Date: Sat, 02 Nov 2002 18:51:01 -0500 Subject: [Python-Dev] Snake farm Message-ID: <20021102235101.GA28348@epoch.metaslash.com> As some of you may have noticed, I recently checked in a bunch of fixes that deal with portability. I have been granted access to the snake farm, which allows me to test python on different architectures. The snake farm architectures include: HPUX 11 AIX 4.2 & 4.3 Linux 2.4 & 2.2/Alpha Solaris 8 SunOS 4.1.1 I am mostly testing 2.3, but can test 2.2 as well. Right now, there is no easy way to tell if a bug on SF is specific to a particular OS/architecture. There are already Macintosh and Windows categories, perhaps a UNIX category should be added? In the meantime, feel free to assign UNIX platform specific problems to me. I know there are some FreeBSD problems. There is a FreeBSD machine in the SF compile farm (ssh to compile.sf.net). Just in case someone wants to try to fix the FreeBSD bugs. :-) Neal From Jack.Jansen@oratrix.com Sat Nov 2 23:51:39 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Sun, 3 Nov 2002 00:51:39 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: Message-ID: <07A855C2-EEBE-11D6-8C0A-003065517236@oratrix.com> On vrijdag, november 1, 2002, at 09:53 , Martin v. 
Loewis wrote: >> Another strategy would be for developers to take over maintenance of >> certain parts of the code. We should then probably have a list of >> maintainers for the various parts on the patch submission list to >> make the assignment process easier for the submitting parties. > > I don't really think this would work. I quite dislike the idea of > somebody being "responsible" for some area of the code, being able to > overrule his peers merely by his position - except for the BDFL, of > course. I think that part of the problem is that large areas *are* de-facto controlled by one person (or, at best, a few people). If the word "Unicode" shows up I expectantly start looking in your or MAL's direction. Likewise, if "Macintosh" appears everyone starts staring at me. And the areas that don't have a clear champion either get ignored, or passed on to Guido, or picked up by yourself or Michael or one of the very few other people who do general firefighting. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From arigo@tunes.org Sun Nov 3 00:40:34 2002 From: arigo@tunes.org (Armin Rigo) Date: Sat, 2 Nov 2002 16:40:34 -0800 (PST) Subject: [Python-Dev] Becoming a python contributor In-Reply-To: ; from martin@v.loewis.de on Fri, Nov 01, 2002 at 10:09:43PM +0100 References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> Message-ID: <20021103004034.A79F81F4B@bespin.org> Hello Martin, On Fri, Nov 01, 2002 at 10:09:43PM +0100, Martin v. Loewis wrote: > It is my impression that all people who want CVS write access already > have it (with Gustavo perhaps being one of a few exceptions). If I may step in here -- let describe my own position, as I feel it might be shared by a number of bystanders. I have submitted a couple of bugs and patches, and am getting some sense of what is expected. I often run into pending patches and bugs that I'd like to help review, some that I even feel I could accept or reject (according to your guidelines), but I'm not sure I should be trusted CVS access right now. What about adding an SF outcome/resolution status ("reviewed" or "proposedly closed" or even "low-hanging fruit" :-) meaning that the issue has been reviewed and discussed, according to the guidelines, and that the reviewer thinks the item should now be closed (commited or rejected) ? I feel it is a better solution than just assigning the item to an arbitrary core developer. This lets anyone step in as a reviewer, which is a status that should be clearly documented: review other people's work and not your own, of course, and closely follow the guidelines. (SF might get in the way if it disallows third-parties to change an issue's outcome or resolution status; reviewers could instead use an inline keyword or ask the author to change the status.) Armin From neal@metaslash.com Sun Nov 3 00:58:15 2002 From: neal@metaslash.com (Neal Norwitz) Date: Sat, 02 Nov 2002 19:58:15 -0500 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <20021103004034.A79F81F4B@bespin.org> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> Message-ID: <20021103005815.GD28348@epoch.metaslash.com> On Sat, Nov 02, 2002 at 04:40:34PM -0800, Armin Rigo wrote: > > On Fri, Nov 01, 2002 at 10:09:43PM +0100, Martin v. 
Loewis wrote: > > It is my impression that all people who want CVS write access already > > have it (with Gustavo perhaps being one of a few exceptions). > > What about adding an SF outcome/resolution status ("reviewed" or > "proposedly closed" or even "low-hanging fruit" :-) meaning that the > issue has been reviewed and discussed, according to the guidelines, > and that the reviewer thinks the item should now be closed (commited > or rejected) ? I feel it is a better solution than just assigning > the item to an arbitrary core developer. This lets anyone step in > as a reviewer, which is a status that should be clearly documented: > review other people's work and not your own, of course, and closely > follow the guidelines. (SF might get in the way if it disallows > third-parties to change an issue's outcome or resolution status; > reviewers could instead use an inline keyword or ask the author to > change the status.) I hope everyone feels welcome to make comments already. I'm not sure how SF works, but comments by others are helpful. Also, it would be great if people provided patches to existing bugs. Unfortunately, it's pretty obscure if the patch is added to the bug report. However, if a separate patch is added, it is likely to be seen/fixed sooner rather than later. For now, if anyone wants to make a comment on a bug proposing a patch or solution or more info and wants to make sure it's seen, add your comment and assign it to me (nnorwitz). I'd love to see the bugs brought down. Neal From martin@v.loewis.de Sun Nov 3 08:14:36 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 03 Nov 2002 09:14:36 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <07A855C2-EEBE-11D6-8C0A-003065517236@oratrix.com> References: <07A855C2-EEBE-11D6-8C0A-003065517236@oratrix.com> Message-ID: Jack Jansen writes: > I think that part of the problem is that large areas *are* de-facto > controlled by one person (or, at best, a few people). If the word > "Unicode" shows up I expectantly start looking in your or MAL's > direction. Likewise, if "Macintosh" appears everyone starts staring at > me. And there is nothing wrong with that. However, I would not want to formalize this any more: in principle, it should be possible for anybody to who as an OS X box to step in and validate a certain OS X patch. Given the various compile farms, really anybody could help. Of course, the regular contributors may not have the time to obtain the expert knowledge just to validate a single patch. However, I would sure hope that new contributors become experts on areas on which we already have one. Regards, Martin From martin@v.loewis.de Sun Nov 3 08:25:30 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 03 Nov 2002 09:25:30 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <20021103004034.A79F81F4B@bespin.org> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> Message-ID: Armin Rigo writes: > If I may step in here -- let describe my own position, as I feel it > might be shared by a number of bystanders. I have submitted a couple > of bugs and patches, and am getting some sense of what is > expected. I often run into pending patches and bugs that I'd like to > help review, some that I even feel I could accept or reject > (according to your guidelines), but I'm not sure I should be trusted > CVS access right now. I would hope the majority of Python contributors is in your position. 
It's not necessarily a matter of trust, but perhaps also of obligation: Many people contribute to a number of projects, as they use these projects in their (possibly paid) work, or else feel attracted to these projects - yet they would not consider them core contributors. Contributing to other projects in this way myself, I *like* not having to worry about CVS, and committing changes, etc. > What about adding an SF outcome/resolution status ("reviewed" or > "proposedly closed" or even "low-hanging fruit" :-) meaning that the > issue has been reviewed and discussed, according to the guidelines, > and that the reviewer thinks the item should now be closed (commited > or rejected) ? Unfortunately, adding a Status field is not possible on SF. However, if you add a comment in this respect to the bug report, many people will see your comment. If you think someone should really act on a report, don't hesitate to post to python-dev. > I feel it is a better solution than just assigning the item to an > arbitrary core developer. Indeed. Leaving those unassigned would be best. The core developer can then assign the item, just to avoid duplication of efforts. > This lets anyone step in as a reviewer, which is a status that > should be clearly documented: review other people's work and not > your own, of course, and closely follow the guidelines. (SF might > get in the way if it disallows third-parties to change an issue's > outcome or resolution status; reviewers could instead use an inline > keyword or ask the author to change the status.) I usually phrase this as a recommendation: "I recommend to approve this patch", or some such. When rejecting patches, this has the additional benefit of giving the contributor a chance to revise the patch, or point out potential misunderstandings, before it gets closed. Regards, Martin From martin@v.loewis.de Sun Nov 3 08:27:19 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 03 Nov 2002 09:27:19 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: <20021103005815.GD28348@epoch.metaslash.com> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> <20021103005815.GD28348@epoch.metaslash.com> Message-ID: Neal Norwitz writes: > I hope everyone feels welcome to make comments already. I'm not sure > how SF works, but comments by others are helpful. Also, it would be > great if people provided patches to existing bugs. Unfortunately, > it's pretty obscure if the patch is added to the bug report. However, > if a separate patch is added, it is likely to be seen/fixed sooner > rather than later. I would also suggest that people indicate such patches in the title, e.g. "Fix for #foobar". I usually give such titles high priority, because it would allow to close two things simultaneously. 
Regards, Martin From skip@manatee.mojam.com Sun Nov 3 13:00:25 2002 From: skip@manatee.mojam.com (Skip Montanaro) Date: Sun, 3 Nov 2002 07:00:25 -0600 Subject: [Python-Dev] Weekly Python Bug/Patch Summary Message-ID: <200211031300.gA3D0Pv8021064@manatee.mojam.com> Bug/Patch Summary ----------------- 321 open / 2999 total bugs (+1) 99 open / 1752 total patches (+3) New Bugs -------- FAIL: test_crlf_separation (email.test.t (2002-10-28) http://python.org/sf/629756 int("123123123123123123") doesn't work (2002-10-28) http://python.org/sf/629989 bdist_rpm target breaks with rpm 4.1 (2002-10-28) http://python.org/sf/630195 expat causes a core dump (2002-10-29) http://python.org/sf/630494 Dialogs too tight on OSX (2002-10-29) http://python.org/sf/630818 __all__ as determiner of a module's api (2002-10-30) http://python.org/sf/631055 Mac OS 10 configure problem (2002-10-30) http://python.org/sf/631247 https via httplib trips over IIS defect (2002-10-31) http://python.org/sf/631683 Tkinter (?) refct (?) bug (2002-11-01) http://python.org/sf/632323 Typo string instead of sting in LibDoc (2002-11-03) http://python.org/sf/632864 New Patches ----------- Add a sample selection method to random.py (2002-10-27) http://python.org/sf/629637 telnetlib.py: don't block on IAC and enhancement (2002-10-29) http://python.org/sf/630829 Exceptions raised by line trace function (2002-10-30) http://python.org/sf/631276 New pdb command "pp" (2002-10-31) http://python.org/sf/631678 Punycode encoding (2002-11-02) http://python.org/sf/632643 Closed Bugs ----------- HP-UX: Problem building socket (2001-10-29) http://python.org/sf/475951 __del__ docs need update (2001-12-13) http://python.org/sf/492619 buffer object API description truncated (2002-02-17) http://python.org/sf/518775 corefile: python 2.1.3, zope 2.5.0 (2002-04-17) http://python.org/sf/545410 python -v sometimes fails to find init (2002-05-02) http://python.org/sf/551504 import _tkinter python dumps core. (2002-08-21) http://python.org/sf/598160 overflow error in calendar module (2002-10-03) http://python.org/sf/618146 KeyPress bindings don't work (2002-10-23) http://python.org/sf/627798 Closed Patches -------------- names in types module (2002-06-15) http://python.org/sf/569328 HTTP Auth support for xmlrpclib (2002-10-16) http://python.org/sf/624180 From barry@python.org Sun Nov 3 15:52:18 2002 From: barry@python.org (Barry A. Warsaw) Date: Sun, 3 Nov 2002 10:52:18 -0500 Subject: [Python-Dev] Low hanging fruit References: <20021102215055.GB27413@epoch.metaslash.com> <200211022240.gA2Me0H07969@pcp02138704pcs.reston01.va.comcast.net> <20021102231927.GC27413@epoch.metaslash.com> Message-ID: <15813.17970.435737.310176@gargle.gargle.HOWL> >>>>> "NN" == Neal Norwitz writes: NN> No, I was hoping to get some more people involved. If no one NN> steps up, I will probably fix some/most that seem harmless. NN> Mostly, it's another set of eyeballs, so silly mistakes aren't NN> made. I'm aware of the email package bug. I'm just looking for a block of a few hours where I can attack a bunch of issues at the same time. 
-Barry From padraig.brady@corvil.com Mon Nov 4 09:26:28 2002 From: padraig.brady@corvil.com (Padraig Brady) Date: Mon, 04 Nov 2002 09:26:28 +0000 Subject: [Python-Dev] Re: PEP270 (list.uniq()) References: <3DC2B9F0.60306@corvil.com> <20021101123937.A17710@demonseed.net> Message-ID: <3DC63D44.2040707@corvil.com> jason petrone wrote: > On Fri, Nov 01, 2002 at 05:29:20PM +0000, Padraig Brady wrote: > >>So uniq really is a set operation and if Python had >>a set builtin type then I'm not sure a uniq method >>would be required? >>could do: > > Yes, sets are a much better solution to this problem. For that reason, > my PEP wasn't accepted. Well, can it be removed from the list of Open PEPs? > In the meantime, I just use dictionaries with None for values to achieve > the same goal. Or sets.py (in 2.3) thanks, Pádraig. From Jack.Jansen@cwi.nl Mon Nov 4 11:23:49 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Mon, 4 Nov 2002 12:23:49 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: Message-ID: On Sunday, Nov 3, 2002, at 09:14 Europe/Amsterdam, Martin v. Loewis wrote: > Jack Jansen writes: > >> I think that part of the problem is that large areas *are* de-facto >> controlled by one person (or, at best, a few people). If the word >> "Unicode" shows up I expectantly start looking in your or MAL's >> direction. Likewise, if "Macintosh" appears everyone starts staring at >> me. > > And there is nothing wrong with that. However, I would not want to > formalize this any more: in principle, it should be possible for > anybody to who as an OS X box to step in and validate a certain OS X > patch. Given the various compile farms, really anybody could help. What I intended to say was almost the opposite of how it came out (apparently). Because certain areas are deemed the responsibility of certain people the other developers tend to use their SEP field on them. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From martin@v.loewis.de Mon Nov 4 13:29:48 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 04 Nov 2002 14:29:48 +0100 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: References: Message-ID: Jack Jansen writes: > What I intended to say was almost the opposite of how it came out > (apparently). Because certain areas are deemed the responsibility of > certain people the other developers tend to use their SEP field on > them. It's a different aspect of the same problem :-) If you think you have patches or bug reports assigned that you cannot process (either because of lack of knowledge, or lack of time), I think you should unassign them from yourself. This gives people looking at these things, and the submitter, a much better picture of the likelihood of an upcoming solution. Regards, Martin From mwh@python.net Mon Nov 4 13:42:27 2002 From: mwh@python.net (Michael Hudson) Date: 04 Nov 2002 13:42:27 +0000 Subject: [Python-Dev] Becoming a python contributor In-Reply-To: martin@v.loewis.de's message of "03 Nov 2002 09:25:30 +0100" References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> Message-ID: <2mwuntwiws.fsf@starship.python.net> martin@v.loewis.de (Martin v. Loewis) writes: > Unfortunately, adding a Status field is not possible on SF. Which reminds me: how much longer are we going to have to put up with SF for bugs? Is the roundup work getting anywhere? Cheers, M. 
-- > So what does "abc" / "ab" equal? cheese -- Steve Holden defends obscure semantics on comp.lang.python From barry@python.org Mon Nov 4 14:39:24 2002 From: barry@python.org (Barry A. Warsaw) Date: Mon, 4 Nov 2002 09:39:24 -0500 Subject: [Python-Dev] David Goodger joins PEP editor Message-ID: <15814.34460.485132.634853@gargle.gargle.HOWL> Just a quick note to let folks know that David Goodger has joined the PEP editors. He'll be doing most of the editor's work, but I'll still provide backup help as necessary. This should greatly improve the responsiveness of the PEP editors . Please continue to use peps@python.org for all correspondence about the PEPs, as this will reach both David and myself. Thanks David! -Barry From Sve@softserv.attunity.co.il Mon Nov 4 15:20:45 2002 From: Sve@softserv.attunity.co.il (Sve) Date: Mon, 4 Nov 2002 17:20:45 +0200 Subject: [Python-Dev] extended python Message-ID: This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. ------_=_NextPart_001_01C28415.BF034F40 Content-Type: text/plain; charset="windows-1255" Hello! Can You pls help us with extended python on Open VMS platform. We already extended python on NT (with DLL). How we have to create same C code on Open VMS. Best Regards Svetlana Stolyarov Project Manager Attunity Software ServicesLtd. t +972-4-990-9966 ext.114 m +972-50-972-509 www.attunity.com ------_=_NextPart_001_01C28415.BF034F40 Content-Type: text/html; charset="windows-1255" extended python
From Greg.Brondo@allegiancetelecom.com Mon Nov 4 15:31:31 2002 From: Greg.Brondo@allegiancetelecom.com (Brondo, Greg) Date: Mon, 4 Nov 2002 09:31:31 -0600 Subject: [Python-Dev] regex parsing error Message-ID: <25B0AB27D7AED5119D7F0002A58779E504BC8ADF@dfwex12.algx.com> AssertionError: sorry, but this version only supports 100 named groups. I'm getting this error in the latest Python 2.2. Is there any fix for this yet? Thanks! Greg B. From martin@v.loewis.de Mon Nov 4 17:00:09 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 04 Nov 2002 18:00:09 +0100 Subject: [Python-Dev] extended python In-Reply-To: References: Message-ID: Sve writes: > Can You pls help us with extended python on Open VMS platform. No. python-help is for the discussion of the future development *of* Python, not for the development *with* python. Please use python-list@python.org instead. Regards, Martin From martin@v.loewis.de Mon Nov 4 17:00:50 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 04 Nov 2002 18:00:50 +0100 Subject: [Python-Dev] regex parsing error In-Reply-To: <25B0AB27D7AED5119D7F0002A58779E504BC8ADF@dfwex12.algx.com> References: <25B0AB27D7AED5119D7F0002A58779E504BC8ADF@dfwex12.algx.com> Message-ID: "Brondo, Greg" writes: > I'm getting this error in the latest Python 2.2. Is there any fix for this > yet? Not yet, no. Contributions are welcome. Regards, Martin From mwh@python.net Mon Nov 4 17:05:28 2002 From: mwh@python.net (Michael Hudson) Date: 04 Nov 2002 17:05:28 +0000 Subject: [Python-Dev] extended python In-Reply-To: martin@v.loewis.de's message of "04 Nov 2002 18:00:09 +0100" References: Message-ID: <2mela1b6zr.fsf@starship.python.net> martin@v.loewis.de (Martin v. Loewis) writes: > Sve writes: > > > Can You pls help us with extended python on Open VMS platform. > > No. python-help is for the discussion of the future development *of* ^^^^^^^^^^^ Martin meant python-dev here... > Python, not for the development *with* python. Please use > python-list@python.org instead. This bit is still right. Cheers, M. -- FORD: Just put the fish in your ear, come on, it's only a little one. ARTHUR: Uuuuuuuuggh! -- The Hitch-Hikers Guide to the Galaxy, Episode 1 From mwh@python.net Mon Nov 4 17:10:46 2002 From: mwh@python.net (Michael Hudson) Date: 04 Nov 2002 17:10:46 +0000 Subject: [Python-Dev] metaclass insanity In-Reply-To: Guido van Rossum's message of "Thu, 31 Oct 2002 13:58:00 -0500" References: <2md6psyowr.fsf@starship.python.net> <200210301644.g9UGiaZ18410@odiug.zope.com> <2my98faj0p.fsf@starship.python.net> <200210302044.g9UKicx22801@odiug.zope.com> <2mznsu3nhq.fsf@starship.python.net> <200210311858.g9VIw0509968@odiug.zope.com> Message-ID: <2mbs55b6qx.fsf@starship.python.net> Guido van Rossum writes: > > > > Should assigning to __bases__ automatically tweak __mro__ and > > > > __base__? Guess so. > > > > > > Yes. Note that changing __base__ should not be done lightly -- > > > basically, the old and new base must be layout compatible, exactly > > > like for assignment to __class__. > > > > OK. I can crib code from type_set_class, I guess. Or one could just > > allow assignment to __bases__ when __base__ doesn't change? __base__ > > is object for the majority of new-style classes, isn't it? > > But if you derive from a builtin type (e.g. list or dict), __base__ > will be that. True. But I was going for low hanging fruit. > > Brr. There's a lot I don't know about post 2.2 typeobject.c. > > Me too. :-) I'm not sure that remark warrants a smiley...
> > > > What would assigning to __base__ do in isolation? > > > > Perhaps that shouldn't be writeable. > > > > > > Perhaps it could be writable when __bases__ is a 1-tuple. > > > > Don't see the point of that. > > > > > But it's fine if it's not writable. > > > > Easier :) > > Agreed. Good. > > > > > I'd also take a patch for assignable __name__. > > > > > > > > This is practically a one-liner, isn't it? Not hard, anyway. > > > > > > Probably. Can't remember why I didn't do it earlier. > > > > It's a bit more complicated than that. > > > > What's the deal wrt. dots in tp_name? Is there any way for a user > > defined class to end up called "something.something_else"? > > I hope not. The dots are for extensions living inside a package; > everything before the last dot ends up as __module__. Fine. What happens for nested classes? In class X: class Y: pass are X.Y instances picklable without extra fiddling? > > Oh, and while we're at it, here's a bogosity: > > > > >>> class C(object): > > ... pass > > ... > > >>> C.__module__ > > '__main__' > > >>> C.__module__ = 1 > > >>> C.__module__ > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > AttributeError: __module__ > > > > caused by lax testing in type_set_module. > > Oops. Can you fix it? Or are there complications? No, should be easy. if (!PyString_Check(arg)) { PyErr_SetString(PyExc_TypeError, "don't do that"); return -1; } or something like that. > Seems to be broken in 2.2 too. Yes. It's a change in behaviour, but I *really* can't see anyone relying on that one. > > > > And there was me wondering what I was going to do this evening. > > > > > > I don't have that problem -- a Zope customer problem was waiting for > > > me today. :-( > > > > Well, I didn't get it finished either. Fiddly, this stuff. Maybe by > > tomorrow. > > Great! > > I'll be offline Friday through Monday -- going to a weekend conference. Not that it mattered -- I still haven't finished. Damn EV Nova and its plugins... Cheers, M. -- Indeed, when I design my killer language, the identifiers "foo" and "bar" will be reserved words, never used, and not even mentioned in the reference manual. Any program using one will simply dump core without comment. Multitudes will rejoice. -- Tim Peters, 29 Apr 1998 From andymac@bullseye.apana.org.au Mon Nov 4 11:51:07 2002 From: andymac@bullseye.apana.org.au (Andrew MacIntyre) Date: Mon, 4 Nov 2002 21:51:07 +1000 (est) Subject: [Python-Dev] Snake farm In-Reply-To: <20021102235101.GA28348@epoch.metaslash.com> Message-ID: On Sat, 2 Nov 2002, Neal Norwitz wrote: > I know there are some FreeBSD problems. There is a FreeBSD machine in > the SF compile farm (ssh to compile.sf.net). Just in case someone wants to > try to fix the FreeBSD bugs. :-) My FreeBSD Python-CVS autobuild setup has been down for a few weeks. I have just run a build/test on CVS of about 1800 UTC Nov 2, however, and I find the following oddities: - test_locale skipped (unsupported locale: "en_US" not supported, but "en_US.ISO_8859-1" is supported - could be a system configuration issue, because adding an en_US symlink to en_US.ISO_8859-1 in /usr/share/locale fixes this) - test_strptime failed (error in test_returning_RE, due to lang being None as a side effect of defaulting to the C locale; failure in test_timezone due to my system having identical std & dst designators ['EST', 'EST'] even though DST is active) This is on FreeBSD 4.4 - I need to upgrade to 4.7 to see whether the locale support is more complete. Anything else I should look at when I can spare the cycles? -- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac@bullseye.apana.org.au | Snail: PO Box 370 andymac@pcug.org.au | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia
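One way a test like test_locale could cope with the naming problem reported above is to probe several spellings instead of hard-coding "en_US". A minimal sketch; the candidate list is illustrative only, not exhaustive:

import locale

def find_english_locale():
    # Try a few common spellings; platforms disagree about locale names.
    candidates = ("en_US", "en_US.ISO8859-1", "en_US.ISO_8859-1")
    for name in candidates:
        try:
            locale.setlocale(locale.LC_NUMERIC, name)
            return name
        except locale.Error:
            continue
    return None     # no usable locale: the caller should skip the test

print find_english_locale()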
From pobrien@orbtech.com Mon Nov 4 18:36:59 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Mon, 4 Nov 2002 12:36:59 -0600 Subject: [Python-Dev] metaclass insanity In-Reply-To: <2mbs55b6qx.fsf@starship.python.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> Message-ID: <200211041236.59893.pobrien@orbtech.com> On Monday 04 November 2002 11:10 am, Michael Hudson wrote: > > What happens for nested classes? > > In > > class X: > class Y: > pass > > are X.Y instances picklable without extra fiddling? Picklable functions and classes must be defined in the top level of a module. Nested classes, and instances thereof, cannot be pickled at all. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From niemeyer@conectiva.com Mon Nov 4 19:15:37 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Mon, 4 Nov 2002 17:15:37 -0200 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <20021031204732.B32673@ibook.distro.conectiva> References: <20021031204732.B32673@ibook.distro.conectiva> Message-ID: <20021104171536.A30400@ibook.distro.conectiva> Submitted: http://python.org/sf/633425 It'd be nice if someone could review the documentation and check for grammatical errors, as I'm not a native speaker. Thank you! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From perky@fallin.lv Mon Nov 4 19:29:37 2002 From: perky@fallin.lv (Hye-Shik Chang) Date: Tue, 5 Nov 2002 04:29:37 +0900 Subject: [Python-Dev] Snake farm In-Reply-To: <20021102235101.GA28348@epoch.metaslash.com> References: <20021102235101.GA28348@epoch.metaslash.com> Message-ID: <20021104192937.GA78845@fallin.lv> On Sat, Nov 02, 2002 at 06:51:01PM -0500, Neal Norwitz wrote: > As some of you may have noticed, I recently checked in a bunch of > fixes that deal with portability. I have been granted access to the > snake farm, which allows me to test python on different architectures. > > The snake farm architectures include: > > HPUX 11 > AIX 4.2 & 4.3 > Linux 2.4 & 2.2/Alpha > Solaris 8 > SunOS 4.1.1 > > I am mostly testing 2.3, but can test 2.2 as well. > > Right now, there is no easy way to tell if a bug on SF is specific to > a particular OS/architecture. There are already Macintosh and Windows > categories, perhaps a UNIX category should be added? In the meantime, > feel free to assign UNIX platform specific problems to me. I know > there are some FreeBSD problems. There is a FreeBSD machine in the SF > compile farm (ssh to compile.sf.net). Just in case someone wants to > try to fix the FreeBSD bugs. :-) > BTW, Python-CVS can't be compiled on FreeBSD-CURRENT nowadays. Because FreeBSD-CURRENT hides non-POSIX stuff in _POSIX_C_SOURCE mode, Modules/posixmodule.c fails to find chroot, minor, major, makedev, etc. This patch enables python builds on FreeBSD-CURRENT but I can't find a better workaround.
Index: configure.in =================================================================== RCS file: /cvsroot/python/python/dist/src/configure.in,v retrieving revision 1.362 diff -c -r1.362 configure.in *** configure.in 2 Nov 2002 16:58:05 -0000 1.362 --- configure.in 4 Nov 2002 19:12:11 -0000 *************** *** 24,49 **** AC_SUBST(SOVERSION) SOVERSION=1.0 - # The later defininition of _XOPEN_SOURCE disables certain features - # on Linux, so we need _GNU_SOURCE to re-enable them (makedev, tm_zone). - AC_DEFINE(_GNU_SOURCE, 1, [Define on Linux to activate all library features]) - - # The definition of _GNU_SOURCE potentially causes a change of the value - # of _XOPEN_SOURCE. So define it only conditionally. - AH_VERBATIM([_XOPEN_SOURCE], - [/* Define on UNIX to activate XPG/5 features. */ - #ifndef _XOPEN_SOURCE - # define _XOPEN_SOURCE 500 - #endif]) - AC_DEFINE(_XOPEN_SOURCE, 500) - - # On Tru64 Unix 4.0F, defining _XOPEN_SOURCE also requires definition - # of _XOPEN_SOURCE_EXTENDED and _POSIX_C_SOURCE, or else several APIs - # are not declared. Since this is also needed in some cases for HP-UX, - # we define it globally. - AC_DEFINE(_XOPEN_SOURCE_EXTENDED, 1, Define to activate Unix95-and-earlier features) - AC_DEFINE(_POSIX_C_SOURCE, 199506L, Define to activate features from IEEE Stds 1003.{123}-1995) - # Arguments passed to configure. AC_SUBST(CONFIG_ARGS) CONFIG_ARGS="$ac_configure_args" --- 24,29 ---- *************** *** 131,136 **** --- 111,144 ---- fi AC_MSG_RESULT($MACHDEP) + # The later defininition of _XOPEN_SOURCE disables certain features + # on Linux, so we need _GNU_SOURCE to re-enable them (makedev, tm_zone). + AC_DEFINE(_GNU_SOURCE, 1, [Define on Linux to activate all library features]) + + # The definition of _GNU_SOURCE potentially causes a change of the value + # of _XOPEN_SOURCE. So define it only conditionally. + AH_VERBATIM([_XOPEN_SOURCE], + [/* Define on UNIX to activate XPG/5 features. */ + #if !defined(_XOPEN_SOURCE) && !defined(__FreeBSD__) + # define _XOPEN_SOURCE 500 + #endif]) + + case $MACHDEP in + # FreeBSD hides non-POSIX functions on POSIX-compatible mode. + freebsd*) + ;; + *) + AC_DEFINE(_XOPEN_SOURCE, 500) + + # On Tru64 Unix 4.0F, defining _XOPEN_SOURCE also requires definition + # of _XOPEN_SOURCE_EXTENDED and _POSIX_C_SOURCE, or else several APIs + # are not declared. Since this is also needed in some cases for HP-UX, + # we define it globally. + AC_DEFINE(_XOPEN_SOURCE_EXTENDED, 1, Define to activate Unix95-and-earlier features) + AC_DEFINE(_POSIX_C_SOURCE, 199506L, Define to activate features from IEEE Stds 1003.{123}-1995) + ;; + esac + # checks for alternative programs AC_MSG_CHECKING(for --without-gcc) AC_ARG_WITH(gcc, But, this build gets segfault on installing. ... make install ... 
./python -E ./setup.py install --prefix=/home/perky/test --install-scripts=/home/perky/test/bin --install-platlib=/home/perky/test/lib/python2.3/lib-dynload running install running build running build_ext Segmentation fault (core dumped) *** Error code 139 #0 PyObject_Free (p=0x800) at Objects/obmalloc.c:713 713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) { (gdb) bt #0 PyObject_Free (p=0x800) at Objects/obmalloc.c:713 #1 0x080e0533 in function_call (func=0x8210b1c, arg=0x82129ac, kw=0x821ebdc) at Objects/funcobject.c:481 #2 0x08059564 in PyObject_Call (func=0x0, arg=0x82129ac, kw=0x821ebdc) at Objects/abstract.c:1688 #3 0x0809ea95 in ext_do_call (func=0x8210b1c, pp_stack=0xbfbfddbc, flags=0, na=1, nk=0) at Python/ceval.c:3438 #4 0x0809d14f in eval_frame (f=0x81f500c) at Python/ceval.c:2031 ..... > Neal > > _______________________________________________ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev > Regards, -- Hye-Shik Chang Yonsei University, Seoul ^D From tim.one@comcast.net Mon Nov 4 20:31:29 2002 From: tim.one@comcast.net (Tim Peters) Date: Mon, 04 Nov 2002 15:31:29 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <20021104171536.A30400@ibook.distro.conectiva> Message-ID: [Gustavo Niemeyer] > Submitted: > > http://python.org/sf/633425 > > It'd be nice if someone could review the documentation and > check for grammatical errors, as I'm not a native speaker. Understood. > Thank you! Thank you! If you haven't noticed yet, you also have developer privs on the Python project now; it should make your life harder, but the good news is that it should make ours easier . From guido@python.org Mon Nov 4 20:54:48 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 15:54:48 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Mon, 04 Nov 2002 12:36:59 CST." <200211041236.59893.pobrien@orbtech.com> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> Message-ID: <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> > > What happens for nested classes? > > > > In > > > > class X: > > class Y: > > pass > > > > are X.Y instances picklable without extra fiddling? > > Picklable functions and classes must be defined in the top level of a > module. Nested classes, and instances thereof, cannot be pickled at all. BTW, I consider this a flaw. In the case of nested classes, a possible solution might be for X.Y.__name__ to be "X.Y" rather than plain "Y". Then a simple change to pickle (or to getattr :-) could allow the correct unpickling. This won't work for classes defined inside functions though -- those are never picklable. But making them module-global would be a simple enough fix (also more efficient, since the class definition code is executed on every function call). Can someone provide a reason why you'd want to use nested classes? I've never felt this need myself. What are the motivations? --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Mon Nov 4 20:00:51 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 15:00:51 -0500 Subject: [Python-Dev] RoundUp status In-Reply-To: Your message of "04 Nov 2002 13:42:27 GMT." 
<2mwuntwiws.fsf@starship.python.net> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> <2mwuntwiws.fsf@starship.python.net> Message-ID: <200211042000.gA4K0px21508@pcp02138704pcs.reston01.va.comcast.net> > Which reminds me: how much longer are we going to have to put up with > SF for bugs? Is the roundup work getting anywhere? Several things got in the way, unfortunately. Gordon put up a working demo at http://www.python.org:8080, but it died when the box was restarted. (Gordon, can you send a brief note on how to restart it?) I liked it a lot, but didn't have time to finish my review of the setup, and Gordon didn't have more time to put in it, so it languished. Since then, RoundUp has changed some of its internal architecture (as far as I can remember) and Gordon's work needs some reworking to catch up. We need a new volunteer! --Guido van Rossum (home page: http://www.python.org/~guido/) From thomas.heller@ion-tof.com Mon Nov 4 21:10:42 2002 From: thomas.heller@ion-tof.com (Thomas Heller) Date: 04 Nov 2002 22:10:42 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > Can someone provide a reason why you'd want to use nested classes? > I've never felt this need myself. What are the motivations? > Nested classes provide a nice way (IMO) to write metaclasses and keep them near the classes using them: class X(object): class __metaclass__(type): .... Thomas From bfordham@socialistsushi.com Mon Nov 4 21:46:59 2002 From: bfordham@socialistsushi.com (Bryan L. Fordham) Date: Mon, 4 Nov 2002 16:46:59 -0500 (EST) Subject: [Python-Dev] RoundUp status In-Reply-To: <200211042000.gA4K0px21508@pcp02138704pcs.reston01.va.comcast.net> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> <2mwuntwiws.fsf@starship.python.net> <200211042000.gA4K0px21508@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <21540.12.13.155.253.1036446419.squirrel@socialistsushi.com> > I liked it a lot, but didn't have time to finish my review of the > setup, and Gordon didn't have more time to put in it, so it > languished. Since then, RoundUp has changed some of its internal > architecture (as far as I can remember) and Gordon's work needs some > reworking to catch up. We need a new volunteer! What exactly is needed? thanks --B socialistsushi.com sushi for everyone From guido@python.org Mon Nov 4 21:54:16 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 16:54:16 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "04 Nov 2002 22:10:42 +0100." References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211042154.gA4LsGs22441@pcp02138704pcs.reston01.va.comcast.net> > > Can someone provide a reason why you'd want to use nested classes? > > I've never felt this need myself. What are the motivations? > > Nested classes provide a nice way (IMO) to write metaclasses and keep > them near the classes using them: > > class X(object): > class __metaclass__(type): > .... 
Sorry, that looks really ugly to me, and makes it impossible to share a metaclass. It sounds like you're trying to emulate Smalltalk's idea of metaclasses. Don't do this; Python's concept of metaclasses is different, and trying to do it the Smalltalk way is counterproductive. (For example, multiple inheritance from two classes with different metaclasses doesn't work unless one metaclass inherits from the other.) --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Mon Nov 4 22:00:48 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 04 Nov 2002 23:00:48 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > In the case of nested classes, a possible solution might be for > X.Y.__name__ to be "X.Y" rather than plain "Y". Then a simple change > to pickle (or to getattr :-) could allow the correct unpickling. I'd rather expect that X.Y.__module__ is "Foo.X". Regards, Martin From martin@v.loewis.de Mon Nov 4 22:02:54 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 04 Nov 2002 23:02:54 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: Message-ID: Andrew MacIntyre writes: > - test_locale skipped (unsupported locale: "en_US" not supported, but > "en_US.ISO_8859-1" is supported - could be a system configuration > issue, because adding an en_US symlink to en_US.ISO_8859-1 in > /usr/share/locale fixes this) It's debatable whether en_US should exist or not. All we know is that there can't be an automatic test if you can't rely on existance of a certain locale. The test could try several alternatives, though. Regards, Martin From dave@boost-consulting.com Mon Nov 4 21:51:07 2002 From: dave@boost-consulting.com (David Abrahams) Date: 04 Nov 2002 16:51:07 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > Can someone provide a reason why you'd want to use nested classes? "Namespaces are one honking great idea -- let's do more of those!" > I've never felt this need myself. What are the motivations? My people want it so they can mirror the structure of their C++ code with their Python wrappers, among other things. -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com From martin@v.loewis.de Mon Nov 4 22:06:31 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 04 Nov 2002 23:06:31 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <20021104192937.GA78845@fallin.lv> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> Message-ID: Hye-Shik Chang writes: > BTW, Python-CVS can't be compiled FreeBSD-CURRENT, nowadays. > Because FreeBSD-CURRENT hides non-POSIX stuffs on _POSIX_C_SOURCE > mode, Modules/posixmodule.c fails to find chroot, minor, major, > makedev and etc. This patch enables python builds on > FreeBSD-CURRENT but I can't find another generous workaround. I don't like this approach. 
For -CURRENT, I would outright reject any such patch; there should be a way to enable extensions even if _POSIX_C_SOURCE is defined. Perhaps they reconsider until they release the system. OTOH, why absence of chroot a problem? Should not HAVE_CHROOT be undefined if chroot is hidden? Regards, Martin From guido@python.org Mon Nov 4 22:34:57 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 17:34:57 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "04 Nov 2002 23:00:48 +0100." References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211042234.gA4MYvJ22842@pcp02138704pcs.reston01.va.comcast.net> > > In the case of nested classes, a possible solution might be for > > X.Y.__name__ to be "X.Y" rather than plain "Y". Then a simple change > > to pickle (or to getattr :-) could allow the correct unpickling. > > I'd rather expect that X.Y.__module__ is "Foo.X". Also possible. There are problems with each though; if X is a class in module Foo, __import__("Foo.X") doesn't work. Example: >>> class C: class N: pass >>> C.N.__module__ '__main__' >>> C.N.__module__ = '__main__.C' >>> import pickle >>> s = pickle.dumps(C.N()) >>> pickle.loads(s) Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/pickle.py", line 1071, in loads return Unpickler(file).load() File "/usr/local/lib/python2.3/pickle.py", line 675, in load dispatch[key](self) File "/usr/local/lib/python2.3/pickle.py", line 852, in load_inst klass = self.find_class(module, name) File "/usr/local/lib/python2.3/pickle.py", line 907, in find_class __import__(module) ImportError: No module named C >>> Given that we have to fix pickle.py and cPickle.c in either case, I think I'd prefer setting __name__ to a dotted name reflecting the full name of the class inside its module over setting __module__ to something that's not a module. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Mon Nov 4 22:38:01 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 17:38:01 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "04 Nov 2002 16:51:07 EST." References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211042238.gA4Mc1622868@pcp02138704pcs.reston01.va.comcast.net> > > Can someone provide a reason why you'd want to use nested classes? > > "Namespaces are one honking great idea -- let's do more of those!" Yeah, but I simply don't think that a Python class is a good mechanism for grouping arbitrary names together. We've got packages and modules for that. > > I've never felt this need myself. What are the motivations? > > My people want it so they can mirror the structure of their C++ code > with their Python wrappers, among other things. This sounds unpythonic, and confirms my expectation that this is an example of trying to write C++ in any language. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Mon Nov 4 23:04:16 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 18:04:16 -0500 Subject: [Python-Dev] RoundUp status In-Reply-To: Your message of "Mon, 04 Nov 2002 16:46:59 EST." 
<21540.12.13.155.253.1036446419.squirrel@socialistsushi.com> References: <20021031204046.A32673@ibook.distro.conectiva> <3DC2A7F9.6040902@pobox.com> <20021103004034.A79F81F4B@bespin.org> <2mwuntwiws.fsf@starship.python.net> <200211042000.gA4K0px21508@pcp02138704pcs.reston01.va.comcast.net> <21540.12.13.155.253.1036446419.squirrel@socialistsushi.com> Message-ID: <200211042304.gA4N4Gs23061@pcp02138704pcs.reston01.va.comcast.net> > > I liked it a lot, but didn't have time to finish my review of the > > setup, and Gordon didn't have more time to put in it, so it > > languished. Since then, RoundUp has changed some of its internal > > architecture (as far as I can remember) and Gordon's work needs some > > reworking to catch up. We need a new volunteer! > > What exactly is needed? Since Gordon did the work, RoundUp was changed to use Zope style templating. So Gordon's templates need to changed too. Possibly bugs or missing features in the new templating need to be fixed in RoundUp before this is possible. Apart from that, Gordon didn't finish everything -- I'd point you to the meta-RoundUp instance that we used to keep track of the remaining problems, but it's down. There may still be problems with sending email from the RoundUp instance to users not at python.org. Finally, there's the small task of actual deployment. There needs to be a flag day when the SF trackers are converted to RoundUp once more (Gordon wrote all the screen-scraping software to do this), the RoundUp tracker is opened up for the public, and the SF trackers are disabled. Befor this, the RoundUp tracker needs to be operated in "beta" mode for a while. After this, it probably needs babysitting as heavy use reveals more bugs. --Guido van Rossum (home page: http://www.python.org/~guido/) From rjones@ekit-inc.com Mon Nov 4 23:18:22 2002 From: rjones@ekit-inc.com (Richard Jones) Date: Tue, 5 Nov 2002 10:18:22 +1100 Subject: [Python-Dev] RoundUp status In-Reply-To: <200211042304.gA4N4Gs23061@pcp02138704pcs.reston01.va.comcast.net> References: <20021031204046.A32673@ibook.distro.conectiva> <21540.12.13.155.253.1036446419.squirrel@socialistsushi.com> <200211042304.gA4N4Gs23061@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211051018.22845.rjones@ekit-inc.com> On Tue, 5 Nov 2002 10:04 am, Guido van Rossum wrote: > > > I liked it a lot, but didn't have time to finish my review of the > > > setup, and Gordon didn't have more time to put in it, so it > > > languished. Since then, RoundUp has changed some of its internal > > > architecture (as far as I can remember) and Gordon's work needs some > > > reworking to catch up. We need a new volunteer! > > > > What exactly is needed? > > Since Gordon did the work, RoundUp was changed to use Zope style > templating. So Gordon's templates need to changed too. Possibly bugs > or missing features in the new templating need to be fixed in RoundUp > before this is possible. I'll put my hand up for this. Richard From aleaxit@yahoo.com Mon Nov 4 23:58:24 2002 From: aleaxit@yahoo.com (Alex Martelli) Date: Tue, 5 Nov 2002 00:58:24 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> References: <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <02110500582403.27027@arthur> On Monday 04 November 2002 21:54, Guido van Rossum wrote: ... > Can someone provide a reason why you'd want to use nested classes? > I've never felt this need myself. 
What are the motivations? For example, I find it natural to use a nested class to provide an iterator object for a class that defines __iter__, in many cases. E.g. something like:

class Outer:
    [snip snip]
    def __iter__(self):
        class Inner:
            def __init__(self, outer):
                self.outer = outer
            def __iter__(self):
                return self
            def next(self):
                if self.outer.isAtEnd():
                    raise StopIteration
                else:
                    result = self.outer.currentState()
                    self.outer.advanceState()
                    return result
        return Inner(self)

this would be a typical idiom to adapt a class Outer that provides methods isAtEnd, currentState and advanceState, with hopefully obvious semantics, to provide an iterator object respecting Python's iteration protocol. Of course, I could define that "class Inner" in any place at all, but since it's only meant to be used in this one spot, why not define it right here? I think it enhances legibility -- if I put it elsewhere, the reader of the code seeing just the return statement in the def __iter__ must go look elsewhere to see what I'm doing, and/or if the reader sees class Inner on its own it may not be equally obvious what it's meant to be used for, while with this placement it IS abundantly obvious. There are other wrapping/adaptation examples that work similarly, where I need a class just inside one particular method or function because the only reason for that class's existence is to be suitably instantiated to wrap another object and adapt it to some externally imposed protocol. I like being able to nest such "local use only" wrapper classes in the one and only place where they're needed: by being right there they enhance readability as outlined in the previous paragraph, in my opinion. Alex From dave@boost-consulting.com Tue Nov 5 00:14:16 2002 From: dave@boost-consulting.com (David Abrahams) Date: 04 Nov 2002 19:14:16 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211042238.gA4Mc1622868@pcp02138704pcs.reston01.va.comcast.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <200211042238.gA4Mc1622868@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > > > Can someone provide a reason why you'd want to use nested classes? > > > > "Namespaces are one honking great idea -- let's do more of those!" > > Yeah, but I simply don't think that a Python class is a good mechanism > for grouping arbitrary names together. We've got packages and modules > for that. If I write a container it seems reasonable to nest its iterator class. > > > I've never felt this need myself. What are the motivations? > > > > My people want it so they can mirror the structure of their C++ code > > with their Python wrappers, among other things. > > This sounds unpythonic, and confirms my expectation that this is an > example of trying to write C++ in any language. :-) A latent hatred of C++ sure do come in handy if'n there's an idea we don't like floatin' around, don't it? If we define "Pythonic" narrowly enough, we can make sure it never grows up to be another one of those `multiparadigm' programming languages.
kill-all-the-other-nasty-languages-(or-maybe-this-one)-ly y'rs, dave -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com From greg@cosc.canterbury.ac.nz Tue Nov 5 01:04:43 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Tue, 05 Nov 2002 14:04:43 +1300 (NZDT) Subject: [Python-Dev] metaclass insanity In-Reply-To: <02110500582403.27027@arthur> Message-ID: <200211050104.gA514hS18074@kuku.cosc.canterbury.ac.nz> Alex Martelli : > I could define that "class Inner" in any place at all, but since it's > only meant to be used in this one spot, why not define it right here? Putting it inside the function means the class definition is executed every time the function is called, which is rather needlessly inefficient. You could get most of the namespace benefit by putting it one level further out, i.e. class Foo: class Iterator: ... def __iter__(self): return self.Iterator(self) The iterator class would then be known from outside as Foo.Iterator, which is nicely descriptive and makes it fairly obvious what it's for. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+ From guido@python.org Tue Nov 5 01:51:27 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 20:51:27 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 00:58:24 +0100." <02110500582403.27027@arthur> References: <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <02110500582403.27027@arthur> Message-ID: <200211050151.gA51pRg23551@pcp02138704pcs.reston01.va.comcast.net> Summary: please, let's not encourage this use of nested classes. > > Can someone provide a reason why you'd want to use nested classes? > > I've never felt this need myself. What are the motivations? > > For example, I find it natural to use a nested class to provide an > iterator object for a class that defines __iter__, in many cases. [Example deleted] Why do you find this natural? Perhaps because you've written a lot of Java? I find it "not natural" -- I have used this pattern (a helper class) many times but have never felt the urge to nest the helper inside the outer class (let alone inside a method of the outer class). > Of course, I could define that "class Inner" in any place at all, > but since it's only meant to be used in this one spot, why not > define it right here? I think it enhances legibility -- if I put it > elsewhere, the reader of the code seeing just the return statement > in the def __iter__ must go look elsewhere to see what I'm doing, > and/or if the reader sees class Inner on its own it may not be > equally obvious what it's meant to be used for, while with this > placement it IS abundantly obvious. It's already been pointed out that placing it inside the __iter__ method is a bad idea because of performance. I also think that it's better that the iterator class *is* accessible to the user -- that way you can do an isinstance() check for it, for example. The legibility argument is dubious: in a realistic example, the iterator class may easily be fairly big, and that makes it a disruptive detail for the reader of the Outer class. 
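Guido's performance and isinstance() points can be seen directly in a toy version of the pattern being discussed; the container below is invented, and only the placement of the class statement matters:

class Numbers:
    def __init__(self, data):
        self.data = data
    def __iter__(self):
        class Inner:                    # this class statement runs again on every call
            def __init__(self, seq):
                self.seq = seq
                self.pos = 0
            def __iter__(self):
                return self
            def next(self):             # Python 2.x iterator protocol
                if self.pos >= len(self.seq):
                    raise StopIteration
                value = self.seq[self.pos]
                self.pos += 1
                return value
        return Inner(self.data)

nums = Numbers([1, 2, 3])
print list(nums)                                    # [1, 2, 3]
print iter(nums).__class__ is iter(nums).__class__  # False: each call makes a new class

Because each __iter__() call builds a brand-new Inner class, the class body is re-executed per call and there is no single class object against which callers could do an isinstance() check.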
> There are other wrapping/adaptation examples that work similarly, > where I need a class just inside one particular method or function > because the only reason for that class's existence is to be suitably > instantiated to wrap another object and adapt it to some externally > imposed protocol. I like being able to nest such "local use only" > wrapper classes in the one and only place where they're needed: > by being right there they enhance readability as outlined in the > previous paragraph, in my opinion. It's still a bad idea. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 5 01:56:33 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 04 Nov 2002 20:56:33 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "04 Nov 2002 19:14:16 EST." References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <200211042238.gA4Mc1622868@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211050156.gA51uXB23567@pcp02138704pcs.reston01.va.comcast.net> > If I write a container it seems reasonable to nest its iterator class. There are lots of situations where you need helper classes. It's been an established style among most Python developers for many years to place these in the same module as the "main" class. Nesting them inside other classes has few advantages -- it's disruptive for the reader of the main class (often the helpers are "details" whose understanding can be easily put off) and it doesn't work very well (e.g. can't be pickled). > > This sounds unpythonic, and confirms my expectation that this is an > > example of trying to write C++ in any language. :-) > > A latent hatred of C++ sure do come in handy if'n there's an idea we > don't like floatin' around, don't it? Well, you acted as the straight man by bringing up C++. :-) I was expecting examples from the Java world, where there's no distinction between classes and modules, and hence classes become the unit of packaging. > If we define "Pythonic" narrowly enough, we can make sure it never > grows up to be another one of those `multiparadigm' programming > languages. Class nesting just hasn't been observed much among Python programmers, and I don't think it's the best solution for the problem that is brought up as a reason to do it. --Guido van Rossum (home page: http://www.python.org/~guido/) From niemeyer@conectiva.com Tue Nov 5 02:29:45 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 5 Nov 2002 00:29:45 -0200 Subject: [Python-Dev] [#454030] distutils cannot link C++ code with GCC Message-ID: <20021105002945.A4448@ibook.distro.conectiva> Can someone please review the proposed solution attached on bug #454030, fixing the following bugs: [#413582] g++ must be called for c++ extensions [#454030] distutils cannot link C++ code with GCC Thank you! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From bfordham@socialistsushi.com Tue Nov 5 03:09:19 2002 From: bfordham@socialistsushi.com (Bryan L. 
Fordham) Date: Mon, 04 Nov 2002 22:09:19 -0500 Subject: [Python-Dev] RoundUp status References: <20021031204046.A32673@ibook.distro.conectiva> <21540.12.13.155.253.1036446419.squirrel@socialistsushi.com> <200211042304.gA4N4Gs23061@pcp02138704pcs.reston01.va.comcast.net> <200211051018.22845.rjones@ekit-inc.com> Message-ID: <3DC7365F.60800@socialistsushi.com> Richard Jones wrote: >On Tue, 5 Nov 2002 10:04 am, Guido van Rossum wrote: > > >>Since Gordon did the work, RoundUp was changed to use Zope style >>templating. So Gordon's templates need to changed too. Possibly bugs >>or missing features in the new templating need to be fixed in RoundUp >>before this is possible. >> >> > >I'll put my hand up for this. > > I'm also happy to help. If someone will point me to the current code I'll take a peek. Richard, if you'd like some help with this just let me know what you need me to do. --B From radix@twistedmatrix.com Tue Nov 5 03:18:15 2002 From: radix@twistedmatrix.com (Christopher Armstrong) Date: Tue, 5 Nov 2002 04:18:15 +0100 Subject: [Python-Dev] metaclass insanity Message-ID: <20021105031815.GC8926@toshi> Guido van Russom wrote on Mon, 04 Nov 2002 15:54:48 -0500 > Can someone provide a reason why you'd want to use nested classes? > I've never felt this need myself. What are the motivations? Yes. In Reality (a Twisted Matrix Labs project), a text-based virtual world simulation framework, we have a system that can automatically generate Interface definitions from classes (with a metaclass that builds them at class-def time). Twisted has an Interface/adapter/ componentized system similar to Zope's in twisted.python.components, and we use components/adapters to their fullest in Reality. The problem is, our virtual worlds are persistent, so most of our objects get pickled (or serialized with some similar mechanism), and our objects hold references to these automatically-generated Interfaces. We would *like* to have these automatically-generated Interfaces as attributes of the classes they were generated for, but we have to use a hack that does something like:: setattr(subclass.__module__, interfaceName, NewInterface) This is really horrible, IMO. Anyway, regardless that this may be a somewhat obscure use-case (although one we rely heavily on), It _is_ wrong that str(klass) is lying about its location, and I seriously doubt fixing it would cause any problems. -- Christopher Armstrong << radix@twistedmatrix.com >> http://twistedmatrix.com/users/radix.twistd/ From just@letterror.com Tue Nov 5 08:16:32 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 5 Nov 2002 09:16:32 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <02110500582403.27027@arthur> Message-ID: Alex Martelli wrote: > class Outer: > [snip snip] > def __iter__(self): > class Inner: > def __init__(self, outer): > self.outer = outer > def __iter__(self): > return self > def next(self): > if self.outer.isAtEnd(): > raise StopIteration > else: > result = self.outer.currentState() > self.outer.advanceState() > return result > return Inner(self) While I'm sure this was just a theoretical example, _this_ specific case is much easier written as a generator (and I _love_ the idiom of making the __iter__ method a generator, I think it's underused...). 
class Outer:
    [snip snip]
    def __iter__(self):
        while not self.isAtEnd():
            result = self.currentState()
            self.advanceState()
            yield result

Just From just@letterror.com Tue Nov 5 08:48:03 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 5 Nov 2002 09:48:03 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: Message-ID: Just van Rossum wrote:

> class Outer:
>     [snip snip]
>     def __iter__(self):
>         while not self.isAtEnd():
>             result = self.currentState()
>             self.advanceState()
>             yield result

PS, getting more off-topic, I'm somewhat surprised that the above is more compact and readable than the obvious generator-less equivalent:

class Outer:
    [snip snip]
    def __iter__(self):
        return self
    def next(self):
        if self.isAtEnd():
            raise StopIteration
        else:
            result = self.currentState()
            self.advanceState()
            return result

Just From walter@livinglogic.de Tue Nov 5 10:21:14 2002 From: walter@livinglogic.de (Walter Dörwald) Date: Tue, 05 Nov 2002 11:21:14 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DC79B9A.9010504@livinglogic.de> Martin v. Loewis wrote: > Guido van Rossum writes: > > >>In the case of nested classes, a possible solution might be for >>X.Y.__name__ to be "X.Y" rather than plain "Y". Then a simple change >>to pickle (or to getattr :-) could allow the correct unpickling. > > > I'd rather expect that X.Y.__module__ is "Foo.X". Even better would be to have "X.Y.__outerclass__ is X", i.e. __outerclass__ as a (weak) reference to the class in which X.Y was defined. What I came up with is this:

class nestedtype(type):
    def __new__(cls, name, bases, dict):
        dict["__outerclass__"] = None
        res = type.__new__(cls, name, bases, dict)
        for (key, value) in dict.items():
            if isinstance(value, type):
                value.__outerclass__ = res
        return res

    def __fullname__(cls):
        name = cls.__name__
        while 1:
            cls = cls.__outerclass__
            if cls is None:
                return name
            name = cls.__name__ + "." + name

    def __repr__(cls):
        return "<class %s.%s at 0x%x>" % \
            (cls.__module__, cls.__fullname__(), id(cls))

Bye, Walter Dörwald From mwh@python.net Tue Nov 5 11:32:48 2002 From: mwh@python.net (Michael Hudson) Date: 05 Nov 2002 11:32:48 +0000 Subject: [Python-Dev] metaclass insanity In-Reply-To: Just van Rossum's message of "Tue, 5 Nov 2002 09:16:32 +0100" References: Message-ID: <2mela0l09r.fsf@starship.python.net> Just van Rossum writes: > While I'm sure this was just a theoretical example, _this_ specific > case is much easier written as a generator (and I _love_ the idiom > of making the __iter__ method a generator, I think it's > underused...). Maybe it's underused because it hasn't occurred to people? It certainly does look neat. Cheers, M. now struggling to think of an iterator that's better done as a class than a generator... -- incidentally, asking why things are "left out of the language" is a good sign that the asker is fairly clueless.
-- Erik Naggum, comp.lang.lisp From walter@livinglogic.de Tue Nov 5 12:35:52 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Tue, 05 Nov 2002 13:35:52 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DC7BB28.8020006@livinglogic.de> Guido van Rossum wrote: > [...] > This won't work for classes defined inside functions though -- those > are never picklable. But making them module-global would be a simple > enough fix (also more efficient, since the class definition code is > executed on every function call). > > Can someone provide a reason why you'd want to use nested classes? > I've never felt this need myself. What are the motivations? XIST (http://www.livinglogic.de/Python/xist/) uses nested classes to map XML element types and their attributes to Python classes. For example the HTML element type img looks like this in XIST: class img(Element): class Attrs(Element.Attrs): class src(URLAttr): required = True class alt(TextAttr): required = True class align(TextAttr): values = ("top", "middle", ...) Having XML elements as classes makes it possible to add methods for transforming these elements. Defining the attributes via classes is just an extension to that, although there are moments when I'm not so sure, whether this is the best method, but the title of this thread is "metaclass insanity"! ;) Bye, Walter Dörwald From dave@boost-consulting.com Tue Nov 5 14:45:30 2002 From: dave@boost-consulting.com (David Abrahams) Date: 05 Nov 2002 09:45:30 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: References: Message-ID: Just van Rossum writes: > Alex Martelli wrote: > > > class Outer: > > [snip snip] > > def __iter__(self): > > class Inner: > > def __init__(self, outer): > > self.outer = outer > > def __iter__(self): > > return self > > def next(self): > > if self.outer.isAtEnd(): > > raise StopIteration > > else: > > result = self.outer.currentState() > > self.outer.advanceState() > > return result > > return Inner(self) > > While I'm sure this was just a theoretical example, _this_ specific case is much > easier written as a generator (and I _love_ the idiom of making the __iter__ > method a generator, I think it's underused...). > > class Outer: > [snip snip] > def __iter__(self): > while not self.isAtEnd(): > result = self.outer.currentState() > self.outer.advanceState() > yield result This only works for 'self-iterable' types like files, right? 
-- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com From dave@boost-consulting.com Tue Nov 5 14:47:50 2002 From: dave@boost-consulting.com (David Abrahams) Date: 05 Nov 2002 09:47:50 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: References: Message-ID: Just van Rossum writes: > Just van Rossum wrote: > > > class Outer: > > [snip snip] > > def __iter__(self): > > while not self.isAtEnd(): > > result = self.outer.currentState() > > self.outer.advanceState() > > yield result > > PS, getting more off-topic, I'm somewhat surprised that the above is more > compact and readable than the obvious generator-less equivalent: > > class Outer: > [snip snip] > def __iter__(self): > return self > def next(self): > if self.isAtEnd(): > raise StopIteration > else: > result = self.outer.currentState() > self.outer.advanceState() > return result Hey, wait a sec! It doesn't surprise me at all that the above is more compact. I don't see any next() interface, nor any raising of StopIteration. Does it really satisfy the iterator protocol? -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com From guido@python.org Tue Nov 5 15:03:51 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 10:03:51 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 13:35:52 +0100." <3DC7BB28.8020006@livinglogic.de> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <3DC7BB28.8020006@livinglogic.de> Message-ID: <200211051503.gA5F3p818976@odiug.zope.com> > > Can someone provide a reason why you'd want to use nested classes? > > I've never felt this need myself. What are the motivations? > > XIST (http://www.livinglogic.de/Python/xist/) uses nested classes > to map XML element types and their attributes to Python classes. > For example the HTML element type img looks like this in XIST: > > class img(Element): > class Attrs(Element.Attrs): > class src(URLAttr): required = True > class alt(TextAttr): required = True > class align(TextAttr): values = ("top", "middle", ...) That's cool. --Guido van Rossum (home page: http://www.python.org/~guido/) From just@letterror.com Tue Nov 5 15:08:51 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 5 Nov 2002 16:08:51 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: Message-ID: [JvR] > > class Outer: > > [snip snip] > > def __iter__(self): > > while not self.isAtEnd(): > > result = self.outer.currentState() > > self.outer.advanceState() > > yield result [David Abrahams] > This only works for 'self-iterable' types like files, right? No, the generator-iterator can hold state in the form of local variables. That isn't used in this example, though. [later] > Hey, wait a sec! > > It doesn't surprise me at all that the above is more compact. I don't > see any next() interface, nor any raising of StopIteration. Does it > really satisfy the iterator protocol? Heh, yeah, that's what generators do. You call a generator-function (like the __iter__() method above) and it returns a generator-iterator. The magic word is "yield". 
Just From dave@boost-consulting.com Tue Nov 5 14:58:21 2002 From: dave@boost-consulting.com (David Abrahams) Date: 05 Nov 2002 09:58:21 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: References: Message-ID: Just van Rossum writes: > [JvR] > > > class Outer: > > > [snip snip] > > > def __iter__(self): > > > while not self.isAtEnd(): > > > result = self.outer.currentState() > > > self.outer.advanceState() > > > yield result > > [David Abrahams] > > This only works for 'self-iterable' types like files, right? > > No, the generator-iterator can hold state in the form of local variables. That > isn't used in this example, though. Ahh, closures... > [later] > > Hey, wait a sec! > > > > It doesn't surprise me at all that the above is more compact. I don't > > see any next() interface, nor any raising of StopIteration. Does it > > really satisfy the iterator protocol? > > Heh, yeah, that's what generators do. You call a generator-function (like the > __iter__() method above) and it returns a generator-iterator. The magic word is > "yield". Too cool! -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com From aahz@pythoncraft.com Tue Nov 5 15:13:33 2002 From: aahz@pythoncraft.com (Aahz) Date: Tue, 5 Nov 2002 10:13:33 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: References: Message-ID: <20021105151333.GC8908@panix.com> On Tue, Nov 05, 2002, Just van Rossum wrote: > Just van Rossum wrote: >> >> class Outer: >> [snip snip] >> def __iter__(self): >> while not self.isAtEnd(): >> result = self.outer.currentState() >> self.outer.advanceState() >> yield result > > PS, getting more off-topic, I'm somewhat surprised that the above is more > compact and readable than the obvious generator-less equivalent: > > class Outer: > [snip snip] > def __iter__(self): > return self > def next(self): > if self.isAtEnd(): > raise StopIteration > else: > result = self.outer.currentState() > self.outer.advanceState() > return result Hmmmm... I'm surprised that you're surprised. Here's my slide from OSCON2002: Generators simpler No need to explicitly maintain state Iterators more controllable Can manipulate internal state (e.g., __del__ method on iterator object to handle cleanup) Can be backported to older versions of Python Generally speaking, I can think of few reasons to build an actual iterator class rather than a generator. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ Project Vote Smart: http://www.vote-smart.org/ From aahz@pythoncraft.com Tue Nov 5 15:16:16 2002 From: aahz@pythoncraft.com (Aahz) Date: Tue, 5 Nov 2002 10:16:16 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: References: Message-ID: <20021105151616.GD8908@panix.com> On Tue, Nov 05, 2002, David Abrahams wrote: > Just van Rossum writes: >> >> No, the generator-iterator can hold state in the form of local >> variables. That isn't used in this example, though. > > Ahh, closures... Huh. Do other people agree that generators are a form of closures? -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ Project Vote Smart: http://www.vote-smart.org/ From just@letterror.com Tue Nov 5 15:17:06 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 5 Nov 2002 16:17:06 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: Message-ID: David Abrahams wrote: > > No, the generator-iterator can hold state in the form of local > > variables. That isn't used in this example, though. > > Ahh, closures... Nah, it's just a resumable function... 
Just From guido@python.org Tue Nov 5 15:38:41 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 10:38:41 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 04:18:15 +0100." <20021105031815.GC8926@toshi> References: <20021105031815.GC8926@toshi> Message-ID: <200211051538.gA5FcfT19252@odiug.zope.com> > > Can someone provide a reason why you'd want to use nested classes? > > I've never felt this need myself. What are the motivations? > > Yes. In Reality (a Twisted Matrix Labs project), a text-based virtual > world simulation framework, we have a system that can automatically > generate Interface definitions from classes (with a metaclass that > builds them at class-def time). (Beware of using a custom metaclass per class though -- it can prevent multiple inheritance, as I explained before in this thread.) > Twisted has an Interface/adapter/ > componentized system similar to Zope's in twisted.python.components, > and we use components/adapters to their fullest in Reality. > > The problem is, our virtual worlds are persistent, so most of our > objects get pickled (or serialized with some similar mechanism), and > our objects hold references to these automatically-generated > Interfaces. We would *like* to have these automatically-generated > Interfaces as attributes of the classes they were generated for, > but we have to use a hack that does something like:: > > setattr(subclass.__module__, interfaceName, NewInterface) > > This is really horrible, IMO. Why didn't you choose to avoid nesting classes and use a naming convention instead, since the nesting causes problems? > Anyway, regardless that this may be a somewhat obscure use-case > (although one we rely heavily on), It _is_ wrong that str(klass) is > lying about its location, and I seriously doubt fixing it would cause > any problems. Agreed. I'll respond to MvL's proposal. --Guido van Rossum (home page: http://www.python.org/~guido/) From jacobs@penguin.theopalgroup.com Tue Nov 5 15:42:02 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Tue, 5 Nov 2002 10:42:02 -0500 (EST) Subject: [Python-Dev] metaclass insanity In-Reply-To: Message-ID: On Tue, 5 Nov 2002, Just van Rossum wrote: > David Abrahams wrote: > > > > No, the generator-iterator can hold state in the form of local > > > variables. That isn't used in this example, though. > > > > Ahh, closures... > > Nah, it's just a resumable function... Which is a closure, aka lexically scoped namespace. It is not, however, a continuation. -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From guido@python.org Tue Nov 5 15:49:20 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 10:49:20 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "04 Nov 2002 17:10:46 GMT." <2mbs55b6qx.fsf@starship.python.net> References: <2md6psyowr.fsf@starship.python.net> <200210301644.g9UGiaZ18410@odiug.zope.com> <2my98faj0p.fsf@starship.python.net> <200210302044.g9UKicx22801@odiug.zope.com> <2mznsu3nhq.fsf@starship.python.net> <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> Message-ID: <200211051549.gA5FnKx19365@odiug.zope.com> > > > Oh, and while we're at it, here's a bogosity: > > > > > > >>> class C(object): > > > ... pass > > > ... 
> > > >>> C.__module__ > > > '__main__' > > > >>> C.__module__ = 1 > > > >>> C.__module__ > > > Traceback (most recent call last): > > > File "", line 1, in ? > > > AttributeError: __module__ > > > > > > caused by lax testing in type_set_module. > > > > Oops. Can you fix it? Or are there complications? > > No, should be easy. > > if (!PyString_Check(arg)) { > PyErr_SetString(PyExc_TypeError, > "don't do that"); > return -1; > } > > or something like that. I think the fix should be not to typecheck __module__ when returning it. If someone sets __module__ to a non-string, that's their problem: *** typeobject.c 18 Oct 2002 16:33:12 -0000 2.185 --- typeobject.c 5 Nov 2002 15:47:46 -0000 *************** *** 63,69 **** if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) return PyString_FromString("__builtin__"); mod = PyDict_GetItemString(type->tp_dict, "__module__"); ! if (mod != NULL && PyString_Check(mod)) { Py_INCREF(mod); return mod; } --- 63,69 ---- if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) return PyString_FromString("__builtin__"); mod = PyDict_GetItemString(type->tp_dict, "__module__"); ! if (mod != NULL) { Py_INCREF(mod); return mod; } --Guido van Rossum (home page: http://www.python.org/~guido/) From just@letterror.com Tue Nov 5 15:32:57 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 5 Nov 2002 16:32:57 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <20021105151333.GC8908@panix.com> Message-ID: Aahz wrote: > Hmmmm... I'm surprised that you're surprised. Yeah, I see now that I was somewhat naive... > Here's my slide from > OSCON2002: > > Generators simpler > No need to explicitly maintain state [ ... ] Well, in this particular example no state is maintained in either version. Generators are simply a higher level way of building an iterator, because there's no need to explicitly raise StopIteration. Just From guido@python.org Tue Nov 5 16:21:07 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 11:21:07 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 10:42:02 EST." References: Message-ID: <200211051621.gA5GL7R19507@odiug.zope.com> > > Nah, it's just a resumable function... > > Which is a closure, aka lexically scoped namespace. It is not, however, a > continuation. I don't think it makes much sense to call all Python functions closures. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 5 16:29:29 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 11:29:29 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 10:16:16 EST." <20021105151616.GD8908@panix.com> References: <20021105151616.GD8908@panix.com> Message-ID: <200211051629.gA5GTTx19610@odiug.zope.com> [Just] > >> No, the generator-iterator can hold state in the form of local > >> variables. That isn't used in this example, though. [David] > > Ahh, closures... [Aahz] > Huh. Do other people agree that generators are a form of closures? Not really. Closures (in Python) reference local variables of outer functions. Generators don't have to do that. Maybe David meant continuations? Generators aren't really continuations either -- they save only a single Python stack frame. But they are definitely related to them. I see no connection with closures. 
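To make the distinction concrete, a small illustrative sketch (Python 2.2-style; make_adder and running_total are made-up names): the closure refers to a binding in an enclosing function's scope, while the generator simply keeps its own locals alive between yields.

    from __future__ import generators   # only needed on Python 2.2

    def make_adder(n):
        # closure: adder() refers to the enclosing local 'n'
        def adder(x):
            return x + n
        return adder

    def running_total(seq):
        # generator: no enclosing scope involved; 'total' is an ordinary
        # local, preserved because the frame is suspended, not discarded
        total = 0
        for item in seq:
            total = total + item
            yield total

    add2 = make_adder(2)
    print add2(40)                        # 42
    print list(running_total([1, 2, 3]))  # [1, 3, 6]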
--Guido van Rossum (home page: http://www.python.org/~guido/) From jacobs@penguin.theopalgroup.com Tue Nov 5 16:34:36 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Tue, 5 Nov 2002 11:34:36 -0500 (EST) Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211051621.gA5GL7R19507@odiug.zope.com> Message-ID: On Tue, 5 Nov 2002, Guido van Rossum wrote: > > > Nah, it's just a resumable function... > > > > Which is a closure, aka lexically scoped namespace. It is not, however, a > > continuation. > > I don't think it makes much sense to call all Python functions > closures. I was taught that a closure is a first class function or object that holds references to its own lexically defined namespace(s). So as I understand it, they _are_ closures, though limited to only two levels of lexical scope (or three if you count the builtin scope). -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From tim.one@comcast.net Tue Nov 5 16:47:20 2002 From: tim.one@comcast.net (Tim Peters) Date: Tue, 05 Nov 2002 11:47:20 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: <20021105151616.GD8908@panix.com> Message-ID: [Aahz] > Huh. Do other people agree that generators are a form of closures? Closure = chunk o' code + lexical environment, so sure, but it's not a *useful* characterization because generators also capture a program counter, local bindings, and the internal eval stack. "Resumable function" is still (I believe) the easiest way for a non-Scheme-head to think about them. "Closure" only covers the "function" part; it's the "resumable" part that makes a generator more than just another function. From guido@python.org Tue Nov 5 16:52:46 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 11:52:46 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 11:34:36 EST." References: Message-ID: <200211051652.gA5Gqkg19729@odiug.zope.com> > > I don't think it makes much sense to call all Python functions > > closures. > > I was taught that a closure is a first class function or object that > holds references to its own lexically defined namespace(s). So as I > understand it, they _are_ closures, though limited to only two > levels of lexical scope (or three if you count the builtin scope). You're behind the times. Python 2.2 (and 2.1 if you did "from __future__ import nested_scopes") allows references to outer functions as well. --Guido van Rossum (home page: http://www.python.org/~guido/) From jacobs@penguin.theopalgroup.com Tue Nov 5 16:59:10 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Tue, 5 Nov 2002 11:59:10 -0500 (EST) Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211051652.gA5Gqkg19729@odiug.zope.com> Message-ID: On Tue, 5 Nov 2002, Guido van Rossum wrote: > > > I don't think it makes much sense to call all Python functions > > > closures. > > > > I was taught that a closure is a first class function or object that > > holds references to its own lexically defined namespace(s). So as I > > understand it, they _are_ closures, though limited to only two > > levels of lexical scope (or three if you count the builtin scope). > > You're behind the times. Python 2.2 (and 2.1 if you did "from > __future__ import nested_scopes") allows references to outer functions > as well.
I thought there were still a limited number of levels of true lexical scope, but it seems that I'm wrong. Cool! -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From guido@python.org Tue Nov 5 17:00:45 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 12:00:45 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 11:36:34 +0200." References: <02110500582403.27027@arthur> <200211050151.gA51pRg23551@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211051700.gA5H0j719786@odiug.zope.com> > Keeping a definition of some X (whether X is a class, a function, > whatever) as local as feasible has one obvious merit which I have > already pointed out: a reader of the code need NOT ask himself > "where is this X defined" when X is used, nor "how is this X used" > when X is defined -- the answers become immediately obvious by the > very way the code is structure... X is defined right here and used > right here. Yes. However this is counterbalanced by the distraction for the reader though when the details of X's implementation make it even a moderately large lump of code (above a couple of lines); usually these details aren't of immediate interest to the reader where X is used. > > I also think that it's better that the iterator class *is* > > accessible to the user -- that way you can do an isinstance() > > check for it, for example. > > To me, that's a good reason to AVOID making the class directly > accessible to the user. I find that most uses that programmers > make of type-testing in Python (e.g. with isinstance) are more > damaging than helpful; anything that happens to discourage such > damaging practices therefore looks good to me thereby. So use a name starting with an underscore. That's enough of a hint. > In my experience so far, I find that wrapper classes tend to > be rather small -- maybe that comes from my general style. Wrapper classes are just one example of helper classes. The argument for keeping helper classes out of the way applies broadly. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 5 17:05:21 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 12:05:21 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Tue, 05 Nov 2002 11:21:14 +0100." <3DC79B9A.9010504@livinglogic.de> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <3DC79B9A.9010504@livinglogic.de> Message-ID: <200211051705.gA5H5LJ19816@odiug.zope.com> > Even better would to have "X.Y.__outerclass__ is X", > i.e. __outerclass__ as a (weak) reference to the class > in which X.Y was defined. What I came up with is this: > > class nestedtype(type): > def __new__(cls, name, bases, dict): > dict["__outerclass__"] = None > res = type.__new__(cls, name, bases, dict) > for (key, value) in dict.items(): > if isinstance(value, type): > value.__outerclass__ = res > return res > > def __fullname__(cls): > name = cls.__name__ > while 1: > cls = cls.__outerclass__ > if cls is None: > return name > name = cls.__name__ + "." 
+ name > > def __repr__(cls): > return "" % \ > (cls.__module__, cls.__fullname__(), id(cls)) I kind of like __fullname__ as a useful attribute (though I'd make it an attribute or property [== computed attribute] rather than a method). But if setting the __name__ of an inner class to "Outer.Inner" doesn't break too much existing code, I'd prefer that -- no new attributes needed. I don't see any use for __outerclass__ *except* for computing the full name. And there are better ways to do that if you have help from the parser. --Guido van Rossum (home page: http://www.python.org/~guido/) From niemeyer@conectiva.com Tue Nov 5 17:32:09 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 5 Nov 2002 15:32:09 -0200 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: References: <20021104171536.A30400@ibook.distro.conectiva> Message-ID: <20021105153208.A10565@ibook.distro.conectiva> > Thank you! If you haven't noticed yet, you also have developer privs on the > Python project now; it should make your life harder, but the good news is > that it should make ours easier . No pain, no gain! ;-) I hope I can really help you guys to improve, or at least maintain, the great work you've been doing for so much time. feeling-super-cow-powers-ly y'rs -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From pje@telecommunity.com Tue Nov 5 17:49:39 2002 From: pje@telecommunity.com (Phillip J. Eby) Date: Tue, 05 Nov 2002 12:49:39 -0500 Subject: [Python-Dev] Metaclass insanity - another use case Message-ID: <5.1.1.6.0.20021105123313.01f742f0@mail.rapidsite.net> I was reading the "metaclass insanity" thread this morning about dotted __name__ for nested classes, and I thought I should chime in that I've actually *used* this technique. That is, I've had classes of the form: class outer: class inner: ... And then regenerated 'inner' such that inner.__name__ == 'outer.inner' and stored it in globals()['outer.inner']. This *does* work with existing pickle code; in fact it worked as far back as Python 1.5.2! As for *why* I had inner classes, the following might be a good example: class ModelElement(Element): class isSpecification(model.Field): isRequired = 1 qualifiedName = 'Foundation.Core.ModelElement.isSpecification' _XMINames = ('Foundation.Core.ModelElement.isSpecification',) name = 'isSpecification' referencedType = 'Boolean' class stereotype(model.Reference): isNavigable = 0 isRequired = 1 _XMINames = ('Foundation.Core.ModelElement.stereotype',) name = 'stereotype' referencedType = 'Stereotype' refTypeQN = 'Foundation.Core.ModelElement.stereotype' referencedEnd = 'extendedElements' This is a tiny excerpt from some automatically generated code which defines the UML 1.3 metamodel as a set of Python classes. The nested classes represent structural features of the defined outer class, and enforce properties such as multiplicity, requiredness, etc., as you can see. In an earlier version of this work, the inner classes had metaclasses whose __get__ method instantiated the inner class, and placed an instance in the outer object instance. That is, each ModelElement instance would have a 'stereotype' instance created on demand, which would manage the 'stereotype' property. And for this to be picklable, it was necessary for the inner classes to be accessible by the pickle machinery -- hence my use of qualified class names, as described above. I will admit that I no longer use this approach for the inner classes; I found a way to get by without ever instantiating them. 
The above code is still valid, but the metaclasses used are different. So the pickling is simpler and faster. I'm not sure if this is really an argument for or against, but at least it's actual usage. :) From mwh@python.net Tue Nov 5 17:50:48 2002 From: mwh@python.net (Michael Hudson) Date: Tue, 5 Nov 2002 17:50:48 +0000 (GMT) Subject: [Python-Dev] Re: List slice assignment and custom sequences In-Reply-To: <17ED8833-F0E4-11D6-A85E-0003931CFE24@cistron.nl> Message-ID: On Tue, 5 Nov 2002, Ronald Oussoren wrote: > > On Tuesday, Nov 5, 2002, at 15:03 Europe/Amsterdam, Michael Hudson > wrote: > > > Ronald Oussoren writes: > > > >> No, I want to replace part of a sequence by another sequence. I don't > >> understand _why_ the RHS must be a list if the LHS is one. > > > > Because Objects/listobject.c:list_ass_slice pokes directly into the > > object passed on the RHS. > > > > A patch to change this would have some chance of getting accepted; > > wouldn't like to guess what, but I'd hazard non-zero. > I sure hope so. I've posted a bug+patch at SF for a related problem: > With the introduction of new-style classes 'isinstance(obj, list)' no > longer guarantees that you can savely poke in the RHS instead of using > the __getitem__ accessor function. Hmm, that's something (slightly) else. I wonder if PySequence_FAST should use PyList_CheckExact and not PyList_Chec? Guido? Cheers, M. From jeremy@alum.mit.edu Tue Nov 5 17:57:32 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 5 Nov 2002 12:57:32 -0500 Subject: [Python-Dev] bz2module.c breaks regression tests Message-ID: <15816.1676.295467.449565@slothrop.zope.com> I just did a cvs update and a fresh build of Python dies with a segfault immediately after running test_bz2. build-pydebug> ./python ../Lib/test/regrtest.py test_bz2 test_grammar test_bz2 /usr/home/jeremy/src/python/dist/src/Lib/test/test_bz2.py:30: RuntimeWarning: mktemp is a potential security risk to your program self.filename = tempfile.mktemp("bz2") test_grammar Segmentation fault (core dumped) A partial gdb stack trace shows that it's dying in PyObject_Malloc(). I'm running with a debug build of Python. Not sure what, if anything, it would do with a release build. 
#0 0x080828b7 in PyObject_Malloc (nbytes=17) at ../Objects/obmalloc.c:581 #1 0x08082f5c in _PyObject_DebugMalloc (nbytes=1) at ../Objects/obmalloc.c:992 #2 0x080553e5 in parsetok (tok=0x81b2fc0, g=0x81466a8, start=257, err_ret=0xbfffdfe0, flags=0) at ../Parser/parsetok.c:137 #3 0x08055218 in PyParser_ParseStringFlagsFilename ( s=0x401fcd44 "if 1:\n exec u'z=1+1\\n'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\'\\\\n'\n del z\n exec u'z=1+1'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\''\n", filename=0x0, g=0x81466a8, start=257, err_ret=0xbfffdfe0, flags=0) at ../Parser/parsetok.c:56 #4 0x08055156 in PyParser_ParseStringFlags ( s=0x401fcd44 "if 1:\n exec u'z=1+1\\n'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\'\\\\n'\n del z\n exec u'z=1+1'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\''\n", g=0x81466a8, start=257, err_ret=0xbfffdfe0, flags=0) at ../Parser/parsetok.c:31 #5 0x080e653f in PyParser_SimpleParseStringFlags ( str=0x401fcd44 "if 1:\n exec u'z=1+1\\n'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\'\\\\n'\n del z\n exec u'z=1+1'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\''\n", start=257, flags=0) at ../Python/pythonrun.c:1193 #6 0x080e6580 in PyParser_SimpleParseString ( str=0x401fcd44 "if 1:\n exec u'z=1+1\\n'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\'\\\\n'\n del z\n exec u'z=1+1'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\''\n", start=257) at ../Python/pythonrun.c:1203 #7 0x080e5fbc in PyRun_String ( str=0x401fcd44 "if 1:\n exec u'z=1+1\\n'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\'\\\\n'\n del z\n exec u'z=1+1'\n if z != 2: raise TestFailed, 'exec u\\'z=1+1\\''\n", start=257, globals=0x4020c994, locals=0x4020c3f4) at ../Python/pythonrun.c:1025 #8 0x080c431e in exec_statement (f=0x818410c, prog=0x401fcd28, globals=0x4020c994, locals=0x4020c3f4) at ../Python/ceval.c:3786 #9 0x080bca30 in eval_frame (f=0x818410c) at ../Python/ceval.c:1539 #10 0x080c0b69 in PyEval_EvalCodeEx (co=0x40263b28, globals=0x4020c994, locals=0x0, args=0x815907c, argcount=0, kws=0x815907c, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2542 #11 0x080c2996 in fast_function (func=0x402222b4, pp_stack=0xbfffe328, n=0, na=0, nk=0) at ../Python/ceval.c:3282 #12 0x080c2818 in call_function (pp_stack=0xbfffe328, oparg=0) at ../Python/ceval.c:3251 #13 0x080beb80 in eval_frame (f=0x8158f24) at ../Python/ceval.c:1997 #14 0x080c0b69 in PyEval_EvalCodeEx (co=0x4021e238, globals=0x4020c994, locals=0x4020c994, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2542 #15 0x080b6abc in PyEval_EvalCode (co=0x4021e238, globals=0x4020c994, locals=0x4020c994) at ../Python/ceval.c:478 #16 0x080dadc3 in PyImport_ExecCodeModuleEx ( name=0xbfffed48 "test.test_grammar", co=0x4021e238, pathname=0xbfffe440 "/usr/home/jeremy/src/python/dist/src/Lib/test/test_grammar.pyc") at ../Python/import.c:530 Jeremy From guido@python.org Tue Nov 5 17:58:33 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 12:58:33 -0500 Subject: [Python-Dev] Metaclass insanity - another use case In-Reply-To: Your message of "Tue, 05 Nov 2002 12:49:39 EST." <5.1.1.6.0.20021105123313.01f742f0@mail.rapidsite.net> References: <5.1.1.6.0.20021105123313.01f742f0@mail.rapidsite.net> Message-ID: <200211051758.gA5HwXQ20157@odiug.zope.com> > I was reading the "metaclass insanity" thread this morning about > dotted __name__ for nested classes, and I thought I should chime in > that I've actually *used* this technique. 
That is, I've had classes > of the form: > > class outer: > class inner: > ... > > And then regenerated 'inner' such that inner.__name__ == > 'outer.inner' and stored it in globals()['outer.inner']. This > *does* work with existing pickle code; in fact it worked as far back > as Python 1.5.2! OK. Setting __name__ to 'outer.inner' seems to be the best solution, so I'll add that to my TODO list. [SF bug #633930] > As for *why* I had inner classes, the following might be a good example: > > > class ModelElement(Element): > > class isSpecification(model.Field): > isRequired = 1 > qualifiedName = 'Foundation.Core.ModelElement.isSpecification' > _XMINames = ('Foundation.Core.ModelElement.isSpecification',) > name = 'isSpecification' > referencedType = 'Boolean' > > class stereotype(model.Reference): > isNavigable = 0 > isRequired = 1 > _XMINames = ('Foundation.Core.ModelElement.stereotype',) > name = 'stereotype' > referencedType = 'Stereotype' > refTypeQN = 'Foundation.Core.ModelElement.stereotype' > referencedEnd = 'extendedElements' > > This is a tiny excerpt from some automatically generated code which > defines the UML 1.3 metamodel as a set of Python classes. The > nested classes represent structural features of the defined outer > class, and enforce properties such as multiplicity, requiredness, > etc., as you can see. > > In an earlier version of this work, the inner classes had > metaclasses whose __get__ method instantiated the inner class, and > placed an instance in the outer object instance. That is, each > ModelElement instance would have a 'stereotype' instance created on > demand, which would manage the 'stereotype' property. And for this > to be picklable, it was necessary for the inner classes to be > accessible by the pickle machinery -- hence my use of qualified > class names, as described above. > > I will admit that I no longer use this approach for the inner > classes; I found a way to get by without ever instantiating them. > The above code is still valid, but the metaclasses used are > different. So the pickling is simpler and faster. > > I'm not sure if this is really an argument for or against, but at > least it's actual usage. :) And I think there's nothing wrong with it -- it's a different use of classes, but makes sense; it's a good use of nested namespaces (as opposed to nesting helper classes inside the main class which they help). --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 5 18:00:16 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 13:00:16 -0500 Subject: [Python-Dev] Re: List slice assignment and custom sequences In-Reply-To: Your message of "Tue, 05 Nov 2002 17:50:48 GMT." References: Message-ID: <200211051800.gA5I0GT20176@odiug.zope.com> > > I sure hope so. I've posted a bug+patch at SF for a related problem: > > With the introduction of new-style classes 'isinstance(obj, list)' no > > longer guarantees that you can savely poke in the RHS instead of using > > the __getitem__ accessor function. > > Hmm, that's something (slightly) else. > > I wonder if PySequence_FAST should use PyList_CheckExact and not > PyList_Chec? Guido? 
+1 --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy@alum.mit.edu Tue Nov 5 18:11:14 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 5 Nov 2002 13:11:14 -0500 Subject: [Python-Dev] bz2module.c breaks regression tests In-Reply-To: <15816.1676.295467.449565@slothrop.zope.com> References: <15816.1676.295467.449565@slothrop.zope.com> Message-ID: <15816.2498.408898.77281@slothrop.zope.com> Gustavo, I submitted the bug report via sourceforge, which is what I should have done in the first place. I also wanted to mention, since this is your first checkin, that you shouldn't feel too bad :-). Everyone is going to make some mistakes while they're learning the ropes. Jeremy From niemeyer@conectiva.com Tue Nov 5 18:17:01 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 5 Nov 2002 16:17:01 -0200 Subject: [Python-Dev] bz2module.c breaks regression tests In-Reply-To: <15816.2498.408898.77281@slothrop.zope.com> References: <15816.1676.295467.449565@slothrop.zope.com> <15816.2498.408898.77281@slothrop.zope.com> Message-ID: <20021105161701.A17716@ibook.distro.conectiva> > I submitted the bug report via sourceforge, which is what I should > have done in the first place. I'm not sure about what is going on there, since it's working just fine here. That makes it a little bit harder to find out what's going wrong. :-( Can you please provide me with the compilation steps you've used? > I also wanted to mention, since this is your first checkin, that you > shouldn't feel too bad :-). Everyone is going to make some mistakes > while they're learning the ropes. Thank you! :-) -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From neal@metaslash.com Tue Nov 5 18:19:21 2002 From: neal@metaslash.com (Neal Norwitz) Date: Tue, 05 Nov 2002 13:19:21 -0500 Subject: [Python-Dev] bz2module.c breaks regression tests In-Reply-To: <15816.2498.408898.77281@slothrop.zope.com> References: <15816.1676.295467.449565@slothrop.zope.com> <15816.2498.408898.77281@slothrop.zope.com> Message-ID: <20021105181921.GC14715@epoch.metaslash.com> On Tue, Nov 05, 2002 at 01:11:14PM -0500, Jeremy Hylton wrote: > Gustavo, > > I submitted the bug report via sourceforge, which is what I should > have done in the first place. > > I also wanted to mention, since this is your first checkin, that you > shouldn't feel too bad :-). Everyone is going to make some mistakes > while they're learning the ropes. But it's always good to run the tests before checking in. :-) I was just running the tests and ran into the same problem. Fix checked in. I still need to do a code review. Neal From niemeyer@conectiva.com Tue Nov 5 18:29:47 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 5 Nov 2002 16:29:47 -0200 Subject: [Python-Dev] bz2module.c breaks regression tests In-Reply-To: <20021105181921.GC14715@epoch.metaslash.com> References: <15816.1676.295467.449565@slothrop.zope.com> <15816.2498.408898.77281@slothrop.zope.com> <20021105181921.GC14715@epoch.metaslash.com> Message-ID: <20021105162947.B15409@ibook.distro.conectiva> > But it's always good to run the tests before checking in. :-) Of course I've tested before checking in. I wrote the tests, after all. :-) [niemeyer@ibook ..python/dist/src]% ./python Lib/test/regrtest.py test_bz2 test_grammar test_bz2 /home/niemeyer/src/python/dist/src/Lib/test/test_bz2.py:30: RuntimeWarning: mktemp is a potential security risk to your program self.filename = tempfile.mktemp("bz2") test_grammar All 2 tests OK. 
I have no idea why it is not breaking here. I should have run valdgrind over it. > I was just running the tests and ran into the same problem. > Fix checked in. I still need to do a code review. Thank you very much! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From jeremy@alum.mit.edu (Jeremy Hylton) Tue Nov 5 18:35:45 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) (Jeremy Hylton) Date: Tue, 5 Nov 2002 13:35:45 -0500 Subject: [Python-Dev] bz2module.c breaks regression tests In-Reply-To: <20021105162947.B15409@ibook.distro.conectiva> References: <15816.1676.295467.449565@slothrop.zope.com> <15816.2498.408898.77281@slothrop.zope.com> <20021105181921.GC14715@epoch.metaslash.com> <20021105162947.B15409@ibook.distro.conectiva> Message-ID: <15816.3969.106647.179096@slothrop.zope.com> Did you run with a debug build of Python (configure --with-pydebug)? I find that a debug build is essential for tracking down refcount and memory problems. Jeremy From niemeyer@conectiva.com Tue Nov 5 18:54:13 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 5 Nov 2002 16:54:13 -0200 Subject: [Python-Dev] bz2module.c breaks regression tests In-Reply-To: <15816.3969.106647.179096@slothrop.zope.com> References: <15816.1676.295467.449565@slothrop.zope.com> <15816.2498.408898.77281@slothrop.zope.com> <20021105181921.GC14715@epoch.metaslash.com> <20021105162947.B15409@ibook.distro.conectiva> <15816.3969.106647.179096@slothrop.zope.com> Message-ID: <20021105165413.A25195@ibook.distro.conectiva> > Did you run with a debug build of Python (configure --with-pydebug)? > I find that a debug build is essential for tracking down refcount > and memory problems. I'm ashamed to confess that I wasn't aware about it. Sorry. :-( -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Tue Nov 5 22:02:59 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 5 Nov 2002 20:02:59 -0200 Subject: [Python-Dev] [#527371] sre bug/patch Message-ID: <20021105200259.A28375@ibook.distro.conectiva> Patch #527371, from Greg Chapman, seems to be pretty obvious, and is laying around for quite some time. The same pattern even happens in a few other places in the sre module. I suggest we apply it, including the fixes for lastindex mentioned in the discussion. As a minor observation, I'd include the "state->lastmark = lastmark;" line inside the 'if' block when applying, as every other occurence of this algorithm. No big deal, but saves a few cycles. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From guido@python.org Tue Nov 5 22:07:21 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 17:07:21 -0500 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: Your message of "Tue, 05 Nov 2002 20:02:59 -0200." <20021105200259.A28375@ibook.distro.conectiva> References: <20021105200259.A28375@ibook.distro.conectiva> Message-ID: <200211052207.gA5M7Mk24781@odiug.zope.com> > Patch #527371, from Greg Chapman, seems to be pretty obvious, and is > laying around for quite some time. The same pattern even happens in > a few other places in the sre module. I suggest we apply it, including > the fixes for lastindex mentioned in the discussion. > > As a minor observation, I'd include the "state->lastmark = lastmark;" > line inside the 'if' block when applying, as every other occurence of > this algorithm. No big deal, but saves a few cycles. Just be sure to keep the code compatible with Python 1.5.1 (see PEP 291). 
--Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 5 22:07:43 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 17:07:43 -0500 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: Your message of "Tue, 05 Nov 2002 17:07:21 EST." Message-ID: <200211052207.gA5M7h324793@odiug.zope.com> > Just be sure to keep the code compatible with Python 1.5.1 (see PEP > 291). Oops, I meant Python 1.5.2. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 5 22:44:11 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 17:44:11 -0500 Subject: [Python-Dev] Possible ideas from Strongtalk (static typing for Smalltalk)? In-Reply-To: Your message of "Thu, 31 Oct 2002 16:18:36 GMT." <5.1.1.6.0.20021031160812.03bb5390@spey.st-andrews.ac.uk> References: <5.1.1.6.0.20021031160812.03bb5390@spey.st-andrews.ac.uk> Message-ID: <200211052244.gA5MiB225849@odiug.zope.com> > I recently came across an announcement about the the Strongtalk system, > which contains the first fully developed strong, static type system for > Smalltalk. I wondered whether there might be useful ideas there for those > looking into static typing for Python. > > http://www.cs.ucsb.edu/projects/strongtalk/pages/index.html You have to download and install their Windows app and run it in order to get some documentation about the type system. I did, and it's pretty theoretically clean (as you'd expect from a Smalltalk system). It has a pretty conventional way to describe method signatures, including a way to define type variables implicitly, so you can define e.g. a function taking two lists whose return type is a tuple of the item types of the arguments. One thing that I hadn't seen before is the little language they use that allows you to say things like that. --Guido van Rossum (home page: http://www.python.org/~guido/) From greg@cosc.canterbury.ac.nz Tue Nov 5 23:12:47 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 06 Nov 2002 12:12:47 +1300 (NZDT) Subject: [Python-Dev] Metaclass insanity - another use case In-Reply-To: <200211051758.gA5HwXQ20157@odiug.zope.com> Message-ID: <200211052312.gA5NClO18455@kuku.cosc.canterbury.ac.nz> "Phillip J. Eby" : > As for *why* I had inner classes, the following might be a good example: > > class ModelElement(Element): > > class isSpecification(model.Field): > isRequired = 1 > qualifiedName = 'Foundation.Core.ModelElement.isSpecification' > _XMINames = ('Foundation.Core.ModelElement.isSpecification',) > name = 'isSpecification' > referencedType = 'Boolean' I'm not sure if it would be quite what Phillip needs, but I've been wondering recently whether Python could benefit from having an "instance" statement which does for instances what the "class" statement does for classes. The idea is you'd be able to say something like instance isSpecification(model.Field): isRequired = 1 qualifiedName = 'Foundation.Core.ModelElement.isSpecification' and it would be equivalent to isSpecification = model.Field(isRequired = 1, qualifiedName = 'Foundation.Core.ModelElement.isSpecification') One of the use cases I have in mind is GUI programming, where you frequently need to build complicated nested structures with lots of keyword arguments to constructors. A construct like this might help you to lay out the code more neatly and readably. I haven't really thought through the details properly, though. 
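For comparison, roughly the same effect can already be faked with a class statement and a small metaclass -- a throwaway sketch in Python 2.2 syntax, where Instance and the toy Field class are made up purely for illustration:

    class Instance(type):
        # instead of creating a class, create an instance of the first
        # base, passing the class body (minus the __special__ entries)
        # as keyword arguments
        def __new__(mcl, name, bases, ns):
            kwds = {}
            for key, value in ns.items():
                if not key.startswith('__'):
                    kwds[key] = value
            return bases[0](**kwds)

    class Field:
        # stand-in for model.Field
        def __init__(self, **kwds):
            self.__dict__.update(kwds)

    class isSpecification(Field):
        __metaclass__ = Instance
        isRequired = 1
        qualifiedName = 'Foundation.Core.ModelElement.isSpecification'

    print isSpecification.qualifiedName   # it is a Field instance, not a class

Whether that reads better than a plain call with keyword arguments is, of course, exactly the question.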
Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+ From neal@metaslash.com Tue Nov 5 23:23:19 2002 From: neal@metaslash.com (Neal Norwitz) Date: Tue, 05 Nov 2002 18:23:19 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: <20021105213817.GA26245@lysator.liu.se> References: <20021105213817.GA26245@lysator.liu.se> Message-ID: <20021105232319.GE14715@epoch.metaslash.com> On Tue, Nov 05, 2002 at 10:38:18PM +0100, Anders Qvist wrote: > Between 1036491804 and 1036522446 (should be GMT 2002-11-05, 10:20 and > 2002-11-05, 18:54 respectively), test_slice started failing with the > following message: > > test test_slice failed -- [9, 7, 5, 3, 1] == [0] > > om moghedien; linux/alpha. There has been some changes containing > pointer arithmetic around line 160 of sliceobjects.c, which might be > guilty (just a shot in the dark; 64-bit clean?): It is a 64-bit problem, but doesn't have to do with the checkin to sliceobject. The new test (below) broke with old code too. vereq(range(10)[::sys.maxint - 1], [0]) The problem is that sizeof(int) != sizeof(long). The parameters for PySlice_GetIndices() and PySlice_GetIndicesEx() take ints, not longs. So the code is doing: range(10)[::-2] since the high 32 bits are lost. The only solution I can think of is to have 2 interfaces: - the old API which takes int/int*, PySlice_GetIndices() deprecate this API and raise overflow exception if (int)value != (long)value - the new API which takes long/long*, PySlice_GetIndicesEx() PySlice_GetIndicesEx() was added in 2.3. Otherwise, we are stuck with slices only being able to support 32 bits even on 64 bit architectures. Other ideas? Neal From aahz@pythoncraft.com Tue Nov 5 23:47:51 2002 From: aahz@pythoncraft.com (Aahz) Date: Tue, 5 Nov 2002 18:47:51 -0500 Subject: [Python-Dev] Metaclass insanity - another use case In-Reply-To: <200211052312.gA5NClO18455@kuku.cosc.canterbury.ac.nz> References: <200211051758.gA5HwXQ20157@odiug.zope.com> <200211052312.gA5NClO18455@kuku.cosc.canterbury.ac.nz> Message-ID: <20021105234751.GB5365@panix.com> On Wed, Nov 06, 2002, Greg Ewing wrote: > "Phillip J. Eby" : >> >> As for *why* I had inner classes, the following might be a good example: >> >> class ModelElement(Element): >> >> class isSpecification(model.Field): >> isRequired = 1 >> qualifiedName = 'Foundation.Core.ModelElement.isSpecification' >> _XMINames = ('Foundation.Core.ModelElement.isSpecification',) >> name = 'isSpecification' >> referencedType = 'Boolean' > > I'm not sure if it would be quite what Phillip needs, but > I've been wondering recently whether Python could benefit > from having an "instance" statement which does for instances > what the "class" statement does for classes. 
> > The idea is you'd be able to say something like > > instance isSpecification(model.Field): > isRequired = 1 > qualifiedName = 'Foundation.Core.ModelElement.isSpecification' > > and it would be equivalent to > > isSpecification = model.Field(isRequired = 1, > qualifiedName = 'Foundation.Core.ModelElement.isSpecification') Yeah, but that doesn't gain you all that much over isSpecification = model.Field( isRequired = 1, qualifiedName = 'Foundation.Core.ModelElement.isSpecification' ) -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ Project Vote Smart: http://www.vote-smart.org/ From guido@python.org Wed Nov 6 02:32:27 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 21:32:27 -0500 Subject: [Python-Dev] Metaclass insanity - another use case In-Reply-To: Your message of "Wed, 06 Nov 2002 12:12:47 +1300." <200211052312.gA5NClO18455@kuku.cosc.canterbury.ac.nz> References: <200211052312.gA5NClO18455@kuku.cosc.canterbury.ac.nz> Message-ID: <200211060232.gA62WRd26101@pcp02138704pcs.reston01.va.comcast.net> > I've been wondering recently whether Python could benefit > from having an "instance" statement which does for instances > what the "class" statement does for classes. > > The idea is you'd be able to say something like > > instance isSpecification(model.Field): > isRequired = 1 > qualifiedName = 'Foundation.Core.ModelElement.isSpecification' > > and it would be equivalent to > > isSpecification = model.Field(isRequired = 1, > qualifiedName = 'Foundation.Core.ModelElement.isSpecification') > > One of the use cases I have in mind is GUI programming, where > you frequently need to build complicated nested structures with > lots of keyword arguments to constructors. A construct like this > might help you to lay out the code more neatly and readably. -1. TOOWTDI. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Wed Nov 6 02:39:18 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 05 Nov 2002 21:39:18 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: Your message of "Tue, 05 Nov 2002 18:23:19 EST." <20021105232319.GE14715@epoch.metaslash.com> References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> Message-ID: <200211060239.gA62dIa26126@pcp02138704pcs.reston01.va.comcast.net> > It is a 64-bit problem, but doesn't have to do with the checkin to > sliceobject. The new test (below) broke with old code too. > > vereq(range(10)[::sys.maxint - 1], [0]) > > The problem is that sizeof(int) != sizeof(long). The parameters for > PySlice_GetIndices() and PySlice_GetIndicesEx() take ints, not longs. > So the code is doing: > > range(10)[::-2] > > since the high 32 bits are lost. > > The only solution I can think of is to have 2 interfaces: > - the old API which takes int/int*, PySlice_GetIndices() > deprecate this API and raise overflow exception > if (int)value != (long)value > - the new API which takes long/long*, PySlice_GetIndicesEx() > > PySlice_GetIndicesEx() was added in 2.3. You meant "must be added" right? > Otherwise, we are stuck with slices only being able to support > 32 bits even on 64 bit architectures. But *everything* having to do with sequences only supports 32 bits: ob_size, PySequence_Length() the arguments to PySequence_GetItem(), etc. Unless you want to fix all those (breaking backwards compatibility), I'm not sure why you'd want to fix PySlice_GetIndices()... 
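For the record, the truncation being discussed is easy to demonstrate in pure Python. The helper below is illustrative only -- it just mimics what happens when a C long is squeezed into a 32-bit C int field -- and the numbers assume a 64-bit build where sys.maxint == 2**63 - 1:

    import sys

    def as_c_int(x, bits=32):
        # keep the low 32 bits, then reinterpret them as a signed value
        x = x & ((1L << bits) - 1)
        if x >= (1L << (bits - 1)):
            x = x - (1L << bits)
        return x

    step = sys.maxint - 1   # 2**63 - 2 on a 64-bit build
    print as_c_int(step)    # -2: hence range(10)[::sys.maxint - 1] acting like [::-2]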
--Guido van Rossum (home page: http://www.python.org/~guido/) From walter@livinglogic.de Wed Nov 6 10:48:59 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Wed, 06 Nov 2002 11:48:59 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211051503.gA5F3p818976@odiug.zope.com> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <3DC7BB28.8020006@livinglogic.de> <200211051503.gA5F3p818976@odiug.zope.com> Message-ID: <3DC8F39B.2060903@livinglogic.de> Guido van Rossum wrote: >> > Can someone provide a reason why you'd want to use nested classes? >> > I've never felt this need myself. What are the motivations? >> >>XIST (http://www.livinglogic.de/Python/xist/) uses nested classes >>to map XML element types and their attributes to Python classes. >>For example the HTML element type img looks like this in XIST: >> >>class img(Element): >> class Attrs(Element.Attrs): >> class src(URLAttr): required = True >> class alt(TextAttr): required = True >> class align(TextAttr): values = ("top", "middle", ...) > > > That's cool. But the __name__ problem becomes apparent for error handling. Checking whether required attributes are specified is the job of the nested Attrs class. The error message should include the name of the affected element, but this name is not available in the inner class (but see my workaround in the posting with the __outerclass__/__fullname__ example.) Bye, Walter Dörwald From walter@livinglogic.de Wed Nov 6 10:56:57 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Wed, 06 Nov 2002 11:56:57 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211051705.gA5H5LJ19816@odiug.zope.com> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <3DC79B9A.9010504@livinglogic.de> <200211051705.gA5H5LJ19816@odiug.zope.com> Message-ID: <3DC8F579.2060601@livinglogic.de> Guido van Rossum wrote: >>Even better would to have "X.Y.__outerclass__ is X", >>i.e. __outerclass__ as a (weak) reference to the class >>in which X.Y was defined. What I came up with is this: >> >>class nestedtype(type): >> def __new__(cls, name, bases, dict): >> dict["__outerclass__"] = None >> res = type.__new__(cls, name, bases, dict) >> for (key, value) in dict.items(): >> if isinstance(value, type): >> value.__outerclass__ = res >> return res >> >> def __fullname__(cls): >> name = cls.__name__ >> while 1: >> cls = cls.__outerclass__ >> if cls is None: >> return name >> name = cls.__name__ + "." + name >> >> def __repr__(cls): >> return "" % \ >> (cls.__module__, cls.__fullname__(), id(cls)) > > > I kind of like __fullname__ as a useful attribute (though I'd make it > an attribute or property [== computed attribute] rather than a > method). But if setting the __name__ of an inner class to > "Outer.Inner" doesn't break too much existing code, I'd prefer that -- > no new attributes needed. I guess this shouldn't be a problem. > I don't see any use for __outerclass__ *except* for computing the full > name. And there are better ways to do that if you have help from the > parser. Exactly, I only used __outerclass__, because I didn't have the help of the parser and couldn't think of any other way to implement it. 
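For readers trying this at home, here is a self-contained variant of that idea which runs as is on Python 2.2; the repr format and the Outer/Inner names are illustrative:

    class nestedtype(type):
        def __new__(cls, name, bases, dict):
            dict["__outerclass__"] = None
            res = type.__new__(cls, name, bases, dict)
            # tag every nested class that also uses this metaclass
            for value in dict.values():
                if isinstance(value, cls):
                    value.__outerclass__ = res
            return res

        def __fullname__(cls):
            # walk outward, building a dotted name
            name = cls.__name__
            while cls.__outerclass__ is not None:
                cls = cls.__outerclass__
                name = "%s.%s" % (cls.__name__, name)
            return name

        def __repr__(cls):
            return "<class %s.%s at 0x%x>" % (
                cls.__module__, cls.__fullname__(), id(cls))

    class Outer(object):
        __metaclass__ = nestedtype
        class Inner(object):
            __metaclass__ = nestedtype

    print repr(Outer.Inner)   # <class __main__.Outer.Inner at 0x...>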
What I wonder is how this will work with classes that are defined outside but assigned inside an outer class, i.e.: class NotInner: pass class Outer: Inner = NotInner Will this set NotInner.__name__ to "Outer.NotInner" or not? Bye, Walter Dörwald From mwh@python.net Wed Nov 6 11:04:57 2002 From: mwh@python.net (Michael Hudson) Date: 06 Nov 2002 11:04:57 +0000 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: Neal Norwitz's message of "Tue, 05 Nov 2002 18:23:19 -0500" References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> Message-ID: <2mpttj6js6.fsf@starship.python.net> Neal Norwitz writes: > On Tue, Nov 05, 2002 at 10:38:18PM +0100, Anders Qvist wrote: > > Between 1036491804 and 1036522446 (should be GMT 2002-11-05, 10:20 and > > 2002-11-05, 18:54 respectively), test_slice started failing with the > > following message: > > > > test test_slice failed -- [9, 7, 5, 3, 1] == [0] > > > > om moghedien; linux/alpha. There has been some changes containing > > pointer arithmetic around line 160 of sliceobjects.c, which might be > > guilty (just a shot in the dark; 64-bit clean?): Oops! This is my fault. > It is a 64-bit problem, but doesn't have to do with the checkin to > sliceobject. The new test (below) broke with old code too. > > vereq(range(10)[::sys.maxint - 1], [0]) > > The problem is that sizeof(int) != sizeof(long). The parameters for > PySlice_GetIndices() and PySlice_GetIndicesEx() take ints, not longs. > So the code is doing: > > range(10)[::-2] > > since the high 32 bits are lost. > > The only solution I can think of is to have 2 interfaces: > - the old API which takes int/int*, PySlice_GetIndices() > deprecate this API and raise overflow exception > if (int)value != (long)value > - the new API which takes long/long*, PySlice_GetIndicesEx() > > PySlice_GetIndicesEx() was added in 2.3. So you suggest changing PySlice_GetIndicesEx(), right? > Otherwise, we are stuck with slices only being able to support > 32 bits even on 64 bit architectures. > > Other ideas? I think the better idea is to call _PyEval_SliceIndex for the step element of the slice too. And maybe change the latter from else if (x < -INT_MAX) x = 0; to else if (x < -INT_MAX) x = -INT_MAX; Can you test this on a 64 bit platform or shall I just check it in? Cheers, M. -- After a heavy night I travelled on, my face toward home - the comma being by no means guaranteed. -- paraphrased from cam.misc From niemeyer@conectiva.com Wed Nov 6 13:16:42 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Wed, 6 Nov 2002 11:16:42 -0200 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: <200211052207.gA5M7Mk24781@odiug.zope.com> References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> Message-ID: <20021106111641.A1984@ibook.distro.conectiva> > Just be sure to keep the code compatible with Python 1.5.1 (see PEP > 291). Is there any reason to keep it compatible with 1.5.2 even though this module didn't existed at that release? Shouldn't PEP 291 mention 1.6.1 for sre? -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From martin@v.loewis.de Wed Nov 6 13:24:28 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 06 Nov 2002 14:24:28 +0100 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: <20021106111641.A1984@ibook.distro.conectiva> References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> <20021106111641.A1984@ibook.distro.conectiva> Message-ID: Gustavo Niemeyer writes: > > Just be sure to keep the code compatible with Python 1.5.1 (see PEP > > 291). > > Is there any reason to keep it compatible with 1.5.2 even though this > module didn't existed at that release? Shouldn't PEP 291 mention 1.6.1 > for sre? That is precisely the reason to keep it compatible: People might want to install SRE in Python 1.5.2, if they have code that uses SRE. I believe Fredrik Lundh still provides a package of SRE to install on Python 1.5 (without Unicode support, of course). He wants this package to be line-for-line identical with the Python CVS. Regards, Martin From niemeyer@conectiva.com Wed Nov 6 13:30:02 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Wed, 6 Nov 2002 11:30:02 -0200 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> <20021106111641.A1984@ibook.distro.conectiva> Message-ID: <20021106113002.B2448@ibook.distro.conectiva> > That is precisely the reason to keep it compatible: People might want > to install SRE in Python 1.5.2, if they have code that uses SRE. I > believe Fredrik Lundh still provides a package of SRE to install on > Python 1.5 (without Unicode support, of course). He wants this package > to be line-for-line identical with the Python CVS. It's nice to have sre for 1.5.2 indeed. Well, it was just out of curiosity, because there's no reason to break compatibility anyway. Thanks for explaining. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From neal@metaslash.com Wed Nov 6 13:32:45 2002 From: neal@metaslash.com (Neal Norwitz) Date: Wed, 06 Nov 2002 08:32:45 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: <2mpttj6js6.fsf@starship.python.net> References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <2mpttj6js6.fsf@starship.python.net> Message-ID: <20021106133245.GI14715@epoch.metaslash.com> On Wed, Nov 06, 2002 at 11:04:57AM +0000, Michael Hudson wrote: > > So you suggest changing PySlice_GetIndicesEx(), right? Yes. But based on Guido's comments and some thinking, I don't think the API should change to longs. Your solution below make sense, but didn't work. > I think the better idea is to call _PyEval_SliceIndex for the step > element of the slice too. And maybe change the latter from > > else if (x < -INT_MAX) > x = 0; > > to > > else if (x < -INT_MAX) > x = -INT_MAX; > > Can you test this on a 64 bit platform or shall I just check it in? Tested on SF compile farm (ssh to compile.sf.net), they have a very fast Alpha. 
Here's the trimmed down diff: Index: Python/ceval.c (in _PyEval_SliceIndex) @@ -3507,7 +3507,7 @@ else if (x < -INT_MAX) - x = 0; + x = -INT_MAX; *pi = x; Index: Objects/sliceobject.c @@ -119,7 +119,7 @@ if (r->step == Py_None) { *step = 1; } else { - *step = PyInt_AsLong(r->step); + if (!_PyEval_SliceIndex(r->step, step)) return -1; if (*step == -1 && PyErr_Occurred()) { return -1; But this didn't produce the correct results: >>> range(10)[::sys.maxint-1] [] Neal From mwh@python.net Wed Nov 6 13:56:54 2002 From: mwh@python.net (Michael Hudson) Date: 06 Nov 2002 13:56:54 +0000 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: Neal Norwitz's message of "Wed, 06 Nov 2002 08:32:45 -0500" References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <2mpttj6js6.fsf@starship.python.net> <20021106133245.GI14715@epoch.metaslash.com> Message-ID: <2m1y5ypzrt.fsf@starship.python.net> Neal Norwitz writes: > On Wed, Nov 06, 2002 at 11:04:57AM +0000, Michael Hudson wrote: > > > > So you suggest changing PySlice_GetIndicesEx(), right? > > Yes. But based on Guido's comments and some thinking, > I don't think the API should change to longs. Your > solution below make sense, but didn't work. Nuts. > > I think the better idea is to call _PyEval_SliceIndex for the step > > element of the slice too. And maybe change the latter from > > > > else if (x < -INT_MAX) > > x = 0; > > > > to > > > > else if (x < -INT_MAX) > > x = -INT_MAX; > > > > Can you test this on a 64 bit platform or shall I just check it in? > > Tested on SF compile farm (ssh to compile.sf.net), they have a very > fast Alpha. > > Here's the trimmed down diff: > > Index: Python/ceval.c (in _PyEval_SliceIndex) > @@ -3507,7 +3507,7 @@ > else if (x < -INT_MAX) > - x = 0; > + x = -INT_MAX; > *pi = x; > > Index: Objects/sliceobject.c > @@ -119,7 +119,7 @@ > if (r->step == Py_None) { > *step = 1; > } else { > - *step = PyInt_AsLong(r->step); > + if (!_PyEval_SliceIndex(r->step, step)) return -1; > if (*step == -1 && PyErr_Occurred()) { > return -1; > > But this didn't produce the correct results: > > >>> range(10)[::sys.maxint-1] > [] Wierd. Shall dig. Nice how you get warnings from the system header files on that alpha isn't it? ... Works for me! mwh@usf-cf-alpha-linux-1:~/python$ ./python Python 2.3a0 (#2, Nov 6 2002, 05:44:13) [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> range(10)[::sys.maxint - 2] Traceback (most recent call last): File "", line 1, in ? NameError: name 'sys' is not defined >>> import sys >>> range(10)[::sys.maxint - 2] [0] >>> Try again? Can you try playing with gdb to see what's going on? Cheers, M. -- -Dr. Olin Shivers, Ph.D., Cranberry-Melon School of Cucumber Science -- seen in comp.lang.scheme From neal@metaslash.com Wed Nov 6 13:56:54 2002 From: neal@metaslash.com (Neal Norwitz) Date: Wed, 06 Nov 2002 08:56:54 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: <200211060239.gA62dIa26126@pcp02138704pcs.reston01.va.comcast.net> References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <200211060239.gA62dIa26126@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021106135654.GJ14715@epoch.metaslash.com> On Tue, Nov 05, 2002 at 09:39:18PM -0500, Guido van Rossum wrote: > > > PySlice_GetIndicesEx() was added in 2.3. > > You meant "must be added" right? 
No, this was an API Michael added when doing the extended slicing, AFAIK. > > Otherwise, we are stuck with slices only being able to support > > 32 bits even on 64 bit architectures. > > But *everything* having to do with sequences only supports 32 bits: > ob_size, PySequence_Length() the arguments to PySequence_GetItem(), > etc. > > Unless you want to fix all those (breaking backwards compatibility), > I'm not sure why you'd want to fix PySlice_GetIndices()... You're right. This gets me thinking about starting to remove the 32 bit limitation... I looked at how hard it would be to support 64 bit sequences. If we did this, 64 bit users would be able to use long sequences at the expense of backwards compatibility. Also, we would need something like Py_seq_len, use a configure option, and all the ob_size, arguments etc. would have to use the new type name. I did a grep for int, there were a ton. But it looks like those used for sequence lengths were much fewer. I suppose this change could be accomplished in stages. Step 1, typedef int Py_seq_len; for all platforms. Step 2, start using it for the proper APIs in the header files. Step 3, change the C files. Repeat steps 2 & 3 until done. I'm not volunteering, of course. :-) I don't need this and have no system to use for testing. So this is all academic at best. Neal From neal@metaslash.com Wed Nov 6 14:12:44 2002 From: neal@metaslash.com (Neal Norwitz) Date: Wed, 06 Nov 2002 09:12:44 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: <2m1y5ypzrt.fsf@starship.python.net> References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <2mpttj6js6.fsf@starship.python.net> <20021106133245.GI14715@epoch.metaslash.com> <2m1y5ypzrt.fsf@starship.python.net> Message-ID: <20021106141244.GK14715@epoch.metaslash.com> On Wed, Nov 06, 2002 at 01:56:54PM +0000, Michael Hudson wrote: > > But this didn't produce the correct results: > > > > >>> range(10)[::sys.maxint-1] > > [] > > Wierd. Shall dig. > > Nice how you get warnings from the system header files on that alpha > isn't it? :-) Quite nice. > Works for me! Hmmm. The only difference I can see is that I'm running a debug build. > mwh@usf-cf-alpha-linux-1:~/python$ ./python > Python 2.3a0 (#2, Nov 6 2002, 05:44:13) > [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import sys > >>> range(10)[::sys.maxint - 2] > [0] > > Try again? Can you try playing with gdb to see what's going on? I built again w/o debug and it made no difference. Both times I did a make clean first. I'll take a look later. If it works for you, go ahead and check in. Maybe we patched something different. Neal From mwh@python.net Wed Nov 6 15:11:01 2002 From: mwh@python.net (Michael Hudson) Date: 06 Nov 2002 15:11:01 +0000 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: Neal Norwitz's message of "Wed, 06 Nov 2002 09:12:44 -0500" References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <2mpttj6js6.fsf@starship.python.net> <20021106133245.GI14715@epoch.metaslash.com> <2m1y5ypzrt.fsf@starship.python.net> <20021106141244.GK14715@epoch.metaslash.com> Message-ID: <2mr8dyvim2.fsf@starship.python.net> Neal Norwitz writes: > > Works for me! > > Hmmm. The only difference I can see is that I'm running a debug build. Tried that. No change. 
> > mwh@usf-cf-alpha-linux-1:~/python$ ./python > > Python 2.3a0 (#2, Nov 6 2002, 05:44:13) > > [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import sys > > >>> range(10)[::sys.maxint - 2] > > [0] > > > > Try again? Can you try playing with gdb to see what's going on? > > I built again w/o debug and it made no difference. Both times > I did a make clean first. I'll take a look later. You have run cvs up on the compile farm in the last day? You might be seeing the first bug... > If it works for you, go ahead and check in. Maybe we patched > something different. Will check in momentarily. Cheers, M. -- If i don't understand lisp, it would be wise to not bray about how lisp is stupid or otherwise criticize, because my stupidity would be archived and open for all in the know to see. -- Xah, comp.lang.lisp From guido@python.org Wed Nov 6 16:06:04 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 06 Nov 2002 11:06:04 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: Your message of "Wed, 06 Nov 2002 08:56:54 EST." <20021106135654.GJ14715@epoch.metaslash.com> References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <200211060239.gA62dIa26126@pcp02138704pcs.reston01.va.comcast.net> <20021106135654.GJ14715@epoch.metaslash.com> Message-ID: <200211061606.gA6G64s27802@pcp02138704pcs.reston01.va.comcast.net> > I looked at how hard it would be to support 64 bit sequences. > > If we did this, 64 bit users would be able to use long sequences at > the expense of backwards compatibility. Also, we would need something > like Py_seq_len, use a configure option, and all the ob_size, > arguments etc. would have to use the new type name. > > I did a grep for int, there were a ton. But it looks like > those used for sequence lengths were much fewer. > > I suppose this change could be accomplished in stages. > > Step 1, typedef int Py_seq_len; for all platforms. > Step 2, start using it for the proper APIs in the header files. > Step 3, change the C files. > Repeat steps 2 & 3 until done. > > I'm not volunteering, of course. :-) I don't need this and have no > system to use for testing. So this is all academic at best. I have no time to do it either, but I hereby officially pronounce that I wouldn't mind breaking binary compatibility for this, once, at 2.3 or 2.4. Maybe someone gets inspired. The proper type should probably be size_t or perhaps ssize_t, BTW. Also, Python ints should use an integral type that's at least as wide as a pointer; at least on Windows64, long is 32 bits but pointers are 64 bits, so long isn't it. If Python ints could be 64 bits even on 32 bit platforms (if long long is supported) that would be cool to. It could possibly break more stuff if sys.maxint doesn't fit in a C long -- but that would already be a problem for Win64. --Guido van Rossum (home page: http://www.python.org/~guido/) From aleax@aleax.it Wed Nov 6 16:05:51 2002 From: aleax@aleax.it (Alex Martelli) Date: Wed, 6 Nov 2002 17:05:51 +0100 Subject: [Python-Dev] would warnings be appropriate for probably-misused global statements? 
Message-ID: I've recently helped clarify a few users' confusion about what the "global" statement is for, and I'm starting to think that it might help overcome that confusion if "global" gave warnings when used in probably-inappropriate ways: not inside a function, or about a variable that the function does not, in fact, bind nor re-bind. Is there agreement that this might be helpful? Alex From skip@pobox.com Wed Nov 6 16:21:48 2002 From: skip@pobox.com (Skip Montanaro) Date: Wed, 6 Nov 2002 10:21:48 -0600 Subject: [Python-Dev] would warnings be appropriate for probably-misused global statements? In-Reply-To: References: Message-ID: <15817.16796.358733.404398@montanaro.dyndns.org> Alex> I've recently helped clarify a few users' confusion about what the Alex> "global" statement is for, and I'm starting to think that it might Alex> help overcome that confusion if "global" gave warnings when used Alex> in probably-inappropriate ways: not inside a function, or about a Alex> variable that the function does not, in fact, bind nor re-bind. Alex> Is there agreement that this might be helpful? Yes, but perhaps it should be left to pychecker. Skip From mchermside@ingdirect.com Wed Nov 6 18:33:31 2002 From: mchermside@ingdirect.com (Chermside, Michael) Date: Wed, 6 Nov 2002 13:33:31 -0500 Subject: [Python-Dev] would warnings be appropriate for probably-misused global statements? Message-ID: <902A1E710FEAB740966EC991C3A38A8903C2789A@INGDEXCHANGEC1.ingdirect.com> > I'm starting to think that it might help overcome that=20 > confusion if "global" gave warnings when used in = probably-inappropriate ways:=20 > not inside a function, or about a variable that the function does not, = in=20 > fact, bind nor re-bind. Is there agreement that this might be = helpful? That sounds helpful to me. Essentially you're declaring that it's an error (well... warning) to use the global statement in a context where it can perform no possible function. And given the name, it's an easy mistake for newbies to make... assuming that all "global variables" need the "global" statement. -- Michael Chermside From guido@python.org Wed Nov 6 18:42:59 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 06 Nov 2002 13:42:59 -0500 Subject: [Python-Dev] metaclass insanity In-Reply-To: Your message of "Wed, 06 Nov 2002 11:56:57 +0100." <3DC8F579.2060601@livinglogic.de> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <3DC79B9A.9010504@livinglogic.de> <200211051705.gA5H5LJ19816@odiug.zope.com> <3DC8F579.2060601@livinglogic.de> Message-ID: <200211061842.gA6Igxb28233@pcp02138704pcs.reston01.va.comcast.net> > What I wonder is how this will work with classes that are defined > outside but assigned inside an outer class, i.e.: > > class NotInner: > pass > > class Outer: > Inner = NotInner > > Will this set NotInner.__name__ to "Outer.NotInner" or not? __name__ should be set to reflect the lexical position of the class statement. What you do with assignment is your business. Thanks for any work you can do towards implementing this! --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Wed Nov 6 18:55:52 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 06 Nov 2002 13:55:52 -0500 Subject: [Python-Dev] would warnings be appropriate for probably-misused global statements? In-Reply-To: Your message of "Wed, 06 Nov 2002 13:33:31 EST." 
<902A1E710FEAB740966EC991C3A38A8903C2789A@INGDEXCHANGEC1.ingdirect.com> References: <902A1E710FEAB740966EC991C3A38A8903C2789A@INGDEXCHANGEC1.ingdirect.com> Message-ID: <200211061855.gA6ItqU28336@pcp02138704pcs.reston01.va.comcast.net> > > I'm starting to think that it might help overcome that > > confusion if "global" gave warnings when used in probably-inappropriate ways: > > not inside a function, or about a variable that the function does not, in > > fact, bind nor re-bind. Is there agreement that this might be helpful? > > That sounds helpful to me. Essentially you're declaring that it's an > error (well... warning) to use the global statement in a context where > it can perform no possible function. And given the name, it's an easy > mistake for newbies to make... assuming that all "global variables" need > the "global" statement. It should be a PyChecker function, definitely. I'm not sure if Python itself should warn about this. The needed analysis isn't easily available (at least not in the current parser). And there are lots of similar things that we should be warning about, if we're going to warn about this one. I'm not prepared (yet) to incorporate large-scale PyChecker's functionality in the Python parser. --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy@udel.edu Wed Nov 6 18:39:22 2002 From: tjreedy@udel.edu (Terry Reedy) Date: Wed, 6 Nov 2002 13:39:22 -0500 Subject: [Python-Dev] Re: would warnings be appropriate for probably-misused global statements? References: Message-ID: "Alex Martelli" wrote in message news:E189ShF-00064c-00@mail.python.org... > I've recently helped clarify a few users' confusion about what the "global" > statement is for, and I'm starting to think that it might help overcome that > confusion if "global" gave warnings when used in probably-inappropriate ways: > not inside a function, Yes. This could even be made a syntax error, as with other contextual statements: >>> continue SyntaxError: 'continue' not properly in loop >>> break SyntaxError: 'break' outside loop # these messages should be same >>> return 0 SyntaxError: 'return' outside function # so why not " 'global' outside function" > or about a variable that the function does not, in > fact, bind nor re-bind. No. This can serve as documentation and/or preparation/protection for possible revision that does rebind. Terry J. Reedy From neal@metaslash.com Wed Nov 6 22:30:33 2002 From: neal@metaslash.com (Neal Norwitz) Date: Wed, 06 Nov 2002 17:30:33 -0500 Subject: [Python-Dev] Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0] In-Reply-To: <2mr8dyvim2.fsf@starship.python.net> References: <20021105213817.GA26245@lysator.liu.se> <20021105232319.GE14715@epoch.metaslash.com> <2mpttj6js6.fsf@starship.python.net> <20021106133245.GI14715@epoch.metaslash.com> <2m1y5ypzrt.fsf@starship.python.net> <20021106141244.GK14715@epoch.metaslash.com> <2mr8dyvim2.fsf@starship.python.net> Message-ID: <20021106223033.GL14715@epoch.metaslash.com> On Wed, Nov 06, 2002 at 03:11:01PM +0000, Michael Hudson wrote: > > I built again w/o debug and it made no difference. Both times > > I did a make clean first. I'll take a look later. > > You have run cvs up on the compile farm in the last day? You might be > seeing the first bug... Yes. At least I thought I did. > > If it works for you, go ahead and check in. Maybe we patched > > something different. > > Will check in momentarily. I tested your change and it worked fine. 
The snake farm is also happy again (as happy as it was, there are still AIX problems). Thanks! Neal From martin@v.loewis.de Wed Nov 6 22:52:03 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 06 Nov 2002 23:52:03 +0100 Subject: [Python-Dev] Re: would warnings be appropriate for probably-misused global statements? In-Reply-To: References: Message-ID: "Terry Reedy" writes: > Yes. This could even be made a syntax error, as with other contextual > statements: This somewhat contradicts Guido's judgement that this would not be easy to implement. Can you come up with a patch? Regards, Martin From guido@python.org Wed Nov 6 23:17:31 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 06 Nov 2002 18:17:31 -0500 Subject: [Python-Dev] Re: would warnings be appropriate for probably-misused global statements? In-Reply-To: Your message of "06 Nov 2002 23:52:03 +0100." References: Message-ID: <200211062317.gA6NHVC11882@pcp02138704pcs.reston01.va.comcast.net> > > Yes. This could even be made a syntax error, as with other contextual > > statements: > > This somewhat contradicts Guido's judgement that this would not be > easy to implement. Can you come up with a patch? Alex suggested two different things to test for: global outside a function, and global for a variable that is not actually set. Global outside a function would seem easy to detect (at least as easy as return outside a function), but you need to be careful: in a string passed to the exec statement, one could have distinct locals and globals, and then 'global x' outside a function would make sense. Also, 'global x' makes sense inside a class statement. Global for a variable that's not set should at best trigger a warning. To detect this, you'd have to analyze all the assignments in a block. That currently happens as part of the local variable determination, and it may be possible to fold this in; I'm not clear how easy it would be to get the error message refer to the correct line number though. But I'm not excited about this. A similar amount of analysis can discover assigning to a variable that's not used. Isn't that more useful? Using a name that's not defined anywhere. Etc. There are tons of these things. PyChecker watches for all of them. I'd prefer to make a project out of integrating PyChecker functionality into Python, eventually, rather than attacking random things one at a time -- the more of these we do in an ad-hoc fashion, the harder it will be to do in a more systematic way. --Guido van Rossum (home page: http://www.python.org/~guido/) From pobrien@orbtech.com Thu Nov 7 00:54:19 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Wed, 6 Nov 2002 18:54:19 -0600 Subject: [Python-Dev] inspect.getargspec() Message-ID: <200211061854.19601.pobrien@orbtech.com> I was patching up some bugs in the inspect module, and I've reached a particular bug where I'd like some advice. The bug on SF is # 620190 and it relates to the fact that inspect.getargspec() fails when called with a method, rather than a function. I've got the code that fixes this (it's actually been in PyCrust for a while) but fixing it also changes the semantics, because getargspec() is clear that it only works on functions. Of course, expanding getargspec() to work on any callable is fairly trivial, so I see no reason why it should be limited to functions. In any case, I'll outline some of the issues below and post the code I've been using in PyCrust. 
I'm hoping that someone from the inner circle will work with me to update the module, update the unit tests, and check the changes in (or create patches). I'd hate to work up a patch just to find out that my solution wasn't appropriate.

The main issues are:

* Is there any reason to not expand the scope of getargspec()?

* For builtin functions whose arguments cannot be determined (at least not that I know of) what should getargspec() do? Should it raise an error or return empty values?

* For a bound method, should getargspec() include the first argument (usually self) since Python passes that value implicitly? Perhaps there should be an optional argument to toggle this.

Here is what the current getargspec() looks like:

def getargspec(func):
    """Get the names and default values of a function's arguments.

    A tuple of four things is returned: (args, varargs, varkw, defaults).
    'args' is a list of the argument names (it may contain nested lists).
    'varargs' and 'varkw' are the names of the * and ** arguments or None.
    'defaults' is an n-tuple of the default values of the last n arguments."""

    if not isfunction(func):
        raise TypeError, 'arg is not a Python function'
    args, varargs, varkw = getargs(func.func_code)
    return args, varargs, varkw, func.func_defaults

Here is what I use in PyCrust to get the argument spec for any callable (at least the ones that have an argspec; builtin functions are still an issue). Basically, I take the object in question and run it through the following function to get an object that will work with inspect.getargspec(). I'm also determining whether I should drop the first argument (self) when I display the calltip for this object, since Python passes that implicitly on bound methods:

def getBaseObject(object):
    """Return base object and dropSelf indicator for an object."""
    if inspect.isbuiltin(object):
        # Builtin functions don't have an argspec that we can get.
        dropSelf = 0
    elif inspect.ismethod(object):
        # Get the function from the object otherwise inspect.getargspec()
        # complains that the object isn't a Python function.
        try:
            if object.im_self is None:
                # This is an unbound method so we do not drop self from the
                # argspec, since an instance must be passed as the first arg.
                dropSelf = 0
            else:
                dropSelf = 1
            object = object.im_func
        except AttributeError:
            dropSelf = 0
    elif inspect.isclass(object):
        # Get the __init__ method function for the class.
        constructor = getConstructor(object)
        if constructor is not None:
            object = constructor
            dropSelf = 1
        else:
            dropSelf = 0
    elif callable(object):
        # Get the __call__ method instead.
        try:
            object = object.__call__.im_func
            dropSelf = 1
        except AttributeError:
            dropSelf = 0
    else:
        dropSelf = 0
    return object, dropSelf

def getConstructor(object):
    """Return constructor for class object, or None if there isn't one."""
    try:
        return object.__init__.im_func
    except AttributeError:
        for base in object.__bases__:
            constructor = getConstructor(base)
            if constructor is not None:
                return constructor
    return None

And here is a snippet of the code that makes use of the previous functions, just for completeness:

name = ''
object, dropSelf = getBaseObject(object)
try:
    name = object.__name__
except AttributeError:
    pass
tip1 = ''
argspec = ''
if inspect.isbuiltin(object):
    # Builtin functions don't have an argspec that we can get.
    pass
elif inspect.isfunction(object):
    # tip1 is a string like: "getCallTip(command='', locals=None)"
    argspec = apply(inspect.formatargspec, inspect.getargspec(object))
    if dropSelf:
        # The first parameter to a method is a reference to an
        # instance, usually coded as "self", and is usually passed
        # automatically by Python and therefore we want to drop it.
        temp = argspec.split(',')
        if len(temp) == 1:
            # No other arguments.
            argspec = '()'
        else:
            # Drop the first argument.
            argspec = '(' + ','.join(temp[1:]).lstrip()
tip1 = name + argspec

Thanks in advance for the help.

-- 
Patrick K. O'Brien
Orbtech
http://www.orbtech.com/web/pobrien
-----------------------------------------------
"Your source for Python programming expertise."
-----------------------------------------------

From mgilfix@eecs.tufts.edu Thu Nov 7 01:30:34 2002
From: mgilfix@eecs.tufts.edu (Michael Gilfix)
Date: Wed, 6 Nov 2002 20:30:34 -0500
Subject: [Python-Dev] Getting python-bz2 into 2.3
In-Reply-To: <20021101111333.E12803@ibook.distro.conectiva>
References: <20021031204732.B32673@ibook.distro.conectiva> <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> <20021101111333.E12803@ibook.distro.conectiva>
Message-ID: <20021107013033.GC2347@eecs.tufts.edu>

I know I'm a little behind in my dev mail, per usual these days, but I just want to say that this is a good thing (tm). I actually have need for bzip2 in an application and would be quite happy for its support.

In a related question, does Python have tar support (I haven't had a chance to investigate thoroughly)? I got the impression it didn't, which is kinda weird since zip is a supported archive format, and gz and bz2 will be, but there's no unix-y tar. But I'm sure someone will correct me and say I'm wrong :)

Regards,

-- Mike

On Fri, Nov 01 @ 11:13, Gustavo Niemeyer wrote:
> > If there are no licensing issues, and if there's a decent bz2 library,
> > it should be welcome in the core. Batteries included.
>
> Great! This module is currently distributed under LGPL, but I have no
> problems in changing it to Python's license. About being decent, I'd
> be suspect to evaluate my own code. I accept comments and suggestions
> about it though.
>
> --
> Gustavo Niemeyer
>
> [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev

-- Michael Gilfix mgilfix@eecs.tufts.edu

For my gpg public key: http://www.eecs.tufts.edu/~mgilfix/contact.html

From niemeyer@conectiva.com Thu Nov 7 01:45:40 2002
From: niemeyer@conectiva.com (Gustavo Niemeyer)
Date: Wed, 6 Nov 2002 23:45:40 -0200
Subject: [Python-Dev] ConfigParser with a single section
Message-ID: <20021106234540.A2116@ibook.distro.conectiva>

Patch #549037 implements an interesting feature: it allows one to have configuration files without headers. I'd like to know what's your opinion about this issue, so that we can either forget the issue or implement the functionality, closing that bug. A few comments about the issue:

I would use a different implementation than the one provided in the patch. The patch includes a new parameter in read functions, stating what's the first section name. It means that we could have other sections after the first unheaded section. IMO, that situation should still be considered an error.

One possible way to implement it is to include a "noheaders" boolean parameter for the constructor.
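Just to illustrate the idea (nothing of this exists yet; the flag name and the implicit section name "main" are only placeholders):

    import ConfigParser

    cfg = ConfigParser.ConfigParser(noheaders=1)     # hypothetical flag
    cfg.read("/etc/sysconfig/network-scripts/ifcfg-lo")
    print cfg.get("main", "ipaddr")                  # options land in one fixed section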
Then, the user would have to know what's the standard single section name, to pass it to functions like get(). Another way would be to include something like a "singlesection" parameter in the constructor. This parameter would accept a string option, which would name the single section. As an argument against the whole issue, I'm not sure how unconfortable it is to simply include a header in the file to satisfy the parser. As an argument favorable, this could allow ConfigParser to parse simple (no escapes or variables) shell configuration files and other simple configurations using NAME=VALUE style. What's your suggestion? -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From tim.one@comcast.net Thu Nov 7 02:32:16 2002 From: tim.one@comcast.net (Tim Peters) Date: Wed, 06 Nov 2002 21:32:16 -0500 Subject: [Python-Dev] inspect.getargspec() In-Reply-To: <200211061854.19601.pobrien@orbtech.com> Message-ID: [Patrick K. O'Brien] > ... > * Is there any reason to not expand the scope of getargspec()? No -- go for it. > * For builtin functions whose arguments cannot be determined (at > least not that I know of) Those implemented in C are indeed hopeless; I expect that's all of them. > what should getargspec() do? Should it raise an error or return empty > values? What does it do now? Don't rock the boot needlessly . > * For a bound method, should getargspec() include the first argument > (usually self) since Python passes that value implicitly? Unfortunately, Python is inconsistent about this. On the one hand, >>> x = [] >>> push = x.append >>> push() Traceback (most recent call last): File "", line 1, in ? TypeError: append() takes exactly one argument (0 given) >>> On the other, >>> class X(list): ... def append(self, item): ... pass ... >>> push = X().append >>> push() Traceback (most recent call last): File "", line 1, in ? TypeError: append() takes exactly 2 arguments (1 given) >>> I care about matching the error msgs because error msgs are a prime trigger for looking up help. OTOH, when the error msgs are inconsistent, you can't win. I'd count self, myself, as you can only suck the signature out of Python-defined callables, and those seem consistent about reporting the "real" number of arguments required. > Perhaps there should be a optional argument to toggle this. Bleech. no-time-for-more-ly y'rs - tim From tim.one@comcast.net Thu Nov 7 02:40:56 2002 From: tim.one@comcast.net (Tim Peters) Date: Wed, 06 Nov 2002 21:40:56 -0500 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: <20021106234540.A2116@ibook.distro.conectiva> Message-ID: [Gustavo Niemeyer] > Patch #549037 implements an interesting feature: it allows one to have > configuration files without headers. I'd like to know what's your > opinion about this issue, so that we can either forget the issue or > implement the functionality, closing that bug. -0. Complicates the code and the docs and the mental model to save what, 3 measly characters of typing (e.g., "[x]")? From drifty@bigfoot.com Thu Nov 7 03:33:57 2002 From: drifty@bigfoot.com (Brett Cannon) Date: Wed, 6 Nov 2002 19:33:57 -0800 (PST) Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: <20021106234540.A2116@ibook.distro.conectiva> Message-ID: [Gustavo Niemeyer] > Patch #549037 implements an interesting feature: it allows one to have > configuration files without headers. 
I'd like to know what's your > opinion about this issue, so that we can either forget the issue or > implement the functionality, closing that bug. > I vote for only doing it if the patch does not complicate the code. Adding one '[section]' header is not that painful. But perhaps this conversation should continue on SF? -Brett From pobrien@orbtech.com Thu Nov 7 03:58:51 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Wed, 6 Nov 2002 21:58:51 -0600 Subject: [Python-Dev] inspect.getargspec() In-Reply-To: References: Message-ID: <200211062158.51253.pobrien@orbtech.com> On Wednesday 06 November 2002 08:32 pm, Tim Peters wrote: > Unfortunately, Python is inconsistent about this. On the one hand, > > >>> x = [] > >>> push = x.append > >>> push() > > Traceback (most recent call last): > File "", line 1, in ? > TypeError: append() takes exactly one argument (0 given) > > > On the other, > > >>> class X(list): > > ... def append(self, item): > ... pass > ... > > >>> push = X().append > >>> push() > > Traceback (most recent call last): > File "", line 1, in ? > TypeError: append() takes exactly 2 arguments (1 given) > > > I care about matching the error msgs because error msgs are a prime > trigger for looking up help. OTOH, when the error msgs are inconsistent, > you can't win. I'd count self, myself, as you can only suck the > signature out of Python-defined callables, and those seem consistent > about reporting the "real" number of arguments required. Except the net (args required minus args given) is the same in both cases. Which would argue for not including self in the results of getargspec(). Plus, you'll get an error if you try to explicitly pass the first argument. So while technically self *is* part of the argspec, I think in practice most applications will end up wanting to eliminate it because Python takes care of it implicitly. If that turns out to be 90% of the use cases, we save everyone a lot of trouble by taking the practical approach and not including self in the results returned by getargspec(). -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From ping@zesty.ca Thu Nov 7 10:19:22 2002 From: ping@zesty.ca (Ka-Ping Yee) Date: Thu, 7 Nov 2002 04:19:22 -0600 (CST) Subject: [Python-Dev] inspect.getargspec() In-Reply-To: <200211062158.51253.pobrien@orbtech.com> Message-ID: On Wed, 6 Nov 2002, Patrick K. O'Brien wrote: > Except the net (args required minus args given) is the same in both cases. > Which would argue for not including self in the results of getargspec(). > Plus, you'll get an error if you try to explicitly pass the first argument. > So while technically self *is* part of the argspec, I think in practice > most applications will end up wanting to eliminate it because Python takes > care of it implicitly. I think it would make the most sense to display the signature you would actually use when calling the method object. That is, if the method is unbound, then 'self' should appear. If the method is bound, then 'self' should not appear. -- ?!ng From guido@python.org Thu Nov 7 15:19:58 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 07 Nov 2002 10:19:58 -0500 Subject: [Python-Dev] inspect.getargspec() In-Reply-To: Your message of "Wed, 06 Nov 2002 21:32:16 EST." References: Message-ID: <200211071519.gA7FJwR27292@odiug.zope.com> > Unfortunately, Python is inconsistent about this. 
On the one hand, > > >>> x = [] > >>> push = x.append > >>> push() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: append() takes exactly one argument (0 given) > >>> > > On the other, > > >>> class X(list): > ... def append(self, item): > ... pass > ... > >>> push = X().append > >>> push() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: append() takes exactly 2 arguments (1 given) > >>> > > I care about matching the error msgs because error msgs are a prime > trigger for looking up help. OTOH, when the error msgs are > inconsistent, you can't win. I'd count self, myself, as you can > only suck the signature out of Python-defined callables, and those > seem consistent about reporting the "real" number of arguments > required. I'm with Ping -- the signature returned should only report arguments you need to supply. So in the above, the signature for X.append would be (self, item) but the signature for X().append would be (item). --Guido van Rossum (home page: http://www.python.org/~guido/) From pobrien@orbtech.com Thu Nov 7 16:15:43 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Thu, 7 Nov 2002 10:15:43 -0600 Subject: [Python-Dev] inspect.getargspec() In-Reply-To: <200211071519.gA7FJwR27292@odiug.zope.com> References: <200211071519.gA7FJwR27292@odiug.zope.com> Message-ID: <200211071015.43100.pobrien@orbtech.com> On Thursday 07 November 2002 09:19 am, Guido van Rossum wrote: > I'm with Ping -- the signature returned should only report arguments > you need to supply. So in the above, the signature for X.append would > be (self, item) but the signature for X().append would be (item). And that's what my current code does. So I'll work up a patch in the next day or two and make this functionality part of inspect. Any opinions on how to handle builtins? Should getargspec() return empty values or raise an error? The current version raises TypeError on anything that fails isinstance(object, types.FunctionType). The new version will support functions, bound methods, unbound methods, classes (which returns the contructor's arguments), and objects with a __call__() method (and will drop self as appropriate for all permutations of the preceding). It doesn't seem right to me to raise an error for a builtin, especially if there will come a day when we *can* introspect the arguments for builtins. I'd rather return empty values and just document the fact that we don't have a way to return the argspec on builtins yet. Yay, nay, other? Neither solution is entirely satisfactory. Is there no trick that could be employed to expose the argspec for builtins? -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." 
----------------------------------------------- From walter@livinglogic.de Thu Nov 7 16:39:07 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Thu, 07 Nov 2002 17:39:07 +0100 Subject: [Python-Dev] metaclass insanity In-Reply-To: <200211061842.gA6Igxb28233@pcp02138704pcs.reston01.va.comcast.net> References: <200210311858.g9VIw0509968@odiug.zope.com> <2mbs55b6qx.fsf@starship.python.net> <200211041236.59893.pobrien@orbtech.com> <200211042054.gA4KsmP21986@pcp02138704pcs.reston01.va.comcast.net> <3DC79B9A.9010504@livinglogic.de> <200211051705.gA5H5LJ19816@odiug.zope.com> <3DC8F579.2060601@livinglogic.de> <200211061842.gA6Igxb28233@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DCA972B.7020705@livinglogic.de> Guido van Rossum wrote: >>What I wonder is how this will work with classes that are defined >>outside but assigned inside an outer class, i.e.: >> >>class NotInner: >> pass >> >>class Outer: >> Inner = NotInner >> >>Will this set NotInner.__name__ to "Outer.NotInner" or not? > > __name__ should be set to reflect the lexical position of the class > statement. What you do with assignment is your business. > > Thanks for any work you can do towards implementing this! I'm not sure I'm up to the task, as I've never messed with the Python parser before. Bye, Walter Dörwald From jeremy@alum.mit.edu Thu Nov 7 16:40:40 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Thu, 7 Nov 2002 11:40:40 -0500 Subject: [Python-Dev] Re: would warnings be appropriate for probably-misused global statements? In-Reply-To: <200211062317.gA6NHVC11882@pcp02138704pcs.reston01.va.comcast.net> References: <200211062317.gA6NHVC11882@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <15818.38792.348035.434980@slothrop.zope.com> >>>>> "GvR" == Guido van Rossum writes: GvR> Global outside a function would seem easy to detect (at least GvR> as easy as return outside a function), but you need to be GvR> careful: in a string passed to the exec statement, one could GvR> have distinct locals and globals, and then 'global x' outside a GvR> function would make sense. Also, 'global x' makes sense inside GvR> a class statement. IIRC I added a warning/error to the compiler about global at the module level, but later removed it because it was too hard (impossible?) to determine whether a block of code was being compiled to exec or being compiled for the body of a module. Anyone interested could check the CVS log for compile.c. GvR> But I'm not excited about this. A similar amount of analysis GvR> can discover assigning to a variable that's not used. Isn't GvR> that more useful? Using a name that's not defined anywhere. GvR> Etc. There are tons of these things. PyChecker watches for GvR> all of them. I'd prefer to make a project out of integrating GvR> PyChecker functionality into Python, eventually, rather than GvR> attacking random things one at a time I didn't check myself, because I agree with you on this point. Jeremy From guido@python.org Thu Nov 7 16:39:12 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 07 Nov 2002 11:39:12 -0500 Subject: [Python-Dev] inspect.getargspec() In-Reply-To: Your message of "Thu, 07 Nov 2002 10:15:43 CST." <200211071015.43100.pobrien@orbtech.com> References: <200211071519.gA7FJwR27292@odiug.zope.com> <200211071015.43100.pobrien@orbtech.com> Message-ID: <200211071639.gA7GdCj27966@odiug.zope.com> > Any opinions on how to handle builtins? Should getargspec() return > empty values or raise an error? 
The current version raises TypeError > on anything that fails isinstance(object, types.FunctionType). The > new version will support functions, bound methods, unbound methods, > classes (which returns the contructor's arguments), and objects with > a __call__() method (and will drop self as appropriate for all > permutations of the preceding). It should raise TypeError if it can't find the info requested. > It doesn't seem right to me to raise an error for a builtin, > especially if there will come a day when we *can* introspect the > arguments for builtins. I'd rather return empty values and just > document the fact that we don't have a way to return the argspec on > builtins yet. Yay, nay, other? Making something that currently raises TypeError return a useful value is deemed backwards compatible. Making something that currently returns one value return a different value is deemed incompatible (even if the new value is more useful). > Neither solution is entirely satisfactory. Is there no trick that > could be employed to expose the argspec for builtins? Many builtins have the argspec as the first line of their docstring. If you can parse the common cases, I'd be happy to accept patches to fix a few cases where the docstring lies or is otherwise unhelpful. But you'd still have to watch out for docstrings that don't have this info, as there's no way to fix all 3rd party extension modules. --Guido van Rossum (home page: http://www.python.org/~guido/) From pyth@devel.trillke.net Thu Nov 7 16:43:24 2002 From: pyth@devel.trillke.net (holger krekel) Date: Thu, 7 Nov 2002 17:43:24 +0100 Subject: [Python-Dev] inspect.getargspec() In-Reply-To: <200211071015.43100.pobrien@orbtech.com>; from pobrien@orbtech.com on Thu, Nov 07, 2002 at 10:15:43AM -0600 References: <200211071519.gA7FJwR27292@odiug.zope.com> <200211071015.43100.pobrien@orbtech.com> Message-ID: <20021107174324.B30315@prim.han.de> Patrick K. O'Brien wrote: > Any opinions on how to handle builtins? Should getargspec() return empty > values or raise an error? The current version raises TypeError on anything > that fails isinstance(object, types.FunctionType). The new version will > support functions, bound methods, unbound methods, classes (which returns > the contructor's arguments), and objects with a __call__() method (and will > drop self as appropriate for all permutations of the preceding). > > It doesn't seem right to me to raise an error for a builtin, especially if > there will come a day when we *can* introspect the arguments for builtins. > I'd rather return empty values and just document the fact that we don't > have a way to return the argspec on builtins yet. Yay, nay, other? > > Neither solution is entirely satisfactory. Is there no trick that could be > employed to expose the argspec for builtins? For a documentation browser i used the technique to parse the docstring of the builtins (which contains the signature). It's difficult to get exact information out of them, though. The problem is that some docstrings are 'incorrect' or slightly non-standard. regards, holger From neal@metaslash.com Thu Nov 7 19:32:58 2002 From: neal@metaslash.com (Neal Norwitz) Date: Thu, 07 Nov 2002 14:32:58 -0500 Subject: [Python-Dev] [SourceForge] Notice of scheduled outages Message-ID: <20021107193258.GP14715@epoch.metaslash.com> FYI, I received the info below from SourceForge. CVS will be unavailable on 2002-11-17 (Sunday) between 1300 - 0100 (Monday) EST (if I did the math correct). 
Neal -- """ On 2002-11-17 (Sunday), project CVS services, project shell services, project web services (including all VHOSTs), and project database services will be offline for a period of up to twelve hours, starting at 10:00 Pacific (GMT-8). Project web services will be restored first, but will be brought up initially with read-only access to project group directory space. Static web content will be served correctly during this time period, but application-driven and database-dependent content and CGI scripts will not function correctly. Issues encountered during this time period SHOULD NOT be reported to SourceForge.net; they are an expected side-effect of this outage. Both outages (2002-11-14 at 16:00 for 3 hours, and 2002-11-17 at 10:00 for 12 hours) have been scheduled to permit the relocation of site hardware by SourceForge.net staff. """ From niemeyer@conectiva.com Thu Nov 7 21:20:16 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Thu, 7 Nov 2002 19:20:16 -0200 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: References: <20021106234540.A2116@ibook.distro.conectiva> Message-ID: <20021107192015.A31586@ibook.distro.conectiva> > -0. Complicates the code and the docs and the mental model to save what, 3 > measly characters of typing (e.g., "[x]")? Tim, I've submited to patch #549037 a possible implementation of the singlesection algorithm I described in my last mail. If possible, could you please verify if it's simple enough to change your mind? PS. If accepted, I'll include documentation and tests before commiting. Thank you! -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From martin@v.loewis.de Thu Nov 7 21:28:43 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 07 Nov 2002 22:28:43 +0100 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: <20021107192015.A31586@ibook.distro.conectiva> References: <20021106234540.A2116@ibook.distro.conectiva> <20021107192015.A31586@ibook.distro.conectiva> Message-ID: Gustavo Niemeyer writes: > > -0. Complicates the code and the docs and the mental model to save what, 3 > > measly characters of typing (e.g., "[x]")? > > Tim, I've submited to patch #549037 a possible implementation of the > singlesection algorithm I described in my last mail. If possible, could > you please verify if it's simple enough to change your mind? I doubt that. With no proposed documentation change, it is hard to evaluate whether the doc changes are simple. There is no way to get rid of the complication in the mental model. For the code changes: shouldn't there be a change to .write also? Regards, Martin From niemeyer@conectiva.com Thu Nov 7 21:35:11 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Thu, 7 Nov 2002 19:35:11 -0200 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <20021107013033.GC2347@eecs.tufts.edu> References: <20021031204732.B32673@ibook.distro.conectiva> <200211010025.gA10POc02765@pcp02138704pcs.reston01.va.comcast.net> <20021101111333.E12803@ibook.distro.conectiva> <20021107013033.GC2347@eecs.tufts.edu> Message-ID: <20021107193511.B31586@ibook.distro.conectiva> Hi Michael! > In a related question, does python have tar support (I haven't had > a chance to investigate thoroughly)? I got the impression it didn't, > which is kinda weird since zip is a supporting archive, and gz and bz2 > will be, but there's no unix-y tar. But I'm sure someone will correct > me and say I'm wrong :) Not yet. 
But there's a module, named tarfile, being actively developed by Lars Gustäbel. It already includes most of the important GNU extensions for the tar format. I hope to include in the standard library eventually. More information at http://www.gustaebel.de/lars/tarfile/. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From tim.one@comcast.net Thu Nov 7 21:48:55 2002 From: tim.one@comcast.net (Tim Peters) Date: Thu, 07 Nov 2002 16:48:55 -0500 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: <20021107192015.A31586@ibook.distro.conectiva> Message-ID: >> -0. Complicates the code and the docs and the mental model to >> save what, 3 measly characters of typing (e.g., "[x]")? [Gustavo Niemeyer] > Tim, I've submited to patch #549037 a possible implementation of the > singlesection algorithm I described in my last mail. If possible, could > you please verify if it's simple enough to change your mind? A vote of -0 means I don't want to make a career out of this . http://www.python.org/dev/process.html A -1 vote is a strong objection; I don't have a strong objection here, I see *any* support for anonymous sections in ConfigParser as complicating the whole thing for no particular benefit. OTOH, it won't be a disaster for Python if some version is accepted, either; I'd just rather it weren't. Or, IOW, -0. That can't be changed unless someone presents a compelling use case for an anonymous section where making up a section name would be a believable burden. From niemeyer@conectiva.com Thu Nov 7 22:09:37 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Thu, 7 Nov 2002 20:09:37 -0200 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: References: <20021106234540.A2116@ibook.distro.conectiva> <20021107192015.A31586@ibook.distro.conectiva> Message-ID: <20021107200937.C31586@ibook.distro.conectiva> > I doubt that. With no proposed documentation change, it is hard to > evaluate whether the doc changes are simple. There is no way to get > rid of the complication in the mental model. For the code changes: > shouldn't there be a change to .write also? Sorry. I tried to give an idea about how simple it could be, but failed to provide a good patch. I've uploaded an updated version, including the needed changes in the documentation and .write method. but-it-still-don't-have-tests-ly y'rs -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Thu Nov 7 22:22:46 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Thu, 7 Nov 2002 20:22:46 -0200 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: References: <20021107192015.A31586@ibook.distro.conectiva> Message-ID: <20021107202246.D31586@ibook.distro.conectiva> > A vote of -0 means I don't want to make a career out of this . > > http://www.python.org/dev/process.html > > A -1 vote is a strong objection; I don't have a strong objection here, I see > *any* support for anonymous sections in ConfigParser as complicating the > whole thing for no particular benefit. OTOH, it won't be a disaster for > Python if some version is accepted, either; I'd just rather it weren't. Or, Thanks for explaining! I wasn't aware that there were a formal description for [01][+-] :-) > IOW, -0. That can't be changed unless someone presents a compelling use > case for an anonymous section where making up a section name would be a > believable burden. 
The most compelling reason I could see is:

>>> import ConfigParser
>>> cfg = ConfigParser.ConfigParser(singlesection="shell")
>>> cfg.read("/etc/sysconfig/network-scripts/ifcfg-lo")
>>> cfg.get("shell", "ipaddr")
'127.0.0.1'

Besides that, I'm ashamed to say that I'm acting more like a bug-closing monkey. ;-)

-- 
Gustavo Niemeyer

[ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]

From fredrik@pythonware.com Thu Nov 7 23:40:25 2002
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Fri, 8 Nov 2002 00:40:25 +0100
Subject: [Python-Dev] ConfigParser with a single section
References: <20021107192015.A31586@ibook.distro.conectiva> <20021107202246.D31586@ibook.distro.conectiva>
Message-ID: <003e01c286b7$0d1ee1a0$ced241d5@hagrid>

Gustavo Niemeyer wrote:
> The most compelling reason I could see is:
>
> >>> import ConfigParser
> >>> cfg = ConfigParser.ConfigParser(singlesection="shell")
> >>> cfg.read("/etc/sysconfig/network-scripts/ifcfg-lo")
> >>> cfg.get("shell", "ipaddr")
> '127.0.0.1'

don't forget that you can subclass stuff in Python:

import ConfigParser, StringIO

class MyConfigParser(ConfigParser.ConfigParser):

    def read(self, filename):
        try:
            text = open(filename).read()
        except IOError:
            pass
        else:
            file = StringIO.StringIO("[shell]\n" + text)
            self.readfp(file, filename)

cfg = MyConfigParser()
cfg.read("/etc/sysconfig/network-scripts/ifcfg-lo")
cfg.get("shell", "ipaddr")

From niemeyer@conectiva.com Fri Nov 8 00:06:59 2002
From: niemeyer@conectiva.com (Gustavo Niemeyer)
Date: Thu, 7 Nov 2002 22:06:59 -0200
Subject: [Python-Dev] ConfigParser with a single section
In-Reply-To: <003e01c286b7$0d1ee1a0$ced241d5@hagrid>
References: <20021107192015.A31586@ibook.distro.conectiva> <20021107202246.D31586@ibook.distro.conectiva> <003e01c286b7$0d1ee1a0$ced241d5@hagrid>
Message-ID: <20021107220659.A1204@ibook.distro.conectiva>

> don't forget that you can subclass stuff in Python:

Where's the write method? Ok.. I know you can write it too. But IMO, that's not an argument against including support easily accessible for not-so-brilliant users.

Anyway.. nobody seems to like the idea. I'll close the bug as WONTFIX. Thank you.

-- 
Gustavo Niemeyer

[ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]

From pf@artcom-gmbh.de Fri Nov 8 07:12:42 2002
From: pf@artcom-gmbh.de (Peter Funk)
Date: Fri, 8 Nov 2002 08:12:42 +0100 (CET)
Subject: [Python-Dev] ConfigParser with a single section
In-Reply-To: <20021107202246.D31586@ibook.distro.conectiva> from Gustavo Niemeyer at "Nov 7, 2002 08:22:46 pm"
Message-ID: 

Hi,

a typical Linux system is populated with a lot of "sectionless" config files in /etc/sysconfig. With the proposed change ConfigParser would be able to read these files. The concept of sections is in fact an additional mental model only needed for more complicated configuration files. So if we think of sections as an additional feature, the proposed change could even simplify the mental model for starters.

Gustavo:
> Besides that, I'm ashamed to say that I'm acting more like a bug-closing
> monkey. ;-)

Gustavo: I appreciate your championship of this and I believe it will improve the range of applications for this library module. I haven't actually had a look at your patch yet.

That said, I believe a "sectionless" config file could be parsed into the DEFAULT section. Some methods (for example .options()) could accept None as a section argument accessing the default section.

If you include a doc patch, please don't forget to add the \versionchanged[...foo...]{2.3} calls.
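Very roughly, what I have in mind (untested sketch; the headerless parsing and the None shortcut are of course hypothetical until someone implements them):

    import ConfigParser

    cfp = ConfigParser.ConfigParser()
    cfp.read("/etc/sysconfig/clock")    # a file without any [section] header
    # unheaded options would simply land in the DEFAULT section ...
    print cfp.defaults()["zone"]
    # ... and None could act as an alias for that section:
    print cfp.options(None)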
Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen, Germany) From martin@v.loewis.de Fri Nov 8 08:09:31 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 08 Nov 2002 09:09:31 +0100 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: References: Message-ID: pf@artcom-gmbh.de (Peter Funk) writes: > a typical Linux system is populated with a lot of "sectionless" > config files in /etc/sysconfig. With the proposed change > ConfigParser would be able to read these files. These are really shell scripts, right? I doubt ConfigParser could read those in their full generality, since it does not support # comments except in column 1. However, in most cases, such files can be easily read with execfile, passing an empty dictionary. > The concept of sections is in fact an additional mental model > only needed for more complicated configuration files. Indeed, these are the files implemented in ConfigParser. Regards, Martin From fredrik@pythonware.com Fri Nov 8 08:42:34 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Fri, 8 Nov 2002 09:42:34 +0100 Subject: [Python-Dev] ConfigParser with a single section References: <20021107192015.A31586@ibook.distro.conectiva> <20021107202246.D31586@ibook.distro.conectiva> <003e01c286b7$0d1ee1a0$ced241d5@hagrid> <20021107220659.A1204@ibook.distro.conectiva> Message-ID: <007401c28702$c9b9da40$0900a8c0@spiff> Gustavo Niemeyer wrote: > > don't forget that you can subclass stuff in Python: >=20 > Where's the write method? Ok.. I know you can write it too. But IMO, > that's not an argument against including support easily accessible > for not-so-brilliant users. there is no such thing as a not-so-brilliant user. From mwh@python.net Fri Nov 8 12:19:26 2002 From: mwh@python.net (Michael Hudson) Date: 08 Nov 2002 12:19:26 +0000 Subject: [Python-Dev] [SourceForge] Notice of scheduled outages In-Reply-To: Neal Norwitz's message of "Thu, 07 Nov 2002 14:32:58 -0500" References: <20021107193258.GP14715@epoch.metaslash.com> Message-ID: <2mbs50tfsh.fsf@starship.python.net> Neal Norwitz writes: > FYI, I received the info below from SourceForge. Which SA scored for me as spam... gotta-install-spambayes-ly y'rs M. -- Those who have deviant punctuation desires should take care of their own perverted needs. -- Erik Naggum, comp.lang.lisp From niemeyer@conectiva.com Fri Nov 8 13:10:23 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 11:10:23 -0200 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: References: Message-ID: <20021108111022.A3434@ibook.distro.conectiva> > However, in most cases, such files can be easily read with execfile, > passing an empty dictionary. Unfortunately, variables without quotes is a common shell idiom: >>> execfile("/etc/sysconfig/network-scripts/ifcfg-lo") Traceback (most recent call last): File "", line 1, in ? File "/etc/sysconfig/network-scripts/ifcfg-lo", line 2 IPADDR=127.0.0.1 ^ SyntaxError: invalid syntax (no, I haven't tweaked the file just to show that.. :-) -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From guido@python.org Fri Nov 8 13:24:28 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 08:24:28 -0500 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: Your message of "Fri, 08 Nov 2002 11:10:23 -0200." 
<20021108111022.A3434@ibook.distro.conectiva> References: <20021108111022.A3434@ibook.distro.conectiva> Message-ID: <200211081324.gA8DOSX23226@pcp02138704pcs.reston01.va.comcast.net> > > However, in most cases, such files can be easily read with execfile, > > passing an empty dictionary. > > Unfortunately, variables without quotes is a common shell idiom: > > >>> execfile("/etc/sysconfig/network-scripts/ifcfg-lo") > Traceback (most recent call last): > File "", line 1, in ? > File "/etc/sysconfig/network-scripts/ifcfg-lo", line 2 > IPADDR=127.0.0.1 > ^ > SyntaxError: invalid syntax > > (no, I haven't tweaked the file just to show that.. :-) Listen, these files are shell scripts. Any attempts to parse them with other means than feeding them to the shell are doomed. Don't try to tweak ConfigParser to do something it's not good at anyway. --Guido van Rossum (home page: http://www.python.org/~guido/) From lannert-pydev@lannert.rz.uni-duesseldorf.de Fri Nov 8 15:50:56 2002 From: lannert-pydev@lannert.rz.uni-duesseldorf.de (Detlef Lannert) Date: Fri, 8 Nov 2002 16:50:56 +0100 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: <20021107035901.12268.17075.Mailman@mail.python.org> References: <20021107035901.12268.17075.Mailman@mail.python.org> Message-ID: <20021108165056.X1284@lannert.rz.uni-duesseldorf.de> [Gustavo Niemeyer] > Patch #549037 implements an interesting feature: it allows one to have > configuration files without headers. I'd like to know what's your > opinion about this issue, so that we can either forget the issue or > implement the functionality, closing that bug. [Tim Peters] > -0. Complicates the code and the docs and the mental model to save what, 3 > measly characters of typing (e.g., "[x]")? It also saves me to explain to the users of my program why they _have_ to start each configuration file with a line containing "[Options]", even if there can't be anything other than options in this file. (Making them type "[x]" instead of "[Options]" is even harder <.5 wink>). Unfortunately I cannot easily supply the section line when parsing the config file, even if I use ConfigParser.readfp(); the "obvious solution" prefix = cStringIO.StringIO("[Options]\n") cfp = ConfigParser.ConfigParser() cfp.readfp(prefix, filename) cfp.read(filename) doesn't work because the ConfigParser object expects a new section header for each read*() call. A generator that encapsulates the file and supplies an initial section header doesn't work either because the ConfigParser uses the readline() method; it doesn't iterate over the file. Subclassing the ConfigParser class doesn't help much because the _read() method does most of the relevant work and at the same time requires the section header line. So this would basically mean I had to copy that method (~60 lines) and tweak it to handle the case of missing section headers. [I had tried this before I submitted the patch.] [Gustavo Niemeyer, in a preceding mail] > The patch includes a new parameter in read functions, stating > what's the first section name. It means that we could have other > sections after the first unheaded section. IMO, that situation should > still be considered an error. Not necessarily an error; I was thinking of a configuration file like spam = available frob = required # ... more general options [Holidays] frob = optional # ... more options for a special case # etc. where labelled, optional sections may follow the (usually unlabelled) main configuration section. 
If the caller doesn't want this, he can easily detect this situation (checking cfp.sections()) and reject/handle it. Detlef From guido@python.org Fri Nov 8 16:55:44 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 11:55:44 -0500 Subject: [Python-Dev] ConfigParser with a single section In-Reply-To: Your message of "Fri, 08 Nov 2002 16:50:56 +0100." <20021108165056.X1284@lannert.rz.uni-duesseldorf.de> References: <20021107035901.12268.17075.Mailman@mail.python.org> <20021108165056.X1284@lannert.rz.uni-duesseldorf.de> Message-ID: <200211081655.gA8Gtjx15062@pcp02138704pcs.reston01.va.comcast.net> > Unfortunately I cannot easily supply the section line when parsing the > config file, even if I use ConfigParser.readfp(); the "obvious solution" > > prefix = cStringIO.StringIO("[Options]\n") > cfp = ConfigParser.ConfigParser() > cfp.readfp(prefix, filename) > cfp.read(filename) > > doesn't work because the ConfigParser object expects a new section header > for each read*() call. So copy everything into one memory buffer. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Fri Nov 8 19:35:28 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 08 Nov 2002 20:35:28 +0100 Subject: [Python-Dev] Reindenting unicodedata.c Message-ID: While working on unicodedata.c, I found that its indentation does not follow PEP 7. Is it ok to reindent it (i.e. use tabs)? Regards, Martin From niemeyer@conectiva.com Fri Nov 8 20:04:56 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 18:04:56 -0200 Subject: [Python-Dev] Restricted interpreter Message-ID: <20021108180456.A10379@ibook.distro.conectiva> This weekend I'm going to work on a "restricted" python interpreter for http://acm.uva.es/problemset/. That site offers online programming contests, including an online judge to check algorithm implementations for hundreds of problems. I belive it'd be nice for the Python community to have access to something like that. This interpreter should have limited functionality so that malicious users won't be able to access the filesystem, sockets, and other "dangerous" functionality. I'm not sure if that will be useful for the stock Python interpreter, as its application is very specific, but at least it could be a nice starting point for similar projects. I've included here a quick list of changes to the python interpreter to achieve that. Do you remember about any other possible problems? - include a '-r' flag, which enables a global restricted flag, and implies -E, and -S. - depending on the flag, don't let scripts import posixmodule, (we can't remove it, or python won't compile); - depending on the flag, change the way module imports work, using only the sys.path Python has started with; - depending on the flag, limit instantiation of 'file' types (remember that type(sys.stdout) returns the 'file' type, so removing it from builtins is not enough). 
- remove all, but the builtin modules which could be useful for some algorithm: _codecs, array, cmath, binascii, crypt, cStringIO, md5, math, _locale, _sre, pcre, pyexpat, regex, sha, strop, timing, struct, time, xreadlines, unicodedata, _weakref; -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From pyth@devel.trillke.net Fri Nov 8 20:19:59 2002 From: pyth@devel.trillke.net (holger krekel) Date: Fri, 8 Nov 2002 21:19:59 +0100 Subject: [Python-Dev] Restricted interpreter In-Reply-To: <20021108180456.A10379@ibook.distro.conectiva>; from niemeyer@conectiva.com on Fri, Nov 08, 2002 at 06:04:56PM -0200 References: <20021108180456.A10379@ibook.distro.conectiva> Message-ID: <20021108211959.P30315@prim.han.de> Gustavo Niemeyer wrote: > This weekend I'm going to work on a "restricted" python interpreter for > http://acm.uva.es/problemset/. That site offers online programming > contests, including an online judge to check algorithm implementations > for hundreds of problems. I belive it'd be nice for the Python community > to have access to something like that. > > This interpreter should have limited functionality so that malicious users > won't be able to access the filesystem, sockets, and other "dangerous" > functionality. If i were to seriously do something like this i'd try to use 'jails' as found in free-bsd or similar in UserModeLinux (haven't really checked the lattter). They offer kernel-level sandboxes and if your execution runs within them it can't compromise the system even if its manages to become the root user. there is a fine introductory read regarding security granularity and about jails: http://docs.freebsd.org/44doc/papers/jail/jail.html have fun, holger From guido@python.org Fri Nov 8 20:22:28 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 15:22:28 -0500 Subject: [Python-Dev] Restricted interpreter In-Reply-To: Your message of "Fri, 08 Nov 2002 18:04:56 -0200." <20021108180456.A10379@ibook.distro.conectiva> References: <20021108180456.A10379@ibook.distro.conectiva> Message-ID: <200211082022.gA8KMSv17155@pcp02138704pcs.reston01.va.comcast.net> > This weekend I'm going to work on a "restricted" python interpreter for > http://acm.uva.es/problemset/. That site offers online programming > contests, including an online judge to check algorithm implementations > for hundreds of problems. I belive it'd be nice for the Python community > to have access to something like that. > > This interpreter should have limited functionality so that malicious users > won't be able to access the filesystem, sockets, and other "dangerous" > functionality. > > I'm not sure if that will be useful for the stock Python interpreter, > as its application is very specific, but at least it could be a nice > starting point for similar projects. > > I've included here a quick list of changes to the python interpreter to > achieve that. Do you remember about any other possible problems? > > - include a '-r' flag, which enables a global restricted flag, and > implies -E, and -S. > > - depending on the flag, don't let scripts import posixmodule, (we can't > remove it, or python won't compile); > > - depending on the flag, change the way module imports work, using only > the sys.path Python has started with; > > - depending on the flag, limit instantiation of 'file' types (remember that > type(sys.stdout) returns the 'file' type, so removing it from builtins is > not enough). 
> > - remove all, but the builtin modules which could be useful for some > algorithm: _codecs, array, cmath, binascii, crypt, cStringIO, md5, math, > _locale, _sre, pcre, pyexpat, regex, sha, strop, timing, struct, time, > xreadlines, unicodedata, _weakref; Are you aware of the standard library module 'rexec'? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@comcast.net Fri Nov 8 20:27:46 2002 From: tim.one@comcast.net (Tim Peters) Date: Fri, 08 Nov 2002 15:27:46 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Message-ID: [Martin v. Loewis] > While working on unicodedata.c, I found that its indentation does not > follow PEP 7. Is it ok to reindent it (i.e. use tabs)? Probably not, but I'd be in favor of it. unicodeobject.c too, and _sre.c, and _hotshot.c, and ... the PEP 7 std is simply ignored. From barry@python.org Fri Nov 8 20:36:02 2002 From: barry@python.org (Barry A. Warsaw) Date: Fri, 8 Nov 2002 15:36:02 -0500 Subject: [Python-Dev] Reindenting unicodedata.c References: Message-ID: <15820.8242.637626.609642@gargle.gargle.HOWL> >>>>> "TP" == Tim Peters writes: TP> [Martin v. Loewis] >> While working on unicodedata.c, I found that its indentation >> does not follow PEP 7. Is it ok to reindent it (i.e. use tabs)? TP> Probably not, but I'd be in favor of it. unicodeobject.c too, TP> and _sre.c, and _hotshot.c, and ... the PEP 7 std is simply TP> ignored. And just to make sure Martin totally abandons this folly , I'll start a little mini-flamewar on the all-tabs recommendation in PEP 7. :) I think all-tabs is just fine for existing C code, but I think we should encourage new C code (or re-indented C code) to use all spaces and 4 space indents. Think of how much more readable eval_frame() would be if that were the case. :) My specific recommendation is Emacs-specific: use the CC Mode "python" style, but with (setq c-basic-offset 8) and (setq indent-tabs-mode nil) I can provide some elisp if there's any interest. -Barry From guido@python.org Fri Nov 8 20:36:16 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 15:36:16 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "Fri, 08 Nov 2002 15:27:46 EST." References: Message-ID: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> > [Martin v. Loewis] > > While working on unicodedata.c, I found that its indentation does not > > follow PEP 7. Is it ok to reindent it (i.e. use tabs)? > > Probably not, but I'd be in favor of it. unicodeobject.c too, and _sre.c, > and _hotshot.c, and ... the PEP 7 std is simply ignored. I'm in favor of reindenting too, but only if you're doing major work on a module. I.e. don't just reindent a module when you're fixing a few lines of code; but when you're doing a major refactoring, it should be fine. CVS-wise, it's better to do the reindent as a separate checkin, so you know you're not changing anything else. (Preferably just *before* you start doing any major surgery.) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Fri Nov 8 20:39:20 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 15:39:20 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "Fri, 08 Nov 2002 15:36:02 EST." 
<15820.8242.637626.609642@gargle.gargle.HOWL> References: <15820.8242.637626.609642@gargle.gargle.HOWL> Message-ID: <200211082039.gA8KdKg17289@pcp02138704pcs.reston01.va.comcast.net> > And just to make sure Martin totally abandons this folly , I'll > start a little mini-flamewar on the all-tabs recommendation in PEP > 7. :) One word: fuhgeddaboudid. If it's good enough for the Linux kernel, it's good enough for Python. --Guido van Rossum (home page: http://www.python.org/~guido/) From cnetzer@mail.arc.nasa.gov Fri Nov 8 20:40:24 2002 From: cnetzer@mail.arc.nasa.gov (Chad Netzer) Date: Fri, 8 Nov 2002 12:40:24 -0800 Subject: [Python-Dev] Restricted interpreter In-Reply-To: <20021108180456.A10379@ibook.distro.conectiva> References: <20021108180456.A10379@ibook.distro.conectiva> Message-ID: <200211082040.MAA13378@mail.arc.nasa.gov> On Friday 08 November 2002 12:04, Gustavo Niemeyer wrote: > This weekend I'm going to work on a "restricted" python interpreter for > http://acm.uva.es/problemset/. Not that I want to discourage what could possibly be a useful effort; but perhaps you would be better off creating a restricted environment under which Python is run? What I mean is, for each run of python, the operating system environment would be setup so that everyone is isolated and can't do damage (except to their own limited environment). There are a number of ways to do this under (for example) modern Unix systems. I had considered setting up just such an environment for Python on Linux, using "User Mode Linux". Then, I could make an interactive python tutorial on the web, and anyone running it would think they had there own separate Linux environment (sockets, files, etc), but would in fact be running under a virtual environment that was fully isolated. They could change things, and try them out in the tutorial (with online feedback), and I could be assured they weren't abusing my machine (and I could enforce time limits, or CPU and memory usage, etc.) There are other ways of doing it with a virtual machine (using VMware, or Bochs, or Plex86). On FreeBSD you could probably use the 'jail()' call to launch your Python interpreter. There may be other such resources for Solaris, or Windows NT (suggestions?) I mention this because I will almost guarantee it is a LOT less work than what you would have to do to make a "restricted" Python (as well as being maintained and tested already). In addition, depending on the machine resources and type of virtual environment, it may not be all that much more resource intensive. -- Chad Netzer cnetzer@mail.arc.nasa.gov From nas@python.ca Fri Nov 8 20:48:23 2002 From: nas@python.ca (Neil Schemenauer) Date: Fri, 8 Nov 2002 12:48:23 -0800 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <15820.8242.637626.609642@gargle.gargle.HOWL> References: <15820.8242.637626.609642@gargle.gargle.HOWL> Message-ID: <20021108204823.GA16213@glacier.arctrix.com> Barry A. Warsaw wrote: >> I think all-tabs is just fine for existing C code, but I think we > should encourage new C code (or re-indented C code) to use all spaces > and 4 space indents. Heathen. 
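On the OS-level sandboxing approach Chad Netzer describes above (isolate each run, enforce CPU and memory limits), a minimal Unix-only sketch of the resource-limiting part might look like the following; the limits, the script name and the use of RLIMIT_AS (not available on every platform) are illustrative assumptions, not anything proposed in this thread:

    import os, resource

    def run_limited(argv):
        # fork, cap CPU time and address space in the child, then exec the program
        pid = os.fork()
        if pid == 0:
            resource.setrlimit(resource.RLIMIT_CPU, (5, 5))               # 5 CPU-seconds
            resource.setrlimit(resource.RLIMIT_AS, (32*1024*1024,) * 2)   # 32 MB, where supported
            os.execvp(argv[0], argv)                                      # never returns on success
        return os.waitpid(pid, 0)[1]                                      # child's exit status

    # e.g. run_limited(['python', 'submission.py'])   (hypothetical script name)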
Neil From pyth@devel.trillke.net Fri Nov 8 20:52:09 2002 From: pyth@devel.trillke.net (holger krekel) Date: Fri, 8 Nov 2002 21:52:09 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net>; from guido@python.org on Fri, Nov 08, 2002 at 03:36:16PM -0500 References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021108215209.Q30315@prim.han.de> Guido van Rossum wrote: > > [Martin v. Loewis] > > > While working on unicodedata.c, I found that its indentation does not > > > follow PEP 7. Is it ok to reindent it (i.e. use tabs)? > > > > Probably not, but I'd be in favor of it. unicodeobject.c too, and _sre.c, > > and _hotshot.c, and ... the PEP 7 std is simply ignored. > > I'm in favor of reindenting too, but only if you're doing major work > on a module. I.e. don't just reindent a module when you're fixing a > few lines of code; but when you're doing a major refactoring, it > should be fine. > > CVS-wise, it's better to do the reindent as a separate checkin, so you > know you're not changing anything else. (Preferably just *before* you > start doing any major surgery.) Probably backporting patches (e.g. python2.2.3) gets harder. Some bigger projects i know *strictly* prohibit cosmetic-only patches because of this very problem. Only if you are actually changing code can you reindent. regards, holger From guido@python.org Fri Nov 8 20:55:35 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 15:55:35 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "Fri, 08 Nov 2002 21:52:09 +0100." <20021108215209.Q30315@prim.han.de> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <20021108215209.Q30315@prim.han.de> Message-ID: <200211082055.gA8Kta717451@pcp02138704pcs.reston01.va.comcast.net> > > CVS-wise, it's better to do the reindent as a separate checkin, so you > > know you're not changing anything else. (Preferably just *before* you > > start doing any major surgery.) > > Probably backporting patches (e.g. python2.2.3) gets harder. > > Some bigger projects i know *strictly* prohibit cosmetic-only > patches because of this very problem. Only if you are actually > changing code can you reindent. I try to be very conservative with reindents and other cosmetics, for this very reason -- but there are limits. Readability also counts. --Guido van Rossum (home page: http://www.python.org/~guido/) From niemeyer@conectiva.com Fri Nov 8 21:02:48 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 19:02:48 -0200 Subject: [Python-Dev] Restricted interpreter In-Reply-To: <200211082022.gA8KMSv17155@pcp02138704pcs.reston01.va.comcast.net> References: <20021108180456.A10379@ibook.distro.conectiva> <200211082022.gA8KMSv17155@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021108190248.A11511@ibook.distro.conectiva> [...] > Are you aware of the standard library module 'rexec'? In fileobject.c: /* rexec.py can't stop a user from getting the file() constructor -- all they have to do is get *any* file object f, and then do type(f). Here we prevent them from doing damage with it. */ if (PyEval_GetRestricted()) { It looks like I was going to reinvent the wheel. Is this being used in some project you know about? Btw, what's the point of FileWrapper, having in mind that it stores 'f' as an accessible attribute? >>> r.s_exec("""import sys; print sys.stdout.f""") <open file '<stdout>', mode 'w' at 0x100eec30> Thank you!
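For readers following along, a minimal sketch of the rexec API being discussed (the module is in the Python 2.2 standard library; the session below is illustrative only, and Guido's caveat about known holes applies):

    import rexec

    r = rexec.RExec()                       # restricted execution environment
    r.r_exec("print 'hello from restricted code'")
    try:
        r.r_exec("open('/tmp/x', 'w')")     # write access is refused by the restricted open()
    except IOError, e:
        print "blocked:", e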
-- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Fri Nov 8 21:06:56 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 19:06:56 -0200 Subject: [Python-Dev] Restricted interpreter In-Reply-To: <20021108211959.P30315@prim.han.de> References: <20021108180456.A10379@ibook.distro.conectiva> <20021108211959.P30315@prim.han.de> Message-ID: <20021108190656.B11511@ibook.distro.conectiva> > If i were to seriously do something like this i'd try to use 'jails' > as found in free-bsd or similar in UserModeLinux (haven't really > checked the lattter). They offer kernel-level sandboxes > and if your execution runs within them it can't compromise the > system even if its manages to become the root user. I'm not planning to work on the whole system. I'll just help them to integrate python in the framework they already have. Thank you for your suggestion! > there is a fine introductory read regarding security granularity and > about jails: > > http://docs.freebsd.org/44doc/papers/jail/jail.html I'll have a look at it, as this is an interesting topic nevertheless. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From niemeyer@conectiva.com Fri Nov 8 21:12:19 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 19:12:19 -0200 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: <200211052207.gA5M7Mk24781@odiug.zope.com> References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> Message-ID: <20021108191219.C11511@ibook.distro.conectiva> [... about changing sre ...] > Just be sure to keep the code compatible with Python 1.5.1 (see PEP > 291). I've just noticed that sre_*.py were changed to include True/False, "foo in dict" and other recent conventions. Don't they break PEP 291? Should we fallback those changes? Or perhaps PEP 291 applies just to C code? -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From guido@python.org Fri Nov 8 21:15:12 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 16:15:12 -0500 Subject: [Python-Dev] Restricted interpreter In-Reply-To: Your message of "Fri, 08 Nov 2002 19:02:48 -0200." <20021108190248.A11511@ibook.distro.conectiva> References: <20021108180456.A10379@ibook.distro.conectiva> <200211082022.gA8KMSv17155@pcp02138704pcs.reston01.va.comcast.net> <20021108190248.A11511@ibook.distro.conectiva> Message-ID: <200211082115.gA8LFC517688@pcp02138704pcs.reston01.va.comcast.net> > > Are you aware of the standard library module 'rexec'? > > In fileobject.c: > > /* rexec.py can't stop a user from getting the file() constructor -- > all they have to do is get *any* file object f, and then do > type(f). Here we prevent them from doing damage with it. */ > if (PyEval_GetRestricted()) { > > It looks like I was going to reinvent the wheel. Glad you noticed. ;-) > Is this being used in some project you know about? Not that I'm aware of, and in fact we've plugged enough security leaks in it so far that I'm not eager to recommend. But then, your reinvented wheel would have the same problem. > Btw, what's the point of FileWrapper, having in mind that it stores > 'f' as an accessible attribute? > > >>> r.s_exec("""import sys; print sys.stdout.f""") > <open file '<stdout>', mode 'w' at 0x100eec30> Beats me! It looks like a debugging hack that accidentally made it into the code; the code works just as well without self.f, it seems.
Unclear if there's any damage, since FileWrapper is only used to wrap stdin, stdout and stderr. But this amplifies the warning about rexec's viability. Maybe you can use the time you were going to spend on reinventing rexec for a security audit instead... --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Fri Nov 8 21:17:06 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 16:17:06 -0500 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: Your message of "Fri, 08 Nov 2002 19:12:19 -0200." <20021108191219.C11511@ibook.distro.conectiva> References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> <20021108191219.C11511@ibook.distro.conectiva> Message-ID: <200211082117.gA8LH6917726@pcp02138704pcs.reston01.va.comcast.net> > I've just noticed that sre_*.py were changed to include True/False, > "foo in dict" and other recent conventions. Don't they break PEP 291? > Should we fallback those changes? Or perhaps PEP 291 applies just to > C code? Ask Fredrik Lundh. It looks like I made a mistake there. --Guido van Rossum (home page: http://www.python.org/~guido/) From kvthan@wm.edu Sat Nov 9 09:44:22 2002 From: kvthan@wm.edu (kapil thangavelu) Date: Sat, 9 Nov 2002 01:44:22 -0800 Subject: [Python-Dev] Restricted interpreter In-Reply-To: <20021108180456.A10379@ibook.distro.conectiva> References: <20021108180456.A10379@ibook.distro.conectiva> Message-ID: <200211090144.22333.kvthan@wm.edu> there are some similarities here to what zope does (ie fine grained security for untrusted code). so another option might be to use http://cvs.zope.org/Zope3/lib/python/Zope/Security/ and execute online code in a restricted sandbox of the interpreter. -kapil On Friday 08 November 2002 12:04 pm, Gustavo Niemeyer wrote: > This weekend I'm going to work on a "restricted" python interpreter for > http://acm.uva.es/problemset/. That site offers online programming > contests, including an online judge to check algorithm implementations > for hundreds of problems. I belive it'd be nice for the Python community > to have access to something like that. > From martin@v.loewis.de Fri Nov 8 22:08:12 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 08 Nov 2002 23:08:12 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > I'm in favor of reindenting too, but only if you're doing major work > on a module. This request was actually triggered by python.org/sf/626485 and python.org/sf/626548, which are pending on the issue of reindentation. > I.e. don't just reindent a module when you're fixing a few lines of > code; but when you're doing a major refactoring, it should be fine. I don't know whether these patches are major changes; I would (unless advised otherwise) reindent them in a single commit, then update the patches for MAL to reconsider them. Regards, Martin From martin@v.loewis.de Fri Nov 8 22:10:55 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 08 Nov 2002 23:10:55 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <15820.8242.637626.609642@gargle.gargle.HOWL> References: <15820.8242.637626.609642@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. 
Warsaw) writes: > And just to make sure Martin totally abandons this folly , I'll > start a little mini-flamewar on the all-tabs recommendation in PEP > 7. :) I actually don't consider this recommendation foolish. This is what Emacs does when I start editing a Python C file, so it is convenient for me to follow this PEP, and inconvenient to not follow it. > My specific recommendation is Emacs-specific: use the CC Mode "python" > style, but with (setq c-basic-offset 8) and (setq indent-tabs-mode nil) > I can provide some elisp if there's any interest. Did you mean (setq c-basic-offset 4) here? If not, what is the advantage of these settings? Regards, Martin From guido@python.org Fri Nov 8 22:20:35 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 17:20:35 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "08 Nov 2002 23:08:12 +0100." References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> > > I'm in favor of reindenting too, but only if you're doing major work > > on a module. > > This request was actually triggered by python.org/sf/626485 and > python.org/sf/626548, which are pending on the issue of reindentation. > > > I.e. don't just reindent a module when you're fixing a few lines of > > code; but when you're doing a major refactoring, it should be fine. > > I don't know whether these patches are major changes; I would (unless > advised otherwise) reindent them in a single commit, then update the > patches for MAL to reconsider them. Why does MAL prefer 4 spaces? --Guido van Rossum (home page: http://www.python.org/~guido/) From barry@python.org Fri Nov 8 22:39:36 2002 From: barry@python.org (Barry A. Warsaw) Date: Fri, 8 Nov 2002 17:39:36 -0500 Subject: [Python-Dev] Reindenting unicodedata.c References: <15820.8242.637626.609642@gargle.gargle.HOWL> Message-ID: <15820.15656.154822.723896@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: MvL> I actually don't consider this recommendation foolish. This MvL> is what Emacs does when I start editing a Python C file, so MvL> it is convenient for me to follow this PEP, and inconvenient MvL> to not follow it. Did you do something special to use the "python" style for Python C files? I ask because that's what I do. Vanilla CC Mode should use the "gnu" style by default, which is a lot different than "python" style. :) >> My specific recommendation is Emacs-specific: use the CC Mode >> "python" style, but with (setq c-basic-offset 8) and (setq >> indent-tabs-mode nil) I can provide some elisp if there's any >> interest. MvL> Did you mean (setq c-basic-offset 4) here? If not, what is MvL> the advantage of these settings? Yep. -Barry From niemeyer@conectiva.com Fri Nov 8 22:55:23 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 20:55:23 -0200 Subject: [Python-Dev] Restricted interpreter In-Reply-To: <200211082115.gA8LFC517688@pcp02138704pcs.reston01.va.comcast.net> References: <20021108180456.A10379@ibook.distro.conectiva> <200211082022.gA8KMSv17155@pcp02138704pcs.reston01.va.comcast.net> <20021108190248.A11511@ibook.distro.conectiva> <200211082115.gA8LFC517688@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021108205523.A12888@ibook.distro.conectiva> > Unclear if there's any damage, since FileWrapper is only used to wrap > stdin, stdout and stderr. Yes, they probably could be even left unchanged in the restricted code. 
> But this amplifies the warning about rexec's viability. > > Maybe you can use the time you were going to spend on reinventing > rexec for a security audit instead... Good idea. Here's a first major problem: class S(str): def __eq__(self, obj): return 1 open("/tmp/foo", S("w")).write("Ouch!") I'll keep looking.. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From guido@python.org Fri Nov 8 23:00:47 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 18:00:47 -0500 Subject: [Python-Dev] Restricted interpreter In-Reply-To: Your message of "Fri, 08 Nov 2002 20:55:23 -0200." <20021108205523.A12888@ibook.distro.conectiva> References: <20021108180456.A10379@ibook.distro.conectiva> <200211082022.gA8KMSv17155@pcp02138704pcs.reston01.va.comcast.net> <20021108190248.A11511@ibook.distro.conectiva> <200211082115.gA8LFC517688@pcp02138704pcs.reston01.va.comcast.net> <20021108205523.A12888@ibook.distro.conectiva> Message-ID: <200211082300.gA8N0lG19325@pcp02138704pcs.reston01.va.comcast.net> > > Maybe you can use the time you were going to spend on reinventing > > rexec for a security audit instead... > > Good idea. Here's a first major problem: > > class S(str): > def __eq__(self, obj): > return 1 > open("/tmp/foo", S("w")).write("Ouch!") > > I'll keep looking.. Can you collect those in the SF bug tracker? (Patches would be great too, of course. ;-) --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Fri Nov 8 23:03:34 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 09 Nov 2002 00:03:34 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > > I don't know whether these patches are major changes; I would (unless > > advised otherwise) reindent them in a single commit, then update the > > patches for MAL to reconsider them. > > Why does MAL prefer 4 spaces? I'm not sure; he did not answer this question. I'm not even sure whether he prefers 4 spaces, or whether he merely prefers not to change it. Regards, Martin From martin@v.loewis.de Fri Nov 8 23:04:54 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 09 Nov 2002 00:04:54 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <15820.15656.154822.723896@gargle.gargle.HOWL> References: <15820.8242.637626.609642@gargle.gargle.HOWL> <15820.15656.154822.723896@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > Did you do something special to use the "python" style for Python C > files? I ask because that's what I do. Vanilla CC Mode should use > the "gnu" style by default, which is a lot different than "python" > style. :) Yes: '(c-default-style "python") through custom-set-variables. 
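To spell out why Gustavo's open("/tmp/foo", S("w")) example above gets past a restricted open(): a mode check based on string equality or membership (the kind of test rexec's r_open performs) is answered by the subclass's __eq__, while the real open() underneath still sees "w". A small illustration of just that mechanism, independent of rexec internals:

    class S(str):
        def __eq__(self, other):
            return 1                    # claims to be equal to anything

    mode = S("w")
    print mode in ("r", "rb")           # 1 -- an equality-based whitelist is fooled,
    print str(mode)                     # 'w' -- while the underlying open() still gets "w"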
Regards, Martin From niemeyer@conectiva.com Fri Nov 8 22:29:34 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Fri, 8 Nov 2002 20:29:34 -0200 Subject: [Python-Dev] [#527371] sre bug/patch In-Reply-To: <200211082117.gA8LH6917726@pcp02138704pcs.reston01.va.comcast.net> References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> <20021108191219.C11511@ibook.distro.conectiva> <200211082117.gA8LH6917726@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021108202934.A12628@ibook.distro.conectiva> > > I've just noticed that sre_*.py were changed to include True/False, > > "foo in dict" and other recent conventions. Don't they break PEP 291? > > Should we fallback those changes? Or perhaps PEP 291 applies just to > > C code? > > Ask Fredrik Lundh. It looks like I made a mistake there. Fredrik, could you please give us your opinion about that? -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From DavidA@ActiveState.com Sat Nov 9 00:30:27 2002 From: DavidA@ActiveState.com (David Ascher) Date: Fri, 08 Nov 2002 16:30:27 -0800 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: References: <15820.8242.637626.609642@gargle.gargle.HOWL> <200211082039.gA8KdKg17289@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DCC5723.4090405@ActiveState.com> Guido van Rossum wrote: > >And just to make sure Martin totally abandons this folly , I'll > >start a little mini-flamewar on the all-tabs recommendation in PEP > >7. :) > > > One word: fuhgeddaboudid. > > If it's good enough for the Linux kernel, it's good enough for Python. Oh, so we should stop using CVS and start using mboxes instead? . --david From guido@python.org Sat Nov 9 01:40:31 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 08 Nov 2002 20:40:31 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "09 Nov 2002 00:03:34 +0100." References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211090140.gA91eVA19736@pcp02138704pcs.reston01.va.comcast.net> > > Why does MAL prefer 4 spaces? > > I'm not sure; he did not answer this question. I'm not even sure > whether he prefers 4 spaces, or whether he merely prefers not to > change it. As you may have noticed, this is a religious issue. I strongly prefer 8-space tabs, but for code that I don't visit very often I feel I can't to enforce it. For all new code I request 8-space tabs. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Sat Nov 9 06:31:10 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 09 Nov 2002 07:31:10 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <3DCC5723.4090405@ActiveState.com> References: <15820.8242.637626.609642@gargle.gargle.HOWL> <200211082039.gA8KdKg17289@pcp02138704pcs.reston01.va.comcast.net> <3DCC5723.4090405@ActiveState.com> Message-ID: David Ascher writes: > > If it's good enough for the Linux kernel, it's good enough for Python. > > Oh, so we should stop using CVS and start using mboxes instead? . You are behind the most recent developments (I think): We should then be using BitKeeper, now. Regards, Martin From pobrien@orbtech.com Sat Nov 9 13:54:33 2002 From: pobrien@orbtech.com (Patrick K. 
O'Brien) Date: Sat, 9 Nov 2002 07:54:33 -0600 Subject: [Python-Dev] Pickling Question Message-ID: <200211090754.33492.pobrien@orbtech.com> In my pickling article [1] I looked at various ways to handle schema evolution issues. When it came to module name and location changes, I wrote the following: "A module name or location change is conceptually similar to a class name change but must be handled quite differently. That's because the module information is stored in the pickle but is not an attribute that can be modified through the standard pickle interface. In fact, the only way to change the module information is to perform a search and replace operation on the actual pickle file itself. Exactly how you would do this depends on your operating system and the tools you have at your disposal. And obviously this is a situation where you will want to back up your files in case you make a mistake. But the change should be fairly straightforward and will work equally well with the binary pickle format as with the text pickle format." I don't feel that this solution is entirely satisfactory and so I thought I would ask (a bit late, I know) whether I am completely correct in my assertions. If not, how else can this be handled. If so, is there any chance of adding a better way to handle this situation? [1] http://www-106.ibm.com/developerworks/library/l-pypers.html -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From guido@python.org Sat Nov 9 14:35:53 2002 From: guido@python.org (Guido van Rossum) Date: Sat, 09 Nov 2002 09:35:53 -0500 Subject: [Python-Dev] Pickling Question In-Reply-To: Your message of "Sat, 09 Nov 2002 07:54:33 CST." <200211090754.33492.pobrien@orbtech.com> References: <200211090754.33492.pobrien@orbtech.com> Message-ID: <200211091435.gA9EZrU22077@pcp02138704pcs.reston01.va.comcast.net> > In my pickling article [1] I looked at various ways to handle schema > evolution issues. When it came to module name and location changes, I > wrote the following: > > "A module name or location change is conceptually similar to a class > name change but must be handled quite differently. That's because the > module information is stored in the pickle but is not an attribute that > can be modified through the standard pickle interface. In fact, the > only way to change the module information is to perform a search and > replace operation on the actual pickle file itself. Exactly how you > would do this depends on your operating system and the tools you have > at your disposal. And obviously this is a situation where you will want > to back up your files in case you make a mistake. But the change should > be fairly straightforward and will work equally well with the binary > pickle format as with the text pickle format." > > I don't feel that this solution is entirely satisfactory and so I > thought I would ask (a bit late, I know) whether I am completely > correct in my assertions. If not, how else can this be handled. If so, > is there any chance of adding a better way to handle this situation? > > [1] http://www-106.ibm.com/developerworks/library/l-pypers.html I don't believe a search-and-replace on a pickle can ever be safe. In a binary pickle, it might interfere with length fields. And in either kind of pickle, you might accidentally replace data that happens to look like a module name. 
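Guido's point about length fields is easy to demonstrate: a binary pickle stores short strings with a byte-count prefix, so a textual replacement that changes a string's length corrupts the stream. A quick illustration (the names 'mymodule' and 'mypackage.mymodule' are made up):

    import pickle

    blob = pickle.dumps({'note': 'mymodule'}, 1)             # binary pickle; strings carry length prefixes
    hacked = blob.replace('mymodule', 'mypackage.mymodule')
    try:
        pickle.loads(hacked)
    except Exception, e:
        print "corrupted pickle:", e                         # the stored length no longer matches the data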
I'd suggest something else instead: when you have a pickle referencing module A which has since been renamed to B, create a dummy module A that contains "from B import *". Then load the pickle, and write it back again. The loading should work because a reference to class A.C will find it (as an alias for B.C); the storing should store it as B.C because that's the real name of class C. --Guido van Rossum (home page: http://www.python.org/~guido/) From pobrien@orbtech.com Sat Nov 9 14:46:55 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Sat, 9 Nov 2002 08:46:55 -0600 Subject: [Python-Dev] Pickling Question In-Reply-To: <200211091435.gA9EZrU22077@pcp02138704pcs.reston01.va.comcast.net> References: <200211090754.33492.pobrien@orbtech.com> <200211091435.gA9EZrU22077@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211090846.55018.pobrien@orbtech.com> On Saturday 09 November 2002 08:35 am, Guido van Rossum wrote: > > I don't feel that this solution is entirely satisfactory and so I > > thought I would ask (a bit late, I know) whether I am completely > > correct in my assertions. If not, how else can this be handled. If > > so, is there any chance of adding a better way to handle this > > situation? > > > > [1] http://www-106.ibm.com/developerworks/library/l-pypers.html > > I don't believe a search-and-replace on a pickle can ever be safe. > In a binary pickle, it might interfere with length fields. And in > either kind of pickle, you might accidentally replace data that > happens to look like a module name. Yes. And that's why this was nagging me. I should have asked about it sooner. > I'd suggest something else instead: when you have a pickle > referencing module A which has since been renamed to B, create a > dummy module A that contains "from B import *". Then load the > pickle, and write it back again. The loading should work because a > reference to class A.C will find it (as an alias for B.C); the > storing should store it as B.C because that's the real name of class I like this. Wish I had thought of it. Thanks. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From DavidA@ActiveState.com Sun Nov 10 01:47:42 2002 From: DavidA@ActiveState.com (David Ascher) Date: Sat, 09 Nov 2002 17:47:42 -0800 Subject: [Python-Dev] Reindenting unicodedata.c References: <15820.8242.637626.609642@gargle.gargle.HOWL> <200211082039.gA8KdKg17289@pcp02138704pcs.reston01.va.comcast.net> <3DCC5723.4090405@ActiveState.com> Message-ID: <3DCDBABE.1010803@ActiveState.com> Martin v. Loewis wrote: >David Ascher writes: > > > >>>If it's good enough for the Linux kernel, it's good enough for Python. >>> >>> >>Oh, so we should stop using CVS and start using mboxes instead? . >> >> > >You are behind the most recent developments (I think): We should then >be using BitKeeper, now. > > Yeah, I know. But bad prior decisions can be reversed. If Linus can change his mind about SCC, then I can only hope that Guido can too w.r.t. indentations. The only reason I won't argue for 4-space indents is that Guido seems to be stuck in his arcane ways. Luckily, he has other redeeming qualities. 
;-) --david From skip@manatee.mojam.com Sun Nov 10 13:00:31 2002 From: skip@manatee.mojam.com (Skip Montanaro) Date: Sun, 10 Nov 2002 07:00:31 -0600 Subject: [Python-Dev] Weekly Python Bug/Patch Summary Message-ID: <200211101300.gAAD0V2G017202@manatee.mojam.com> Bug/Patch Summary ----------------- 310 open / 3027 total bugs (-12) 99 open / 1768 total patches (no change) New Bugs -------- list slice ass ignores subtypes of list (2002-11-04) http://python.org/sf/633152 cgi.py drops commandline arguments (2002-11-04) http://python.org/sf/633504 HeaderParseError: no header value (2002-11-04) http://python.org/sf/633550 Nested class __name__ (2002-11-05) http://python.org/sf/633930 pdb quit breakage (2002-11-05) http://python.org/sf/634116 RFC 2112 in email package (2002-11-06) http://python.org/sf/634412 Don't define _XOPEN_SOURCE on OpenBSD (2002-11-07) http://python.org/sf/635034 int(1e200) should return long (2002-11-07) http://python.org/sf/635115 re.sub() coerces u'' to '' (2002-11-08) http://python.org/sf/635398 urllib.urlretrieve() with ftp error (2002-11-08) http://python.org/sf/635453 remove debug prints from macmain.c (2002-11-08) http://python.org/sf/635570 Misleading description of \w in regexs (2002-11-08) http://python.org/sf/635595 cStringIO().write TypeError (2002-11-08) http://python.org/sf/635814 2.2.2 build on Solaris (2002-11-09) http://python.org/sf/635929 No error "not all arguments converted" (2002-11-09) http://python.org/sf/635969 New Patches ----------- Problem at the end of misformed mailbox (2002-11-03) http://python.org/sf/632934 _getdefaultlocale for OS X (2002-11-03) http://python.org/sf/632973 Patch for sre bug 610299 (2002-11-04) http://python.org/sf/633359 Plural forms support for gettext (2002-11-04) http://python.org/sf/633547 Cleanup of test_strptime.py (2002-11-04) http://python.org/sf/633633 Better inspect.BlockFinder fixes bug (2002-11-06) http://python.org/sf/634557 general corrections to 2.2.2 refman, p.1 (2002-11-07) http://python.org/sf/634866 os.tempnam behavior in Windows (2002-11-08) http://python.org/sf/635656 make some type attrs writable (2002-11-09) http://python.org/sf/635933 Filter unicode into unicode (2002-11-09) http://python.org/sf/636005 Typo in PEP249 (2002-11-10) http://python.org/sf/636159 Closed Bugs ----------- Ugly traceback for DistutilsPlatformError (2001-02-20) http://python.org/sf/233259 g++ must be called for c++ extensions (2001-04-03) http://python.org/sf/413582 non-greedy regexp duplicating match bug (2001-06-01) http://python.org/sf/429357 freeze: global symbols not exported (2001-06-25) http://python.org/sf/436131 distutils cannot link C++ code with GCC (2001-08-21) http://python.org/sf/454030 always return 0 command status (2001-09-07) http://python.org/sf/459705 ability to specify a 'verify' script (2001-09-28) http://python.org/sf/466200 sre bug with nested groups (2001-10-12) http://python.org/sf/470582 remember to sync trees (2002-03-25) http://python.org/sf/534669 installation atop 2.2 fails (2002-04-12) http://python.org/sf/543244 cgitb variable dumps a little flaky (2002-04-26) http://python.org/sf/549038 unknown locale de_DE@euro on Suse 8.0 Linux (2002-05-10) http://python.org/sf/554676 FixTk.py logic wrong (2002-06-04) http://python.org/sf/564729 bdist_rpm and the changelog option (2002-06-18) http://python.org/sf/570655 Provoking infinite scanner loops (2002-07-13) http://python.org/sf/581080 win32 build_ext problem (2002-09-24) http://python.org/sf/614051 fpectl module broken on Linux (2002-09-24) 
http://python.org/sf/614060 tempfile crashes (2002-10-15) http://python.org/sf/623464 missing names in telnetlib.py (2002-10-19) http://python.org/sf/625823 curses library not found during make (2002-10-23) http://python.org/sf/627864 Set constructor fails with NameError (2002-10-24) http://python.org/sf/628246 bdist_rpm target breaks with rpm 4.1 (2002-10-28) http://python.org/sf/630195 Mac OS 10 configure problem (2002-10-30) http://python.org/sf/631247 Tkinter (?) refct (?) bug (2002-11-01) http://python.org/sf/632323 Typo string instead of sting in LibDoc (2002-11-03) http://python.org/sf/632864 Closed Patches -------------- Fix for sre bug 470582 (2002-03-08) http://python.org/sf/527371 ConfigParser: optional section header (2002-04-26) http://python.org/sf/549037 LDFLAGS support for build_ext.py (2002-07-30) http://python.org/sf/588809 rm email package dependency on rfc822.py (2002-09-23) http://python.org/sf/613434 getframe hook (Psyco #1) (2002-10-01) http://python.org/sf/617309 Tiny profiling info (Psyco #2) (2002-10-01) http://python.org/sf/617311 debugger-controlled jumps (Psyco #3) (2002-10-01) http://python.org/sf/617312 gzip.py and files > 2G (2002-10-03) http://python.org/sf/618135 telnetlib.py: don't block on IAC and enhancement (2002-10-29) http://python.org/sf/630829 Exceptions raised by line trace function (2002-10-30) http://python.org/sf/631276 New pdb command "pp" (2002-10-31) http://python.org/sf/631678 From marc@informatik.uni-bremen.de Sun Nov 10 17:23:17 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 10 Nov 2002 18:23:17 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> Message-ID: <1036948999.746.102.camel@leeloo.intern.geht.de> --=-+VgZ8mFH+pNg5ehotgxF Content-Type: text/plain Content-Transfer-Encoding: 7bit > I don't like this approach. For -CURRENT, I would outright reject any > such patch; there should be a way to enable extensions even if A somewhat simpler solution/work-around would be to define __BSD_VISIBLE (patch attached). But the cleanest way would be to not define _XOPEN_SOURCE, XOPEN_SOURCE_EXTENDED and _POSIX_C_SOURCE on FreeBSD 5. > _POSIX_C_SOURCE is defined. Perhaps they reconsider until they release > the system. Unlikely. The release is scheduled for the 20th of November. > OTOH, why absence of chroot a problem? Should not HAVE_CHROOT be > undefined if chroot is hidden? It isn't. Regards, Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth --=-+VgZ8mFH+pNg5ehotgxF Content-Disposition: attachment; filename=pyconfig.h.in.diff Content-Transfer-Encoding: quoted-printable Content-Type: text/x-patch; name=pyconfig.h.in.diff; charset=ISO-8859-1 *** pyconfig.h.in.orig Sun Nov 10 17:59:23 2002 --- pyconfig.h.in Sun Nov 10 18:01:04 2002 *************** *** 805,810 **** --- 805,813 ---- /* Define to activate Unix95-and-earlier features */ #undef _XOPEN_SOURCE_EXTENDED =20 + /* Enable extensions for FreeBSD. */ + #undef __BSD_VISIBLE +=20 /* Define to 1 if type `char' is unsigned and you are not using gcc. */ #ifndef __CHAR_UNSIGNED__ # undef __CHAR_UNSIGNED__ --=-+VgZ8mFH+pNg5ehotgxF Content-Disposition: attachment; filename=configure.in.diff Content-Transfer-Encoding: quoted-printable Content-Type: text/x-patch; name=configure.in.diff; charset=ISO-8859-1 *** configure.in.orig Sun Nov 10 17:59:34 2002 --- configure.in Sun Nov 10 18:02:01 2002 *************** *** 43,48 **** --- 43,49 ---- # we define it globally. 
AC_DEFINE(_XOPEN_SOURCE_EXTENDED, 1, Define to activate Unix95-and-earlier features) AC_DEFINE(_POSIX_C_SOURCE, 199506L, Define to activate features from IEEE Stds 1003.{123}-1995) + AC_DEFINE(__BSD_VISIBLE, 1, Enable extensions for FreeBSD.) # Arguments passed to configure. AC_SUBST(CONFIG_ARGS) From marc@informatik.uni-bremen.de Sun Nov 10 18:49:41 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 10 Nov 2002 19:49:41 +0100 Subject: [Python-Dev] [PATCH] compile fixes for FreeBSD 5 (-current) Message-ID: <1036954184.746.112.camel@leeloo.intern.geht.de> Hi! I've made some patches against the current CVS version to get it to build on FreeBSD 5-current. If built with "--without-pymalloc" it passes all tests except "test_re" (Bus error). With "--with-pymalloc" the install core-dumps. Regards, Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth [attachment: FreeBSD_fixes.tgz (application/x-gzip, base64-encoded patch archive) omitted]
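As an aside on chasing a crash like the "test_re" bus error above, the usual first steps are to run the single test by itself and, if it still crashes, rerun it under a debugger to get a C-level backtrace. A sketch, assuming a Unix build tree with gdb installed:

    $ ./python Lib/test/regrtest.py -v test_re
    $ gdb ./python
    (gdb) run Lib/test/test_re.py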
0/FY4Eoh9cKpTDztKUrIe4pShPjgtxpOFcoB98EWOpTv+53GUE7f+G+ZlUxAC/QJrEwXZuTZIxiA NT6DMX91zTBv/ESm9FHYsrN+981YHoQk4yvfr/M8RveJvBTnYBbZ6ZPese+4zgLqm9303Y/ECzEc DvKLpY6sC6HWcjWd9QDGkviGYLvLpWk5nvWEzBSvMA2lcjQzJZaZcinIjA/Ifk+LOCb4wjPezjyy EEacnc5e2Q6SQXZlwGuFEGTMxjuEWM1a/My1nNzONmtDgt8dIrJIuAMflqatf1B983WEcn+eq/au f6qnOJTluU7OTDExatE5htShtgO8/z2rT0uuk4I6pNBUKiF0FRanajEaumoVp2rFrdBF+bxr6Laj qZyT2Kq/lS3wv0R9XRBQ6lUrNs9GUx6kgwLJzZBg703hWhjW3q6oNRo01XVtIj06/u3kOI7jOI7j OI7jOI7j/278CZS+DNgAKAAA --=-+Z6zFzeGQ0E6xLHjbrVa-- --=-sBYLt0RAvRhOvbWu8wIH Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.1 (FreeBSD) iD8DBQA9zqpF7YQCetAaG3MRAqthAJ0d5yFyw/48hpvk9vn736Yh1SJSUQCfX4D2 QTm/JPKyUtwKa9dnI/EIvUA= =ccoI -----END PGP SIGNATURE----- --=-sBYLt0RAvRhOvbWu8wIH-- From Gerson.Kurz@t-online.de Sun Nov 10 19:22:19 2002 From: Gerson.Kurz@t-online.de (Gerson Kurz) Date: Sun, 10 Nov 2002 20:22:19 +0100 Subject: [Python-Dev] bdb.py change from 1.31 to 1.31.2.1 Message-ID: I have a problem with the current version of Pythonwin that I think I have tracked down to a change in bdb.py change 1.31 to 1.31.2.1, which dates 2002/05/29. Before you moan: "Pythonwin <> Python", please read on. * PROBLEM DESCRIPTION * In old versions of Pythonwin, pressing F5 = Debugger\Go when a source is active results in the code getting executed normally. Only when the debugger hits a breakpoint, (or an exception), the debugger is actually entered. In the current version of Pythonwin (that is, "PythonWin 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32." using win32all build 150), pressing F5 = Debugger\Go will stop on the first line. Actually, it is worse, because the current version doesn't break on the first line of code, but somewhere inside pywin.debugger.debugger.Debugger. There was a thread about this recently on clp, and Mark Hammond proposed a fix (see ) which takes on this bit - but doesn't change the "Go = break at first line of code" behaviour. * BUT IS IT REALLY A PROBLEM * I think so, because a) it is a (for me unexpected) change in behaviour from Pythonwin versions, b) it is redundant - because you can always use "Debugger\Step in" to step into the source using the first line if you really want to c) I know, people will stare at me madly for saying this, but in other Debuggers I've worked with, "Debugger\Go" does not mean "Debugger\Go but only after another Go after the first line", but, rather, "Debugger\Go", as one would think the name indicates. * WORKAROUND * After lots of debugging I narrowed the problem down to bdb.py, and further to dispatch_call(). It seems that in the old version, the code read # First call of dispatch since reset() self.botframe = frame and was changed to # First call of dispatch since reset() # (CT) Note that this may also be None! self.botframe = frame.f_back When I revert the change (i.e. use "frame" rather than "frame.f_back") there is no initial callback, and everything works fine. I'm not quite clear just as to *why* it works, but it *does* work. FYI: The code actually breaks later in stop_here(), which was *also* changed as part of that particular checkin. * QUESTION * Actually, two questions 1) Can somebody explain that change to me? What was it intended to solve? (I don't know what the suffix "CT" stands for - there is no username to that effect in the python sourceforge page that I could find.) 2) Is it a bug or a feature (i.e. 
a pythonwin bug rather than a bdb.py bug) or not-a-bug-at-all ? From fredrik@pythonware.com Sun Nov 10 19:25:02 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Sun, 10 Nov 2002 20:25:02 +0100 Subject: [Python-Dev] [PATCH] compile fixes for FreeBSD 5 (-current) References: <1036954184.746.112.camel@leeloo.intern.geht.de> Message-ID: <02d901c288ee$e08260b0$ced241d5@hagrid> Marc Recht wrote: > I've made some patches against the current CVS version to get it to build > on FreeBSD 5-current. If built with "--without-pymalloc" it passes all > tests except "test_re" (Bus error). try increasing the stack size. From marc@informatik.uni-bremen.de Sun Nov 10 19:46:26 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 10 Nov 2002 20:46:26 +0100 Subject: [Python-Dev] [PATCH] compile fixes for FreeBSD 5 (-current) In-Reply-To: <02d901c288ee$e08260b0$ced241d5@hagrid> References: <1036954184.746.112.camel@leeloo.intern.geht.de> <02d901c288ee$e08260b0$ced241d5@hagrid> Message-ID: <1036957587.746.115.camel@leeloo.intern.geht.de> > > I've made some patches against the current CVS version to get it to build > > on FreeBSD 5-current. If built with "--without-pymalloc" it passes all > > tests except "test_re" (Bus error). > > try increasing the stack size. Increasing the stack size to 32MB didn't help. Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth From martin@v.loewis.de Sun Nov 10 20:40:55 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 10 Nov 2002 21:40:55 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <1036948999.746.102.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > > I don't like this approach. For -CURRENT, I would outright reject any > > such patch; there should be a way to enable extensions even if > A somewhat simpler solution/work-around would be to define __BSD_VISIBLE > (patch attached). But the cleanest way would be to not define > _XOPEN_SOURCE, XOPEN_SOURCE_EXTENDED and _POSIX_C_SOURCE on FreeBSD 5. Notice that issues are different on the various BSDs. I think of Python on Unix as "POSIX+Extensions". On all POSIX systems, _XOPEN_SOURCE should be defined. If additional defines are needed to activate extensions, we should define them. If FreeBSD has no mechanisms to request extensions other than defining __BSD_VISIBLE, we should define it. Before doing so, I'd like to know what Python features would need that. Please don't post patches to python-dev. > > OTOH, why absence of chroot a problem? Should not HAVE_CHROOT be > undefined if chroot is hidden? > It isn't. Ok, then this needs to be investigated. This is a clear bug.
Regards, Martin From marc@informatik.uni-bremen.de Sun Nov 10 21:10:57 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 10 Nov 2002 22:10:57 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> Message-ID: <1036962658.746.129.camel@leeloo.intern.geht.de> > > (patch attached). But the cleanest way would be to not define > > _XOPEN_SOURCE, XOPEN_SOURCE_EXTENDED and _POSIX_C_SOURCE on FreeBSD 5. > > Notice that issues are different on the various BSDs. I know. And if there are issues they should be resolved for each BSD. > I think of Python on Unix as "POSIX+Extensions". On all POSIX systems, > _XOPEN_SOURCE should be defined. If additional defines are needed to > activate extensions, we should define them. The clean way on FreeBSD is then _not_ to define the above defines. We then get everything we want. If you define _POSIX_C_SOURCE you get _POSIX_C_SOURCE. Not more, not less.. > If FreeBSD has no mechanisms to request extensions other than defining > __BSD_VISIBLE, we should define it. Before doing so, I'd like to know > what Python features would need that. Setting __BSD_VISIBLE is rather a hack and shouldn't be done.. Please have a look at the patch, which I submitted on SourceForge. (It's somewhat like Hye-Shik Chang's patch.) > Please don't post patches to python-dev. Sorry.. Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth From recht@contentmedia.de Sun Nov 10 21:12:40 2002 From: recht@contentmedia.de (Marc Recht) Date: 10 Nov 2002 22:12:40 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <20021104192937.GA78845@fallin.lv> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> Message-ID: <1036962763.746.132.camel@leeloo.intern.geht.de> > But, this build gets segfault on installing. > > ... make install ... > ./python -E ./setup.py install --prefix=/home/perky/test --install-scripts=/home/perky/test/bin --install-platlib=/home/perky/test/lib/python2.3/lib-dynload > running install > running build > running build_ext > Segmentation fault (core dumped) > *** Error code 139 Try with "--without-pymalloc"; that solved this issue for me. But the tests still fail in "test_re". Marc From martin@v.loewis.de Sun Nov 10 22:00:14 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 10 Nov 2002 23:00:14 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <1036962658.746.129.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > The clean way on FreeBSD is then _not_ to define the above defines. We > then get everything we want. > If you define _POSIX_C_SOURCE you get _POSIX_C_SOURCE. Not more, not > less.. Can you elaborate? We also define _XOPEN_SOURCE. So we are entitled to get all functions defined for UNIX.
I consider it a bug that FreeBSD does not provide a mode for "Conforming XSI Application Using Extensions", according to http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap02.html > Setting __BSD_VISIBLE is rather a hack and shouldn't be done.. If it is a hack to work around a bug in FreeBSD, then I think it is acceptable. > Please have a look at the patch, which I submitted on SourceForge. It is way too large to be acceptable, and takes a "I care only about one system" position. Try writing your code in a way so that it simultaneously works with many systems, instead of special-casing each system individually. For example, why is it necessary to move the enable-framework processing? Regards, Martin From marc@informatik.uni-bremen.de Sun Nov 10 22:31:35 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 10 Nov 2002 23:31:35 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> Message-ID: <1036967497.746.161.camel@leeloo.intern.geht.de> > Can you elaborate? We also define _XOPEN_SOURCE. So we are entitled to > get all functions defined for UNIX. Hmm.. I'm not that standards guy and didn't read that out of that document. It enables the extensions. (Though I'm not that sure which those are..) > I consider it a bug that FreeBSD does not provide a mode for > "Conforming XSI Application Using Extensions", according to FreeBSD does support that. _XOPEN_SOURCE = 500 means: #define __XSI_VISIBLE 500 #undef _POSIX_C_SOURCE #define _POSIX_C_SOURCE 199506 > http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap02.html > > Setting __BSD_VISIBLE is rather a hack and shouldn't be done.. > > If it is a hack to work around a bug in FreeBSD, then I think it is > acceptable. I still don't think that's a bug in FreeBSD. But I send you sys/cdefs.h and unistd.h, so you can have a look for yourself. Maybe I'm getting something wrong here. IMO it's just a strict interpretation of the standard. > It is way too large to be acceptable, and takes a "I care only about Why? There are only two actual changes: the case for FreeBSD and the addition of _THREAD_SAFE to the CFLAGS if Python is compiled with threads. The rest just moves the MACHDEP stuff to the top of configure.in. > one system" position. Try writing your code in a way so that it I don't think so.. Sometimes systems need special treatment... > simultaneously works with many systems, instead of special-casing each > system individually. I didn't see another way to do it. The defines were always set and that just doesn't work for FreeBSD. > For example, why is it necessary to move the enable-framework > processing? To define MACHDEP before the XOPEN/POSIX stuff, because of the FreeBSD case. Regards Marc -- "Premature optimization is the root of all evil." -- Donald E.
Knuth --=-j2lb3JvQTinZU3q9S/38 Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.1 (FreeBSD) iD8DBQA9zt5H7YQCetAaG3MRAnVxAJ9ldKLv3Z0X0GVbgdS8ZPMe7l8LJgCfcuMI qWu/ENcza+jCRmff4MU5+vs= =uhdo -----END PGP SIGNATURE----- --=-j2lb3JvQTinZU3q9S/38-- From mhammond@skippinet.com.au Sun Nov 10 23:56:49 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Mon, 11 Nov 2002 10:56:49 +1100 Subject: [Python-Dev] bdb.py change from 1.31 to 1.31.2.1 In-Reply-To: Message-ID: > I have a problem with the current version of Pythonwin that I think I have > tracked down to a change in bdb.py change 1.31 to 1.31.2.1, which dates > 2002/05/29. Before you moan: "Pythonwin <> Python", please read on. Well, Pythonwin <> Python . The executive summary of the next 5 paras: Pythonwin simply worked around a few bdb bugs without bothering to report them. Python subsequently fixed some. Pythonwin broke. mea culpa. Pythonwin worked around a number of "problems" with pdb. A basic debugger has existed for Pythonwin since before Python 1.5.2, and Pythonwin's debugger itself has gone through a number of revisions. pdb was really the only reasonable debugger out there, so bdb has problems that pdb didn't see - so noone saw them. Pythonwin's debugger is complex, mainly due to the "recursive" nature of a GUI debugger doing in-thread debugging of GUI code. Python's debugger is complex, simply as debugging is complex, and the variety of conditions under which your hooks are called can be confusing. Thus, I suffer from the perennial problem of it taking many hours for me to get my head completely around the problem, and only minutes of distraction to lose it again . Thus, sorting out exactly who's fault each problem was, and building demonstrable test cases for bdb shortcomings never really got off the ground. This is further compounded by the fact that I (obviously) rarely use the Pythonwin debugger myself! I rarely run programs inside Pythonwin at all these days - the "in-process" nature coupled with my COM work etc means that running inside Pythonwin means I need to shutdown Pythonwin to (eg) shutdown Outlook/Word cleanly. So I just alt-tab to a command-prompt, and run it there. I generally do my Pythonwin based debugging in the same way - "pywin.debugger.brk()" in the source, and off I go in a new process. I'd love to give Pythonwin out-of-process capabilities. I'd love to do all sorts of things > There was a thread about this recently on clp, and Mark Hammond proposed a > fix (see ) which > takes on this bit - but doesn't change the "Go = break at first line of > code" behaviour. Oops :( Note that the fix was to remove one of the work-arounds I mentioned above. I found it necessary to add an "artificial" call frame when starting debugging, and now it appears I dont. The "Go" fix will be similar when I look for it - removing some pythonwin specific hack! > * BUT IS IT REALLY A PROBLEM * > > I think so, because Yep, it is - but not for the people on this list! Pythonwin and win32 specific stuff isn't discussed on this list. http://mail.python.org/mailman/listinfo/python-win32 is a list where it would be on-topic. > After lots of debugging I narrowed the problem down to bdb.py, and further > to dispatch_call(). It seems that in the old version, the code read As I mentioned, this is complex. I can't answer your questions without getting my head completely around the problem again (see above!) 
As a clue: > I'm not quite clear just > as to *why* it works, but it *does* work. While I can relate to the feeling, it doesn't really help. *Someone* is going to have to analyze this enough to answer the "why". Without such a person, this will flounder. It is on my list of things to do > 2) Is it a bug or a feature (i.e. a pythonwin bug rather than a > bdb.py bug) Certainly a bug in Pythonwin. Mark. From guido@python.org Mon Nov 11 00:52:52 2002 From: guido@python.org (Guido van Rossum) Date: Sun, 10 Nov 2002 19:52:52 -0500 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: Your message of "10 Nov 2002 22:12:40 +0100." <1036962763.746.132.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036962763.746.132.camel@leeloo.intern.geht.de> Message-ID: <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> > Try with "--without-pymalloc" that solved this issue for me. But the > tests still fail in "test_re". Marc, pymalloc is supposed to be bulletproof. If there's a segfault that can be avoided by disabling pymalloc, that's a bug in pymalloc. Would you mind helping us find this bug? --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Mon Nov 11 01:02:59 2002 From: guido@python.org (Guido van Rossum) Date: Sun, 10 Nov 2002 20:02:59 -0500 Subject: [Python-Dev] bdb.py change from 1.31 to 1.31.2.1 In-Reply-To: Your message of "Mon, 11 Nov 2002 10:56:49 +1100." References: Message-ID: <200211110102.gAB12xo13770@pcp02138704pcs.reston01.va.comcast.net> > I'd love to give Pythonwin out-of-process capabilities. This has been added to IDLE (in the idlefork project at SF). I know others have done this too (Wing IDE, PythonWorks, Komodo). I wonder if some sort of standard solution could be worked out for this? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@comcast.net Mon Nov 11 01:28:24 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 10 Nov 2002 20:28:24 -0500 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> Message-ID: [Guido] > Marc, pymalloc is supposed to be bulletproof. Not necessarily. It's not possible to write a memory manager in 100% std C, and some platform may be bizarre enough that pymalloc just can't be used on it. We haven't found one yet, though. Simple example: if some platform requires 16-byte aligned addresses, pymalloc will plain fail on that platform. Or, I'm unsure what would happen on a word-adressed machine, but I am sure it wouldn't be pretty . Or the OS and/or HW may do read protection at the byte level, so that pymalloc's belief that it can always read stuff from "the next lower pool boundary" is wrong here. There are lots of things that *could* go wrong. > If there's a segfault that can be avoided by disabling pymalloc, that's > a bug in pymalloc. Again not necessarily: there's tricky casting code throughout pymalloc (as there is in any low-level memory manager), and a compiler optimization and/or codegen bug would be my first guess. So the first thing I'd do is recompile pymalloc without optimization. > Would you mind helping us find this bug? Please yes -- there's no way to guess. From frank.horowitz@csiro.au Mon Nov 11 05:38:50 2002 From: frank.horowitz@csiro.au (Frank Horowitz) Date: 11 Nov 2002 13:38:50 +0800 Subject: [Python-Dev] Continuous Sets in 2.3 set implementation? 
Message-ID: <1036993130.29862.15.camel@bonzo> G'Day folks, I've had a quick read through of the CVS version of the set.py code. The class as implemented looks nice and clean for what I would call "discrete sets" (e.g. iterable). (I do like the idea of using the dict implementation!) However, one thing bothers me. Certain operations, such as unions, intersections, and symmetric differences, make sense for "continuous sets" too (think about intervals on the line). I know it's probably too late in the development cycle to get any meaningful continuous set operations supported, but is there any hope of renaming the implementation to something along the lines of "dsets" to remind the user of the discreteness? (Am I being too pedantic?) Cheers, Frank Horowitz From tim.one@comcast.net Mon Nov 11 05:50:14 2002 From: tim.one@comcast.net (Tim Peters) Date: Mon, 11 Nov 2002 00:50:14 -0500 Subject: [Python-Dev] Continuous Sets in 2.3 set implementation? In-Reply-To: <1036993130.29862.15.camel@bonzo> Message-ID: [Frank Horowitz] > I've had a quick read through of the CVS version of the set.py code. The > class as implemented looks nice and clean for what I would call > "discrete sets" (e.g. iterable). (I do like the idea of using the dict > implementation!) > > However, one thing bothers me. Certain operations, such as unions, > intersections, and symmetric differences, make sense for "continuous > sets" too (think about intervals on the line). > > I know it's probably too late in the development cycle to get any > meaningful continuous set operations supported, but is there any hope of > renaming the implementation to something along the lines of "dsets" to > remind the user of the discreteness? (Am I being too pedantic?) No and yes. Discrete sets have been suggested as a Python addition non-stop for at least 10 years. This is the (I think) second time I've heard someone ask for intervals. So like when you import math you get real-valued math functions, and need to import cmath if you want complex-valued ones, even more so when you import sets you should get the flavor of set 99.9999167% of the world's population expects to get. The other two of you can create a new module with a clumsier name (I suggest intset.py, just to confuse newbies ). From martin@v.loewis.de Mon Nov 11 05:57:48 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 06:57:48 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <1036967497.746.161.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > > I consider it a bug that FreeBSD does not provide a mode for > > "Conforming XSI Application Using Extensions", according to > FreeBSD does support that. _XOPEN_SOURCE = 500 means: > #define __XSI_VISIBLE 500 > #undef _POSIX_C_SOURCE > #define _POSIX_C_SOURCE 199506 So how do I request extensions on that system, beyond the functions defined in XPG/5 or XPG/6? > > If it is a hack to work around a bug in FreeBSD, then I think it is > > acceptable. > I still not think that's a bug in FreeBSD. But I send you sys/cdefs.h > and unistd.h, so you can have a look for yourself. Maybe I'm getting > something wrong here. IMO it's just a strict interpretation of the > standard. I have looked at them before, hence my claim that I think FreeBSD has a bug here. > Why? 
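As a rough sketch of the gating mechanism under discussion (hypothetical SKETCH_* names, loosely modelled on the sys/cdefs.h scheme quoted above, not FreeBSD's actual header), requesting a strict standard hides everything that is not part of it:

/* gate_sketch.c - hypothetical illustration of feature-test-macro gating.
 * Compile with a strict request, e.g. cc -D_POSIX_C_SOURCE=199506L gate_sketch.c,
 * and the extension gate closes; compile with no request and it stays open.
 */
#include <stdio.h>

#if defined(_POSIX_C_SOURCE) || defined(_XOPEN_SOURCE)
/* A standard was requested: expose only what that standard defines. */
#define SKETCH_POSIX_VISIBLE 1
#define SKETCH_BSD_VISIBLE   0
#else
/* No standard requested: the default environment exposes the extensions too. */
#define SKETCH_POSIX_VISIBLE 1
#define SKETCH_BSD_VISIBLE   1
#endif

#if SKETCH_BSD_VISIBLE
/* In a real system header, non-standard typedefs and prototypes would sit
 * behind a gate like this one. */
typedef unsigned int sketch_u_int;
#endif

int main(void)
{
    printf("POSIX interfaces visible: %d\n", SKETCH_POSIX_VISIBLE);
    printf("extension interfaces visible: %d\n", SKETCH_BSD_VISIBLE);
    return 0;
}

The sketch only shows why defining _XOPEN_SOURCE or _POSIX_C_SOURCE can make otherwise-available declarations disappear; the exact gate names and values on any given BSD differ.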
There are only two actual changes the case for FreeBSD and the > addition of _THREAD_SAFE to the CFLAGS if Python is compiled with > threads. The rest just moves the MACHDEP stuff to the top of > configure.in. I misread the patch: That it moves the MACHDEP stuff is not visible. Instead, it moves the --enable-framework stuff downwards. Still, you add a case with "FreeBSD", and "rest of the world", the "rest of the world" case being where _GNU_SOURCE, _XOPEN_SOURCE, _XOPEN_SOURCE_EXTENDED, and _POSIX_C_SOURCE is defined. Why does it hurt if either _GNU_SOURCE or _XOPEN_SOURCE_EXTENDED would be defined on FreeBSD? > > one system" position. Try writing your code in a way so that it > I don't think so.. Sometimes systems need special treatment... Only if all other options have been exhausted. What problems occur if _XOPEN_SOURCE is defined? > > simultaneously works with many systems, instead of special-casing each > > system individually. > I didn't saw an other way to do it. The defines were always set and that > just doesn't work for FreeBSD. Did you try defining _GNU_SOURCE and _XOPEN_SOURCE_EXTENDED? I can see that you don't want _POSIX_C_SOURCE to be defined, either. Regards, Martin From mal@lemburg.com Mon Nov 11 08:40:52 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 11 Nov 2002 09:40:52 +0100 Subject: [Python-Dev] Reindenting unicodedata.c References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DCF6D14.3000502@lemburg.com> Martin v. Loewis wrote: > Guido van Rossum writes: > > >>>I don't know whether these patches are major changes; I would (unless >>>advised otherwise) reindent them in a single commit, then update the >>>patches for MAL to reconsider them. >> >>Why does MAL prefer 4 spaces? I'm not really sure why, but one apparent reason is that Python uses the same indent. Also, 8 spaces or TABs don't "look right" too me - obviously, that's a matter of taste and hard to argue about. > I'm not sure; he did not answer this question. Sorry, I am too busy with other things at the moment. > I'm not even sure > whether he prefers 4 spaces, or whether he merely prefers not to > change it. Just like Fredrik, I prefer 4 spaces indents over TABs and since the RE and Unicode code base used these 4 spaces indents all along, I'd also prefer to keep it that way. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From esr@thyrsus.com Mon Nov 11 09:08:45 2002 From: esr@thyrsus.com (Eric S. Raymond) Date: Mon, 11 Nov 2002 04:08:45 -0500 Subject: [Python-Dev] Continuous Sets in 2.3 set implementation? In-Reply-To: <1036993130.29862.15.camel@bonzo> References: <1036993130.29862.15.camel@bonzo> Message-ID: <20021111090845.GA6879@thyrsus.com> Frank Horowitz : > I know it's probably too late in the development cycle to get any > meaningful continuous set operations supported, but is there any hope of > renaming the implementation to something along the lines of "dsets" to > remind the user of the discreteness? (Am I being too pedantic?) You're being too pedantic. Even most mathematicians tend to assume a "set" will be drawn from a finite or discretely-infinite universe unless the specific context points at a nondenumerable one. 
Most mathematicians (other than metamathematicians) would be more likely to think of a continuous subset of the reals as an `interval'. -- Eric S. Raymond

From marc@informatik.uni-bremen.de Mon Nov 11 10:00:33 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 11 Nov 2002 11:00:33 +0100 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036962763.746.132.camel@leeloo.intern.geht.de> <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <1037008835.746.204.camel@leeloo.intern.geht.de>

> Marc, pymalloc is supposed to be bulletproof. If there's a segfault > that can be avoided by disabling pymalloc, that's a bug in pymalloc. > Would you mind helping us find this bug? No problem. Environment: FreeBSD-current (Nov. 8th), Python CVS Sources Nov. 11th (~ 10:00 CEST). limit: cputime unlimited filesize unlimited datasize 512MB stacksize 64MB coredumpsize unlimited memoryuse unlimited memorylocked unlimited maxproc 7390 descriptors 32768 sockbufsize unlimited vmemorysize unlimited I compiled Python with no optimization flags set and my changes to pyconfig.h.in/configure.in (removes XOPEN_*,POSIX_* in the FreeBSD case to get it compiled). The only configure argument was prefix. Error
/usr/bin/install -c ./install-sh /opt/local/python/lib/python2.3/config/install-sh
./python -E ./setup.py install \ --prefix=/opt/local/python \ --install-scripts=/opt/local/python/bin \ --install-platlib=/opt/local/python/lib/python2.3/lib-dynload
running install
running build
running build_ext
gmake: *** [sharedinstall] Segmentation fault (core dumped)

713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) {
(gdb) bt
#0 0x080779c0 in PyObject_Free (p=0x800) at Objects/obmalloc.c:713
#1 0x080e00a0 in function_call (func=0x82641ec, arg=0x8258b8c, kw=0x826bbdc) at Objects/funcobject.c:481
#2 0x080599fb in PyObject_Call (func=0x82641ec, arg=0x8258b8c, kw=0x826bbdc) at Objects/abstract.c:1688
#3 0x080a7950 in ext_do_call (func=0x82641ec, pp_stack=0xbfbfdfa4, flags=2, na=1, nk=0) at Python/ceval.c:3453
#4 0x080a4b64 in eval_frame (f=0x81d400c) at Python/ceval.c:2043
#5 0x080a5e1e in PyEval_EvalCodeEx (co=0x820e620, globals=0x819a57c, locals=0x0, args=0x831ad70, argcount=1, kws=0x831ad74, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2554
#6 0x080a73c3 in fast_function (func=0x82378ec, pp_stack=0xbfbfe194, n=1, na=1, nk=0) at Python/ceval.c:3297
#7 0x080a72af in call_function (pp_stack=0xbfbfe194, oparg=0) at Python/ceval.c:3266
#8 0x080a4a50 in eval_frame (f=0x831ac0c) at Python/ceval.c:2009
[...]
(gdb) l
708
709 if (p == NULL) /* free(NULL) has no effect */
710 return;
711
712 pool = POOL_ADDR(p);
713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) {
714 /* We allocated this address. */
715 LOCK();
716 /*
717 * Link p to the start of the pool's freeblock list. Since

Here ?

(gdb) p pool
$1 = (struct pool_header *) 0x0
(gdb) p p
$2 = (void *) 0x800
(gdb) x 0x800
0x800: Cannot access memory at address 0x800

This doesn't happen if either --without-pymalloc or --with-pydebug is given. HTH, Marc -- "Premature optimization is the root of all evil." -- Donald E.
Knuth

From marc@informatik.uni-bremen.de Mon Nov 11 10:25:20 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 11 Nov 2002 11:25:20 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> Message-ID: <1037010321.746.226.camel@leeloo.intern.geht.de>

> So how do I request extensions on that system, beyond the functions > defined in XPG/5 or XPG/6? The used standard is defined by _POSIX_SOURCE, _POSIX_C_SOURCE, _ANSI_SOURCE and _C99_SOURCE. In that order (sys/cdefs.h). These set the internal defines __POSIX_VISIBLE, __XSI_VISIBLE, __BSD_VISIBLE, __ISO_C_VISIBLE according to the given standard. If nothing is given then #define __POSIX_VISIBLE 200112 #define __XSI_VISIBLE 600 #define __BSD_VISIBLE 1 #define __ISO_C_VISIBLE 1999 is set. In this environment is everything we want. > I have looked at them before, hence my claim that I think FreeBSD has There have been a lot of changes between September and late October. > a bug here. I'm still not convinced that FreeBSD's behaviour is a bug. IMO it's only strict. If you define a standard you get, if not you get all. Although I'm quite sure the FreeBSD community doesn't like it I'll make a patch against sys/defs.h to put a define _BSD_SOURCE at the top of the chain which sets the default environment and post it to the freebsd-current discussion list. > Still, you add a case with "FreeBSD", and "rest of the world", the I'm quite sure it's an object to be extended.. There are some strange systems out there which all need special treatment.. > "rest of the world" case being where _GNU_SOURCE, _XOPEN_SOURCE, > _XOPEN_SOURCE_EXTENDED, and _POSIX_C_SOURCE is defined. Why does it > hurt if either _GNU_SOURCE or _XOPEN_SOURCE_EXTENDED would be defined > on FreeBSD? They both don't hurt. I only moved them into the "*" case, because they're not needed for the FreeBSD case. > Only if all other options have been exhausted. What problems occur if > _XOPEN_SOURCE is defined? _XOPEN_SOURCE sets _POSIX_C_SOURCE. > Did you try defining _GNU_SOURCE and _XOPEN_SOURCE_EXTENDED? I can see It's no problem to define them. They're just not needed. > that you don't want _POSIX_C_SOURCE to be defined, either. If it's set then we don't get __BSD_VISIBLE. Regards, Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth

From martin@v.loewis.de Mon Nov 11 12:24:25 2002 From: martin@v.loewis.de (Martin v.
Loewis) Date: 11 Nov 2002 13:24:25 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <1037010321.746.226.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> <1037010321.746.226.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > Although I'm quite sure the FreeBSD community doesn't like it I'll make > a patch against sys/defs.h to put a define _BSD_SOURCE at the top of the > chain which sets the default environment and post it to the > freebsd-current discussion list. Could you please call it __EXTENSIONS__? Many of the functions that fall under it are not BSD specific, and _BSD_SOURCE selects the favour-bsd API on other systems. _BSD_SOURCE would be fine for things that are traditionally in BSD, but have been superceded by competing POSIX API. In that case, Python would want to use the POSIX API. So would would not want to define _BSD_SOURCE (just as we don't want to define _OSF_SOURCE and _GNU_SOURCE, but cannot avoid doing so). > I'm quite sure it's an object to be extended.. There are some strange > systems out there which all need special treatment.. ... or just cannot be supported. Please see PEP 11, we'll be phasing out support for a number of such systems. It's just not worth the pain. Python works on Posix systems, and uses extensions where available. It does not work on strange systems; tough luck for users of such systems (we have yet to hear from a DYNIX user who wants to run Python 2.2). > > Only if all other options have been exhausted. What problems occur if > > _XOPEN_SOURCE is defined? > _XOPEN_SOURCE sets _POSIX_C_SOURCE. And why is that a problem? Regards, Martin From martin@v.loewis.de Mon Nov 11 12:31:22 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 13:31:22 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <3DCF6D14.3000502@lemburg.com> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > >>Why does MAL prefer 4 spaces? > > I'm not really sure why, but one apparent reason is that Python > uses the same indent. It doesn't. Python uses tabs for indentation in its C source code, see PEP 7. > Just like Fredrik, I prefer 4 spaces indents over TABs and since > the RE and Unicode code base used these 4 spaces indents all along, > I'd also prefer to keep it that way. I don't, since it makes editing these files more difficult for me, so I will just change the file, and be done with it. Over time, I expect that the unicodeobject.c will see a similar change - but not until a major change is done to that file simultaneously. I consider sre to be different in this respect, as the "master source" is not the Python CVS, so making an exception seems appropriate. Regards, Martin From martin@v.loewis.de Mon Nov 11 12:43:57 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 11 Nov 2002 13:43:57 +0100 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: <1037008835.746.204.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036962763.746.132.camel@leeloo.intern.geht.de> <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> <1037008835.746.204.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > #0 0x080779c0 in PyObject_Free (p=0x800) at Objects/obmalloc.c:713 > #1 0x080e00a0 in function_call (func=0x82641ec, arg=0x8258b8c, > kw=0x826bbdc) at Objects/funcobject.c:481 This looks like it is getting difficult. 0x800 is surely a garbage pointer, so it is not surprising that pymalloc crashes when it sees that pointer. Now, the question is: where does that pointer come from? This should be from funcobject.c:457, which says k = PyMem_NEW(PyObject *, 2*nk); Unless I'm mistaken, this expands to malloc(). Can you please try to confirm that malloc indeed return this 0x800? If so, it looks like the C library's malloc got confused. We would then need to find out why that happens. If you can't come up with a debugging strategy, I recommend that you try out Tim's suggestion (ie. build with optimizations disabled). In addition, I also suggest that you build with threads disabled. Of course, if you then cannot reproduce the problem anymore, this is little proof that the problem has to do with optimization, or threads :-( Regards, Martin From mal@lemburg.com Mon Nov 11 13:07:16 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 11 Nov 2002 14:07:16 +0100 Subject: [Python-Dev] Reindenting unicodedata.c References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> Message-ID: <3DCFAB84.2050508@lemburg.com> Martin v. Loewis wrote: > "M.-A. Lemburg" writes: > >>>>Why does MAL prefer 4 spaces? >> >>I'm not really sure why, but one apparent reason is that Python >>uses the same indent. > > It doesn't. Python uses tabs for indentation in its C source code, see > PEP 7. I was talking about Python source code, not C source code. I know that Guido perfers C indentation using TABs and that's why PEP 7 suggests using this scheme. I think that source files use consistent indentation on a per file basis, but no point in fighting over 4 spaces vs. tabs in them. Reindentation doesn't buy us anything accept traffic on this mailing list. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From guido@python.org Mon Nov 11 14:01:31 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 11 Nov 2002 09:01:31 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "Mon, 11 Nov 2002 09:40:52 +0100." <3DCF6D14.3000502@lemburg.com> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> Message-ID: <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> > Just like Fredrik, I prefer 4 spaces indents over TABs and since > the RE and Unicode code base used these 4 spaces indents all along, > I'd also prefer to keep it that way. 
As long as you're actively maintaining that code, I see no reason to reformat it then. But, as they say, you snooze, you lose. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Mon Nov 11 14:09:57 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 11 Nov 2002 09:09:57 -0500 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: Your message of "11 Nov 2002 11:00:33 +0100." <1037008835.746.204.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036962763.746.132.camel@leeloo.intern.geht.de> <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> <1037008835.746.204.camel@leeloo.intern.geht.de> Message-ID: <200211111409.gABE9vH15215@pcp02138704pcs.reston01.va.comcast.net> > > Marc, pymalloc is supposed to be bulletproof. If there's a segfault > > that can be avoided by disabling pymalloc, that's a bug in pymalloc. > > Would you mind helping us find this bug? > No problem. Environment: FreeBSD-current (Nov. 8th), Python CVS Sources > Nov. 11th (~ 10:00 CEST). > limit: > cputime unlimited > filesize unlimited > datasize 512MB > stacksize 64MB > coredumpsize unlimited > memoryuse unlimited > memorylocked unlimited > maxproc 7390 > descriptors 32768 > sockbufsize unlimited > vmemorysize unlimited > > I compiled Python with no optimization flags set But there are optimization flags set by default in the Makefile (at OPT=...). Can you take out the -O3 from the OPT variable and start over? As Tim Peters suggested, this may be an optimizer bug. > and my changes to > pyconfig.h.in/configure.in (removes XOPEN_*,POSIX_* in the FreeBSD case > to get it compiled). The only configure argument was prefix. > > Error > /usr/bin/install -c ./install-sh > /opt/local/python/lib/python2.3/config/install-sh > ./python -E ./setup.py install \ > --prefix=/opt/local/python \ > --install-scripts=/opt/local/python/bin \ > --install-platlib=/opt/local/python/lib/python2.3/lib-dynload > running install > running build > running build_ext > gmake: *** [sharedinstall] Segmentation fault (core dumped) > > 713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) { > > (gdb) bt > #0 0x080779c0 in PyObject_Free (p=0x800) at Objects/obmalloc.c:713 > #1 0x080e00a0 in function_call (func=0x82641ec, arg=0x8258b8c, > kw=0x826bbdc) at Objects/funcobject.c:481 > #2 0x080599fb in PyObject_Call (func=0x82641ec, arg=0x8258b8c, > kw=0x826bbdc) at Objects/abstract.c:1688 > #3 0x080a7950 in ext_do_call (func=0x82641ec, pp_stack=0xbfbfdfa4, > flags=2, na=1, nk=0) at Python/ceval.c:3453 > #4 0x080a4b64 in eval_frame (f=0x81d400c) at Python/ceval.c:2043 > #5 0x080a5e1e in PyEval_EvalCodeEx (co=0x820e620, globals=0x819a57c, > locals=0x0, args=0x831ad70, argcount=1, kws=0x831ad74, kwcount=0, > defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2554 > #6 0x080a73c3 in fast_function (func=0x82378ec, pp_stack=0xbfbfe194, > n=1, na=1, nk=0) at Python/ceval.c:3297 > #7 0x080a72af in call_function (pp_stack=0xbfbfe194, oparg=0) at > Python/ceval.c:3266 > #8 0x080a4a50 in eval_frame (f=0x831ac0c) at Python/ceval.c:2009 > [...] > > (gdb) l > 708 > 709 if (p == NULL) /* free(NULL) has no effect */ > 710 return; > 711 > 712 pool = POOL_ADDR(p); > 713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) { > 714 /* We allocated this address. */ > 715 LOCK(); > 716 /* > 717 * Link p to the start of the pool's freeblock list. Since > > Here ? 
> > (gdb) p pool > $1 = (struct pool_header *) 0x0 > > (gdb) p p > $2 = (void *) 0x800 > > (gdb) x 0x800 > 0x800: Cannot access memory at address 0x800 So it looks like PyObject_Free() was called with 0x800 as an argument, which is a bogus pointer value. Can you go up one stack level and see what the value of k in function_call() is? > This doesn't happen if either --without-pymalloc or --with-pydebug is given. Well, --without-pymalloc means that this code is never executed. It's disturbing that --with-pydebug doesn't reveal a problem though; that again points in the direction of the optimizer. --Guido van Rossum (home page: http://www.python.org/~guido/)

From theller@python.net Mon Nov 11 14:25:03 2002 From: theller@python.net (Thomas Heller) Date: 11 Nov 2002 15:25:03 +0100 Subject: [Python-Dev] PEP html display in IE Message-ID: Some PEPs, probably those formatted with reST, do not display any more in my IE 6 (Version 6.0.2600.0000), for example http://www.python.org/peps/pep-0301.html and http://www.python.org/peps/pep-0258.html. It says: Use of default namespace declaration attribute in DTD not supported. Error processing resource 'http://www.python.org/peps/pep-0258.html'. Line 8, Position 17 Thomas

From martin@v.loewis.de Mon Nov 11 14:29:41 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 15:29:41 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <3DCFAB84.2050508@lemburg.com> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <3DCFAB84.2050508@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > Reindentation doesn't buy us anything accept traffic on this mailing > list. There wouldn't be any traffic if PEP 7 was implemented :-( Regards, Martin

From marc@informatik.uni-bremen.de Mon Nov 11 14:32:08 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 11 Nov 2002 15:32:08 +0100 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: <200211111409.gABE9vH15215@pcp02138704pcs.reston01.va.comcast.net> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036962763.746.132.camel@leeloo.intern.geht.de> <200211110052.gAB0qqb13698@pcp02138704pcs.reston01.va.comcast.net> <1037008835.746.204.camel@leeloo.intern.geht.de> <200211111409.gABE9vH15215@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <1037025132.779.74.camel@leeloo.intern.geht.de>

> But there are optimization flags set by default in the Makefile > (at OPT=...). Can you take out the -O3 from the OPT variable and > start over? As Tim Peters suggested, this may be an optimizer bug. I removed them from the Makefile. (Of course :-)) > So it looks like PyObject_Free() was called with 0x800 as an argument, > which is a bogus pointer value. Can you go up one stack level and see > what the value of k in function_call() is?
713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) {
(gdb) up
#1 0x080dfef4 in function_call (func=0x826317c, arg=0x8256aac, kw=0x8269bdc) at Objects/funcobject.c:481
481 PyMem_DEL(k);
(gdb) p k
$1 = (struct _object **) 0x800
> Well, --without-pymalloc means that this code is never executed. It's > disturbing that --with-pydebug doesn't reveal a problem though; that > again points in the direction of the optimizer.
CC='gcc' LDSHARED='cc -shared ' OPT='-DNDEBUG -g -Wall -Wstrict-prototypes' ./python -E ./setup.py build; Except "prefix" no option was given to configure. Regards, Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth

From guido@python.org Mon Nov 11 14:38:20 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 11 Nov 2002 09:38:20 -0500 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: Your message of "11 Nov 2002 15:32:08 +0100." Message-ID: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.net> > > So it looks like PyObject_Free() was called with 0x800 as an argument, > > which is a bogus pointer value. Can you go up one stack level and see > > what the value of k in function_call() is? > 713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) { > (gdb) up > #1 0x080dfef4 in function_call (func=0x826317c, arg=0x8256aac, > kw=0x8269bdc) at Objects/funcobject.c:481 > 481 PyMem_DEL(k); > (gdb) p k > $1 = (struct _object **) 0x800 Well, then maybe you can follow MvL's suggestion and find out how come this value was returned by PyMem_NEW(PyObject *, 2*nk)??? --Guido van Rossum (home page: http://www.python.org/~guido/)

From skip@pobox.com Mon Nov 11 14:45:34 2002 From: skip@pobox.com (Skip Montanaro) Date: Mon, 11 Nov 2002 08:45:34 -0600 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <15823.49806.584962.556861@montanaro.dyndns.org> >> Just like Fredrik, I prefer 4 spaces indents over TABs and since the >> RE and Unicode code base used these 4 spaces indents all along, I'd >> also prefer to keep it that way. Guido> As long as you're actively maintaining that code, I see no reason Guido> to reformat it then. But, as they say, you snooze, you lose. :-) It seems to me that any files with "special needs" (unicodedata.c and the various sre files come to mind) should have comments right near the top which indicate the constraints which must be maintained (4-space indents, 1.5.2 compatibility, etc). Skip

From barry@python.org Mon Nov 11 14:49:35 2002 From: barry@python.org (Barry A. Warsaw) Date: Mon, 11 Nov 2002 09:49:35 -0500 Subject: [Python-Dev] PEP html display in IE References: Message-ID: <15823.50047.832888.213975@gargle.gargle.HOWL> >>>>> "TH" == Thomas Heller writes: TH> Some PEPs, probably those formatted with reST, do not display TH> any more in my IE 6 (Version 6.0.2600.0000) I've forwarded this to peps@python.org so David Goodger can respond. -Barry

From guido@python.org Mon Nov 11 14:50:06 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 11 Nov 2002 09:50:06 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "Mon, 11 Nov 2002 08:45:34 CST."
<15823.49806.584962.556861@montanaro.dyndns.org> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> Message-ID: <200211111450.gABEo6315644@pcp02138704pcs.reston01.va.comcast.net> > It seems to me that any files with "special needs" (unicodedata.c > and the various sre files come to mind) should have comments right > near the top which indicate the constraints which must be maintained > (4-space indents, > 1.5.2 compatibility, etc). +1. A comment at the top sure seems a better place for a reminder than a note in PEP 291. --Guido van Rossum (home page: http://www.python.org/~guido/) From barry@python.org Mon Nov 11 14:54:38 2002 From: barry@python.org (Barry A. Warsaw) Date: Mon, 11 Nov 2002 09:54:38 -0500 Subject: [Python-Dev] Reindenting unicodedata.c References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> Message-ID: <15823.50350.59269.474827@gargle.gargle.HOWL> >>>>> "SM" == Skip Montanaro writes: SM> It seems to me that any files with "special needs" SM> (unicodedata.c and the various sre files come to mind) should SM> have comments right near the top which indicate the SM> constraints which must be maintained (4-space indents, 1.5.2 SM> compatibility, etc). And/or something like this (untested) near the bottom of the file: /* * Local Variables: * c-basic-offset: 4 * indent-tabs-mode: nil * End: */ -Barry From martin@v.loewis.de Mon Nov 11 15:06:59 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 16:06:59 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <15823.50350.59269.474827@gargle.gargle.HOWL> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > And/or something like this (untested) near the bottom of the file: > > /* > * Local Variables: > * c-basic-offset: 4 > * indent-tabs-mode: nil Does that have the effect of expanding all tabs? unicodeobject.c does use tabs for indenting multiple levels. In my revised patch for unicodedata.c, I put just /* Local variables: c-basic-offset: 4 End: */ unicodedata.c is inconsistent in this respect: it sometimes uses tabs, and sometimes spaces. Regards, Martin From guido@python.org Mon Nov 11 15:10:33 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 11 Nov 2002 10:10:33 -0500 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: Your message of "11 Nov 2002 16:06:59 +0100." 
References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> Message-ID: <200211111510.gABFAXx16418@pcp02138704pcs.reston01.va.comcast.net> > Does that have the effect of expanding all tabs? unicodeobject.c does > use tabs for indenting multiple levels. FWIW, if you want 4-space indents, I think you should use all spaces and not use tabs for double indents. The tabs will be confusing when the code is viewed with a different tab setting (which is sometimes unavoidable). > In my revised patch for unicodedata.c, I put just > > /* > Local variables: > c-basic-offset: 4 > End: > */ > > unicodedata.c is inconsistent in this respect: it sometimes uses tabs, > and sometimes spaces. Even MAL can't complain if you fix that to use all spaces. --Guido van Rossum (home page: http://www.python.org/~guido/) From mal@lemburg.com Mon Nov 11 15:13:00 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 11 Nov 2002 16:13:00 +0100 Subject: [Python-Dev] Reindenting unicodedata.c References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> <200211111510.gABFAXx16418@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DCFC8FC.9010805@lemburg.com> Guido van Rossum wrote: >>/* >>Local variables: >>c-basic-offset: 4 >>End: >>*/ >> >>unicodedata.c is inconsistent in this respect: it sometimes uses tabs, >>and sometimes spaces. > > > Even MAL can't complain if you fix that to use all spaces. I won't :-) -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From aleax@aleax.it Mon Nov 11 15:13:47 2002 From: aleax@aleax.it (Alex Martelli) Date: Mon, 11 Nov 2002 16:13:47 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <200211111510.gABFAXx16418@pcp02138704pcs.reston01.va.comcast.net> References: <200211111510.gABFAXx16418@pcp02138704pcs.reston01.va.comcast.net> Message-ID: On Monday 11 November 2002 04:10 pm, Guido van Rossum wrote: > > Does that have the effect of expanding all tabs? unicodeobject.c does > > use tabs for indenting multiple levels. > > FWIW, if you want 4-space indents, I think you should use all spaces > and not use tabs for double indents. The tabs will be confusing when > the code is viewed with a different tab setting (which is sometimes > unavoidable). +1 (just because I can't give a +23...) -- mixed tabs and spaces are just *evil*. Alex From barry@python.org Mon Nov 11 15:14:15 2002 From: barry@python.org (Barry A. 
Warsaw) Date: Mon, 11 Nov 2002 10:14:15 -0500 Subject: [Python-Dev] Reindenting unicodedata.c References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> Message-ID: <15823.51527.723116.412654@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: > /* > * Local Variables: > * c-basic-offset: 4 > * indent-tabs-mode: nil MvL> Does that have the effect of expanding all tabs? Ah, no it doesn't. MvL> unicodeobject.c does use tabs for indenting multiple levels. C-x h M-x untabify RET MvL> In my revised patch for unicodedata.c, I put just | /* | Local variables: | c-basic-offset: 4 | End: | */ +1 MvL> unicodedata.c is inconsistent in this respect: it sometimes MvL> uses tabs, and sometimes spaces. Dang. BTW, for the Emacsen in the audience, here's a little elisp I use to normalize whitespace. I mostly use this in Python code so YMMV. It frst untabifies the buffer, and then it deletes bogus trailing whitespace. I think it mostly works. ;) The defalias is just because I can never remember what I called the function and don't have it bound to a keychord. -Barry -------------------- snip snip -------------------- ;; untabify and clean up lines with just whitespace (defun baw-whitespace-normalization () "Like untabify, but also cleans up lines with trailing whitespace." (interactive) (save-excursion (save-restriction (untabify (point-min) (point-max)) (goto-char (point-min)) (while (re-search-forward "[ \t]+$" nil t) (let ((bol (save-excursion (beginning-of-line) (point))) (eol (save-excursion (end-of-line) (point)))) (goto-char (match-beginning 0)) (if (and (bolp) (eq (char-after) ?\ )) (forward-char 1)) (skip-chars-backward " \t" bol) (delete-region (point) eol) )) ))) (defalias 'baw-normalize-whitespace 'baw-whitespace-normalization) From tim.one@comcast.net Mon Nov 11 15:47:55 2002 From: tim.one@comcast.net (Tim Peters) Date: Mon, 11 Nov 2002 10:47:55 -0500 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: Message-ID: [Martin v. Loewis] > ... > Now, the question is: where does that pointer come from? This should > be from funcobject.c:457, which says > > k = PyMem_NEW(PyObject *, 2*nk); > > Unless I'm mistaken, this expands to malloc(). Which version of Python is getting built here? In 2.3, PyMem_NEW resolves to malloc() in a release build, but not a debug build (in a debug build, all Python memory API calls go through pymalloc). This may be vaguely relevant, since Marc said This doesn't happen if either --without-pymalloc or --with-pydebug is given. In the former case (--without-pymalloc) pymalloc isn't used at all; in the latter case (--with-pydebug) pymalloc is always used. In 2.3. Since I saw a /opt/local/python/lib/python2.3/config/install-sh ^^^^^^^^^ flash by, I figure that's right for this build. 
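To make the release/debug routing Tim describes concrete, here is a minimal sketch with invented SKETCH_* names -- it is not the real pymem.h or obmalloc code, only an assumption-level illustration of how the same allocation macro can collapse to plain malloc() in one build flavour and go through a wrapping allocator in another:

/* alloc_sketch.c - hypothetical sketch of a PyMem_NEW-style macro that
 * resolves differently depending on build flags. Build the "debug"
 * flavour with: cc -DSKETCH_DEBUG alloc_sketch.c
 */
#include <stdio.h>
#include <stdlib.h>

#ifdef SKETCH_DEBUG
/* Debug flavour: route every request through a wrapper so extra checks
 * or bookkeeping can be attached in one place. */
static void *sketch_debug_malloc(size_t n)
{
    return malloc(n ? n : 1);            /* e.g. never pass 0 to malloc */
}
#define SKETCH_MALLOC(n) sketch_debug_malloc(n)
#else
#define SKETCH_MALLOC(n) malloc(n)       /* release: the macro is just malloc */
#endif

/* The convenience macro layered on top, analogous in shape to
 * "k = PyMem_NEW(PyObject *, 2*nk)" from the thread. */
#define SKETCH_NEW(type, n) ((type *) SKETCH_MALLOC((n) * sizeof(type)))

int main(void)
{
    int *k = SKETCH_NEW(int, 8);
    if (k == NULL)
        return 1;
    k[0] = 42;
    printf("first element: %d\n", k[0]);
    free(k);                             /* both flavours above hand out
                                            pointers obtained from malloc */
    return 0;
}

The only point of the sketch is that the same source line can end up in different allocators depending on configuration, which is why a crash that appears in one build flavour but not another narrows the search to the code paths (or compiler optimizations) specific to that flavour.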
From marc@informatik.uni-bremen.de Mon Nov 11 17:07:10 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 11 Nov 2002 18:07:10 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> <1037010321.746.226.camel@leeloo.intern.geht.de> Message-ID: <1037034432.779.112.camel@leeloo.intern.geht.de>

> Could you please call it __EXTENSIONS__? Many of the functions that > fall under it are not BSD specific, and _BSD_SOURCE selects the > favour-bsd API on other systems. The responsible person seems to be against a solution like this. But, in the discussion it turns out that you are right and there some bugs in FreeBSD's unistd.h. The fixes will (probably) committed today. But there are some issues left mostly in the RPC and socket code, which need __BSD_VISIBLE. Some problems are left, because some functions (like ftello) are only defined at a higher POSIX level. > out support for a number of such systems. It's just not worth the > pain. Python works on Posix systems, and uses extensions where > available. It does not work on strange systems; tough luck for users > of such systems (we have yet to hear from a DYNIX user who wants to > run Python 2.2). I'm not talking about strange systems. Although I've not tested it BSD/OS comes to mind or NetBSD/OpenBSD. > > > Only if all other options have been exhausted. What problems occur if > > > _XOPEN_SOURCE is defined? > > _XOPEN_SOURCE sets _POSIX_C_SOURCE. > > And why is that a problem? In that case __BSD_VISIBLE is not set. Regards, Marc -- "Premature optimization is the root of all evil." -- Donald E. Knuth

From skip@pobox.com Mon Nov 11 17:21:09 2002 From: skip@pobox.com (Skip Montanaro) Date: Mon, 11 Nov 2002 11:21:09 -0600 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> Message-ID: <15823.59141.28387.130297@montanaro.dyndns.org> Martin> unicodedata.c is inconsistent in this respect: it sometimes uses Martin> tabs, and sometimes spaces. Probably just a result of having multiple editors and 4-space indents. Skip

From martin@v.loewis.de Mon Nov 11 18:20:00 2002 From: martin@v.loewis.de (Martin v.
Loewis) Date: 11 Nov 2002 19:20:00 +0100 Subject: [Python-Dev] Reindenting unicodedata.c In-Reply-To: <15823.51527.723116.412654@gargle.gargle.HOWL> References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> <15823.51527.723116.412654@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > > * indent-tabs-mode: nil > > MvL> Does that have the effect of expanding all tabs? > > Ah, no it doesn't. This wasn't actually my question... I really meant to ask whether it has the effect of always inserting spaces, and never inserting tabs. Does it have that effect? Regards, Martin From martin@v.loewis.de Mon Nov 11 18:22:30 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 19:22:30 +0100 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: References: Message-ID: Tim Peters writes: > In 2.3, PyMem_NEW resolves to malloc() in a release build, but not a debug > build (in a debug build, all Python memory API calls go through pymalloc). > This may be vaguely relevant, since Marc said > > This doesn't happen if either --without-pymalloc or --with-pydebug > is given. To summarize your analysis: In Marc's build, malloc is being called directly, for PyMem_NEW. Right? Regards, Martin From martin@v.loewis.de Mon Nov 11 18:25:55 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 19:25:55 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <1037034432.779.112.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> <1037010321.746.226.camel@leeloo.intern.geht.de> <1037034432.779.112.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > The responsible person seems to be against a solution like this. But, in > the discussion it turns out that you are right and there some bugs in > FreeBSD's unistd.h. The fixes will (probably) committed today. But > there are some issues left mostly in the RPC and socket code, which need > __BSD_VISIBLE. Please be as precise as possible when talking about this stuff. What issues precisely? I'm not going to accept patches for platform-specific changes just because a user said that these changes are needed "to make it work". Instead, I want to record the *precise* cause that triggered these changes, so that others can re-evaluate whether they are still needed five years from now. > Some problems are left, because some functions (like ftello) are > only defined at a higher POSIX level. Why is that a problem? > > > > Only if all other options have been exhausted. What problems occur if > > > > _XOPEN_SOURCE is defined? > > > _XOPEN_SOURCE sets _POSIX_C_SOURCE. > > > > And why is that a problem? > In that case __BSD_VISIBLE is not set. And why is that a problem? This is exhausting. Regards, Martin From barry@python.org Mon Nov 11 18:41:16 2002 From: barry@python.org (Barry A. 
Warsaw) Date: Mon, 11 Nov 2002 13:41:16 -0500 Subject: [Python-Dev] Reindenting unicodedata.c References: <200211082036.gA8KaGL17255@pcp02138704pcs.reston01.va.comcast.net> <200211082220.gA8MKZm19010@pcp02138704pcs.reston01.va.comcast.net> <3DCF6D14.3000502@lemburg.com> <200211111401.gABE1VK15175@pcp02138704pcs.reston01.va.comcast.net> <15823.49806.584962.556861@montanaro.dyndns.org> <15823.50350.59269.474827@gargle.gargle.HOWL> <15823.51527.723116.412654@gargle.gargle.HOWL> Message-ID: <15823.63948.364549.188219@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: > > * indent-tabs-mode: nil > > MvL> Does that have the effect of expanding all tabs? > > Ah, no it doesn't. MvL> This wasn't actually my question... I really meant to ask MvL> whether it has the effect of always inserting spaces, and MvL> never inserting tabs. Does it have that effect? Yes, I believe so, unless of course you insert the tab with "C-q TAB" -Barry

From marc@informatik.uni-bremen.de Mon Nov 11 19:10:38 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: 11 Nov 2002 20:10:38 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> <1037010321.746.226.camel@leeloo.intern.geht.de> <1037034432.779.112.camel@leeloo.intern.geht.de> Message-ID: <1037041843.779.150.camel@leeloo.intern.geht.de>

> Please be as precise as possible when talking about this stuff. What > issues precisely? Sorry. Trying to. I've posted more detail (including a build log) on SourceForge. But again the major problem is certain defines and typedefs like PF_INET or u_char, u_int (see SourceForge) are only defined in the __BSD_VISIBLE case. These defines typedefs are needed by certain Python modules like sockemodule, nismodule. > I'm not going to accept patches for platform-specific changes just > because a user said that these changes are needed "to make it I also a friend of generic/standard solutions,I don't see one here. I would be more than happy proved to be wrong... You say you want no platform specific stuff in configure and the person responsible for the standard stuff on the FreeBSD side says: "The whole point of the standards constants is to specify a strict environment. If you want a BSD environment don't specify a particular standard, it's simple.". > work". Instead, I want to record the *precise* cause that triggered > these changes, so that others can re-evaluate whether they are still > needed five years from now. I understand that, but this is a build fix. So the evaluation is quite simple.. > > Some problems are left, because some functions (like ftello) are > > only defined at a higher POSIX level. > > Why is that a problem? Because Python tries to use them. > > In that case __BSD_VISIBLE is not set. > And why is that a problem? Because certain stuff Python seems to rely on isn't there. (see SourceForge) An other option could be that Python doesn't use non-POSIX functions/definitions were they aren't available.. > This is exhausting. I know and talking to both camps is rather frustrating.. Maybe we should continue this discussion on SourceForge. Regards, Marc -- "Premature optimization is the root of all evil." -- Donald E.
Knuth --=-RQ8GwOeZv4dqCuDOg3NK Content-Type: application/pgp-signature; name=signature.asc Content-Description: This is a digitally signed message part -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.1 (FreeBSD) iD8DBQA90ACu7YQCetAaG3MRAvIWAJ4nN4owMNFQ89N87cD4NQu1ls3e7ACfai1B lL5t+bHJV85NeSP/J5N+7Tw= =9+IR -----END PGP SIGNATURE----- --=-RQ8GwOeZv4dqCuDOg3NK-- From DavidA@ActiveState.com Mon Nov 11 19:25:42 2002 From: DavidA@ActiveState.com (David Ascher) Date: Mon, 11 Nov 2002 11:25:42 -0800 Subject: [Python-Dev] bdb.py change from 1.31 to 1.31.2.1 References: <200211110102.gAB12xo13770@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DD00436.5060208@ActiveState.com> Guido van Rossum wrote: >>I'd love to give Pythonwin out-of-process capabilities. >> >> > >This has been added to IDLE (in the idlefork project at SF). I know >others have done this too (Wing IDE, PythonWorks, Komodo). I wonder >if some sort of standard solution could be worked out for this? > > Speaking for Komodo, I'm up for that, assuming it's extensible. --david From guido@python.org Mon Nov 11 19:14:11 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 11 Nov 2002 14:14:11 -0500 Subject: [Python-Dev] bdb.py change from 1.31 to 1.31.2.1 In-Reply-To: Your message of "Mon, 11 Nov 2002 11:25:42 PST." <3DD00436.5060208@ActiveState.com> References: <200211110102.gAB12xo13770@pcp02138704pcs.reston01.va.comcast.net> <3DD00436.5060208@ActiveState.com> Message-ID: <200211111914.gABJEBm18853@pcp02138704pcs.reston01.va.comcast.net> > Speaking for Komodo, I'm up for that, assuming it's extensible. Now would be the time to get in touch with the idlefork folks. idle-dev@python.org should reach them. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@comcast.net Mon Nov 11 19:29:51 2002 From: tim.one@comcast.net (Tim Peters) Date: Mon, 11 Nov 2002 14:29:51 -0500 Subject: [snake-farm] Re: [Python-Dev] Snake farm In-Reply-To: Message-ID: [martin@v.loewis.de] > To summarize your analysis: In Marc's build, malloc is being called > directly, for PyMem_NEW. Right? I haven't read a Unixish build trail in years. If he was doing a 2.3 release build (my best guess), yes. From martin@v.loewis.de Mon Nov 11 19:56:52 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 11 Nov 2002 20:56:52 +0100 Subject: [Python-Dev] Snake farm In-Reply-To: <1037041843.779.150.camel@leeloo.intern.geht.de> References: <20021102235101.GA28348@epoch.metaslash.com> <20021104192937.GA78845@fallin.lv> <1036948999.746.102.camel@leeloo.intern.geht.de> <1036962658.746.129.camel@leeloo.intern.geht.de> <1036967497.746.161.camel@leeloo.intern.geht.de> <1037010321.746.226.camel@leeloo.intern.geht.de> <1037034432.779.112.camel@leeloo.intern.geht.de> <1037041843.779.150.camel@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > Sorry. Trying to. I've posted more detail (including a build log) on > SourceForge. But again the major problem is certain defines and > typedefs like PF_INET or u_char, u_int (see SourceForge) are only > defined in the __BSD_VISIBLE case. These defines typedefs are needed > by certain Python modules like sockemodule, nismodule. As I commented on SF: Python does not use u_char or u_int. So if that fails to compile, it's a bug on FreeBSD (it is the system headers, after all, which fail to compile - something I have no regrets for). > I also a friend of generic/standard solutions,I don't see one here. I > would be more than happy proved to be wrong... 
Taking the failure to compile system headers away (which really needs to be fixed in the system), then Python does compile correctly, doesn't it? Looks like an almost-generic solution to me (if it wasn't for the warnings about implicitly-defined functions - which all get defined correctly implicitly). > "The whole point of the standards constants is to specify a strict > environment. If you want a BSD environment don't specify a particular > standard, it's simple.". We do want a POSIX environment. If the system offers alternatives, we want the POSIX alternative, not the native alternative. BSD has a tradition of doing things differently than POSIX, so I have the strong urge to tell the system "I want POSIX, dammit". If the system offers extensions over POSIX, we want them as well. There is unfortunately no standard way to access those extensions, but most systems do have a means to access them (while *still* allowing to favour POSIX over any native semantics). It is really unfortunate that neither OpenBSD nor FreeBSD offer such an environment. > I understand that, but this is a build fix. So the evaluation is quite > simple.. No, it isn't. Python builds just fine without your patch, if we consider the bugs in the system headers fixed. > > > Some problems are left, because some functions (like ftello) are > > > only defined at a higher POSIX level. > > > > Why is that a problem? > Because Python tries to use them. Ah, that is indeed a problem. Autoconf is bad in finding out that a function is available, but has no prototype. This can be fixed (on a case-by-case basis), and I have fixed it for chroot, link, and symlink. This is not a big problem, though. In C, a prototype is implied from the first call, and that implied prototype is correct. > Because certain stuff Python seems to rely on isn't there. (see > SourceForge) Python relies on none of this. Instead, Python wraps the functions, and exposes them to the application. There is no "internal" use of any of the features we are talking about. So if configure determines that a feature is absent, it just won't be exposed. No big deal. > An other option could be that Python doesn't use non-POSIX > functions/definitions were they aren't available.. That is indeed an option. As I said, Python does *not* use these functions. Not wrapping them is really simple. Regards, Martin From greg@cosc.canterbury.ac.nz Mon Nov 11 21:47:08 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Tue, 12 Nov 2002 10:47:08 +1300 (NZDT) Subject: [Python-Dev] Continuous Sets in 2.3 set implementation? In-Reply-To: <1036993130.29862.15.camel@bonzo> Message-ID: <200211112147.gABLl8b21217@kuku.cosc.canterbury.ac.nz> Frank Horowitz : > is there any hope of > renaming the implementation to something along the lines of "dsets" to > remind the user of the discreteness? (Am I being too pedantic?) I think you're being too pedantic. In the context of programming languages, "set" always means a discrete set, in my experience. A non-discrete set would better be called an "interval" or something like that. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. 
| greg@cosc.canterbury.ac.nz +--------------------------------------+ From goodger@python.org Tue Nov 12 01:48:31 2002 From: goodger@python.org (David Goodger) Date: Mon, 11 Nov 2002 20:48:31 -0500 Subject: [Python-Dev] PEP html display in IE In-Reply-To: Message-ID: Thomas Heller wrote: > Some PEPs, probably those formatted with reST, do not display any more > in my IE 6 (Version 6.0.2600.0000), for example > http://www.python.org/peps/pep-0301.html > and > http://www.python.org/peps/pep-0258.html. > > It says: > > Use of default namespace declaration attribute in DTD not > supported. Error processing resource > 'http://www.python.org/peps/pep-0258.html'. Line 8, Position 17 > I've put (truncated) variations of PEP 258 on the web with DOCTYPE, comment, and/or "" removed. Please take a look at each variation and tell me the results (any errors or differences seen): A) http://www.python.org/peps/pep-0258.html B) http://www.python.org/peps/pep-0258-moved-comment.html C) http://www.python.org/peps/pep-0258-no-comment.html D) http://www.python.org/peps/pep-0258-no-doctype.html E) http://www.python.org/peps/pep-0258-no-xml.html F) http://www.python.org/peps/pep-0258-no-nothing.html G) http://docutils.sf.net/spec/pep-0258.html (to compare servers) I don't have MSIE6 at home, and MacOS/MSIE5.1 has no problem with the HTML, so I'm depending on reports. The only thing I've done recently is to add an "AUTO-GENERATED HTML; DO NOT EDIT!" comment to PEPs. I did the same thing to plaintext PEPs though (PEP 0, PEP 1). Plaintext PEPs don't have the "" processing instruction at the top, so it may be an interaction. When I look at the PEP from the web (Win2K, MSIE 5.00.3315.1000), I'm shown the XHTML source (*not* the rendered page), but no error. When I look at the exact same HTML & stylesheet from my local HD, there's no problem at all; I get the page rendered properly. I ran http://www.python.org/peps/pep-0258.html through the XML validator at http://www.stg.brown.edu/service/xmlvalid/ and it spewed out a bunch of errors related to the XHTML DTD, and a couple for pep-0258.html itself. The latter (marginwidth and marginheight attributes not allowed on ) were easy to fix on both reST and plaintext PEPs. The W3C XHTML DTD errors are not so easy to fix ;). ReStructuredText PEPs use the following DOCTYPE line: Then I tried http://www.python.org/peps/pep-0001.html, a plaintext PEP, and the results from XML validator were even worse. The DOCTYPE for plaintext PEPs is: That's an SGML-based DTD though, not XML, so I'd expect the XML validator to barf. The W3C validator, http://validator.w3.org/, reported only the document errors, no DTD errors. From all this I'm doubtful that the DTD errors (or validator bugs and/or over-strictness) are the culprit. I did a Google search for "Use of default namespace declaration attribute in DTD not supported" and got a bunch of hits. Short form: this appears to be an MSIE/MSXML bug. Highlights: >From http://www.biglist.com/lists/xsl-list/archives/200011/msg00135.html: From: Dimitre Novatchev In case you use msxml.dll version 8.0.5226.0 (from W2K SP1), then you'll get the error message "Use of default namespace declaration attribute in DTD not supported". 
>From http://www.biglist.com/lists/xsl-list/archives/200011/msg00266.html: Subject: IE5 xmlns DTD attribute BUG was Re: Use of default namespace declaration From: "Jonathan Borden" Date: Mon, 6 Nov 2000 15:10:40 -0500 Joshua Allen wrote: > I think this was lax conformance on part of the earlier parser, > and has since been tightened. See > http://www.rpbourret.com/xml/NamespacesFAQ.htm#q7_2. Err... no. This is a BUG. Explicitly an xmlns attribute can be defaulted in a DTD (see section 4.3 in the article you quote above.) This error message from IE5 is a non-conformance. See http://www.w3.org/TR/REC-xml-names Namespace Constraint: Prefix Declared if a question remains. Another BUG in IE5s handling of DTDs ... parsing of DTDs (using the Sept MSXML3) hangs the browser when the SYSTEM ID is a URI of the form http://... but not when the same DTD file is located on the local disc. (This behavior is not constant but reproducable across multiple installations and people in multiple organizations ... i.e. I am not the only one having this problem). So what's the solution? Ignore the browser bug or work around it? -- David Goodger http://www.python.org/peps/ Python Enhancement Proposal (PEP) Editor (Please cc: all PEP correspondence to .) From mhammond@skippinet.com.au Tue Nov 12 05:23:41 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Tue, 12 Nov 2002 16:23:41 +1100 Subject: [Python-Dev] PEP html display in IE In-Reply-To: Message-ID: > A) http://www.python.org/peps/pep-0258.html Error as before: "Use of default namespace declaration attribute in DTD not ..." > B) http://www.python.org/peps/pep-0258-moved-comment.html Same error, different place. > C) http://www.python.org/peps/pep-0258-no-comment.html Works > D) http://www.python.org/peps/pep-0258-no-doctype.html Works as XML view of doc > E) http://www.python.org/peps/pep-0258-no-xml.html Works. > F) http://www.python.org/peps/pep-0258-no-nothing.html Works > G) http://docutils.sf.net/spec/pep-0258.html (to compare servers) Error as for A). > The only thing I've done recently is to add an "AUTO-GENERATED HTML; > DO NOT EDIT!" comment to PEPs. I did the same thing to plaintext PEPs > though (PEP 0, PEP 1). Plaintext PEPs don't have the "" > processing instruction at the top, so it may be an interaction. Moving this comment inside the html entity also seems to work, ie: > So what's the solution? Ignore the browser bug or work around it? Ignoring wont work . Mark. From goodger@python.org Tue Nov 12 05:44:45 2002 From: goodger@python.org (David Goodger) Date: Tue, 12 Nov 2002 00:44:45 -0500 Subject: [Python-Dev] PEP html display in IE In-Reply-To: Message-ID: Thanks for the report and the workaround, Mark. I've moved the comment inside the tag to appease MSIE, so the affected PEPs ought to work now (12, 256, 257, 258, 268, 290, 301). Please let me know if anything's still amiss. >> So what's the solution? Ignore the browser bug or work around it? > > Ignoring wont work . Yes, it's hard to ignore a cranky 400-lb gorilla. -- David Goodger http://www.python.org/peps/ Python Enhancement Proposal (PEP) Editor (Please cc: all PEP correspondence to .) From theller@python.net Tue Nov 12 07:18:35 2002 From: theller@python.net (Thomas Heller) Date: 12 Nov 2002 08:18:35 +0100 Subject: [Python-Dev] PEP html display in IE In-Reply-To: References: Message-ID: David Goodger writes: > Thanks for the report and the workaround, Mark. 
> > I've moved the comment inside the tag to appease MSIE, so the > affected PEPs ought to work now (12, 256, 257, 258, 268, 290, 301). Please > let me know if anything's still amiss. > All these display fine now. Thanks. Thomas From fredrik@pythonware.com Tue Nov 12 10:00:51 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Tue, 12 Nov 2002 11:00:51 +0100 Subject: [Python-Dev] PEP html display in IE References: Message-ID: <072801c28a32$63c9e990$0900a8c0@spiff> David Goodger wrote. > So what's the solution? Ignore the browser bug or work around it? Reading the XHTML specification might also help: you're using an XHTML = DTD, but the files are missing the mandatory xmlns declaration on the html = tag (see section 3.1.1, item 3 of the XHTML specification). From skip@pobox.com Tue Nov 12 10:53:21 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 12 Nov 2002 04:53:21 -0600 Subject: [Python-Dev] Minix? Message-ID: <15824.56737.228282.491216@montanaro.dyndns.org> I was perusing the README file looking for clues why I'm suddenly unable to build on MacOSX when I noticed a platform note about Minix: Minix: When using ack, use "CC=cc AR=aal RANLIB=: ./configure"! I find it hard to believe Python actually still builds on Minix. On the other hand, I don't see it mentioned in PEP 11. Skip From theller@python.net Tue Nov 12 11:05:44 2002 From: theller@python.net (Thomas Heller) Date: 12 Nov 2002 12:05:44 +0100 Subject: [Python-Dev] Minix? In-Reply-To: <15824.56737.228282.491216@montanaro.dyndns.org> References: <15824.56737.228282.491216@montanaro.dyndns.org> Message-ID: Skip Montanaro writes: > I was perusing the README file looking for clues why I'm suddenly unable to > build on MacOSX when I noticed a platform note about Minix: > > Minix: When using ack, use "CC=cc AR=aal RANLIB=: ./configure"! > > I find it hard to believe Python actually still builds on Minix. On the > other hand, I don't see it mentioned in PEP 11. Offtopic: When I used minix, I wasn't aware of Python. And minix (the version I had) stopped working on recent PC's because there was a bug in the memory size calculation. The memory size was calculated modulo 16 MB, so it always complains that *no* memory was found. Funny! Thomas From martin@v.loewis.de Tue Nov 12 13:06:49 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 12 Nov 2002 14:06:49 +0100 Subject: [Python-Dev] Minix? In-Reply-To: <15824.56737.228282.491216@montanaro.dyndns.org> References: <15824.56737.228282.491216@montanaro.dyndns.org> Message-ID: Skip Montanaro writes: > I find it hard to believe Python actually still builds on Minix. On the > other hand, I don't see it mentioned in PEP 11. Python's configure.in still as AC_MINIX, but I agree that that it is likely broken. Under PEP 11, we have to keep the support code in Python 2.3, and can only start removing it in Python 2.4. I've updated PEP 11 accordingly. Regards, Martin From bkc@murkworks.com Tue Nov 12 13:41:35 2002 From: bkc@murkworks.com (Brad Clements) Date: Tue, 12 Nov 2002 08:41:35 -0500 Subject: [Python-Dev] PEP html display in IE In-Reply-To: References: Message-ID: <3DD0BDD4.21671.E56352E@localhost> >From Win2k SP2 with IE 6.0.2600.0000 I'm running MSXML3 in replace mode .. 
but I've also installed MSXML 4, however it's not used in this context by IE On 11 Nov 2002 at 20:48, David Goodger wrote: > A) http://www.python.org/peps/pep-0258.html looks good > B) http://www.python.org/peps/pep-0258-moved-comment.html no doctype error > C) http://www.python.org/peps/pep-0258-no-comment.html looks good, but Python "logo" is strange, messed up > D) http://www.python.org/peps/pep-0258-no-doctype.html I see the raw xml > E) http://www.python.org/peps/pep-0258-no-xml.html looks good, but Python "logo" is strange, messed up > F) http://www.python.org/peps/pep-0258-no-nothing.html looks good, but Python "logo" is strange, messed up > G) http://docutils.sf.net/spec/pep-0258.html (to compare servers) http://docutils.sf.net/spec/pep-0258.html Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax AOL-IM: BKClements From fredrik@pythonware.com Tue Nov 12 13:58:00 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Tue, 12 Nov 2002 14:58:00 +0100 Subject: [Python-Dev] PEP html display in IE References: <3DD0BDD4.21671.E56352E@localhost> Message-ID: <097501c28a53$8523f4c0$0900a8c0@spiff> brad wrote: > > E) http://www.python.org/peps/pep-0258-no-xml.html >=20 > looks good, but Python "logo" is strange, messed up I guess you have to blame the designer for that one ;-) the python.org page generator picks a python logo by random, from the 64 PyBanner variants available under: http://www.python.org/pics/ some logos are more messed up than others... From just@letterror.com Tue Nov 12 13:57:52 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 12 Nov 2002 14:57:52 +0100 Subject: [Python-Dev] PEP html display in IE In-Reply-To: <3DD0BDD4.21671.E56352E@localhost> Message-ID: Brad Clements wrote: > > C) http://www.python.org/peps/pep-0258-no-comment.html > > looks good, but Python "logo" is strange, messed up FWIW, that's "design"... There is a set of (I think 64) Python pictures, from which one is chosen at random at site-generation time. Next time it'll be a different one. Btw. the pictures are generated by a Python script, with some random parameters. Just From martin@v.loewis.de Tue Nov 12 15:44:37 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: Tue, 12 Nov 2002 16:44:37 +0100 Subject: [Python-Dev] Removing support for IRIX 4 dynamic linking Message-ID: <200211121544.gACFibfZ001744@mira.informatik.hu-berlin.de> Python currently has an option --with-sgi-dl, which is documented as "IRIX 4 dynamic linking". I understand the current version of IRIX is 6.5, so I wonder whether this code is still in use, or should be scheduled for removal (if it is in use on IRIX 6 as well, the documentation should change). I also wonder whether IRIX 4 is still in use by Python users. If not, I would schedule support for this system for removal. There are only a few places which seem to be specific to IRIX 4, e.g. BAD_EXEC_PROTOTYPES (which would give a configure abort if it happens on some other system, in Python 2.3). Does anybody know whether the comment # Most SVR4 platforms (e.g. Solaris) need -lsocket and -lnsl. # However on SGI IRIX, these exist but are broken. applies to IRIX 5 or later? Regards, Martin From skip@pobox.com Tue Nov 12 16:54:01 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 12 Nov 2002 10:54:01 -0600 Subject: [Python-Dev] Minix? 
In-Reply-To: References: <15824.56737.228282.491216@montanaro.dyndns.org> Message-ID: <15825.12841.732963.355745@montanaro.dyndns.org> >> I find it hard to believe Python actually still builds on Minix. On >> the other hand, I don't see it mentioned in PEP 11. Martin> Python's configure.in still as AC_MINIX, but I agree that that Martin> it is likely broken. Under PEP 11, we have to keep the support Martin> code in Python 2.3, and can only start removing it in Python Martin> 2.4. I've updated PEP 11 accordingly. So, one of the first things we should do after 2.3 is released is make a pass through PEP 11 and unsupport a bunch of stuff. I just deleted the reference in the README file. Skip From martin@v.loewis.de Tue Nov 12 15:02:39 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 12 Nov 2002 16:02:39 +0100 Subject: [Python-Dev] Printing and __unicode__ Message-ID: In c.l.p, Henry Thompson wondered why printing would ignore __unicode__. Consider this: import codecs stream = codecs.open("/tmp/bla","w", encoding="cp1252") class Foo: def __unicode__(self): return u"\N{EURO SIGN}" foo = Foo() print >>stream, foo This succeeds, but /tmp/bla now contains <__main__.Foo instance at 0x4026e68c> He argues that it instead should invoke __unicode__, similar to invoking automatically __str__ when writing to a byte stream. I agree that this is desirable, but I wonder what the best approach would be: A. Printing tries __str__, __unicode__, and __repr__, in this order. B. A file indicates "unicode-awareness" somehow. For a Unicode-aware file, it tries __unicode__, __str__, and __repr__, in order. C. A file indicates that it is "unicode-requiring" somehow. For a unicode-requiring file, it tries __unicode__; if that fails, it tries __repr__ and converts the result to Unicode. Which of these, if any, would be most Pythonish? Regards, Martin From guido@python.org Tue Nov 12 17:21:48 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 12 Nov 2002 12:21:48 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "12 Nov 2002 16:02:39 +0100." References: Message-ID: <200211121721.gACHLm504767@odiug.zope.com> > In c.l.p, Henry Thompson wondered why printing would ignore > __unicode__. Consider this: > > import codecs > stream = codecs.open("/tmp/bla","w", encoding="cp1252") > > class Foo: > def __unicode__(self): > return u"\N{EURO SIGN}" > > foo = Foo() > print >>stream, foo > > This succeeds, but /tmp/bla now contains > > <__main__.Foo instance at 0x4026e68c> > > He argues that it instead should invoke __unicode__, similar to > invoking automatically __str__ when writing to a byte stream. > > I agree that this is desirable, but I wonder what the best approach > would be: > > A. Printing tries __str__, __unicode__, and __repr__, in this order. If you try __str__ before __unicode__, you'll always get the default __str__ for all new-style classes. > B. A file indicates "unicode-awareness" somehow. For a Unicode-aware > file, it tries __unicode__, __str__, and __repr__, in order. I like this. > C. A file indicates that it is "unicode-requiring" somehow. For a > unicode-requiring file, it tries __unicode__; if that fails, it > tries __repr__ and converts the result to Unicode. Falling back to __repr__ without __str__ doesn't make sense. > Which of these, if any, would be most Pythonish? B. 
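For readers following the thread, a rough Python rendering of option B may help. The real logic would live in C (in PyFile_WriteObject), so this is only a sketch of the intended lookup order, and the write_object name is invented for illustration:

    def write_object(obj, stream):
        # Option B, sketched in Python: a stream that exposes an 'encoding'
        # attribute is treated as Unicode-aware and gets the widest string
        # representation the object can produce.
        if hasattr(stream, 'encoding'):
            for hook in ('__unicode__', '__str__', '__repr__'):
                meth = getattr(obj, hook, None)
                if meth is not None:
                    stream.write(meth())
                    return
        # Anything else keeps the current byte-stream behaviour.
        stream.write(str(obj))

The 'encoding' attribute as the marker of Unicode-awareness is what Martin goes on to propose below.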
--Guido van Rossum (home page: http://www.python.org/~guido/) From ark@research.att.com Tue Nov 12 18:46:13 2002 From: ark@research.att.com (Andrew Koenig) Date: 12 Nov 2002 13:46:13 -0500 Subject: [Python-Dev] binutils 2.13.1 Message-ID: I reported earlier that binutils 2.13 does not allow Python to build properly with Solaris. FSF has just released 2.13.1, which fixes one of the problems but not the other. I have reported this fact to the binutils maintainers, who say that they will incorporate the missing patch into 2.13.2, which they will release soon. Meanwhile, here is the patch: *** bfd/elflink.h 22 Aug 2002 01:27:19 -0000 1.185 --- bfd/elflink.h 19 Sep 2002 14:33:09 -0000 *************** elf_fix_symbol_flags (h, eif) *** 3886,3894 **** { struct elf_link_hash_entry *weakdef; BFD_ASSERT (h->root.type == bfd_link_hash_defined || h->root.type == bfd_link_hash_defweak); - weakdef = h->weakdef; BFD_ASSERT (weakdef->root.type == bfd_link_hash_defined || weakdef->root.type == bfd_link_hash_defweak); BFD_ASSERT (weakdef->elf_link_hash_flags & ELF_LINK_HASH_DEF_DYNAMIC); --- 3886,3897 ---- { struct elf_link_hash_entry *weakdef; + weakdef = h->weakdef; + while (h->root.type == bfd_link_hash_indirect) + h = (struct elf_link_hash_entry *) h->root.u.i.link; + BFD_ASSERT (h->root.type == bfd_link_hash_defined || h->root.type == bfd_link_hash_defweak); BFD_ASSERT (weakdef->root.type == bfd_link_hash_defined || weakdef->root.type == bfd_link_hash_defweak); BFD_ASSERT (weakdef->elf_link_hash_flags & ELF_LINK_HASH_DEF_DYNAMIC); -- Andrew Koenig, ark@research.att.com, http://www.research.att.com/info/ark From martin@v.loewis.de Tue Nov 12 19:05:23 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 12 Nov 2002 20:05:23 +0100 Subject: [Python-Dev] Minix? In-Reply-To: <15825.12841.732963.355745@montanaro.dyndns.org> References: <15824.56737.228282.491216@montanaro.dyndns.org> <15825.12841.732963.355745@montanaro.dyndns.org> Message-ID: Skip Montanaro writes: > So, one of the first things we should do after 2.3 is released is make a > pass through PEP 11 and unsupport a bunch of stuff. I just deleted the > reference in the README file. Actually, *before* 2.3, we will need to add warnings. So anybody using one of the unsupported systems will get a build failure, and thus can step forward to maintain that port. In 2.3, it will be easy to restore the support: Just remove the build failure. In 2.4, the maintainer would have to produce some larger patch, restoring the removed code, and likely fixing it. I don't really expect anybody to step forward, atleast in one case, the user decided that a system upgrade (from SunOS 4 to Solaris) would be easier. But we do need to give users a chance, since experience tells that users want support for strange systems. In the specific case of Minix build instructions, it is probably ok that it is removed already, but I'd like to ask everybody to stick to the plan of PEP 11. Regards, Martin From martin@v.loewis.de Tue Nov 12 19:12:09 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 12 Nov 2002 20:12:09 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <200211121721.gACHLm504767@odiug.zope.com> References: <200211121721.gACHLm504767@odiug.zope.com> Message-ID: Guido van Rossum writes: > > B. A file indicates "unicode-awareness" somehow. For a Unicode-aware > > file, it tries __unicode__, __str__, and __repr__, in order. > > I like this. Ok, then the question is: How can a file indicate its unicode-awareness? 
I propose that presence of an attribute "encoding" is taken as such an indication; this would cover all existing cases with no change to the file-like objects. In case the stream is "natively" Unicode (i.e. doesn't ever convert to byte strings), setting encoding to None should be allowed (this actually indicates that StringIO should have the encoding attribute). Regards, Martin From guido@python.org Tue Nov 12 19:16:02 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 12 Nov 2002 14:16:02 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "12 Nov 2002 20:12:09 +0100." References: <200211121721.gACHLm504767@odiug.zope.com> Message-ID: <200211121916.gACJG2016245@odiug.zope.com> > > > B. A file indicates "unicode-awareness" somehow. For a Unicode-aware > > > file, it tries __unicode__, __str__, and __repr__, in order. > > > > I like this. > > Ok, then the question is: How can a file indicate its > unicode-awareness? I propose that presence of an attribute "encoding" > is taken as such an indication; this would cover all existing cases > with no change to the file-like objects. +1 > In case the stream is "natively" Unicode (i.e. doesn't ever convert to > byte strings), setting encoding to None should be allowed (this > actually indicates that StringIO should have the encoding attribute). +1 --Guido van Rossum (home page: http://www.python.org/~guido/) From tim@multitalents.net Wed Nov 13 01:35:53 2002 From: tim@multitalents.net (Tim Rice) Date: Tue, 12 Nov 2002 17:35:53 -0800 (PST) Subject: [Python-Dev] Removing support for IRIX 4 dynamic linking In-Reply-To: <200211121544.gACFibfZ001744@mira.informatik.hu-berlin.de> Message-ID: On Tue, 12 Nov 2002, Martin v. Loewis wrote: > > Does anybody know whether the comment > > # Most SVR4 platforms (e.g. Solaris) need -lsocket and -lnsl. > # However on SGI IRIX, these exist but are broken. > > applies to IRIX 5 or later? I have don't IRIX here, but in OpenSSH we don't do anything special around libsocket or libnsl on IRIX 5 or IRIX 6. > > Regards, > Martin > -- Tim Rice Multitalents (707) 887-1469 tim@multitalents.net From goodger@python.org Wed Nov 13 02:07:43 2002 From: goodger@python.org (David Goodger) Date: Tue, 12 Nov 2002 21:07:43 -0500 Subject: [Python-Dev] PEP html display in IE In-Reply-To: <072801c28a32$63c9e990$0900a8c0@spiff> Message-ID: Fredrik Lundh wrote: > Reading the XHTML specification might also help: I have. So many specs, so little time :) > you're using an XHTML DTD, but the files are missing the mandatory xmlns > declaration on the html tag (see section 3.1.1, item 3 of the XHTML > specification). Corrected. Thanks for the pointer. As the original author of pep2html.py, let me ask you something. Plaintext PEPs use the HTML 4 transitional DTD (loose.dtd), but pep2html.py claims to produce XHTML. Should the DOCTYPE be switched to the XHTML DTD? Or is this a case of "if it ain't broke, don't fix it"? -- David Goodger http://www.python.org/peps/ Python Enhancement Proposal (PEP) Editor (Please cc: all PEP correspondence to .) From martin@v.loewis.de Wed Nov 13 08:51:30 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 09:51:30 +0100 Subject: [Python-Dev] Removing support for IRIX 4 dynamic linking In-Reply-To: References: Message-ID: Tim Rice writes: > > Does anybody know whether the comment > > > > # Most SVR4 platforms (e.g. Solaris) need -lsocket and -lnsl. > > # However on SGI IRIX, these exist but are broken. > > > > applies to IRIX 5 or later? 
> > I have don't IRIX here, but in OpenSSH we don't do anything special > around libsocket or libnsl on IRIX 5 or IRIX 6. Thanks, that is good enough for me to eliminate the special case. Regards, Martin From mal@lemburg.com Wed Nov 13 09:24:34 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 13 Nov 2002 10:24:34 +0100 Subject: [Python-Dev] Printing and __unicode__ References: <200211121721.gACHLm504767@odiug.zope.com> Message-ID: <3DD21A52.7080205@lemburg.com> Martin v. Loewis wrote: > Guido van Rossum writes: > > >>>B. A file indicates "unicode-awareness" somehow. For a Unicode-aware >>> file, it tries __unicode__, __str__, and __repr__, in order. >> >>I like this. > > > Ok, then the question is: How can a file indicate its > unicode-awareness? I propose that presence of an attribute "encoding" > is taken as such an indication; this would cover all existing cases > with no change to the file-like objects. Thanks to the time machine, this attribute is already available on stream objects created with codecs.open(). +1 > In case the stream is "natively" Unicode (i.e. doesn't ever convert to > byte strings), setting encoding to None should be allowed (this > actually indicates that StringIO should have the encoding attribute). -1 The presence of .encoding should indicate that it is safe to write Unicode objects to .write(). Let the stream decide what to do with the Unicode object (e.g. it would probably encode the Unicode object using the .encoding and only then write it to the outside world). -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From guido@python.org Wed Nov 13 15:54:11 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 10:54:11 -0500 Subject: [Python-Dev] Adopting Optik Message-ID: <200211131554.gADFsBw28797@odiug.zope.com> I want to start working on an alpha release of Python 2.3. I'd like to be able to release 2.3a1 before Xmas. PEP 283 has a list of things to be done. One of the tasks is to adopt Greg Ward's options parsing module, Optik. I propose to adopt this under the name "options". Any comments? --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy@alum.mit.edu Wed Nov 13 16:16:36 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Wed, 13 Nov 2002 11:16:36 -0500 Subject: [Python-Dev] removing inst_persistent_id from pickle Message-ID: <15826.31460.602354.85306@slothrop.zope.com> Pickle and cPickle have an undocumented hook called inst_persistent_id(). When Barry updated the pickle documentation (round about the 2.2 release), no one could figure out what it did. I think we've figured out what it does, finally, but I think we should get rid of it before someone tries to use it. The pickler has a hook for handling persistent objects. The hook basically allows objects to be pickled by reference to some mechanism external to the pickler. The pickler gets a persistent_id() function that returns the external reference. The unpickler needs a persistent_load() function that returns the object given the reference from persistent_id(). This process is fairly general, although the only use I'm familiar with is ZODB. The inst_persistent_id() hook seems to be designed for a very special case -- that the persistent_id() function returns an object that is unpicklable. 
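(For anyone who has not used these hooks, here is a minimal sketch of the ordinary persistent_id()/persistent_load() pair described above. The registry and the class names are invented for illustration and are far simpler than what ZODB actually does:

    import pickle
    from StringIO import StringIO

    # Toy external store standing in for a database or object cache.
    registry = {'obj-1': object()}

    class RefPickler(pickle.Pickler):
        def persistent_id(self, obj):
            # Return an external reference for objects kept out of the
            # pickle; returning None means "pickle this one normally".
            for key, value in registry.items():
                if value is obj:
                    return key
            return None

    class RefUnpickler(pickle.Unpickler):
        def persistent_load(self, ref):
            # Resolve the reference that persistent_id() produced.
            return registry[ref]

    buf = StringIO()
    RefPickler(buf).dump(['ordinary data', registry['obj-1']])
    buf.seek(0)
    restored = RefUnpickler(buf).load()
    assert restored[1] is registry['obj-1']

inst_persistent_id() is a second, rarely used hook layered on top of this mechanism.)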
The function is only called when the pickler encounters an object that it hasn't handled via persistent_id() or the dispatch table. The object returned by inst_persistent_id() is always passed to save_pers(), just like persistent_id(). We imagine the intended control flow is: - pickler created in binary mode - persistent_id() returns an unpicklable object - inst_persistent_id() is called to convert this to a picklable object I don't think this odd case is worth the added complexity, particularly since the hook function probably won't be called for a text-mode pickler. Anyone object to its removal? Jeremy From fredrik@pythonware.com Wed Nov 13 16:22:15 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Wed, 13 Nov 2002 17:22:15 +0100 Subject: [Python-Dev] PEP html display in IE References: Message-ID: <001701c28b30$d6829b90$ced241d5@hagrid> David Goodger wrote: > > you're using an XHTML DTD, but the files are missing the mandatory xmlns > > declaration on the html tag (see section 3.1.1, item 3 of the XHTML > > specification). > > Corrected. Thanks for the pointer. > > As the original author of pep2html.py, let me ask you something. Plaintext > PEPs use the HTML 4 transitional DTD (loose.dtd), but pep2html.py claims to > produce XHTML. Well, I'm pretty sure it didn't produce valid XHTML when I checked it in... > Should the DOCTYPE be switched to the XHTML DTD? Or is this a case of > "if it ain't broke, don't fix it"? I checked a few random PEPs, and found two small problems: - the stylesheet link tag in the header has no end tag. - there's no xmlns attribute on the html element. Except from that, the output looks like XHTML to me and to the XML parser I used. So if you have the time... From guido@python.org Wed Nov 13 16:32:24 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 11:32:24 -0500 Subject: [Python-Dev] logging docs needed Message-ID: <200211131632.gADGWOY29161@odiug.zope.com> I've checked in Vinay Sajip's logging package. It is documented at http://www.red-dove.com/python_logging.html. I need a volunteer to convert those docs to LaTeX and check them in. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Wed Nov 13 16:40:27 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 11:40:27 -0500 Subject: [Python-Dev] Adopting Optik In-Reply-To: Your message of "Wed, 13 Nov 2002 17:30:25 +0100." References: Message-ID: <200211131640.gADGeRT29280@odiug.zope.com> > > I want to start working on an alpha release of Python 2.3. I'd like > > to be able to release 2.3a1 before Xmas. PEP 283 has a list of things > > to be done. One of the tasks is to adopt Greg Ward's options parsing > > module, Optik. I propose to adopt this under the name "options". Any > > comments? > > What about the discussion in May 2002: > http://mail.python.org/pipermail/getopt-sig/2002-May/000191.html Looking at the code, I think there are too many classes to speak about a single dominant class; then the guideline becomes "use short, lower-case module names". Cute names are against Python's tradition IMO. > Since some projects (for exmaple docutils) already started to use > Optik it is becoming increasingly late for a name change. The docutils author can speak for himself; I think docutils can deal with the change. 
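For context, the interface being discussed looks roughly like this, based on the Optik releases of that period (the specific options are invented for illustration):

    from optik import OptionParser   # the name at issue: keep optik, or rename?

    parser = OptionParser()
    parser.add_option('-f', '--file', dest='filename', metavar='FILE',
                      help='write output to FILE')
    parser.add_option('-q', '--quiet', action='store_true', dest='quiet',
                      default=0, help='suppress progress messages')
    options, args = parser.parse_args()

Whatever the module ends up being called, it is the OptionParser class that application code actually imports and instantiates.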
It's also simple enough to add something like this: try: from options import OptionParse except ImportError: from optik import OptionParse --Guido van Rossum (home page: http://www.python.org/~guido/) From jim@zope.com Wed Nov 13 16:42:48 2002 From: jim@zope.com (Jim Fulton) Date: Wed, 13 Nov 2002 11:42:48 -0500 Subject: [Python-Dev] Re: removing inst_persistent_id from pickle References: <15826.31460.602354.85306@slothrop.zope.com> Message-ID: <3DD28108.2000509@zope.com> Jeremy Hylton wrote: > Pickle and cPickle have an undocumented hook called > inst_persistent_id(). When Barry updated the pickle documentation > (round about the 2.2 release), no one could figure out what it did. I > think we've figured out what it does, finally, but I think we should > get rid of it before someone tries to use it. > > The pickler has a hook for handling persistent objects. The hook > basically allows objects to be pickled by reference to some mechanism > external to the pickler. The pickler gets a persistent_id() function > that returns the external reference. The unpickler needs a > persistent_load() function that returns the object given the reference > from persistent_id(). This process is fairly general, although the > only use I'm familiar with is ZODB. > > The inst_persistent_id() hook seems to be designed for a very special > case -- that the persistent_id() function returns an object that is > unpicklable. The function is only called when the pickler encounters > an object that it hasn't handled via persistent_id() or the dispatch > table. The object returned by inst_persistent_id() is always passed > to save_pers(), just like persistent_id(). > > We imagine the intended control flow is: > > - pickler created in binary mode > - persistent_id() returns an unpicklable object > - inst_persistent_id() is called to convert this to a picklable > object > > I don't think this odd case is worth the added complexity, > particularly since the hook function probably won't be called for a > text-mode pickler. > > Anyone object to its removal? No. In fact, I endorse it's removal. Jim -- Jim Fulton mailto:jim@zope.com Python Powered! CTO (888) 344-4332 http://www.python.org Zope Corporation http://www.zope.com http://www.zope.org From pf@artcom-gmbh.de Wed Nov 13 17:24:14 2002 From: pf@artcom-gmbh.de (Peter Funk) Date: Wed, 13 Nov 2002 18:24:14 +0100 (CET) Subject: [Python-Dev] Adopting Optik In-Reply-To: <200211131640.gADGeRT29280@odiug.zope.com> from Guido van Rossum at "Nov 13, 2002 11:40:27 am" Message-ID: Hi, Guido van Rossum: [...] > > http://mail.python.org/pipermail/getopt-sig/2002-May/000191.html > > Looking at the code, I think there are too many classes to speak about > a single dominant class; [...] I beg to differ. In http://mail.python.org/pipermail/getopt-sig/2002-May/000204.html Greg wrote: '''I strongly prefer OptionParser, because that's the main class; it's the one that's always used (ie. directly instantiated). There are always instances of Option, OptionValues, and the various exception classes floating around -- but most Optik applications don't have to import those names directly.''' Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen, Germany) From guido@python.org Wed Nov 13 17:34:13 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 12:34:13 -0500 Subject: [Python-Dev] Adopting Optik In-Reply-To: Your message of "Wed, 13 Nov 2002 18:24:14 +0100." 
References: Message-ID: <200211131734.gADHYDI04190@odiug.zope.com> [PF] > I beg to differ. In > http://mail.python.org/pipermail/getopt-sig/2002-May/000204.html > Greg wrote: > '''I strongly prefer OptionParser, because that's the main class; it's the > one that's always used (ie. directly instantiated). There are always > instances of Option, OptionValues, and the various exception classes > floating around -- but most Optik applications don't have to import > those names directly.''' Too bad. Greg also said he preferred short lowercase module names. --Guido van Rossum (home page: http://www.python.org/~guido/) From hpk@devel.trillke.net Wed Nov 13 17:45:53 2002 From: hpk@devel.trillke.net (holger krekel) Date: Wed, 13 Nov 2002 18:45:53 +0100 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211131734.gADHYDI04190@odiug.zope.com>; from guido@python.org on Wed, Nov 13, 2002 at 12:34:13PM -0500 References: <200211131734.gADHYDI04190@odiug.zope.com> Message-ID: <20021113184553.F14762@prim.han.de> [Guido van Rossum Wed, Nov 13, 2002 at 12:34:13PM -0500] > [PF] > > I beg to differ. In > > http://mail.python.org/pipermail/getopt-sig/2002-May/000204.html > > Greg wrote: > > '''I strongly prefer OptionParser, because that's the main class; it's the > > one that's always used (ie. directly instantiated). There are always > > instances of Option, OptionValues, and the various exception classes > > floating around -- but most Optik applications don't have to import > > those names directly.''' > > Too bad. Greg also said he preferred short lowercase module names. But 'options' is not as descriptive as 'OptionParser'. To me it compares to 'urlparse'. We don't say 'import url'. regards, holger From martin@v.loewis.de Wed Nov 13 18:29:17 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 19:29:17 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <3DD21A52.7080205@lemburg.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > > In case the stream is "natively" Unicode (i.e. doesn't ever convert to > > byte strings), setting encoding to None should be allowed (this > > actually indicates that StringIO should have the encoding attribute). > > -1 > > The presence of .encoding should indicate that it is > safe to write Unicode objects to .write(). Let the stream > decide what to do with the Unicode object (e.g. it would > probably encode the Unicode object using the .encoding > and only then write it to the outside world). So should StringIO object have an .encoding attribute or not? If not, should f = StringIO.StringIO() print >>f,x try to invoke Unicode conversion or not? If it should, how should it find out that this is safe to do? Regards, Martin From paul@pfdubois.com Wed Nov 13 18:09:17 2002 From: paul@pfdubois.com (Paul F Dubois) Date: Wed, 13 Nov 2002 10:09:17 -0800 Subject: [Python-Dev] Python interface to attribute descriptors Message-ID: <000201c28b3f$c841c830$6501a8c0@NICKLEBY> Like a few other people before me, I was trying to understand and exploit attribute descriptors and in so doing encountered two facts: a. The C-API document doesn't document these functions beyond giving their signatures. This is even true in the development version, as far as I can tell. b. 
There is no Python API; you can define a class with some methods with the right names and signatures and it will come tantalizingly close to working for some purposes but sooner or later you find that you just aren't allowed to play as a first-class citizen from Python. Has anyone already done either of these tasks? On the bright side, I did manage to make a metaclass without my head going critical. Smoke came out my ears, though. From mal@lemburg.com Wed Nov 13 18:50:24 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 13 Nov 2002 19:50:24 +0100 Subject: [Python-Dev] Printing and __unicode__ References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> Message-ID: <3DD29EF0.8090504@lemburg.com> Martin v. Loewis wrote: > "M.-A. Lemburg" writes: > > >>>In case the stream is "natively" Unicode (i.e. doesn't ever convert to >>>byte strings), setting encoding to None should be allowed (this >>>actually indicates that StringIO should have the encoding attribute). >> >>-1 >> >>The presence of .encoding should indicate that it is >>safe to write Unicode objects to .write(). Let the stream >>decide what to do with the Unicode object (e.g. it would >>probably encode the Unicode object using the .encoding >>and only then write it to the outside world). > > > So should StringIO object have an .encoding attribute or not? > > If not, should > > f = StringIO.StringIO() > print >>f,x > > try to invoke Unicode conversion or not? StringIO should be considered a non-Unicode aware stream, so it should not implement .encoding. Instead, PyFile_WriteObject() will simply call __str__ on the Unicode object and thus use the default encoding for conversion (this is what StringIO does currently). If somebody wants to use a StringIO object as Unicode aware stream, the tools in codecs.py can be used for this (basically by doing the same kind of wrapping as codecs.open() does). > If it should, how should it > find out that this is safe to do? -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From martin@v.loewis.de Wed Nov 13 19:21:39 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 20:21:39 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <3DD29EF0.8090504@lemburg.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > StringIO should be considered a non-Unicode aware stream, > so it should not implement .encoding. Instead, PyFile_WriteObject() > will simply call __str__ on the Unicode object and thus use > the default encoding for conversion (this is what StringIO > does currently). This is not what StringIO does currently: >>> s=StringIO.StringIO() >>> print >>s,u"Hallo" >>> s.getvalue() u'Hallo\n' print special-cases Unicode objects and passes them to the stream. So printing Unicode objects on a StringIO builds up a Unicode value. > If somebody wants to use a StringIO object as Unicode aware > stream StringIO *is* Unicode-aware. Regards, Martin From guido@python.org Wed Nov 13 19:26:52 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 14:26:52 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "13 Nov 2002 20:21:39 +0100." 
References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> Message-ID: <200211131926.gADJQqc05097@odiug.zope.com> > StringIO *is* Unicode-aware. Though it acts somewhat as if its default encoding is "ascii". This is somewhat inconsistent: you can write arbitrary Unicode strings, but the Unicode won't be converted to ASCII. ASCII is converted to Unicode though. And of course cStringIO doesn't support Unicode at all. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Wed Nov 13 19:33:28 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 14:33:28 -0500 Subject: [Python-Dev] Python interface to attribute descriptors In-Reply-To: Your message of "Wed, 13 Nov 2002 10:09:17 PST." <000201c28b3f$c841c830$6501a8c0@NICKLEBY> References: <000201c28b3f$c841c830$6501a8c0@NICKLEBY> Message-ID: <200211131933.gADJXSC05270@odiug.zope.com> > a. The C-API document doesn't document these functions beyond giving > their signatures. This is even true in the development version, as far > as I can tell. Yes. Alas, the documention project for new-style classes is way behind. I could use some help. :-( > b. There is no Python API; you can define a class with some methods with > the right names and signatures and it will come tantalizingly close to > working for some purposes but sooner or later you find that you just > aren't allowed to play as a first-class citizen from Python. Can you show an example of something that doesn't work? I'm pretty sure this works correctly, as long as you stay far away from classic classes. > Has anyone already done either of these tasks? > > On the bright side, I did manage to make a metaclass without my head > going critical. Smoke came out my ears, though. What color? --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin@mems-exchange.org Wed Nov 13 19:52:16 2002 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Wed, 13 Nov 2002 14:52:16 -0500 Subject: [Python-Dev] logging docs needed In-Reply-To: <200211131632.gADGWOY29161@odiug.zope.com> References: <200211131632.gADGWOY29161@odiug.zope.com> Message-ID: <20021113195215.GA26348@ute.mems-exchange.org> On Wed, Nov 13, 2002 at 11:32:24AM -0500, Guido van Rossum wrote: >I've checked in Vinay Sajip's logging package. It is documented at >http://www.red-dove.com/python_logging.html. I need a volunteer to >convert those docs to LaTeX and check them in. Unless someone else has already volunteered, I'll do this. (Since I'll need to go over them for "What's New" anyway...) --amk (www.amk.ca) PROSPERO: The time 'twixt six and now must by us both be spent most preciously. -- _The Tempest_, I, ii From pobrien@orbtech.com Wed Nov 13 19:55:26 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Wed, 13 Nov 2002 13:55:26 -0600 Subject: [Python-Dev] IDLE local scope cleanup Message-ID: <200211131355.26133.pobrien@orbtech.com> How does IDLE clean up it's local scope on startup? By that I mean the following. When you start IDLE and do a dir() you get the following: Python 2.2.2 (#1, Oct 28 2002, 17:22:19) [GCC 3.2 (Mandrake Linux 9.0 3.2-1mdk)] on linux2 Type "copyright", "credits" or "license" for more information. GRPC IDLE Fork 0.8.9 >>> dir() ['__builtins__', '__doc__', '__name__'] >>> Only three items are returned by dir(), just like in the regular Python interpreter. Now I know how IDLE sets this up in its code, but I can't seem to achieve the exact same results with PyCrust. 
And when I add a print statement to the IDLE source code (PyShell.py): class ModifiedInterpreter(InteractiveInterpreter): def __init__(self, tkconsole): self.tkconsole = tkconsole locals = sys.modules['__main__'].__dict__ print locals.keys() InteractiveInterpreter.__init__(self, locals=locals) self.save_warnings_filters = None I can see that locals contains a bunch of stuff (well, one extra item when you run idle.py, and a bunch of stuff when you run PyShell.py), similar to what I see in PyCrust. So where does it all go by the time IDLE is up and running? Where does locals get "cleaned up"? Every one of my hunches has lead to me down a dead end. I give up! Please help. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From guido@python.org Wed Nov 13 20:00:05 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 15:00:05 -0500 Subject: [Python-Dev] IDLE local scope cleanup In-Reply-To: Your message of "Wed, 13 Nov 2002 13:55:26 CST." <200211131355.26133.pobrien@orbtech.com> References: <200211131355.26133.pobrien@orbtech.com> Message-ID: <200211132000.gADK05m05462@odiug.zope.com> > How does IDLE clean up it's local scope on startup? By that I mean the > following. When you start IDLE and do a dir() you get the following: > > Python 2.2.2 (#1, Oct 28 2002, 17:22:19) > [GCC 3.2 (Mandrake Linux 9.0 3.2-1mdk)] on linux2 > Type "copyright", "credits" or "license" for more information. > GRPC IDLE Fork 0.8.9 ^^^^ ^This is a clue. > >>> dir() > ['__builtins__', '__doc__', '__name__'] > >>> > > Only three items are returned by dir(), just like in the regular Python > interpreter. Now I know how IDLE sets this up in its code, but I can't > seem to achieve the exact same results with PyCrust. And when I add a > print statement to the IDLE source code (PyShell.py): > > class ModifiedInterpreter(InteractiveInterpreter): > > def __init__(self, tkconsole): > self.tkconsole = tkconsole > locals = sys.modules['__main__'].__dict__ > print locals.keys() > InteractiveInterpreter.__init__(self, locals=locals) > self.save_warnings_filters = None > > I can see that locals contains a bunch of stuff (well, one extra item > when you run idle.py, and a bunch of stuff when you run PyShell.py), > similar to what I see in PyCrust. So where does it all go by the time > IDLE is up and running? Where does locals get "cleaned up"? Every one > of my hunches has lead to me down a dead end. I give up! Please help. IDLE doesn't use these locals any more; they're decoys. The GRPC version runs the interpreter in a subprocess and the subprocess is more careful. --Guido van Rossum (home page: http://www.python.org/~guido/) From pobrien@orbtech.com Wed Nov 13 20:16:00 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Wed, 13 Nov 2002 14:16:00 -0600 Subject: [Python-Dev] IDLE local scope cleanup In-Reply-To: <200211132000.gADK05m05462@odiug.zope.com> References: <200211131355.26133.pobrien@orbtech.com> <200211132000.gADK05m05462@odiug.zope.com> Message-ID: <200211131416.00964.pobrien@orbtech.com> On Wednesday 13 November 2002 02:00 pm, Guido van Rossum wrote: > IDLE doesn't use these locals any more; they're decoys. The GRPC > version runs the interpreter in a subprocess and the subprocess is > more careful. Ah. I suspected that, but didn't understand the subprocess code enough to figure that out. Thanks. Here is my real problem. 
I used to just pass a regular dictionary to code.InteractiveInterpreter, which worked well enough. But I just discovered an issue with pickling in the PyCrust shell, illustrated below: Welcome To PyCrust 0.8 - The Flakiest Python Shell Sponsored by Orbtech - Your source for Python programming expertise. Python 2.2.2 (#1, Oct 28 2002, 17:22:19) [GCC 3.2 (Mandrake Linux 9.0 3.2-1mdk)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pickle >>> def foo(): ... pass ... >>> pickle.dumps(foo) Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.2/pickle.py", line 978, in dumps Pickler(file, bin).dump(object) File "/usr/local/lib/python2.2/pickle.py", line 115, in dump self.save(object) File "/usr/local/lib/python2.2/pickle.py", line 225, in save f(self, object) File "/usr/local/lib/python2.2/pickle.py", line 519, in save_global raise PicklingError( PicklingError: Can't pickle : it's not found as __main__.foo >>> So I decided to switch to using sys.modules['__main__'].__dict__, which eliminated the pickling error, but introduced a bunch of clutter in the local namespace. Any suggestions? -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From martin@v.loewis.de Wed Nov 13 20:23:36 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 21:23:36 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <200211131926.gADJQqc05097@odiug.zope.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> Message-ID: Guido van Rossum writes: > Though it acts somewhat as if its default encoding is "ascii". This > is somewhat inconsistent: you can write arbitrary Unicode strings, but > the Unicode won't be converted to ASCII. ASCII is converted to > Unicode though. It is the only case of a "pure Unicode" stream in Python, where the underlying "native" sequence is not one of bytes, but one of Unicode characters. The real problem is that the "orientation" (wide or narrow strings) is determined by the things written into the stream. It might be more reasonable to have StringIO.ByteIO and StringIO.UnicodeIO constructors, which both accept an encoding= argument, and will convert objects of the wrong "orientation" using that encoding (defaulting to the system encoding). Regards, Martin From guido@python.org Wed Nov 13 20:32:32 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 15:32:32 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "13 Nov 2002 21:23:36 +0100." References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> Message-ID: <200211132032.gADKWW605978@odiug.zope.com> > > Though it acts somewhat as if its default encoding is "ascii". This > > is somewhat inconsistent: you can write arbitrary Unicode strings, but > > the Unicode won't be converted to ASCII. ASCII is converted to > > Unicode though. > > It is the only case of a "pure Unicode" stream in Python, where the > underlying "native" sequence is not one of bytes, but one of Unicode > characters. > > The real problem is that the "orientation" (wide or narrow strings) is > determined by the things written into the stream. 
> > It might be more reasonable to have StringIO.ByteIO and > StringIO.UnicodeIO constructors, which both accept an encoding= > argument, and will convert objects of the wrong "orientation" > using that encoding (defaulting to the system encoding). I'm not sure about those names, but I agree that the encoding should be forced when the StringIO instance is created. Given that using Unicode with these is currently fragile at best, maybe we should say that unless you give an encoding argument, it's a byte stream and doesn't allow Unicode at all? That would be consistent with cStringIO. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Wed Nov 13 20:42:08 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 15:42:08 -0500 Subject: [Python-Dev] IDLE local scope cleanup In-Reply-To: Your message of "Wed, 13 Nov 2002 14:16:00 CST." <200211131416.00964.pobrien@orbtech.com> References: <200211131355.26133.pobrien@orbtech.com> <200211132000.gADK05m05462@odiug.zope.com> <200211131416.00964.pobrien@orbtech.com> Message-ID: <200211132042.gADKg8C06099@odiug.zope.com> > Here is my real problem. I used to just pass a regular dictionary to > code.InteractiveInterpreter, which worked well enough. But I just > discovered an issue with pickling in the PyCrust shell, illustrated > below: > > Welcome To PyCrust 0.8 - The Flakiest Python Shell > Sponsored by Orbtech - Your source for Python programming expertise. > Python 2.2.2 (#1, Oct 28 2002, 17:22:19) > [GCC 3.2 (Mandrake Linux 9.0 3.2-1mdk)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import pickle > >>> def foo(): > ... pass > ... > >>> pickle.dumps(foo) > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.2/pickle.py", line 978, in dumps > Pickler(file, bin).dump(object) > File "/usr/local/lib/python2.2/pickle.py", line 115, in dump > self.save(object) > File "/usr/local/lib/python2.2/pickle.py", line 225, in save > f(self, object) > File "/usr/local/lib/python2.2/pickle.py", line 519, in save_global > raise PicklingError( > PicklingError: Can't pickle : it's not found > as __main__.foo > >>> > > So I decided to switch to using sys.modules['__main__'].__dict__, which > eliminated the pickling error, but introduced a bunch of clutter in the > local namespace. Any suggestions? Remove the clutter. This probably means having a very minimal "real" main program. IDLE does this by putting all the real code in the "run" module, and bootstrapping the subprocess as follows: python -c "__import__('run').main()" This is roughly equivalent to executing import run run.main() except that it doesn't create a variable 'run'. --Guido van Rossum (home page: http://www.python.org/~guido/) From mal@lemburg.com Wed Nov 13 20:58:33 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 13 Nov 2002 21:58:33 +0100 Subject: [Python-Dev] Printing and __unicode__ References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> Message-ID: <3DD2BCF9.5010102@lemburg.com> Guido van Rossum wrote: >>>Though it acts somewhat as if its default encoding is "ascii". This >>>is somewhat inconsistent: you can write arbitrary Unicode strings, but >>>the Unicode won't be converted to ASCII. ASCII is converted to >>>Unicode though. 
>> >>It is the only case of a "pure Unicode" stream in Python, where the >>underlying "native" sequence is not one of bytes, but one of Unicode >>characters. >> >>The real problem is that the "orientation" (wide or narrow strings) is >>determined by the things written into the stream. >> >>It might be more reasonable to have StringIO.ByteIO and >>StringIO.UnicodeIO constructors, which both accept an encoding= >>argument, and will convert objects of the wrong "orientation" >>using that encoding (defaulting to the system encoding). > > > I'm not sure about those names, but I agree that the encoding should > be forced when the StringIO instance is created. Given that using > Unicode with these is currently fragile at best, maybe we should say > that unless you give an encoding argument, it's a byte stream and > doesn't allow Unicode at all? That would be consistent with cStringIO. +1 The fact that StringIO works with Unicode (and then only in the case where you *only* pass Unicode to it) is more an implementation detail than a true feature. cStringIO doesn't have this implementation detail, so porting from StringIO to the much faster cStringIO doesn't work at all for Unicode. I think that StringIO and cStringIO should be regarded as binary streams without any encoding knowledge. It is easy enough to wrap these into Unicode aware streams using the codecs.StreamReaderWriter class as is done in codecs.open(). That API already adds the .encoding attribute to the stream object, BTW. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From martin@v.loewis.de Wed Nov 13 21:07:36 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 22:07:36 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <200211132032.gADKWW605978@odiug.zope.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> Message-ID: Guido van Rossum writes: > I'm not sure about those names, but I agree that the encoding should > be forced when the StringIO instance is created. Given that using > Unicode with these is currently fragile at best, maybe we should say > that unless you give an encoding argument, it's a byte stream and > doesn't allow Unicode at all? That would be consistent with cStringIO. But it would break compatibility, atleast with xml.dom.minidom.Node.write, which support StringIO currently, and will collect Unicode strings in it. Regards, Martin From martin@v.loewis.de Wed Nov 13 21:11:46 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 22:11:46 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <3DD2BCF9.5010102@lemburg.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <3DD2BCF9.5010102@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > The fact that StringIO works with Unicode (and then only in the > case where you *only* pass Unicode to it) is more an implementation > detail than a true feature. It's a true feature. 
You explicitly fixed that feature in revision 1.20 date: 2002/01/06 17:15:05; author: lemburg; state: Exp; lines: +8 -5 Restore Python 2.1 StringIO.py behaviour: support concatenating Unicode string snippets to larger Unicode strings. This fix should also go into Python 2.2.1. after you broke it in revision 1.19 date: 2001/09/24 17:34:52; author: lemburg; state: Exp; lines: +4 -1 branches: 1.19.12; StringIO patch #462596: let's [c]StringIO accept read buffers on input to .write() too. > cStringIO doesn't have this implementation detail, so porting from > StringIO to the much faster cStringIO doesn't work at all for > Unicode. Correct. That still doesn't make it an implementation detail. > I think that StringIO and cStringIO should be regarded as > binary streams without any encoding knowledge. It is easy > enough to wrap these into Unicode aware streams using the > codecs.StreamReaderWriter class as is done in codecs.open(). Then why did you fix that behaviour when you broke it? Regards, Martin From guido@python.org Wed Nov 13 21:17:30 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 16:17:30 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "13 Nov 2002 22:07:36 +0100." References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> Message-ID: <200211132117.gADLHU906949@odiug.zope.com> > > I'm not sure about those names, but I agree that the encoding should > > be forced when the StringIO instance is created. Given that using > > Unicode with these is currently fragile at best, maybe we should say > > that unless you give an encoding argument, it's a byte stream and > > doesn't allow Unicode at all? That would be consistent with cStringIO. > > But it would break compatibility, atleast with > xml.dom.minidom.Node.write, which support StringIO currently, and will > collect Unicode strings in it. Would it be acceptable if StringIO required you to be consistent, i.e. write only Unicode *or* only 8-bit strings, and never mix them? That would be some kind of magical behavior; the encoding attribute should be set to reflect the mode after the first write, and should be None initially (or some other way to indicate the magic). --Guido van Rossum (home page: http://www.python.org/~guido/) From drifty@bigfoot.com Wed Nov 13 21:29:24 2002 From: drifty@bigfoot.com (Brett Cannon) Date: Wed, 13 Nov 2002 13:29:24 -0800 (PST) Subject: [Python-Dev] Adopting Optik In-Reply-To: <200211131554.gADFsBw28797@odiug.zope.com> Message-ID: [Guido van Rossum] > I want to start working on an alpha release of Python 2.3. I'd like > to be able to release 2.3a1 before Xmas. PEP 283 has a list of things > to be done. One of the tasks is to adopt Greg Ward's options parsing > module, Optik. I propose to adopt this under the name "options". Any > comments? > +0 The name is basically fine, if just a little vague. But then again I really doubt someone learning programming knows what getopt traditionally means. But I don't have a better name, so I can't really complain. Best I can do is ArgParser or something to try to tie the name into sys.argv. And I completely support making sure that it doesn't have a cute name. 
-Brett From ping@zesty.ca Wed Nov 13 21:33:03 2002 From: ping@zesty.ca (Ka-Ping Yee) Date: Wed, 13 Nov 2002 15:33:03 -0600 (CST) Subject: [Python-Dev] Adopting Optik In-Reply-To: <200211131640.gADGeRT29280@odiug.zope.com> Message-ID: On Wed, 13 Nov 2002, Guido van Rossum wrote: > the guideline becomes "use short, > lower-case module names". A short lower-case name would be good, but i worry that "options" is too generic a word. There are all sorts of options it might mean. Could we find a name that has something to do with commands or the command line, like "cmdline" or "cmdopts"? -- ?!ng From guido@python.org Wed Nov 13 21:36:24 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 16:36:24 -0500 Subject: [Python-Dev] Adopting Optik In-Reply-To: Your message of "Wed, 13 Nov 2002 15:33:03 CST." References: Message-ID: <200211132136.gADLaOd07157@odiug.zope.com> > > the guideline becomes "use short, > > lower-case module names". > > A short lower-case name would be good, but i worry that "options" is > too generic a word. There are all sorts of options it might mean. > Could we find a name that has something to do with commands or the > command line, like "cmdline" or "cmdopts"? This has been mentioned before and I'm sort of in agreement, especially since I've heard from several people already who have their own module options.py. How about optlib? It's short, un-cute, and follows the *lib pattern used all over the Python stdlib. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Wed Nov 13 21:50:25 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 22:50:25 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <200211132117.gADLHU906949@odiug.zope.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <200211132117.gADLHU906949@odiug.zope.com> Message-ID: Guido van Rossum writes: > > But it would break compatibility, atleast with > > xml.dom.minidom.Node.write, which support StringIO currently, and will > > collect Unicode strings in it. > > Would it be acceptable if StringIO required you to be consistent, > i.e. write only Unicode *or* only 8-bit strings, and never mix them? >From a strict point of view, that would be acceptable, since the DOM is specified to be a Unicode thing. Unfortunately, neither the current implementation, nor the common use make such a strict view reasonable. It *would* be reasonable to mandate that all byte strings written to a Unicode StringIO are ASCII, regardless of what the system encoding is; however, the difference of that to the status quo is minor. To give an example, just consider if self.childNodes: writer.write(">%s"%(newl)) for node in self.childNodes: node.writexml(writer,indent+addindent,addindent,newl) writer.write("%s%s" % (indent,self.tagName,newl)) else: writer.write("/>%s"%(newl)) Here, writer is often a StringIO instance; indent, addindent, and newl are byte strings (as are all the literals), self.tagName might be a Unicode string, and the orientation of the StringIO might be wide as well. > That would be some kind of magical behavior; the encoding attribute > should be set to reflect the mode after the first write, and should > be None initially (or some other way to indicate the magic). While I sympathise with that architecture, a migration strategy would be needed. 
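For concreteness, a minimal sketch of the "consistent orientation" behaviour just described -- hypothetical Python 2.2-era code, not anything that exists in the library: the first write fixes whether the stream is byte- or Unicode-oriented, records that in .encoding (None until then), and mixing the two kinds of data raises an error.

    import StringIO, types

    class StrictStringIO(StringIO.StringIO):

        encoding = None     # unknown until the first write

        def write(self, s):
            if isinstance(s, types.UnicodeType):
                kind = 'unicode'
            else:
                kind = 'bytes'
            if self.encoding is None:
                self.encoding = kind    # orientation fixed by the first write
            elif self.encoding != kind:
                raise TypeError("%s data written to a %s-oriented stream"
                                % (kind, self.encoding))
            StringIO.StringIO.write(self, s)

Under such a rule, a writexml() call like the one quoted above would fail on the first mixed write, instead of failing only later, if at all, when the buffered pieces are joined.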
Python 2.3 will eliminate some of the pressure, by allowing applications to specify an encoding when they write back XML; if they do specify an encoding, the resulting stream will be narrow. Of course, it is then up to application to actually specify the output encoding (which, admittedly, should have been mandated from day 1). Regards, Martin From guido@python.org Wed Nov 13 21:56:08 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 16:56:08 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "13 Nov 2002 22:50:25 +0100." References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <200211132117.gADLHU906949@odiug.zope.com> Message-ID: <200211132156.gADLu8L07252@odiug.zope.com> OK, I give up. Let's just keep StringIO exactly as it was. The current behavior is relied upon too much to be able to change it. --Guido van Rossum (home page: http://www.python.org/~guido/) From amk@amk.ca Wed Nov 13 22:30:57 2002 From: amk@amk.ca (A.M. Kuchling) Date: Wed, 13 Nov 2002 17:30:57 -0500 Subject: [Python-Dev] Killing off bdist_dumb Message-ID: <200211132230.gADMUv205719@nyman.amk.ca> [CC'ed to python-dev, distutils-sig; followups to distutils-sig] In the commentary attached to bug #410541, I suggest removing the bdist_dumb command, because no interesting platforms actually install from zip files. Are there any platforms Python supports, such as Slackware, BeOS, AtheOS, or whatever, where the convention is to do binary installations from .zip files? Any objections to killing bdist_dumb? --amk (www.amk.ca) ROSALIND: My pride fell with my fortune. -- _As You Like It_, I, ii From guido@python.org Wed Nov 13 22:03:40 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 17:03:40 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: Your message of "Wed, 13 Nov 2002 17:30:57 EST." <200211132230.gADMUv205719@nyman.amk.ca> References: <200211132230.gADMUv205719@nyman.amk.ca> Message-ID: <200211132203.gADM3et07324@odiug.zope.com> > In the commentary attached to bug #410541, I suggest removing the > bdist_dumb command, because no interesting platforms actually install > from zip files. > > Are there any platforms Python supports, such as Slackware, BeOS, > AtheOS, or whatever, where the convention is to do binary > installations from .zip files? Any objections to killing bdist_dumb? Aren't zipfiles used as el-cheapo installers on Windows? I've seen plenty of stuff that was distributed as a simple zipfile, with instructions "unpack ". --Guido van Rossum (home page: http://www.python.org/~guido/) From paul@pfdubois.com Wed Nov 13 22:08:57 2002 From: paul@pfdubois.com (Paul F Dubois) Date: Wed, 13 Nov 2002 14:08:57 -0800 Subject: [Python-Dev] Python interface to attribute descriptors In-Reply-To: <200211131933.gADJXSC05270@odiug.zope.com> Message-ID: <000001c28b61$482ea420$6501a8c0@NICKLEBY> We used to say in the math business that knowing whether something is true makes it a lot easier to prove. In this case, Guido telling me that it should work helped me find where I had misread the PEP and I was able to solve my problem. For reference for other people, my small example follows. Question: how could the descriptor "know" the name "x" if it is created by a descriptor-creating statement such as x = descriptor_creator(...). 
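One illustrative answer, sketched here before the text below picks up the same metaclass idea (hypothetical Python 2.2-era code, not from the original mail): let class creation "poke" the attribute name into every descriptor that asks for one.

    class NamedDescriptor(object):
        """Marker base class for descriptors that want to learn their own name."""

    class AutoNaming(type):
        def __init__(cls, name, bases, namespace):
            super(AutoNaming, cls).__init__(name, bases, namespace)
            for attr, obj in namespace.items():
                if isinstance(obj, NamedDescriptor):
                    obj.__name__ = attr     # "poke" the name in

A descriptor like the positive class shown next could then subclass NamedDescriptor and drop its name argument, and any class using it would set __metaclass__ = AutoNaming so that x = descriptor_creator(...) needs no repeated name.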
I guess one could do this by making a metaclass that would look for the descriptors in the class and "poke" the name into them but is there another way? In the example below I evaded this question by making the name an argument to positive's constructor. class positive (object): "Attribute that can only be positive." def __init__(self, name, default, doc=""): self.default = default self.__name__ = name self.__doc__ = doc def __get__ (self, obj, metaobj=None): if obj is None: return metaobj.__dict__[self.__name__] else: return getattr(obj, "__" + self.__name__, self.default) def __set__ (self, obj, value): try: v = type(self.default)(value) if not (v > 0): raise ValueError, "Value not positive." except: raise ValueError, "Cannot convert to positive %s" % repr(type(self.default)) setattr(obj, "__" + self.__name__, v) class Simple(object) : """This class has two attributes, x and y, that must be positive floats. """ x = positive("x", 1.0, "x-coordinate") y = positive("y", 2.0, "y-coordinate") def meth (self): return self.x * self.y s = Simple() print s.x, s.y, s.meth() s.x, s.y = (2.,3) print s.x, s.y, s.meth() print Simple.x.__doc__ s2 = Simple() print s2.x, s2.y, s2.meth() try: s.x = -1.0 raise RuntimeError, "Evaded validation" except ValueError, e: print e When run this prints: 1.0 2.0 2.0 2.0 3.0 6.0 x-coordinate 1.0 2.0 2.0 Cannot convert to positive From dave@boost-consulting.com Wed Nov 13 21:59:14 2002 From: dave@boost-consulting.com (David Abrahams) Date: Wed, 13 Nov 2002 16:59:14 -0500 Subject: [Python-Dev] Adopting Optik In-Reply-To: (Ka-Ping Yee's message of "Wed, 13 Nov 2002 15:33:03 -0600 (CST)") References: Message-ID: Ka-Ping Yee writes: > On Wed, 13 Nov 2002, Guido van Rossum wrote: >> the guideline becomes "use short, >> lower-case module names". > > A short lower-case name would be good, but i worry that "options" is > too generic a word. There are all sorts of options it might mean. > Could we find a name that has something to do with commands or the > command line, like "cmdline" or "cmdopts"? +1 Both of those go "thunk" for me. -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From martin@v.loewis.de Wed Nov 13 22:14:53 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 13 Nov 2002 23:14:53 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211132203.gADM3et07324@odiug.zope.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> Message-ID: Guido van Rossum writes: > Aren't zipfiles used as el-cheapo installers on Windows? I've seen > plenty of stuff that was distributed as a simple zipfile, with > instructions "unpack ". Sure, but on Windows, you have bdist_wininst, which isn't any more difficult to use, and far superior. People building distutils packages for Windows appreciate the fancy-without-efforts installer (I'm one of those people myself); I would never consider using bdist_dumb on Windows. In fact, I thought it was meant for systems like Solaris, where the native packaging is not supported. Of course, on Solaris, I would expect to get a .tar.gz, not a .zip. So even though I do use binutils binary packages, I would not suffer from losing bdist_dumb, and I can't imagine anybody who would. 
Regards, Martin From guido@python.org Wed Nov 13 22:24:14 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 17:24:14 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: Your message of "13 Nov 2002 23:14:53 +0100." References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> Message-ID: <200211132224.gADMOEG07559@odiug.zope.com> > > Aren't zipfiles used as el-cheapo installers on Windows? I've seen > > plenty of stuff that was distributed as a simple zipfile, with > > instructions "unpack ". > > Sure, but on Windows, you have bdist_wininst, which isn't any more > difficult to use, and far superior. People building distutils packages > for Windows appreciate the fancy-without-efforts installer (I'm one of > those people myself); I would never consider using bdist_dumb on > Windows. > > In fact, I thought it was meant for systems like Solaris, where the > native packaging is not supported. Of course, on Solaris, I would > expect to get a .tar.gz, not a .zip. > > So even though I do use binutils binary packages, I would not suffer > from losing bdist_dumb, and I can't imagine anybody who would. OK, but bdist_wininst feels fragile (especially when I see checkins of a pile of binary gunk each time something has changed). Zip files are a lowest common denominator. Why do you want to lose bdist_dumb? --Guido van Rossum (home page: http://www.python.org/~guido/) From skip@pobox.com Wed Nov 13 22:30:10 2002 From: skip@pobox.com (Skip Montanaro) Date: Wed, 13 Nov 2002 16:30:10 -0600 Subject: [Python-Dev] logging docs needed In-Reply-To: <20021113195215.GA26348@ute.mems-exchange.org> References: <200211131632.gADGWOY29161@odiug.zope.com> <20021113195215.GA26348@ute.mems-exchange.org> Message-ID: <15826.53874.984913.882527@montanaro.dyndns.org> >> I've checked in Vinay Sajip's logging package. It is documented at >> http://www.red-dove.com/python_logging.html. I need a volunteer to >> convert those docs to LaTeX and check them in. amk> Unless someone else has already volunteered, I'll do this. (Since amk> I'll need to go over them for "What's New" anyway...) I started on this earlier today as a short break from the stuff I'm currently working on. The docs that are there seem to be a good tutorial, but the structure is significantly different than how the library reference manual is done. For the purposes of inclusion in the libref manual, it might actually be easier to convert http://www.red-dove.com/logging_pydoc.html It appears Vinay has a pretty full suite of class and method docstrings. Skip From greg@cosc.canterbury.ac.nz Wed Nov 13 22:30:34 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 14 Nov 2002 11:30:34 +1300 (NZDT) Subject: [Python-Dev] Adopting Optik In-Reply-To: Message-ID: <200211132230.gADMUYg22007@kuku.cosc.canterbury.ac.nz> Brett Cannon : > Best I can do is ArgParser or something to try to tie the name into > sys.argv. How about argvparse, by analogy with urlparse? Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+ From martin@v.loewis.de Wed Nov 13 22:37:48 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 13 Nov 2002 23:37:48 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211132224.gADMOEG07559@odiug.zope.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> Message-ID: Guido van Rossum writes: > OK, but bdist_wininst feels fragile (especially when I see checkins of > a pile of binary gunk each time something has changed). Zip files are > a lowest common denominator. That was my impression also, but I regained trust when I understood that we actually do have the source for those binaries :-) see http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/distutils/misc/install.c?annotate=1.22 > Why do you want to lose bdist_dumb? I don't want to lose it, but I wouldn't mind losing it if it simplifies something. I guess that makes it +0. Regards, Martin From DavidA@ActiveState.com Wed Nov 13 22:38:31 2002 From: DavidA@ActiveState.com (David Ascher) Date: Wed, 13 Nov 2002 14:38:31 -0800 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211132230.gADMUv205719@nyman.amk.ca> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> Message-ID: <3DD2D467.2030405@ActiveState.com> Guido van Rossum wrote: > >>Aren't zipfiles used as el-cheapo installers on Windows? I've seen > >>plenty of stuff that was distributed as a simple zipfile, with > >>instructions "unpack ". > > > >Sure, but on Windows, you have bdist_wininst, which isn't any more > >difficult to use, and far superior. People building distutils packages > >for Windows appreciate the fancy-without-efforts installer (I'm one of > >those people myself); I would never consider using bdist_dumb on > >Windows. > > > >In fact, I thought it was meant for systems like Solaris, where the > >native packaging is not supported. Of course, on Solaris, I would > >expect to get a .tar.gz, not a .zip. > > > >So even though I do use binutils binary packages, I would not suffer > >from losing bdist_dumb, and I can't imagine anybody who would. > > > OK, but bdist_wininst feels fragile (especially when I see checkins of > a pile of binary gunk each time something has changed). Zip files are > a lowest common denominator. Speaking of which -- I asked someone (can't remember who =) to check the source to that binary junk in the tree somewhere. Did that happen? (my cvs tree here is out of date and I'm getting timeouts from sf, or I'd check). From DavidA@ActiveState.com Wed Nov 13 22:39:35 2002 From: DavidA@ActiveState.com (David Ascher) Date: Wed, 13 Nov 2002 14:39:35 -0800 Subject: [Python-Dev] Adopting Optik In-Reply-To: <200211132230.gADMUYg22007@kuku.cosc.canterbury.ac.nz> References: <200211132230.gADMUYg22007@kuku.cosc.canterbury.ac.nz> Message-ID: <3DD2D4A7.8000902@ActiveState.com> Greg Ewing wrote: > Brett Cannon : > > > >Best I can do is ArgParser or something to try to tie the name into > >sys.argv. > > > How about argvparse, by analogy with urlparse? or argparse. The v is archaic and so silent it fades away =) --david From dave@boost-consulting.com Wed Nov 13 22:36:54 2002 From: dave@boost-consulting.com (David Abrahams) Date: Wed, 13 Nov 2002 17:36:54 -0500 Subject: [Python-Dev] property docs? Message-ID: I just had to refer a Boost.Python user to the documentation of 'property', and was unable to find anything in the regular documentation. 
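(For context, a minimal usage sketch of the builtin in question; the Celsius class below is just an invented example, but the property(fget, fset, fdel, doc) call is the standard signature.)

    class Celsius(object):
        def __init__(self):
            self._value = 0.0
        def _get(self):
            return self._value
        def _set(self, v):
            self._value = float(v)
        degrees = property(_get, _set, doc="temperature in degrees Celsius")

    c = Celsius()
    c.degrees = 21
    print c.degrees                 # prints 21.0
    print Celsius.degrees.__doc__   # prints the doc string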
The closest I could find was: http://www.python.org/doc/current/whatsnew/sect-rellinks.html#SECTION000340000000000000000 Ah, well, I guess there's http://www.python.org/2.2.2/descrintro.html#property But http://www.python.org/current/descrintro.html#property doesn't work. I think the reason why came up before on this list, though it escapes me again. Maybe this complaint will serve as enough of a poke to get that fixed. -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From dave@boost-consulting.com Wed Nov 13 22:44:47 2002 From: dave@boost-consulting.com (David Abrahams) Date: Wed, 13 Nov 2002 17:44:47 -0500 Subject: [Python-Dev] Adopting Optik In-Reply-To: <3DD2D4A7.8000902@ActiveState.com> (David Ascher's message of "Wed, 13 Nov 2002 14:39:35 -0800") References: <200211132230.gADMUYg22007@kuku.cosc.canterbury.ac.nz> <3DD2D4A7.8000902@ActiveState.com> Message-ID: David Ascher writes: > Greg Ewing wrote: > >> Brett Cannon : >> >> >> >Best I can do is ArgParser or something to try to tie the name into >> >sys.argv. >> >> >> How about argvparse, by analogy with urlparse? > > or argparse. The v is archaic and so silent it fades away =) -1 With one /single/ character, 'argvparse' disambiguates that we're talking about command-line arguments. You can't beat that for semantic value. -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From martin@v.loewis.de Wed Nov 13 23:06:36 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 14 Nov 2002 00:06:36 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <3DD2D467.2030405@ActiveState.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3DD2D467.2030405@ActiveState.com> Message-ID: David Ascher writes: > Speaking of which -- I asked someone (can't remember who =) to check > the source to that binary junk in the tree somewhere. Did that > happen? (my cvs tree here is out of date and I'm getting timeouts from > sf, or I'd check). It's in the distutils CVS, and it always was. Regards, Martin From Andrew.MacIntyre@aba.gov.au Wed Nov 13 23:16:05 2002 From: Andrew.MacIntyre@aba.gov.au (Andrew MacIntyre) Date: Thu, 14 Nov 2002 10:16:05 +1100 Subject: [Python-Dev] RE: [Distutils] Killing off bdist_dumb Message-ID: At the moment I use ZIP archives for the OS/2 EMX port, but I don't use bdist_dumb - I remember trying to use it, but can't remember why I stopped trying. ----------------------------------------------------------------------- Andrew MacIntyre \ E-mail: andrew.macintyre@aba.gov.au Planning & Licensing Branch \ Tel: +61 2 6256 2812 Australian Broadcasting Authority \ Fax: +61 2 6253 3277 -> "These thoughts are mine alone!" <---------------------------------- > -----Original Message----- > From: A.M. Kuchling [mailto:akuchlin@mems-exchange.org] > Sent: Thursday, 14 November 2002 9:31 AM > To: distutils-sig@python.org > Cc: python-dev@python.org > Subject: [Distutils] Killing off bdist_dumb > > > [CC'ed to python-dev, distutils-sig; followups to distutils-sig] > > In the commentary attached to bug #410541, I suggest removing the > bdist_dumb command, because no interesting platforms actually install > from zip files. 
> > Are there any platforms Python supports, such as Slackware, BeOS, > AtheOS, or whatever, where the convention is to do binary > installations from .zip files? Any objections to killing bdist_dumb? > > --amk > (www.amk.ca) > ROSALIND: My pride fell with my fortune. > -- _As You Like It_, I, ii > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG@python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From gward@python.net Wed Nov 13 23:37:29 2002 From: gward@python.net (Greg Ward) Date: Wed, 13 Nov 2002 18:37:29 -0500 Subject: [Python-Dev] Re: [getopt-sig] Adopting Optik In-Reply-To: <200211131554.gADFsBw28797@odiug.zope.com> References: <200211131554.gADFsBw28797@odiug.zope.com> Message-ID: <20021113233729.GA3218@cthulhu.gerg.ca> On 13 November 2002, Guido van Rossum said: > I want to start working on an alpha release of Python 2.3. I'd like > to be able to release 2.3a1 before Xmas. PEP 283 has a list of things > to be done. One of the tasks is to adopt Greg Ward's options parsing > module, Optik. I propose to adopt this under the name "options". Any > comments? I have yet to be thrilled by any of the proposed names, and I'm not thrilled by this one. It's possible that I dislike it slightly less than OptionParser, which has been my working title for ages now. BTW, several weeks ago I wrote a script to do much of the grunt work. If you have the Optik CVS tree checked out, this: ./merge optik.py will merge the relevant code from lib/*.py into optik.py. Change the output name to suit your taste, of course. Still haven't done anything about the test suite, which is probably the main reason I've been procrastinating on this. (Oh yeah, the docs too.) Greg -- Greg Ward http://www.gerg.ca/ Are we THERE yet? From gward@python.net Wed Nov 13 23:39:47 2002 From: gward@python.net (Greg Ward) Date: Wed, 13 Nov 2002 18:39:47 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211132136.gADLaOd07157@odiug.zope.com> References: <200211132136.gADLaOd07157@odiug.zope.com> Message-ID: <20021113233947.GB3218@cthulhu.gerg.ca> On 13 November 2002, Guido van Rossum said: > How about optlib? It's short, un-cute, and follows the *lib pattern > used all over the Python stdlib. Oooh, I think I like it! Definitely more than I like cmdline.py or cmdopts.py or anything like that. (I dislike abbreviated words in module names almost as much as Guido dislikes underscores.) Greg -- Greg Ward http://www.gerg.ca/ And I wonder ... will Elvis take the place of Jesus in a thousand years? -- Dead Kennedys From jeremy@alum.mit.edu Wed Nov 13 23:39:46 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Wed, 13 Nov 2002 18:39:46 -0500 Subject: [Python-Dev] Re: [getopt-sig] Adopting Optik In-Reply-To: <20021113233729.GA3218@cthulhu.gerg.ca> References: <200211131554.gADFsBw28797@odiug.zope.com> <20021113233729.GA3218@cthulhu.gerg.ca> Message-ID: <15826.58050.522774.15983@slothrop.zope.com> >>>>> "GW" == Greg Ward writes: GW> Still haven't done anything about the test suite, which is GW> probably the main reason I've been procrastinating on this. (Oh GW> yeah, the docs too.) They can probably wait until the distutils docs are done. 
Jeremy From gward@python.net Wed Nov 13 23:44:19 2002 From: gward@python.net (Greg Ward) Date: Wed, 13 Nov 2002 18:44:19 -0500 Subject: [Python-Dev] Re: [getopt-sig] Adopting Optik In-Reply-To: <15826.58050.522774.15983@slothrop.zope.com> References: <200211131554.gADFsBw28797@odiug.zope.com> <20021113233729.GA3218@cthulhu.gerg.ca> <15826.58050.522774.15983@slothrop.zope.com> Message-ID: <20021113234419.GC3218@cthulhu.gerg.ca> On 13 November 2002, Jeremy Hylton said: > GW> Still haven't done anything about the test suite, which is > GW> probably the main reason I've been procrastinating on this. (Oh > GW> yeah, the docs too.) > > They can probably wait until the distutils docs are done. Ouch! That was low. ;-) BTW, Optik *is* copiously documented -- it's just a question of LaTeX-ifying the docs. Greg -- Greg Ward http://www.gerg.ca/ I'm a lumberjack and I'm OK / I sleep all night and I work all day From dave@boost-consulting.com Wed Nov 13 23:43:50 2002 From: dave@boost-consulting.com (David Abrahams) Date: Wed, 13 Nov 2002 18:43:50 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <20021113233947.GB3218@cthulhu.gerg.ca> (Greg Ward's message of "Wed, 13 Nov 2002 18:39:47 -0500") References: <200211132136.gADLaOd07157@odiug.zope.com> <20021113233947.GB3218@cthulhu.gerg.ca> Message-ID: Greg Ward writes: > On 13 November 2002, Guido van Rossum said: >> How about optlib? It's short, un-cute, and follows the *lib pattern >> used all over the Python stdlib. > > Oooh, I think I like it! Definitely more than I like cmdline.py or > cmdopts.py or anything like that. > > (I dislike abbreviated words in module names almost as much as Guido > dislikes underscores.) I agree with that sentiment, but find it hard to understand why you'd prefer 'opt' as an abbreviation over the other abbreviations suggested, which are much less-confusable. -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From akuchlin@mems-exchange.org Thu Nov 14 01:05:17 2002 From: akuchlin@mems-exchange.org (akuchlin@mems-exchange.org) Date: Wed, 13 Nov 2002 20:05:17 -0500 Subject: [Python-Dev] logging docs needed In-Reply-To: <15826.53874.984913.882527@montanaro.dyndns.org> References: <200211131632.gADGWOY29161@odiug.zope.com> <20021113195215.GA26348@ute.mems-exchange.org> <15826.53874.984913.882527@montanaro.dyndns.org> Message-ID: <20021114010517.GA31610@mems-exchange.org> On Wed, Nov 13, 2002 at 04:30:10PM -0600, Skip Montanaro wrote: >I started on this earlier today as a short break from the stuff I'm >currently working on. The docs that are there seem to be a good tutorial, If you've already started, do you want to finish it, then? At this point I've started reading through the docs, but haven't put any work into writing yet. --amk From A.M.Kuchling Thu Nov 14 01:15:03 2002 From: A.M.Kuchling (A.M.Kuchling) Date: Wed, 13 Nov 2002 20:15:03 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211132224.gADMOEG07559@odiug.zope.com> Message-ID: <80E979E0-F76E-11D6-ACEF-000393B6F06C@amk.ca> On Wednesday, November 13, 2002, at 05:24 PM, Guido van Rossum wrote: > Why do you want to lose bdist_dumb? Bug #410541 is that the .zip archive created by bdist_dumb is useless because it contains filenames relative to the root directory. Because Python can be installed anywhere on Windows, the .zip files are therefore useless for Windows installation. 
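A quick illustration of why hard-wired installation paths inside the archive are a problem: the right target directory simply differs from installation to installation (the paths in the comment are only examples).

    from distutils.sysconfig import get_python_lib
    print get_python_lib()
    # e.g. C:\Python22\Lib\site-packages on one machine,
    # /usr/local/lib/python2.2/site-packages on another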
They might be usable on Unix platforms, as long as you're sure about whether Python is in /usr/ or /usr/local, and if tying it to a single version is OK (because the path will be /usr/lib/pythonX.Y/site-packages). This could be fixed by making it possible to use relative paths, of course, so you'd have to unpack the resulting .zip in site-packages, but if no one uses .zip files for this purpose, why bother? If bdist_wininst wasn't around, then bdist_dumb would be the only way to install on Windows, and clearly this would have to be fixed. With bdist_wininst, I don't know if anyone cares. On Unix bdist_dumb also doesn't handle scripts, apparently. Andrew MacIntyre's use of .zip files on OS/2 may save bdist_dumb, though. Andrew, was the use of full paths the problem that kept you from using it? The options are: 1) leave it as-is, broken and useless. (The current state; this seems pointless.) 2) fix it 3) rip it out --amk From skip@pobox.com Thu Nov 14 01:23:33 2002 From: skip@pobox.com (Skip Montanaro) Date: Wed, 13 Nov 2002 19:23:33 -0600 Subject: [Python-Dev] logging docs needed In-Reply-To: <20021114010517.GA31610@mems-exchange.org> References: <200211131632.gADGWOY29161@odiug.zope.com> <20021113195215.GA26348@ute.mems-exchange.org> <15826.53874.984913.882527@montanaro.dyndns.org> <20021114010517.GA31610@mems-exchange.org> Message-ID: <15826.64277.680464.419059@montanaro.dyndns.org> >> I started on this earlier today as a short break from the stuff I'm >> currently working on. The docs that are there seem to be a good >> tutorial, amk> If you've already started, do you want to finish it, then? At this amk> point I've started reading through the docs, but haven't put any amk> work into writing yet. Sure, I'll at least try to make a first pass at it, though it probably won't be well-polished. Skip From drifty@bigfoot.com Thu Nov 14 01:28:12 2002 From: drifty@bigfoot.com (Brett Cannon) Date: Wed, 13 Nov 2002 17:28:12 -0800 (PST) Subject: [Python-Dev] Adopting Optik In-Reply-To: <200211132230.gADMUYg22007@kuku.cosc.canterbury.ac.nz> Message-ID: [Greg Ewing] > Brett Cannon : > > > Best I can do is ArgParser or something to try to tie the name into > > sys.argv. > > How about argvparse, by analogy with urlparse? +1 As David Abrahams pointed out in another email, having the "v" in there helps deal with any possible ambiguity. Now I know Guido suggested ``optlib`` and Greg liked it. But I don't like the idea of associating the package with the *lib modules in the stdlib. If you look at the stdlib we have modules like ftplib, htmllib, zlib, and urllib. I view these modules as collections of methods and classes that have a common theme, but where each method and class can be used in isolation; they are collections of utility methods and classes. They are not part of some single, large functionality like Optik is. And yes, I realize that urlparse is more like what I described above, but it is not a habit in the library yet. But if this *lib association sticks, I like ``optlib``. And since all of these name suggestions are ending up in all of these emails (and I have to know them for the Summary anyway), the following is the best tally I know of so far. If someone has thrown their support behind another name, I only list their last supported name.
optlib: Guido van Rossum, Greg Ward cmdline: cmdopts: Ka-Ping Yee, David Abrahams argvparse: Greg Ewing, Brett Cannon argparse: David Ascher From dave@boost-consulting.com Thu Nov 14 01:25:50 2002 From: dave@boost-consulting.com (David Abrahams) Date: Wed, 13 Nov 2002 20:25:50 -0500 Subject: [Python-Dev] Adopting Optik In-Reply-To: (Brett Cannon's message of "Wed, 13 Nov 2002 17:28:12 -0800 (PST)") References: Message-ID: Brett Cannon writes: > And since all of these name suggestions are ending up in all of these > emails (and I have to know them for the Summary anyway), the following is > best as I know so far. If someone has thrown their support behind another > name, I only list their last supported name. > > optlib: Guido van Rossum, Greg Ward > > cmdline: > cmdopts: Ka-Ping Yee, David Abrahams > > argvparse: Greg Ewing, Brett Cannon I'll vote for argvparse as well if it helps to break a tie. > argparse: David Ascher -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From goodger@python.org Thu Nov 14 01:58:50 2002 From: goodger@python.org (David Goodger) Date: Wed, 13 Nov 2002 20:58:50 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211131640.gADGeRT29280@odiug.zope.com> Message-ID: Guido van Rossum wrote: > I propose to adopt this under the name "options". Any comments? "optlib" is better. I have no problem with "OptionParser" either; it *is* longer, but matches the "ConfigParser" pattern. >> Since some projects (for exmaple docutils) already started to use >> Optik it is becoming increasingly late for a name change. > > The docutils author can speak for himself; I think docutils can deal > with the change. Docutils has no problem with the name change. We've known for months that it was imminent. Projects that use Optik either ship with it included or require it to be installed separately. I believe Greg Ward plans to maintain an independent distribution, using the old name, for the benefit of such projects and for users of Python 2.0 through 2.2. -- David Goodger Open-source projects: - Python Docutils: http://docutils.sourceforge.net/ (includes reStructuredText: http://docutils.sf.net/rst.html) - The Go Tools Project: http://gotools.sourceforge.net/ From sholden@holdenweb.com Thu Nov 14 02:17:07 2002 From: sholden@holdenweb.com (Steve Holden) Date: Wed, 13 Nov 2002 21:17:07 -0500 Subject: Fw: [Python-Dev] Adopting Optik Message-ID: <019f01c28b83$ef3d3de0$6300000a@holdenweb.com> [mailed only to Guido in error] > [guido] > > > > How about optlib? It's short, un-cute, and follows the *lib pattern > > used all over the Python stdlib. > > > > It's too short, and that brevity tends to lend it an undesirable degree of > cuteness. It's a library that handles options. Why not "optionlib"? > > still-dreaming-of-renamain-everything-ly y'rs - steve > ----------------------------------------------------------------------- > Steve Holden http://www.holdenweb.com/ > Python Web Programming http://pydish.holdenweb.com/pwp/ > Previous .sig file retired to www.homeforoldsigs.com > ----------------------------------------------------------------------- > From python@rcn.com Thu Nov 14 03:19:56 2002 From: python@rcn.com (Raymond Hettinger) Date: Wed, 13 Nov 2002 22:19:56 -0500 Subject: [Python-Dev] Adopting Optik References: <200211132136.gADLaOd07157@odiug.zope.com> Message-ID: <018b01c28b8c$b53002a0$125ffea9@oemcomputer> > How about optlib? 
It's short, un-cute, and follows the *lib pattern > used all over the Python stdlib. +1 Also consider OptionParser. It follows the library pattern and says what it means. Raymond Hettinger What's in a name? Nothing and everything. From skip@pobox.com Thu Nov 14 04:04:04 2002 From: skip@pobox.com (Skip Montanaro) Date: Wed, 13 Nov 2002 22:04:04 -0600 Subject: [Python-Dev] logging docs needed In-Reply-To: <15826.64277.680464.419059@montanaro.dyndns.org> References: <200211131632.gADGWOY29161@odiug.zope.com> <20021113195215.GA26348@ute.mems-exchange.org> <15826.53874.984913.882527@montanaro.dyndns.org> <20021114010517.GA31610@mems-exchange.org> <15826.64277.680464.419059@montanaro.dyndns.org> Message-ID: <15827.8372.558841.711377@montanaro.dyndns.org> >>> I started on this earlier today as a short break from the stuff I'm >>> currently working on. The docs that are there seem to be a good >>> tutorial, amk> If you've already started, do you want to finish it, then? At this amk> point I've started reading through the docs, but haven't put any amk> work into writing yet. Skip> Sure, I'll at least try to make a first pass at it, though it Skip> probably won't be well-polished. liblogging.tex checked in. It's essentially just a mechanical Latex conversion of pydoc.help(logging). Skip From guido@python.org Thu Nov 14 04:07:12 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 13 Nov 2002 23:07:12 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: Your message of "Wed, 13 Nov 2002 20:58:50 EST." References: Message-ID: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> > Projects that use Optik either ship with it included or require it to > be installed separately. I believe Greg Ward plans to maintain an > independent distribution, using the old name, for the benefit of such > projects and for users of Python 2.0 through 2.2. Of course, it would be easier for prospective users if Greg's distribution used the same name as we adopt for 2.3. :-) Of all the names suggested so far, I like optlib and argvparse equally well. Can we do a tally of votes for those, to decide? --Guido van Rossum (home page: http://www.python.org/~guido/) From dave@boost-consulting.com Thu Nov 14 04:02:10 2002 From: dave@boost-consulting.com (David Abrahams) Date: Wed, 13 Nov 2002 23:02:10 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> (Guido van Rossum's message of "Wed, 13 Nov 2002 23:07:12 -0500") References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? +1 on argvparse -1 on optlib (sounds like optimization) -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From aahz@pythoncraft.com Thu Nov 14 05:38:12 2002 From: aahz@pythoncraft.com (Aahz) Date: Thu, 14 Nov 2002 00:38:12 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114053812.GA8827@panix.com> On Wed, Nov 13, 2002, Guido van Rossum wrote: > > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? 
If those are the options: +1 argvparse -0 optlib If it were optionlib, I'd change to +0 or +1. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ A: No. Q: Is top-posting okay? From ping@zesty.ca Thu Nov 14 05:38:16 2002 From: ping@zesty.ca (Ka-Ping Yee) Date: Wed, 13 Nov 2002 23:38:16 -0600 (CST) Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: On Wed, 13 Nov 2002, Guido van Rossum wrote: > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? I find "argvparse" more meaningful. (Although "optlib" is slightly less generic than "options", the "lib" suffix doesn't really mean anything to me. Side question: does the presence or absence of "-lib" have any conventional mening?) -- ?!ng From barry@python.org Thu Nov 14 05:44:41 2002 From: barry@python.org (Barry A. Warsaw) Date: Thu, 14 Nov 2002 00:44:41 -0500 Subject: [Python-Dev] Adopting Optik References: <200211132230.gADMUYg22007@kuku.cosc.canterbury.ac.nz> Message-ID: <15827.14409.174332.155272@gargle.gargle.HOWL> >>>>> "BC" == Brett Cannon writes: BC> optlib: Guido van Rossum, Greg Ward BC> cmdline: BC> cmdopts: Ka-Ping Yee, David Abrahams BC> argvparse: Greg Ewing, Brett Cannon BC> argparse: David Ascher Cute names be celebrated, the obvious choice now is "argument". :) this-is-getting-hit-on-the-head-lessons-ly y'rs, -Barry From goodger@python.org Thu Nov 14 05:46:09 2002 From: goodger@python.org (David Goodger) Date: Thu, 14 Nov 2002 00:46:09 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum wrote: > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? optlib > argvparse To me, "argvparse" is a bit misleading. Docutils is not only using Optik for command-line option parsing, but also for all runtime settings handling, even when executed programmatically (i.e., no command-line options to parse). I originally named the optik.Values object "options", but recently changed it to "settings" to better reflect its true nature. Optik also interfaces well with config files via ConfigParser. Using Optik with ConfigParser and the runtime settings specs from each of Docutils' components feels just like applying a design pattern: it works, it clicks, it feels *right*. Not sure if it's an existing pattern or a new one though. -- David Goodger Open-source projects: - Python Docutils: http://docutils.sourceforge.net/ (includes reStructuredText: http://docutils.sf.net/rst.html) - The Go Tools Project: http://gotools.sourceforge.net/ From barry@python.org Thu Nov 14 05:55:09 2002 From: barry@python.org (Barry A. Warsaw) Date: Thu, 14 Nov 2002 00:55:09 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <15827.15037.983947.72530@gargle.gargle.HOWL> >>>>> "GvR" == Guido van Rossum writes: GvR> Of all the names suggested so far, I like optlib and GvR> argvparse equally well. Can we do a tally of votes for GvR> those, to decide? 
+1 argvparse +0 optlib -Barry From DavidA@ActiveState.com Thu Nov 14 06:18:57 2002 From: DavidA@ActiveState.com (David Ascher) Date: Wed, 13 Nov 2002 22:18:57 -0800 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DD34051.2070909@ActiveState.com> David Abrahams wrote: >-1 on optlib (sounds like optimization) > Agreed, especially with the parenthetical comment. I don't like argvparse much because this "argv" thing is far from obvious to newbies. But given those two choices, I'd pick argvparse over optlib. From mal@lemburg.com Thu Nov 14 08:28:32 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 14 Nov 2002 09:28:32 +0100 Subject: [Python-Dev] Killing off bdist_dumb References: <80E979E0-F76E-11D6-ACEF-000393B6F06C@amk.ca> Message-ID: <3DD35EB0.5010109@lemburg.com> A.M. Kuchling wrote: > On Wednesday, November 13, 2002, at 05:24 PM, Guido van Rossum wrote: > >> Why do you want to lose bdist_dumb? > > Bug #410541 is that the .zip archive created by bdist_dumb is useless > because it contains filenames relative to the root directory. Because > Python can > be installed anywhere on Windows, the .zip files are therefore useless > for Windows installation. They might be usable on Unix platforms, as > long as > you're sure about whether Python is in /usr/ or /usr/local, and if tying > it to a > single version is OK (because the path will be > /usr/lib/pythonX.Y/site-packages). You are forgetting that the power of distutils lies in the ability to subclass these commands, e.g. I use the following to create Zope product packages:

    class mx_bdist_zope(bdist_dumb):

        """ Build binary Zope product distribution. """

        def finalize_options(self):
            # Default to ZIP as format on all platforms
            if self.format is None:
                self.format = 'zip'
            bdist_dumb.finalize_options(self)

        # Hack to reuse bdist_dumb for our purposes; .run() calls
        # reinitialize_command() with 'install' as command.
        def reinitialize_command(self, command, reinit_subcommands=0):
            cmdobj = bdist_dumb.reinitialize_command(self, command,
                                                     reinit_subcommands)
            if command == 'install':
                cmdobj.install_lib = 'lib/python'
                cmdobj.install_data = 'lib/python'
            return cmdobj

> This could be fixed by making it possible to use relative paths, of course, > so you'd have to unpack the resulting .zip in site-packages, but if no > one uses .zip files for this purpose, why bother? bdist_dumb is useful on other platforms as well. The fact that it basically zips up a binary installation tree is very handy when you are targeting non-installer platforms or when you want to deploy to custom Python installations. > If bdist_wininst wasn't around, then bdist_dumb would be the only way to > install on Windows, and clearly this would have to be fixed. With > bdist_wininst, > I don't know if anyone cares. On Unix bdist_dumb also doesn't handle > scripts, apparently. > > Andrew MacIntyre's use of .zip files on OS/2 may save bdist_dumb, though. > Andrew, was the use of full paths the problem that kept you from using it? > > The options are: > 1) leave it as-is, broken and useless. (The current state; this seems > pointless.) > 2) fix it If all you want is to have the ability to make the paths relative, I suggest you add an option for this to the command. > 3) rip it out You can't rip out code that's currently in use without providing at least some kind of alternative. I also don't see any gain from such an approach.
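A rough sketch of the kind of option suggested above -- hypothetical code, not an existing distutils feature: a bdist_dumb subclass that grows a --relative switch so the resulting archive could be unpacked straight into site-packages.

    from distutils.command.bdist_dumb import bdist_dumb

    class bdist_dumb_relative(bdist_dumb):

        user_options = bdist_dumb.user_options + [
            ('relative', None,
             "store files relative to the installation prefix"),
            ]

        def initialize_options(self):
            bdist_dumb.initialize_options(self)
            self.relative = 0

        # run() would then have to root the archive at the install prefix
        # instead of at / before the archive is built; omitted here.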
-- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From mal@lemburg.com Thu Nov 14 09:10:20 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 14 Nov 2002 10:10:20 +0100 Subject: [Python-Dev] Printing and __unicode__ References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <3DD2BCF9.5010102@lemburg.com> Message-ID: <3DD3687C.5090606@lemburg.com> Martin v. Loewis wrote: > "M.-A. Lemburg" writes: > > >>The fact that StringIO works with Unicode (and then only in the >>case where you *only* pass Unicode to it) is more an implementation >>detail than a true feature. > > It's a true feature. You explicitly fixed that feature in > > revision 1.20 > date: 2002/01/06 17:15:05; author: lemburg; state: Exp; lines: +8 -5 > Restore Python 2.1 StringIO.py behaviour: support concatenating > Unicode string snippets to larger Unicode strings. > > This fix should also go into Python 2.2.1. > > after you broke it in > > revision 1.19 > date: 2001/09/24 17:34:52; author: lemburg; state: Exp; lines: +4 -1 > branches: 1.19.12; > StringIO patch #462596: let's [c]StringIO accept read buffers on > input to .write() too. I doubt that it's a true feature. The fact that I broke it in the above patch by introducing the str(data) call in StringIO.py suggests that whoever complained about this change was using an implementation detail rather than a documented and originally intended feature of StringIO. If you need something like StringIO for Unicode then I would suggest to create a similar object which then only deals with Unicode, e.g. UnicodeIO. cStringIO could then be extended to also support such an object by using the same trick as SRE does to support two native types (putting the code into a .h file and then including it twice). Back to the original question. I don't have a problem with leaving in the Unicode support in StringIO's .write() method, but the introduction of the Unicode print support should not rely on this detail. Instead someone wanting to write Unicode only to a StringIO like object should be directed to UnicodeIO. Now, to satisfy the request of the poster who wanted support for __unicode__ in PyFile_WriteObject() we need to add something which lets PyFile_WriteObject() determine wether to look for __unicode__ or not (per default, it passes through Unicode objects as-is and applies str() to all other objects). I like the idea of using the .encoding attribute as flag for this. What I don't like is that setting it to None should be used for Unicode-only streams (ones that take Unicode on input and use Unicode on output). To me, .encoding = None would signal: this stream doesn't do anything to the input data and passes it to the output stream as-is. Much better, IMHO, would be to use .encoding = 'unicode' on Unicode-only streams such as the mentioned UnicodeIO object. In summary, StringIO objects should not implement .encoding while a new Unicode-only stream-like object UnicodeIO should have .encoding = 'unicode'. The same could then be done with the corresponding cStringIO objects. 
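To make the proposal concrete, a minimal sketch of such a UnicodeIO object -- hypothetical code, nothing like this exists in the library; coercing byte strings through the default ASCII codec is just one possible choice:

    class UnicodeIO:

        encoding = 'unicode'    # marker for Unicode-only streams

        def __init__(self):
            self.parts = []

        def write(self, s):
            self.parts.append(unicode(s))   # byte strings must be plain ASCII

        def getvalue(self):
            return u''.join(self.parts)

With .encoding = 'unicode' as the marker, PyFile_WriteObject() would have an unambiguous signal to look for __unicode__ before falling back to str().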
PS: Some may not know, but the obvious way of fixing printing of Unicode by adding a tp_print slot implementation does not work, since that slot takes a FILE* pointer as file "object" which, of course, cannot include any additional information such as the encoding. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From mwh@python.net Thu Nov 14 12:46:38 2002 From: mwh@python.net (Michael Hudson) Date: 14 Nov 2002 12:46:38 +0000 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: martin@v.loewis.de's message of "13 Nov 2002 22:11:46 +0100" References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <3DD2BCF9.5010102@lemburg.com> Message-ID: <2mr8do71zl.fsf@starship.python.net> martin@v.loewis.de (Martin v. Loewis) writes: > "M.-A. Lemburg" writes: > > > I think that StringIO and cStringIO should be regarded as > > binary streams without any encoding knowledge. It is easy > > enough to wrap these into Unicode aware streams using the > > codecs.StreamReaderWriter class as is done in codecs.open(). > > Then why did you fix that behaviour when you broke it? Because people had come to rely on it and their code broke in the 2.1 -> 2.2 transition. I don't think it was intentional. At least that's what I remember from the time. This argument suggests that we perhaps shouldn't change StringIO again... Cheers, M. -- I wouldn't trust the Anglo-Saxons for much anything else. Given they way English is spelled, who could trust them on _anything_ that had to do with writing things down, anyway? -- Erik Naggum, comp.lang.lisp From guido@python.org Thu Nov 14 13:15:38 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 08:15:38 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: Your message of "Thu, 14 Nov 2002 09:22:03 GMT." <16E1010E4581B049ABC51D4975CEDB885E2DC4@UKDCX001.uk.int.atosorigin.com> References: <16E1010E4581B049ABC51D4975CEDB885E2DC4@UKDCX001.uk.int.atosorigin.com> Message-ID: <200211141315.gAEDFch29632@pcp02138704pcs.reston01.va.comcast.net> In the choice between optlib and argvparse, argvparse wins by a landslide. But I came up with a better one: optparse! This addresses the argument by several Davids that argv is obscure to newbies. I think it doesn't sound like optimization like optlib does. optparse also seems to be what Ruby uses (it even has an OptionParser class :-), and I found an optparse.tcl on the net too. I also note that the recommended rhythm looks better as from optparse import OptionParser than as from argvparse import OptionParser --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Thu Nov 14 13:44:53 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 14 Nov 2002 14:44:53 +0100 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: <3DD3687C.5090606@lemburg.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <3DD2BCF9.5010102@lemburg.com> <3DD3687C.5090606@lemburg.com> Message-ID: "M.-A. 
Lemburg" writes: > Much better, IMHO, would be to use .encoding = 'unicode' > on Unicode-only streams such as the mentioned UnicodeIO > object. That is fine with me as well. Regards, Martin From guido@python.org Thu Nov 14 13:48:58 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 08:48:58 -0500 Subject: [Python-Dev] Printing and __unicode__ In-Reply-To: Your message of "Thu, 14 Nov 2002 10:10:20 +0100." <3DD3687C.5090606@lemburg.com> References: <200211121721.gACHLm504767@odiug.zope.com> <3DD21A52.7080205@lemburg.com> <3DD29EF0.8090504@lemburg.com> <200211131926.gADJQqc05097@odiug.zope.com> <200211132032.gADKWW605978@odiug.zope.com> <3DD2BCF9.5010102@lemburg.com> <3DD3687C.5090606@lemburg.com> Message-ID: <200211141348.gAEDmw429725@pcp02138704pcs.reston01.va.comcast.net> > Martin v. Loewis wrote: > > "M.-A. Lemburg" writes: > > > > > >>The fact that StringIO works with Unicode (and then only in the > >>case where you *only* pass Unicode to it) is more an implementation > >>detail than a true feature. > > > > It's a true feature. You explicitly fixed that feature in > > > > revision 1.20 > > date: 2002/01/06 17:15:05; author: lemburg; state: Exp; lines: +8 -5 > > Restore Python 2.1 StringIO.py behaviour: support concatenating > > Unicode string snippets to larger Unicode strings. > > > > This fix should also go into Python 2.2.1. > > > > after you broke it in > > > > revision 1.19 > > date: 2001/09/24 17:34:52; author: lemburg; state: Exp; lines: +4 -1 > > branches: 1.19.12; > > StringIO patch #462596: let's [c]StringIO accept read buffers on > > input to .write() too. > > I doubt that it's a true feature. The fact that I broke it > in the above patch by introducing the str(data) call in > StringIO.py suggests that whoever complained about this change > was using an implementation detail rather than a documented > and originally intended feature of StringIO. > > If you need something like StringIO for Unicode then I would > suggest to create a similar object which then only deals with > Unicode, e.g. UnicodeIO. But since StringIO already works for Unicode, why bother? > cStringIO could then be extended to also support such an object > by using the same trick as SRE does to support two native > types (putting the code into a .h file and then including > it twice). (Off-topic: each time I fix a bug twice, once in stringobject.c and once in unicodeobject.c, I wish we'd done that for string and unicode objects. But it's too late now, and also may not be realistic given some different implementation choices.) > Back to the original question. I don't have a problem with > leaving in the Unicode support in StringIO's .write() method, > but the introduction of the Unicode print support should not > rely on this detail. Agreed. > Instead someone wanting to write Unicode > only to a StringIO like object should be directed to UnicodeIO. > > Now, to satisfy the request of the poster who wanted support for > __unicode__ in PyFile_WriteObject() we need to add something > which lets PyFile_WriteObject() determine wether to look > for __unicode__ or not (per default, it passes through > Unicode objects as-is and applies str() to all other objects). > > I like the idea of using the .encoding attribute as flag > for this. What I don't like is that setting it to None > should be used for Unicode-only streams (ones that take > Unicode on input and use Unicode on output). 
To me, > .encoding = None would signal: this stream doesn't do anything > to the input data and passes it to the output stream as-is. But I'm not sure that's a useful feature. Maybe encoding=None could mean the current StringIO behavior. <0.5 wink> > Much better, IMHO, would be to use .encoding = 'unicode' > on Unicode-only streams such as the mentioned UnicodeIO > object. Yes. (Except 'unicode' is not an encoding name, right? Maybe it should be?) > In summary, StringIO objects should not implement .encoding > while a new Unicode-only stream-like object UnicodeIO > should have .encoding = 'unicode'. > > The same could then be done with the corresponding cStringIO > objects. > > PS: Some may not know, but the obvious way of fixing printing > of Unicode by adding a tp_print slot implementation does not > work, since that slot takes a FILE* pointer as file "object" > which, of course, cannot include any additional information > such as the encoding. Yes, tp_print is only an optimization for tp_repr and tp_str when writing to a "real" file object. --Guido van Rossum (home page: http://www.python.org/~guido/) From theller@python.net Thu Nov 14 13:56:19 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 14:56:19 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> Message-ID: <3cq4xnjw.fsf@python.net> martin@v.loewis.de (Martin v. Loewis) writes: > Guido van Rossum writes: > > > OK, but bdist_wininst feels fragile (especially when I see checkins of > > a pile of binary gunk each time something has changed). Zip files are > > a lowest common denominator. > > That was my impression also, but I regained trust when I understood > that we actually do have the source for those binaries :-) see > I have already explained several times where the source for bdist_wininst lives, I wont do it again (unless someone needs it). Maybe it could be moved over to the main python module someday. Concerning the 'pile of binary junk' you see on the checkins list each time it has to be recompiled (the bdist_wininst.py module contains the windows exe stub compressed and base64-encoded literally in a large string): This was a design decision which could (and can) be questioned. The checkin messages are one side, the other side is this: it avoids having a binary file (wininst.exe), which only can be created on windows, in the CVS repository and in the distribution. bdist_wininst installers *can* also be created on other systems as long as they only contain pure Python code - although I've never heard of someone actually doing this. So, should the bdist_wininst source code be moved into the python tree, maybe somewhere into PC, and the MSVC .dsw file extended to build the thing, and wininst.exe as binary file go into the distutils directory? Even if this would be done, I'd suggest to wait after the separate distutils release which we have planned on distutils-sig. Thomas From guido@python.org Thu Nov 14 14:00:44 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 09:00:44 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: Your message of "14 Nov 2002 14:56:19 +0100." 
<3cq4xnjw.fsf@python.net> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> Message-ID: <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> > I have already explained several times where the source for > bdist_wininst lives, I wont do it again (unless someone needs it). > Maybe it could be moved over to the main python module someday. > > Concerning the 'pile of binary junk' you see on the checkins list > each time it has to be recompiled (the bdist_wininst.py module contains > the windows exe stub compressed and base64-encoded literally in a large > string): > > This was a design decision which could (and can) be questioned. The > checkin messages are one side, the other side is this: it avoids > having a binary file (wininst.exe), which only can be created on > windows, in the CVS repository and in the distribution. > > bdist_wininst installers *can* also be created on other systems > as long as they only contain pure Python code - although I've never > heard of someone actually doing this. > > So, should the bdist_wininst source code be moved into the python > tree, maybe somewhere into PC, and the MSVC .dsw file extended to > build the thing, and wininst.exe as binary file go into the distutils > directory? Even if this would be done, I'd suggest to wait after the > separate distutils release which we have planned on distutils-sig. +1 --Guido van Rossum (home page: http://www.python.org/~guido/) From pobrien@orbtech.com Thu Nov 14 14:21:04 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Thu, 14 Nov 2002 08:21:04 -0600 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <3DD34051.2070909@ActiveState.com> References: <3DD34051.2070909@ActiveState.com> Message-ID: <200211140821.04628.pobrien@orbtech.com> On Thursday 14 November 2002 12:18 am, David Ascher wrote: > I don't like argvparse much because this "argv" thing is far from > obvious to newbies. But given those two choices, I'd pick argvparse > over optlib. I agree that "argv" isn't obvious, but we're likely stuck with it. Given that, along with David Goodger's observation that Optik does more than parse, I thought I'd throw this name into the fray: argvlib If that doesn't sit well with anyone, I'm +1 on argvparse. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From theller@python.net Thu Nov 14 14:48:08 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 15:48:08 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> Message-ID: > > So, should the bdist_wininst source code be moved into the python > > tree, maybe somewhere into PC, and the MSVC .dsw file extended to > > build the thing, and wininst.exe as binary file go into the distutils > > directory? Even if this would be done, I'd suggest to wait after the > > separate distutils release which we have planned on distutils-sig. > > +1 Ok, so I'll enter a bug for this and assign it to me. 
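For illustration only (this is not the actual bdist_wininst code): the design Thomas describes amounts to keeping a small binary inside a .py module as a compressed, base64-encoded string and reconstituting it on demand, roughly like this:

import base64, zlib

def pack_stub(data):
    # done once, on the machine that has the compiled stub
    return base64.encodestring(zlib.compress(data, 9))

def unpack_stub(blob):
    # done at run time by the command that writes the installer
    return zlib.decompress(base64.decodestring(blob))

fake_exe = 'MZ\x90\x00' + 'x' * 100   # stand-in for wininst.exe
assert unpack_stub(pack_stub(fake_exe)) == fake_exe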
Thomas From mclay@nist.gov Thu Nov 14 15:38:29 2002 From: mclay@nist.gov (Michael McLay) Date: Thu, 14 Nov 2002 10:38:29 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <3DD34051.2070909@ActiveState.com> References: <3DD34051.2070909@ActiveState.com> Message-ID: <200211141038.29309.mclay@nist.gov> On Thursday 14 November 2002 01:18 am, David Ascher wrote: > David Abrahams wrote: > >-1 on optlib (sounds like optimization) > > Agreed, especially with the parenthetical comment. > > I don't like argvparse much because this "argv" thing is far from > obvious to newbies. But given those two choices, I'd pick argvparse > over optlib. I was just about to say the same thing about "argv". How about calling it cmdoptionslib or optionslib. For a newbie just using the word "options" makes it diffcult to locate references using google, etc. On google optionlib had three pages of hits, optionslib had two hits and cmdoptionslib had no hits. From guido@python.org Thu Nov 14 15:40:01 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 10:40:01 -0500 Subject: [Python-Dev] Python interface to attribute descriptors In-Reply-To: Your message of "Wed, 13 Nov 2002 14:08:57 PST." <000001c28b61$482ea420$6501a8c0@NICKLEBY> References: <000001c28b61$482ea420$6501a8c0@NICKLEBY> Message-ID: <200211141540.gAEFe1V11484@odiug.zope.com> > Question: how could the descriptor "know" the name "x" if it is created > by a descriptor-creating statement such as x = descriptor_creator(...). > I guess one could do this by making a metaclass that would look for the > descriptors in the class and "poke" the name into them but is there > another way? In the example below I evaded this question by making the > name an argument to positive's constructor. Yes, those are the only ways I know of. In retrospect it might have been useful to give the descriptor API an extra argument for the attribute name, but it's a bit late for that now. --Guido van Rossum (home page: http://www.python.org/~guido/) From nas@python.ca Thu Nov 14 18:42:32 2002 From: nas@python.ca (Neil Schemenauer) Date: Thu, 14 Nov 2002 10:42:32 -0800 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211141315.gAEDFch29632@pcp02138704pcs.reston01.va.comcast.net> References: <16E1010E4581B049ABC51D4975CEDB885E2DC4@UKDCX001.uk.int.atosorigin.com> <200211141315.gAEDFch29632@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114184232.GA31376@glacier.arctrix.com> Guido van Rossum wrote: > In the choice between optlib and argvparse, argvparse wins by a > landslide. But I came up with a better one: optparse! I don't like argvparse. Only C programmers know what argv means. I think "argument parser" is more correct than "option parser" but that's not a big deal. optparse is good, IMHO. Neil From pinard@iro.umontreal.ca Thu Nov 14 17:40:29 2002 From: pinard@iro.umontreal.ca (=?iso-8859-1?q?Fran=E7ois?= Pinard) Date: 14 Nov 2002 12:40:29 -0500 Subject: [Python-Dev] Re: Adopting Optik In-Reply-To: <200211131554.gADFsBw28797@odiug.zope.com> References: <200211131554.gADFsBw28797@odiug.zope.com> Message-ID: [Guido van Rossum] > [...] One of the tasks is to adopt Greg Ward's options parsing module, > Optik. I propose to adopt this under the name "options". Any comments? My feeling is that Python should much avoid, for a library module, a name which is likely to be a user variable name. This would rule out "options". 
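The concern is easy to see from the way the parser ends up being used: "options" is exactly the name users want for the object returned by parse_args(). A minimal example using the optparse name Guido proposed (the specific options below are invented):

from optparse import OptionParser

parser = OptionParser()
parser.add_option('-f', '--file', dest='filename',
                  help='write output to FILE', metavar='FILE')
parser.add_option('-q', '--quiet', action='store_false',
                  dest='verbose', default=True,
                  help='suppress status messages')
(options, args) = parser.parse_args()

if options.verbose:
    print 'writing to', options.filename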
In my experience so far, the most irritating cases in Python hurding common words for itself have been `string' and `socket'. I know that some people write `s' for a string and would write `o' for options, but this algebraic style is not ideal. I find that using real words, like `counter', `ordinal', `cursor', `index' or such, yields more readable programs. When one "imports" a module, one has to give up using the module name for other purposes. Currently, I think _all_ my callable scripts which handle options already use `options' for a variable name, so I would prefer that `options' be left alone. This is why I think Python should not offer a module named "text" for example. As a principle for the future, let simple, common words be available to users for naming their own variables. -- François Pinard http://www.iro.umontreal.ca/~pinard From pinard@iro.umontreal.ca Thu Nov 14 17:55:19 2002 From: pinard@iro.umontreal.ca (=?iso-8859-1?q?Fran=E7ois?= Pinard) Date: 14 Nov 2002 12:55:19 -0500 Subject: [Python-Dev] Re: Adopting Optik In-Reply-To: <200211131554.gADFsBw28797@odiug.zope.com> References: <200211131554.gADFsBw28797@odiug.zope.com> Message-ID: [Guido van Rossum] > [...] One of the tasks is to adopt Greg Ward's options parsing module, > Optik. I propose to adopt this under the name "options". Any comments? For what it might be worth, from all suggestions I've seen so far, "OptionParser" is the one I like best, because of the pre-existence of "ConfigParser", and the similarity between goals and complexity level. I understand that OptionParser is _also_ a class among others in those offered, and some of us do not see a reason to _give preference_ to one particular class. I would rather the module name also being one of its class name as a mere coincidence or accident, rather than the indication that some preference was given. The objection against it is not strong. In a previous message, I told why "options" looks a bad choice to me. The next worse is probably "optlib", because its ambiguity makes it pretty meaningless. Maybe that "Optik" could be retained as yet another possibility? It might not be so bad, all considered. ;-) -- François Pinard http://www.iro.umontreal.ca/~pinard From exarkun@meson.dyndns.org Thu Nov 14 18:49:17 2002 From: exarkun@meson.dyndns.org (Jp Calderone) Date: Thu, 14 Nov 2002 13:49:17 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114184917.GA29284@meson.dyndns.org> --n8g4imXOkfNTN/H1 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Nov 14, 2002 at 09:00:44AM -0500, Guido van Rossum wrote: > [ Who wrote this? ] > > [snip] > > =20 > > bdist_wininst installers *can* also be created on other systems > > as long as they only contain pure Python code - although I've never > > heard of someone actually doing this. > [snip] FWIW, I do this with just about every release I make (and I'm a bit surprised to hear that this isn't a common thing). While I do have a Windows machine I *could* build releases on (with cygwin though, not MSVC), my release process is mostly automated, and runs on a Linux box. 
I don't think the thread is headed in this direction, but just in case, *please* don't break this feature :) Jp From aahz@pythoncraft.com Thu Nov 14 19:03:39 2002 From: aahz@pythoncraft.com (Aahz) Date: Thu, 14 Nov 2002 14:03:39 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211141315.gAEDFch29632@pcp02138704pcs.reston01.va.comcast.net> References: <16E1010E4581B049ABC51D4975CEDB885E2DC4@UKDCX001.uk.int.atosorigin.com> <200211141315.gAEDFch29632@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114190339.GA7387@panix.com> On Thu, Nov 14, 2002, Guido van Rossum wrote: > > In the choice between optlib and argvparse, argvparse wins by a > landslide. But I came up with a better one: optparse! This addresses > the argument by several Davids that argv is obscure to newbies. I > think it doesn't sound like optimization like optlib does. > > optparse also seems to be what Ruby uses (it even has an OptionParser > class :-), and I found an optparse.tcl on the net too. +1 -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ A: No. Q: Is top-posting okay? From theller@python.net Thu Nov 14 19:14:43 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 20:14:43 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <20021114184917.GA29284@meson.dyndns.org> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> Message-ID: Jp Calderone writes: > On Thu, Nov 14, 2002 at 09:00:44AM -0500, Guido van Rossum wrote: > > [ Who wrote this? ] That was me. > > > [snip] > > > > > > bdist_wininst installers *can* also be created on other systems > > > as long as they only contain pure Python code - although I've never > > > heard of someone actually doing this. > > [snip] > > FWIW, I do this with just about every release I make (and I'm a bit > surprised to hear that this isn't a common thing). While I do have a > Windows machine I *could* build releases on (with cygwin though, not MSVC), > my release process is mostly automated, and runs on a Linux box. I don't > think the thread is headed in this direction, but just in case, *please* > don't break this feature :) > Great, but the binary doesn't show on which system it was created. I have more the impression that people release .tar.gz files containing a distutils setup.py script, but no PKG-INFO, so the tarball isn't created by the distutils sdist command. PyChecker is such an example, IIRC. Question for python-dev (Tim?): how would wininst.exe now find its way into the Python source distribution, or in Linux binary distributions?
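For reference, the sdist route mentioned here is just a minimal setup.py (the metadata below is invented); running "python setup.py sdist" then writes dist/Example-0.1.tar.gz with a generated PKG-INFO inside, whereas hand-rolled tarballs are the ones that end up without it:

from distutils.core import setup

setup(name='Example',
      version='0.1',
      description='Demonstration package',
      author='A. N. Other',
      author_email='someone@example.org',
      url='http://www.example.org/',
      py_modules=['example'])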
Thomas From theller@python.net Thu Nov 14 19:23:06 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 20:23:06 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <20021114184917.GA29284@meson.dyndns.org> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> Message-ID: Jp Calderone writes: > > > bdist_wininst installers *can* also be created on other systems > > > as long as they only contain pure Python code - although I've never > > > heard of someone actually doing this. > > [snip] > > FWIW, I do this with just about every release I make (and I'm a bit > surprised to hear that this isn't a common thing). While I do have a > Windows machine I *could* build releases on (with cygwin though, not MSVC), > my release process is mostly automated, and runs on a Linux box. I don't > think the thread is headed in this direction, but just in case, *please* > don't break this feature :) Hm, I sent out the previous post too early. I really didn't know this. What project is this? Thomas From guido@python.org Thu Nov 14 19:23:20 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 14:23:20 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: Your message of "14 Nov 2002 20:14:43 +0100." References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> Message-ID: <200211141923.gAEJNKo04121@odiug.zope.com> > > FWIW, I do this with just about every release I make (and I'm a bit > > surprised to hear that this isn't a common thing). While I do have a > > Windows machine I *could* build releases on (with cygwin though, not MSVC), > > my release process is mostly automated, and runs on a Linux box. I don't > > think the thread is headed in this direction, but just in case, *please* > > don't break this feature :) > Great, but the binary doesn't show on which system it was created. I don't understand the relevance of this comment. > I have more the impression, that people release .tar.gz files > containing a distutils setup.py script, but no PKG-INFO, so the > tarball isn't created by distutils sdist command. PyChecker is such an > example, IIRC. Ouch, I didn't know this! (Or I'd forgotten. :-) It's well documented fortunately. (And the docs remind me that it would be neat if Python had a tarfile module.) > Question for python-dev (Tim?): how would wininst.exe now find it's way > into the Python source distribution, or in Linux binary distributions? I don't understand the question, and I doubt that Tim does either. 
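The module wished for here was in fact added for Python 2.3 as tarfile; a minimal sketch of what it makes possible (the file and directory names are made up):

import tarfile

# build a gzip-compressed tarball of a release directory
tar = tarfile.open('Example-0.1.tar.gz', 'w:gz')
tar.add('Example-0.1')    # add() recurses into the directory
tar.close()

# and list what an existing tarball contains
for name in tarfile.open('Example-0.1.tar.gz', 'r:gz').getnames():
    print name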
--Guido van Rossum (home page: http://www.python.org/~guido/) From neal@metaslash.com Thu Nov 14 19:41:02 2002 From: neal@metaslash.com (Neal Norwitz) Date: Thu, 14 Nov 2002 14:41:02 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> Message-ID: <20021114194101.GJ1335@epoch.metaslash.com> > > > > bdist_wininst installers *can* also be created on other systems > > > > as long as they only contain pure Python code - although I've never > > > > heard of someone actually doing this. > > > > FWIW, I do this with just about every release I make (and I'm a bit > > surprised to hear that this isn't a common thing). While I do have a > > Windows machine I *could* build releases on (with cygwin though, not MSVC), > > my release process is mostly automated, and runs on a Linux box. I don't > > think the thread is headed in this direction, but just in case, *please* > > don't break this feature :) > > > Great, but the binary doesn't show on which system it was created. > > I have more the impression, that people release .tar.gz files > containing a distutils setup.py script, but no PKG-INFO, so the > tarball isn't created by distutils sdist command. PyChecker is such an > example, IIRC. This is correct. I do release PyChecker as a .tar.gz which contains a setup.py, but no PKG-INFO. Should there be one? I don't use setup.py to create the .tar.gz. I've never found the proper way to make releases when I looked. But that was a long time ago. Neal From theller@python.net Thu Nov 14 19:48:49 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 20:48:49 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <20021114194101.GJ1335@epoch.metaslash.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <20021114194101.GJ1335@epoch.metaslash.com> Message-ID: > > I have more the impression, that people release .tar.gz files > > containing a distutils setup.py script, but no PKG-INFO, so the > > tarball isn't created by distutils sdist command. PyChecker is such an > > example, IIRC. > > This is correct. I do release PyChecker as a .tar.gz which contains a > setup.py, but no PKG-INFO. Should there be one? I don't use setup.py > to create the .tar.gz. I only noticed this (and your package is not the only one) when I worked on a PEP 243 (?, package upload mechinism) implementation where uploaded files would be scanned for the PKG-INFO metadata. > > I've never found the proper way to make releases when I looked. > But that was a long time ago. > I normally don't use Linux, but does 'python setup.py sdist' not work there? 
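A rough sketch of the kind of check such a PEP 243 upload server could apply, i.e. whether an uploaded sdist carries the PKG-INFO that "setup.py sdist" generates; it assumes the tarfile module from Python 2.3 and a gzip-compressed upload with a conventional <name>-<version>/ top-level directory:

import tarfile

def has_pkg_info(path):
    tar = tarfile.open(path, 'r:gz')
    try:
        for name in tar.getnames():
            parts = name.split('/')
            # sdists unpack to <name>-<version>/PKG-INFO
            if len(parts) == 2 and parts[1] == 'PKG-INFO':
                return True
        return False
    finally:
        tar.close()

print has_pkg_info('dist/Example-0.1.tar.gz')   # hypothetical path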
Thomas From exarkun@meson.dyndns.org Thu Nov 14 19:50:12 2002 From: exarkun@meson.dyndns.org (Jp Calderone) Date: Thu, 14 Nov 2002 14:50:12 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> Message-ID: <20021114195012.GA29636@meson.dyndns.org> --r5Pyd7+fXNt84Ff3 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Nov 14, 2002 at 08:23:06PM +0100, Thomas Heller wrote: > Jp Calderone writes: >=20 > > > > bdist_wininst installers *can* also be created on other systems > > > > as long as they only contain pure Python code - although I've never > > > > heard of someone actually doing this. > > > [snip] > >=20 > > FWIW, I do this with just about every release I make (and I'm a bit > > surprised to hear that this isn't a common thing). While I do have a > > Windows machine I *could* build releases on (with cygwin though, not MS= VC), > > my release process is mostly automated, and runs on a Linux box. I don= 't > > think the thread is headed in this direction, but just in case, *please* > > don't break this feature :) >=20 > Hm, I sent out the previous post too early. > I really didn't know this. >=20 > What project is this? I do it for the two I'm maintaining/developing on my own -- 'pynfo' and 'originalgamer' on sourceforge.net (the former of which I just did the initial release for a couple days ago). Twisted (http://www.twistedmatrix.com), which doesn't currently build bdist_wininst packages on Linux, would definitely like to once there's a viable cross-compilation solution for the handful of .c files it includes. The current process for building the windows installer involves vmware, and is generally a pain. I'm assuming native compiled modules wouldn't pose any additional problems for bdist_wininst, since they can just be wrapped up like any other file, and plopped down onto the target system, and anyway, building bdist_wininst on windows must already take care of it, right? Jp -- #!/bin/bash ( LIST=3D(~/.netscape/sigs/*.sig) cat ${LIST[$(($RANDOM % ${#LIST[*]}))]} echo --$'\n' `uptime` ) > ~/.netscape/.signature -- 2:00pm up 13 days, 0:52, 5 users, load average: 0.01, 0.03, 0.06 --r5Pyd7+fXNt84Ff3 Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.0 (GNU/Linux) iD8DBQE90/50edcO2BJA+4YRAimjAJ0YjZ+ieB2Shi4khPrrjS2h+JTkTgCgtKZU 7WmG8L+IkfDn1tecSRJIxx4= =aUYe -----END PGP SIGNATURE----- --r5Pyd7+fXNt84Ff3-- From mgilfix@eecs.tufts.edu Thu Nov 14 20:00:36 2002 From: mgilfix@eecs.tufts.edu (Michael Gilfix) Date: Thu, 14 Nov 2002 15:00:36 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114200036.GB199@eecs.tufts.edu> On Wed, Nov 13 @ 23:07, Guido van Rossum wrote: > Of course, it would be easier for prospective users if Greg's > distribution used the same name as we adopt for 2.3. :-) > > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? 
+1 optlib -1 argvparse (ugh) -- Michael Gilfix mgilfix@eecs.tufts.edu For my gpg public key: http://www.eecs.tufts.edu/~mgilfix/contact.html From martin@v.loewis.de Thu Nov 14 20:11:30 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 14 Nov 2002 21:11:30 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211141923.gAEJNKo04121@odiug.zope.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <200211141923.gAEJNKo04121@odiug.zope.com> Message-ID: Guido van Rossum writes: > > Question for python-dev (Tim?): how would wininst.exe now find it's way > > into the Python source distribution, or in Linux binary distributions? > > I don't understand the question, and I doubt that Tim does either. I think you just voted +1 on the following technology: 1. the wininst.exe sources are added to the PC directory, and the MSVC project file to the PCbuild directory. 2. wininst.exe becomes a file on its own, and is generated from the project file. 3. the base64-coded version is removed from bdist_wininst.py. If so, neither the source distribution nor the Linux binaries would include wininst.exe. Then, the question is how you could create windows installers on non-windows. Regards, Martin From mgilfix@eecs.tufts.edu Thu Nov 14 20:08:15 2002 From: mgilfix@eecs.tufts.edu (Michael Gilfix) Date: Thu, 14 Nov 2002 15:08:15 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211141038.29309.mclay@nist.gov> References: <3DD34051.2070909@ActiveState.com> <200211141038.29309.mclay@nist.gov> Message-ID: <20021114200814.GC199@eecs.tufts.edu> After reading this thread, it seems kind of shame we can't do something like: from parselib import OptionParser which could match with UrlParser and ConfigParser. But, that being said, I like either Guido's "optparse" or the OptionParser name in the end. -- Mike On Thu, Nov 14 @ 10:38, Michael McLay wrote: > On Thursday 14 November 2002 01:18 am, David Ascher wrote: > > David Abrahams wrote: > > >-1 on optlib (sounds like optimization) > > > > Agreed, especially with the parenthetical comment. > > > > I don't like argvparse much because this "argv" thing is far from > > obvious to newbies. But given those two choices, I'd pick argvparse > > over optlib. > > I was just about to say the same thing about "argv". > > How about calling it cmdoptionslib or optionslib. > > For a newbie just using the word "options" makes it diffcult to locate > references using google, etc. On google optionlib had three pages of hits, > optionslib had two hits and cmdoptionslib had no hits. > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev -- Michael Gilfix mgilfix@eecs.tufts.edu For my gpg public key: http://www.eecs.tufts.edu/~mgilfix/contact.html From guido@python.org Thu Nov 14 20:17:26 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 15:17:26 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: Your message of "14 Nov 2002 21:11:30 +0100." 
References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <200211141923.gAEJNKo04121@odiug.zope.com> Message-ID: <200211142017.gAEKHQE12463@odiug.zope.com> > Guido van Rossum writes: > > > > Question for python-dev (Tim?): how would wininst.exe now find it's way > > > into the Python source distribution, or in Linux binary distributions? > > > > I don't understand the question, and I doubt that Tim does either. > > I think you just voted +1 on the following technology: > > 1. the wininst.exe sources are added to the PC directory, and the MSVC > project file to the PCbuild directory. > > 2. wininst.exe becomes a file on its own, and is generated from the > project file. > > 3. the base64-coded version is removed from bdist_wininst.py. > > If so, neither the source distribution nor the Linux binaries would > include wininst.exe. Then, the question is how you could create > windows installers on non-windows. Then let's check in wininst.exe as a binary file in an appropriate location (probably under distutils). We do the same for the generated file "configure" after all. The maintainer of wininst.c has to remember to commit the wininst.exe file whenever he changes wininst.c. Seems easy enough. --Guido van Rossum (home page: http://www.python.org/~guido/) From cnetzer@mail.arc.nasa.gov Thu Nov 14 20:06:20 2002 From: cnetzer@mail.arc.nasa.gov (Chad Netzer) Date: Thu, 14 Nov 2002 12:06:20 -0800 Subject: [Python-Dev] Re: Adopting Optik In-Reply-To: References: <200211131554.gADFsBw28797@odiug.zope.com> Message-ID: <200211142006.MAA21487@mail.arc.nasa.gov> On Thursday 14 November 2002 09:40, François Pinard wrote: > When one "imports" a module, one has to give up using the module name for > other purposes. Just a quick side note, you can always do things like: import socket SocketModule = socket socket = open( "blah blah blah") Which may be un-pythonic, but not too onerous, imo. Now that the string module is kind of superfluous, you may be able to start using 'string' as a variable name (I'd still avoid it for another five years, though. :) I think the point is well taken, however. Global namespace aliasing is a problem with many languages, and as python officially adopts more and more modules into the mainline, the current ad-hoc naming style could become a problem. Some early modules already appropriate common useful variable names (signal, thread, etc.), and there seems to be a trend of new modules trying to avoid this with different methods (Capitalization, -lib suffix, etc.) Perhaps we should discuss (if it hasn't been discussed to death already) a format to adopt for all future included modules, to make things sane. We could even consider renaming the old modules that don't fit the pattern (keeping the old names as well, but deprecating them) For example, I think modules should avoid being singular nouns. They should instead describe (or hint at) what services they provide, not what kind of object they provide. So, the 'socket' module would be better as 'sockets', because someone is more likely to use a temporary "socket" variable than the "sockets" variable. Furthermore, I don't particularly like the "lib" suffix, but it is useful for being short, and makes clear that it provides services. Lots of modules are using it, maybe we should formalize its use. Should we allow mixed case module names in the standard distribution module names? Yes or no? I suppose I don't really care, if other naming issues are worked out. But in general I prefer lowercase, which makes choosing the right name more important. So "Queue" would possibly be better as 'queues', but not as 'queue'. Oh well, enough rambling. Maybe it isn't that big a deal, not worthy of wasting too much time on. But as time goes on and the module namespace gets more crowded, the ad-hoc naming scheme starts to look a little rustic. -- Chad Netzer cnetzer@mail.arc.nasa.gov From theller@python.net Thu Nov 14 20:25:41 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 21:25:41 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <20021114195012.GA29636@meson.dyndns.org> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <20021114195012.GA29636@meson.dyndns.org> Message-ID: Jp Calderone writes: [building windows installers on linux] > I do it for the two I'm maintaining/developing on my own -- 'pynfo' and > 'originalgamer' on sourceforge.net (the former of which I just did the > initial release for a couple days ago). > > Twisted (http://www.twistedmatrix.com), which doesn't currently build > bdist_wininst packages on Linux, would definitely like to once there's a > viable cross-compilation solution for the handful of .c files it includes. > The current process for building the windows installer involves vmware, and > is generally a pain. What do I know about linux, but wouldn't wine work? > > I'm assuming native compiled modules wouldn't pose any additional problems > for bdist_wininst, since they can just be wrapped up like any other file, > and plopped down onto the target system, and anyway, building bdist_wininst > on windows must already take care of it, right? Once you have created these files, there should be no problem. Thanks for the info, Thomas From exarkun@meson.dyndns.org Thu Nov 14 20:46:57 2002 From: exarkun@meson.dyndns.org (Jp Calderone) Date: Thu, 14 Nov 2002 15:46:57 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: References: <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <20021114195012.GA29636@meson.dyndns.org> Message-ID: <20021114204657.GA29924@meson.dyndns.org> On Thu, Nov 14, 2002 at 09:25:41PM +0100, Thomas Heller wrote: > Jp Calderone writes: > [building windows installers on linux] > > [snip] > > The current process for building the windows installer involves vmware, and > > is generally a pain. > > What do I know about linux, but wouldn't wine work? (Sadly?) Python doesn't run with WINE, and I'm almost positive MSVC's cl.exe doesn't, so no.
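A sketch of how an automated release run like the one Jp describes can drive distutils from Python instead of a shell (the setup.py name and command list are assumptions); for a pure-Python package both commands work fine on a Linux host:

from distutils.core import run_setup

# source release first, then the Windows installer
run_setup('setup.py', ['sdist'])
run_setup('setup.py', ['bdist_wininst'])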
Jp -- 3:00pm up 13 days, 1:52, 4 users, load average: 0.00, 0.00, 0.00 --gBBFr7Ir9EOA20Yy Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.0 (GNU/Linux) iD8DBQE91AvBedcO2BJA+4YRAgvvAKCMeg2HOjtwPzQsSuK3u+1LgSecLwCfbeUa NJgch9CN8LSHbc7MpF7VNhg= =BeqL -----END PGP SIGNATURE----- --gBBFr7Ir9EOA20Yy-- From guido@python.org Thu Nov 14 20:54:46 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 15:54:46 -0500 Subject: [Python-Dev] Re: [getopt-sig] Adopting Optik In-Reply-To: Your message of "Wed, 13 Nov 2002 18:37:29 EST." <20021113233729.GA3218@cthulhu.gerg.ca> References: <200211131554.gADFsBw28797@odiug.zope.com> <20021113233729.GA3218@cthulhu.gerg.ca> Message-ID: <200211142054.gAEKskw12698@odiug.zope.com> Let's assume we'll stick with optparse as the module name. There are still a few references to Optik in the source code, in particular there's an exception class OptikError. (The other mentions are in comments.) Should I rename OptikError to OptParseError, or leave it? --Guido van Rossum (home page: http://www.python.org/~guido/) From ping@zesty.ca Thu Nov 14 21:02:17 2002 From: ping@zesty.ca (Ka-Ping Yee) Date: Thu, 14 Nov 2002 15:02:17 -0600 (CST) Subject: [Python-Dev] Re: [getopt-sig] Adopting Optik In-Reply-To: <200211142054.gAEKskw12698@odiug.zope.com> Message-ID: On Thu, 14 Nov 2002, Guido van Rossum wrote: > Should I rename OptikError to OptParseError, or leave it? I like optparse, and i think it's a good idea to call the corresponding error OptParseError. -- ?!ng From theller@python.net Thu Nov 14 19:55:45 2002 From: theller@python.net (Thomas Heller) Date: 14 Nov 2002 20:55:45 +0100 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <200211141923.gAEJNKo04121@odiug.zope.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <200211141923.gAEJNKo04121@odiug.zope.com> Message-ID: <8yzwudry.fsf@python.net> [Jp Calderone, about creating binary windows installers with distutils on non-windows systems] > > > FWIW, I do this with just about every release I make (and I'm a bit > > > surprised to hear that this isn't a common thing). While I do have a > > > Windows machine I *could* build releases on (with cygwin though, not MSVC), > > > my release process is mostly automated, and runs on a Linux box. I don't > > > think the thread is headed in this direction, but just in case, *please* > > > don't break this feature :) > [me] > > Great, but the binary doesn't show on which system it was created. > [Guido] > I don't understand the relevance of this comment. > I don't either ;-) It was sent out too early. I simply meant that I didn't know that people actually were creating those bdist_wininst packages on Linux. > > I have more the impression, that people release .tar.gz files > > containing a distutils setup.py script, but no PKG-INFO, so the > > tarball isn't created by distutils sdist command. PyChecker is such an > > example, IIRC. > > Ouch, I didn't know this! (Or I'd forgotten. :-) It's well documented > fortunately. This is what I don't understand. > (And the docs remind me that it would be neat if Python > had a tarfile module.) 
> > > Question for python-dev (Tim?): how would wininst.exe now find it's way > > into the Python source distribution, or in Linux binary distributions? > > I don't understand the question, and I doubt that Tim does either. I have seen that you entered the request to move the bdist_wininst sources into the Python tree. We can solve this problem later, I think. Thanks, Thomas From DavidA@ActiveState.com Thu Nov 14 21:22:46 2002 From: DavidA@ActiveState.com (David Ascher) Date: Thu, 14 Nov 2002 13:22:46 -0800 Subject: [Python-Dev] Re: [getopt-sig] Adopting Optik In-Reply-To: References: Message-ID: <3DD41426.7070806@ActiveState.com> Ka-Ping Yee wrote: > On Thu, 14 Nov 2002, Guido van Rossum wrote: > > >Should I rename OptikError to OptParseError, or leave it? > > > I like optparse, and i think it's a good idea to call the corresponding > error OptParseError. Agreed. Now's the time to get rid of the legacy code. From Jack.Jansen@oratrix.com Thu Nov 14 21:38:54 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Thu, 14 Nov 2002 22:38:54 +0100 Subject: [Python-Dev] Adopting Optik In-Reply-To: <200211132136.gADLaOd07157@odiug.zope.com> Message-ID: <78E5B125-F819-11D6-B877-000A27B19B96@oratrix.com> On woensdag, nov 13, 2002, at 22:36 Europe/Amsterdam, Guido van Rossum wrote: > How about optlib? It's short, un-cute, and follows the *lib pattern > used all over the Python stdlib. optlib may be a bit better than options, but it could still do many different things in my mind. Maybe argvlib (which is most definitely un-cute:-)? -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From gward@python.net Thu Nov 14 21:40:00 2002 From: gward@python.net (Greg Ward) Date: Thu, 14 Nov 2002 16:40:00 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114214000.GB4989@cthulhu.gerg.ca> On 13 November 2002, Guido van Rossum said: > Of course, it would be easier for prospective users if Greg's > distribution used the same name as we adopt for 2.3. :-) Yes -- what I'm planning is the first major Optik release after it's incorporated into Python's stdlib (Optik 1.5?) will include a stub module -- optlib.py, optparse.py, OptionParser.py, whatever -- that emulates the module of the same name from Python 2.3. Or something like that. So developers can say "requires Python 2.3 or Optik 1.5" and just code this: from optparse import OptionParser with no silly "try/except ImportError" hacks. Greg -- Greg Ward http://www.gerg.ca/ It takes a scary kind of illness / To design a place like this for pay Downtown's an endless generic mall / Of video games and fast food chains -- Dead Kennedys From gward@python.net Thu Nov 14 21:42:04 2002 From: gward@python.net (Greg Ward) Date: Thu, 14 Nov 2002 16:42:04 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021114214204.GC4989@cthulhu.gerg.ca> On 13 November 2002, Guido van Rossum said: > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? 'argvparse' turns my stomach. 'optlib' I like. 'optparse' I like even more. I guess I am being hypocritical wrt. 
abbreviations in identifier names. Oh well. Greg -- Greg Ward http://www.gerg.ca/ MTV -- get off the air! -- Dead Kennedys From pje@telecommunity.com Thu Nov 14 17:53:00 2002 From: pje@telecommunity.com (Phillip J. Eby) Date: Thu, 14 Nov 2002 12:53:00 -0500 Subject: [Python-Dev] Python interface to attribute descriptors Message-ID: <5.1.1.6.0.20021114124517.00ab2250@mail.rapidsite.net> To answer your question about getting attribute names, take a look at: http://cvs.eby-sarna.com/PEAK/src/peak/binding/once.py?rev=1.22&content-type=text/vnd.viewcvs-markup It's a working example of a use of a metaclass (ActiveDescriptors) that will "activate" descriptors added to it, and tell them their names. You'll note that the descriptor objects make copies of themselves when activated, if the name given to them differs from the name they "guessed" when they were created. This is so that descriptors can be safely shared and reused in multiple classes (or even the same one) under different names. peak.binding.once is of course just a small part of the PEAK system, but if you want to use the code, you can. You'll need to strip out some of the PEAK specifics like IBindingFactory, EigenRegistry, and importObject, though. Really, probably all you need for what you're doing is this part: class ActiveDescriptors(type): """Type which gives its descriptors a chance to find out their names""" def __init__(klass, name, bases, dict): for k,v in dict.items(): if isinstance(v,ActiveDescriptor): v.activate(klass,k) super(ActiveDescriptors,klass).__init__(name,bases,dict) class ActiveDescriptor(object): """This is just a (simpler sort of) interface assertion class""" def activate(self,klass,attrName): """Informs the descriptor that it is in 'klass' with name 'attrName'""" raise NotImplementedError and then define a base class which has ActiveDescriptors as its metaclass, and use it for all your classes that will use ActiveDescriptor-subclass instances. From DavidA@ActiveState.com Thu Nov 14 21:47:17 2002 From: DavidA@ActiveState.com (David Ascher) Date: Thu, 14 Nov 2002 13:47:17 -0800 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> <20021114214204.GC4989@cthulhu.gerg.ca> Message-ID: <3DD419E5.3060805@ActiveState.com> Greg Ward wrote: > 'optparse' I like even more. > I have no problem w/ optparse. From drifty@bigfoot.com Thu Nov 14 21:52:15 2002 From: drifty@bigfoot.com (Brett Cannon) Date: Thu, 14 Nov 2002 13:52:15 -0800 (PST) Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> Message-ID: [Guido van Rossum] > Of all the names suggested so far, I like optlib and argvparse equally > well. Can we do a tally of votes for those, to decide? 
> +1 argvparse -0 optlib And to just continue with what I had done before, here is the current tally for votes for the two options (based on direct replies to Guido's email calling for this vote except for Raymond's):: argvparse: +1 : (David Abrahams, Brett Cannon, Barry Warsaw, Patrick O'Brien) +0 : () -0 : () -1 : (Michael Gilfix) optlib: +1 : (Michael Gilfix, Raymond Hettinger) +0 : (Barry Warsaw) -0 : (Brett Cannon) -1 : (David Abrahams, David Ascher) Vague votes: Ka-Ping Yee (likes argvparse) David Goodger ("optlib > argvparse") David Ascher ("I'd pick argvparse over optlib") Greg Ward ("argvparse turns my stomach; optlib I like") Notes: David Abrahams would change his vote for ``optlib`` if the name was ``optionslib``. Michael Lay likes this name, too. Patrick O'Brien's vote is only if argvlib doesn't go anywhere. OptionParser, optparse, and argvlib were the wild card names mentioned the most. People also warmed to ``optionslib`` more than ``optlib`` and seemed willing to change there vote if ``optlib`` metamorphosed. -Brett P.S.: Guido, if you call another vote with different names for some reason, can you change the subject line of the email so that Mailman makes all the votes a single thread? Makes it easier for me to count (assuming you want me to keep bothering to count). From Jack.Jansen@oratrix.com Thu Nov 14 21:59:24 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Thu, 14 Nov 2002 22:59:24 +0100 Subject: [Python-Dev] Adopting Optik In-Reply-To: <78E5B125-F819-11D6-B877-000A27B19B96@oratrix.com> Message-ID: <566D7E3C-F81C-11D6-B877-000A27B19B96@oratrix.com> On donderdag, nov 14, 2002, at 22:38 Europe/Amsterdam, Jack Jansen wrote: > optlib may be a bit better than options, but it could still do many > different things in my mind. Maybe argvlib (which is most definitely > un-cute:-)? Sorry, ignore this message. I wrote it yesterday but it somehow didnt get out of my machine until just yet. I'm all for argvparse at the moment. Incidentally, I think that the fact that argv is unclear to newbees is too bad, as they'll have to learn about sys.argv anyway... -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From guido@python.org Thu Nov 14 22:01:44 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 14 Nov 2002 17:01:44 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: Your message of "Thu, 14 Nov 2002 13:52:15 PST." References: Message-ID: <200211142201.gAEM1i713141@odiug.zope.com> > P.S.: Guido, if you call another vote with different names for some > reason, can you change the subject line of the email so that Mailman > makes all the votes a single thread? Makes it easier for me to > count (assuming you want me to keep bothering to count). No, thanks! optparse wins hands down in the latest exit poll. I've checked it in (docs and unit tests still missing though). --Guido van Rossum (home page: http://www.python.org/~guido/) From dave@boost-consulting.com Thu Nov 14 22:39:05 2002 From: dave@boost-consulting.com (David Abrahams) Date: Thu, 14 Nov 2002 17:39:05 -0500 Subject: [getopt-sig] Re: [Python-Dev] Adopting Optik In-Reply-To: <3DD419E5.3060805@ActiveState.com> (David Ascher's message of "Thu, 14 Nov 2002 13:47:17 -0800") References: <200211140407.gAE47CU21510@pcp02138704pcs.reston01.va.comcast.net> <20021114214204.GC4989@cthulhu.gerg.ca> <3DD419E5.3060805@ActiveState.com> Message-ID: David Ascher writes: > Greg Ward wrote: > >> 'optparse' I like even more. 
>> > I have no problem w/ optparse. Me neither. Optimal parsing? I doubt many will read it that way ;-) -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From Andrew.MacIntyre@aba.gov.au Thu Nov 14 23:33:38 2002 From: Andrew.MacIntyre@aba.gov.au (Andrew MacIntyre) Date: Fri, 15 Nov 2002 10:33:38 +1100 Subject: [Python-Dev] RE: [Distutils] Killing off bdist_dumb Message-ID: > Andrew, was the use of full paths the problem that kept you > from using it? After revisiting some experiments, I can say that full paths was the issue that kept me from using it. While I note MAL's point about sub-classing, that is only useful from the POV of a module author. My POV is that of a 3rd party who wants to build & distribute installable module binaries to accompany a non-PythonLabs Python binary distribution. >From this POV, on the particular platform I'm supporting, what I want is bdist_dumb to default to using paths relative to sys.prefix. I tried to suss out what changes might achieve this, and the simplest option that occurred to me was to move down the extra directory levels when changing directory into the "dumb" directory prior to zipping things up. This is not too bad for OS/2, where the installation structure has always been simple & consistent across releases, but other platforms are more involved. A bdist_dumb option to select between full & relative paths would be Ok. ----------------------------------------------------------------------- Andrew MacIntyre \ E-mail: andrew.macintyre@aba.gov.au Planning & Licensing Branch \ Tel: +61 2 6256 2812 Australian Broadcasting Authority \ Fax: +61 2 6253 3277 -> "These thoughts are mine alone!" <---------------------------------- From mal@lemburg.com Fri Nov 15 08:31:45 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 15 Nov 2002 09:31:45 +0100 Subject: [Python-Dev] Re: [Distutils] Killing off bdist_dumb References: Message-ID: <3DD4B0F1.70106@lemburg.com> Andrew MacIntyre wrote: >>Andrew, was the use of full paths the problem that kept you >>from using it? > > > After revisiting some experiments, I can say that full paths was the > issue that kept me from using it. > > While I note MAL's point about sub-classing, that is only useful from > the POV of a module author. You probably mean "packager". The developer isn't necessarily the same person, e.g. a user might want to build from source then redistribute a package in some other way. > My POV is that of a 3rd party who wants to build & distribute installable > module binaries to accompany a non-PythonLabs Python binary distribution. > >>From this POV, on the particular platform I'm supporting, what I want is > bdist_dumb to default to using paths relative to sys.prefix. Please don't change defaults: this only introduces hassles for packagers since they'll have to add multi-version support to their setup.py which only complicates the process. If you want to add new functionality, provide a command option and then use that in your code. If you want to use this as default, place the option into the setup.cfg file. > I tried to suss out what changes might achieve this, and the simplest > option that occurred to me was to move down the extra directory levels > when changing directory into the "dumb" directory prior to zipping > things up. This is not too bad for OS/2, where the installation > structure has always been simple & consistent across releases, but > other platforms are more involved. 
> > A bdist_dumb option to select between full & relative paths would be > Ok. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From fredrik@pythonware.com Fri Nov 15 09:55:28 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Fri, 15 Nov 2002 10:55:28 +0100 Subject: [Python-Dev] Killing off bdist_dumb References: <200211132230.gADMUv205719@nyman.amk.ca> Message-ID: <00cb01c28c8d$221739a0$0900a8c0@spiff> amk wrote: > In the commentary attached to bug #410541, I suggest removing the > bdist_dumb command, because no interesting platforms actually install > from zip files. =20 how is bdist_dumb different from a plain bdist? if you decide to keep it in there, can you at least fix the help text: bdist create a built (binary) distribution bdist_dumb create a "dumb" built distribution bdist_rpm create an RPM distribution bdist_wininst create an executable installer for MS Windows oh, dumb really means "dumb". that's helpful. From mal@lemburg.com Fri Nov 15 10:13:11 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 15 Nov 2002 11:13:11 +0100 Subject: [Distutils] Re: [Python-Dev] Killing off bdist_dumb References: <200211132230.gADMUv205719@nyman.amk.ca> <00cb01c28c8d$221739a0$0900a8c0@spiff> Message-ID: <3DD4C8B7.3040703@lemburg.com> Fredrik Lundh wrote: > amk wrote: > > >>In the commentary attached to bug #410541, I suggest removing the >>bdist_dumb command, because no interesting platforms actually install >>from zip files. > > > how is bdist_dumb different from a plain bdist? > > if you decide to keep it in there, can you at least fix the > help text: > > bdist create a built (binary) distribution Hmm, I wonder why bdist is mentioned here: it's the base class driving all the other sub-commands. bdist doesn't do anything on its own. > bdist_dumb create a "dumb" built distribution > bdist_rpm create an RPM distribution > bdist_wininst create an executable installer for MS Windows > > oh, dumb really means "dumb". that's helpful. From the doc-string: create a "dumb" built distribution -- i.e., just an archive to be unpacked under $prefix or $exec_prefix. The short info should probably be: create a drop-in installation archive -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From fredrik@pythonware.com Fri Nov 15 12:16:46 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Fri, 15 Nov 2002 13:16:46 +0100 Subject: [Distutils] Re: [Python-Dev] Killing off bdist_dumb References: <200211132230.gADMUv205719@nyman.amk.ca> <00cb01c28c8d$221739a0$0900a8c0@spiff> <3DD4C8B7.3040703@lemburg.com> Message-ID: <018d01c28ca0$df0073c0$0900a8c0@spiff> mal wrote: > > if you decide to keep it in there, can you at least fix the > > help text: > >=20 > > bdist create a built (binary) distribution >=20 > Hmm, I wonder why bdist is mentioned here: it's the base > class driving all the other sub-commands. bdist doesn't > do anything on its own. have you tried it? on the 2.2 unix install I have here, it builds a tar archive. from what I can tell, it's actually running the same code as bdist_dumb... 
> From the doc-string: >=20 > create a "dumb" built distribution -- i.e., just an archive > to be unpacked under $prefix or $exec_prefix. which isn't true, obviously, since the archive contains absolute paths... From mal@lemburg.com Fri Nov 15 15:11:48 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 15 Nov 2002 16:11:48 +0100 Subject: [Distutils] Re: [Python-Dev] Killing off bdist_dumb References: <200211132230.gADMUv205719@nyman.amk.ca> <00cb01c28c8d$221739a0$0900a8c0@spiff> <3DD4C8B7.3040703@lemburg.com> <018d01c28ca0$df0073c0$0900a8c0@spiff> Message-ID: <3DD50EB4.6050000@lemburg.com> Fredrik Lundh wrote: > mal wrote: > > >>>if you decide to keep it in there, can you at least fix the >>>help text: >>> >>> bdist create a built (binary) distribution >> >>Hmm, I wonder why bdist is mentioned here: it's the base >>class driving all the other sub-commands. bdist doesn't >>do anything on its own. > > > have you tried it? No, I read the code... looks like these attributes make bdist_dumb the default sub-command to run when no other command is specified: # This won't do in reality: will need to distinguish RPM-ish Linux, # Debian-ish Linux, Solaris, FreeBSD, ..., Windows, Mac OS. default_format = { 'posix': 'gztar', 'nt': 'zip', 'os2': 'zip', } # And the real information. format_command = { 'rpm': ('bdist_rpm', "RPM distribution"), 'zip': ('bdist_dumb', "ZIP file"), 'gztar': ('bdist_dumb', "gzip'ed tar file"), 'bztar': ('bdist_dumb', "bzip2'ed tar file"), 'ztar': ('bdist_dumb', "compressed tar file"), 'tar': ('bdist_dumb', "tar file"), 'wininst': ('bdist_wininst', "Windows executable installer"), 'zip': ('bdist_dumb', "ZIP file"), #'pkgtool': ('bdist_pkgtool', # "Solaris pkgtool distribution"), #'sdux': ('bdist_sdux', "HP-UX swinstall depot"), } Funny; I would have expected that bdist is a no-op. > on the 2.2 unix install I have here, it builds a tar archive. from what > I can tell, it's actually running the same code as bdist_dumb... > > >> From the doc-string: >> >> create a "dumb" built distribution -- i.e., just an archive >> to be unpacked under $prefix or $exec_prefix. > > > which isn't true, obviously, since the archive contains absolute > paths... Right. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From theller@python.net Fri Nov 15 15:36:19 2002 From: theller@python.net (Thomas Heller) Date: 15 Nov 2002 16:36:19 +0100 Subject: [Distutils] Re: [Python-Dev] Killing off bdist_dumb In-Reply-To: <3DD50EB4.6050000@lemburg.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <00cb01c28c8d$221739a0$0900a8c0@spiff> <3DD4C8B7.3040703@lemburg.com> <018d01c28ca0$df0073c0$0900a8c0@spiff> <3DD50EB4.6050000@lemburg.com> Message-ID: > > Funny; I would have expected that bdist is a no-op. > python setup.py bdist --help or python setup.py sdist --help gives also useful info. Also the --help-formats options. Thomas From mal@lemburg.com Fri Nov 15 17:11:50 2002 From: mal@lemburg.com (M.-A. 
Lemburg) Date: Fri, 15 Nov 2002 18:11:50 +0100 Subject: [Python-Dev] Killing off bdist_dumb References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <20021114194101.GJ1335@epoch.metaslash.com> Message-ID: <3DD52AD6.8040605@lemburg.com> Thomas Heller wrote: >>>I have more the impression, that people release .tar.gz files >>>containing a distutils setup.py script, but no PKG-INFO, so the >>>tarball isn't created by distutils sdist command. PyChecker is such an >>>example, IIRC. >> >>This is correct. I do release PyChecker as a .tar.gz which contains a >>setup.py, but no PKG-INFO. Should there be one? I don't use setup.py >>to create the .tar.gz. > > > I only noticed this (and your package is not the only one) when I > worked on a PEP 243 (?, package upload mechinism) implementation > where uploaded files would be scanned for the PKG-INFO metadata. > > >>I've never found the proper way to make releases when I looked. >>But that was a long time ago. >> > > > I normally don't use Linux, but does 'python setup.py sdist' not work > there? It works just fine. All you have to watch out for is to provide a proper MANIFEST.in or MANIFEST file which tells distutils which files to include in the source distribution (command sdist). -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From mgilfix@eecs.tufts.edu Sat Nov 16 01:30:46 2002 From: mgilfix@eecs.tufts.edu (Michael Gilfix) Date: Fri, 15 Nov 2002 20:30:46 -0500 Subject: [Python-Dev] Killing off bdist_dumb In-Reply-To: <20021114194101.GJ1335@epoch.metaslash.com> References: <200211132230.gADMUv205719@nyman.amk.ca> <200211132203.gADM3et07324@odiug.zope.com> <200211132224.gADMOEG07559@odiug.zope.com> <3cq4xnjw.fsf@python.net> <200211141400.gAEE0iL29815@pcp02138704pcs.reston01.va.comcast.net> <20021114184917.GA29284@meson.dyndns.org> <20021114194101.GJ1335@epoch.metaslash.com> Message-ID: <20021116013046.GA7626@eecs.tufts.edu> Using the sdist option and MANIFEST is you want to have work files sitting around your source tree but still build when the time is right. It's also nice for providing src for windows folks via the sdist --formats=gztar,zip at the same time and does all the name for you... -- Mike On Thu, Nov 14 @ 14:41, Neal Norwitz wrote: > This is correct. I do release PyChecker as a .tar.gz which contains a > setup.py, but no PKG-INFO. Should there be one? I don't use setup.py > to create the .tar.gz. > > I've never found the proper way to make releases when I looked. > But that was a long time ago. > > Neal -- Michael Gilfix mgilfix@eecs.tufts.edu For my gpg public key: http://www.eecs.tufts.edu/~mgilfix/contact.html From pobrien@orbtech.com Sat Nov 16 01:35:29 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Fri, 15 Nov 2002 19:35:29 -0600 Subject: [Python-Dev] [maintenance doc updates] In-Reply-To: <20021116005959.2E61D18EC39@grendel.zope.com> References: <20021116005959.2E61D18EC39@grendel.zope.com> Message-ID: <200211151935.29103.pobrien@orbtech.com> On Friday 15 November 2002 06:59 pm, Fred L. 
Drake wrote: > The maintenance version of the documentation has been updated: > > http://www.python.org/dev/doc/maint22/ > > Small updates. > Add embarrassing note about sneaky feeping creaturism: > The "chars" argument to str.strip(), .lstrip(), and .rstrip(), as > well as the str.zfill() method, were all added in Python 2.2.2! > Woe to over-eager backporters! The string module docstrings for lstrip and rstrip need updating as well then. It looks like only strip has been updated so far. And it would be good if this argument was called the same thing in the string type as in the string module. The one calls it `sep` and the other `chars`. In fact, the docstring for string.strip mixes the two: def strip(s, chars=None): """strip(s [,chars]) -> string Return a copy of the string s with leading and trailing whitespace removed. If chars is given and not None, remove characters in sep instead. If chars is unicode, S will be converted to unicode before stripping. """ return s.strip(chars) The docstring for the type still calls it `sep`, as does lstrip and rstrip: PyDoc_STRVAR(strip__doc__, "S.strip([sep]) -> string or unicode\n\ \n\ Return a copy of the string S with leading and trailing\n\ whitespace removed.\n\ If sep is given and not None, remove characters in sep instead.\n\ If sep is unicode, S will be converted to unicode before stripping"); It looks like the documentation is consistent in calling it `chars`, but the implementation is lagging behind. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From brett@python.org Sat Nov 16 08:28:01 2002 From: brett@python.org (Brett Cannon) Date: Sat, 16 Nov 2002 00:28:01 -0800 (PST) Subject: [Python-Dev] Summary for 2002-11-01 through 2002-11-15 Message-ID: Yes, it's that time of the month folks! I am back in case you didn't notice and thus so is the Summary as penned by me. As usual I am giving the list 24 hours or so to find out how bad of a reporter I am and correct me in all manners and ways. ========================================== This is a summary of traffic on the `python-dev mailing list`_ between November 1, 2002 and November 15, 2002 (inclusive). It is intended to inform the wider Python community of on-going developments on the list that might interest the wider Python community. To comment on anything mentioned here, just post to python-list@python.org or comp.lang.python in the usual way; give your posting a meaningful subject line, and if it's about a PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep iteration). All python-dev members are interested in seeing ideas discussed by the community, so don't hesitate to take a stance on something. And if all of this really interests you then get involved and join Python-dev! This is the fifth summary written by Brett Cannon (after a relaxing two week hiatus; thanks to Raymond Hettinger for doing the Summary while I was gone). All summaries are now archived at http://www.python.org/dev/summary/ (thanks to A.M. Kuchling for setting that up). Please note that this summary is written using reStructuredText_ which can be found at http://docutils.sf.net/rst.html . Any unfamiliar punctuation is probably markup for reST_; you can safely ignore it (although I suggest learning reST; its simple and is accepted for PEP markup). 
Also, because of the wonders of reformatting thanks to whatever program you are using to read this, I cannot guarantee you will be able to run the text version of this summary through Docutils_ as-is. If you want to do that, get an original copy of the text file. .. _python-dev mailing list: http://mail.python.org/mailman/listinfo/python-dev .. _Docutils: http://docutils.sf.net/ .. _reST: .. _reStructuredText: http://docutils.sf.net/rst.html ====================== Summary Announcements ====================== Not much to say for this summary. The only thread skipped that someone out there might care about was one on getting the PEPs to display properly for IE 6 when the PEP was written in reST_. Thanks goes to Raymond Hettinger for covering the Summary while I was away on vacation. Thanks also goes out to Laura Creighton and Guido for suggesting graduate schools that are Python-friendly. ================================= `Getting python-bz2 into 2.3`__ ================================= __ http://mail.python.org/pipermail/python-dev/2002-November/029830.html continuation: http://mail.python.org/pipermail/python-dev/2002-October/029829.html Gustavo Niemeyer asked how he should go about getting his python-bz2_ module into the standard library. He was basically told to submit a patch complete with the module, docs, regression tests, etc.; everything a healthy module needs. It was also suggested that he provide a MSVC project file. The didn't work for Gustavo (who now has CVS write priveleges; congrats) since he doesn't use Windows. This was a slight problem because if the extension file doesn't build under Windows it can't be included in the PythonLabs Windows distro. So keep in mind that if your module won't directly compile for 6 different versions of Windows it won't be included in the Windows distro. .. _python-bz2: http://python-bz2.sourceforge.net/ =================================== `Becoming a python contributor`__ =================================== __ http://mail.python.org/pipermail/python-dev/2002-November/029831.html continuation: http://mail.python.org/pipermail/python-dev/2002-October/029828.html Gustavo Niemeyer asked about how he could help contribute to Python. He made the observation that "Guido and others [have been] bothered a few times because of the lack of man power" which has led to a "small core of very busy developers working on core/essential/hard stuff *and* in code reviewing as well" (and I can attest to this fact that this is very true; I am amazed the guys at PythonLabs have any form of a life outside of work with the amount of time they put in). Gustavo felt "that the Python development is currently overly centralized". Martin v. Loewis responded first. He said that "the most important aspect I'd like to hand off is the review of patches; to Tim, it is the analysis of bug reports". Martin then listed various points on how to be able to review a patch and how to handle bug reports; the email is at http://mail.python.org/pipermail/python-dev/2002-November/029831.html . I *highly* recommend reading the email because Martin's points are all good and more patch reviewers would be rather nice. M.A. Lemburg commented next, saying that he "wouldn't mind if other developers with some time at hand jump in on already assigned patches and bug reports to help out". Just because a patch has been assigned to someone doesn't mean the patch or the assignee couldn't use more help. 
Having the patch assigned to someone just means that they take responsibility to apply the patch if it is worthy of being accepted or to reject it. It should not stop other people from making comments or helping out so that the assignee can have a little bit of time saved for other things... like another patch to assign to themselves. MAL also suggested that we have more maintainers that are in charge of chunks of code, e.g. Martin handles all locale code. Martin disagreed with this idea, though. Jack Jansen agreed with Martin. He thought that if something came up that was not within the realm of a specific person that it "either get[s] ignored, or passed on to Guido, or picked up by yourself or Michael [Hudson] or one of the very few other people who do general firefighting". Martin commented later that he would hope that the stewardship of specific code does not get anymore formal. Martin then stated how one goes about getting commit priveleges for Python on SF: "just step forward and say that you want. In the past, Guido has set a policy that people who's commit privilege is fresh will still have to use SF, but can perform the checkin themselves". Tradition has stated that "fresh" is your first two or three SF patches. But please only step forward if you are known to python-dev or PythonLabs since commit priveleges won't be given to people who just wander in off the proverbial street and ask for it. Martin basically states this in a later email by saying that "people should not produce a burst of patches just to get commit privileges. Instead, they should contribute patches steadily (and should have done so in the past), and then get CVS write access as a simplification for the rest of the maintainers". Neal Norwitz pointed out that if a bug ends up with a fix it is best to submit a separate patch instead of attaching it to the bug report. That way there is a bigger chance of the patch being seen and dealth with. But please make sure to mention that it fixes a bug so that the bug can be closed! Martin even suggested mentioning this fact in the title of the patch submission. ==================== `RoundUp status`__ ==================== __ http://mail.python.org/pipermail/python-dev/2002-November/029879.html This is a splinter thread of the 'Becoming a python contributor' thread in which Micheal Hudson asked how using Roundup_ for replacing SF_ was coming along. Guido said things had come up that was holding it up. One was that the test server running at http://www.python.org:8080 had died when the box was restarted (which is now back up). There were also some changes to Roundup that needed to be dealt with in order to get everything over from SF on to Roundup. Guido also just ran out of time to review it more, although he did like what he had reviewed so far. Guido asked for a volunteer. It was asked what was needed. Guido said that Roundup had moved over to Zope-style templating so all the old templates that Gordon (I assuming this is Gordon McMillan) wrote needed to be changed. There were also some bugs that needed to be dealt with that are being tracked at Roundup hosted at http://www.python.org:8080 . And there also will need to be preparations for the day that development is moved over from SF to the Roundup setup; that will require transferring everything over from SF, shutting down SF, and handling any bugs that crop up from the heavy use that the new setup is going to get. So if you have any dislike for SF, then please contribute to Roundup and help get Python off of there! .. 
_Roundup: http://roundup.sourceforge.net/ .. _SF: http://www.sf.net/ ======================================= `Contributing...current bug status`__ ======================================= __ http://mail.python.org/pipermail/python-dev/2002-November/029846.html Neal Norwitz went through the SF_ bug tracker and counted 325 bugs and generated a very easy to read page listing all the bugs with their relevant info. You can find the HTML version at http://www.metaslash.com/py/sf.data.html . If you find a bug there you think you can help out on, then go to SF and do so! ======================= `Low hanging fruit`__ ======================= __ http://mail.python.org/pipermail/python-dev/2002-November/029850.html Neal Norwitz generated a list of what he thought were easily fixable bugs and put them in this thread (it's the first email so just go to the link for this thread). So if you have a little bit of free time and want to help out why don't you try to tackle one of these bugs? =============== `Snake farm`__ =============== __ http://mail.python.org/pipermail/python-dev/2002-November/029853.html The only reason I am mentioning this thread here is to help get the word out about the `Snake Farm`_ ; hosted by Lysator_ and sponsored by the `Python Business Forum`_ . It is a compile farm that downloads from cvs, compiles, and runs the test suite of Python daily. It has caught a bunch of bugs and has been a great help. The majority of the thread was spent trying to get FreeBSD 4.4 -current to compile Python and trying to work out a possible bug in pymalloc. .. _Snake Farm: http://www.lysator.liu.se/~sfarmer/ .. _Lysator: http://www.lysator.liu.se/ .. _Python Business Forum: http://www.python-in-business.org ==================================== `David Goodger joins PEP editor`__ ==================================== __ http://mail.python.org/pipermail/python-dev/2002-November/029866.html David Goodger has somehow gotten suckered into becoming a PEP editor (I suspect Barry Warsaw had something to do with this since he used to do all of the PEP editing). So you can all welcome David to his new responsibility by flooding him with all of those PEPs you have lying around and were not sure were good enough to submit. =) ======================== `metaclass insanity`__ ======================== __ http://mail.python.org/pipermail/python-dev/2002-November/029872.html continuation: http://mail.python.org/pipermail/python-dev/2002-October/029795.html Michael Hudson posted the question as to how to get writing ``__bases__`` for new-style classes to do the "right thing" since the mro does not seem to be updated. Kevin Jacob gave it a go but ran into a bug. Guido said that currently there is no way to touch the mro from Python code. All of that info is stored in the tp_mro slot in the object's C representation and is stored as a tuple whose members are expected to be either types or classic classes. Guido said he would accept a patch for assigning to the mro if a check was included to make sure the previously mentioned constraint was maintained. He also said he would probably accept a patch that allowed for assignable ``__bases__`` and writing ``__name__`` . Michael Hudson commented about the difficulty of all of these patches. One thing that came up was the connection between ``__bases__`` and ``__base__`` and how assignment to ``__base__`` should not be taken lightly; Guido commented that "the old and new base must be layout compatible, exactly like for assignment to __class__". 
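To make that layout constraint concrete, here is roughly how the existing ``__class__`` assignment behaves today; the class names are made up, and assignable ``__bases__`` would be subject to the same kind of check::

    class A(object):
        pass

    class B(object):
        pass

    class C(int):      # instances carry an embedded int -> different layout
        pass

    a = A()
    a.__class__ = B           # allowed: A and B instances share a layout
    print isinstance(a, B)    # True

    try:
        a.__class__ = C       # rejected: incompatible object layout
    except TypeError, e:
        print "refused:", e
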
Guido later pointed out that ``__base__`` becomes the built-in type that you derive from; whether it be object, list, dict, etc. It was agreed that ``__base__`` shouldn't be writable. The point was made that nestable classes are not pickable since only thing at the top level of a module can be pickled. Guido said he considered this a flaw although he couldn't think of why someone would want to embed a class within a class or a function. This spawned comments on how to deal with this. It seemed the best solution was to change ``__name__`` for the inner class to a fully dotted name (e.g. ``X.Y.__name__ = "X.Y"``) and then make a simple change to either ``pickle`` or ``getattr()``. It was eventually agreed upon that setting ``__name__`` to the full dotted name of the class was the best solution and it was filed as `bug #633930`_ . As part of trying to give Guido good examples of why nested classes are good (the best attempt was Walter Dorwald in an email at http://mail.python.org/pipermail/python-dev/2002-November/029906.html that elicited a "that's cool" comment from Guido), the idea of having ``__iter__`` contain a class definition that returned an instance of that class came up. That was shot down because of the performance hit of dealing with the class definition on every call to ``__iter__``. But the idea of defining ``__iter__`` as a generator was pointed out by Just van Rossum. Doing that simplifies the code usually a good amount since the generator handles all ``.next()`` calls and you just need to have the generator stop when you want your iterator to stop. Apparently this is not a widely used idiom, so I am mentioning it hear since it is a great idea that I can personally attest to as being a rather nice way to handle iterators. .. _bug #633930: http://www.python.org/sf/633930 ============================ `[#527371] sre bug/patch`__ ============================ __ http://mail.python.org/pipermail/python-dev/2002-November/029936.html I am mentioning this thread not because the bug is that big of a deal, but because of the PEP that was brought up when dealing with the bug; `PEP 291`_ . This informational PEP, among other things, lists modules that must be kept compatible with certain versions of Python; in this case sre has to be kept compatible with Python 1.5.2. Except for sre, they are all packages that have made their way into the library. You might want to have a look at the list if you are hacking on any packages in the stdlib. .. _PEP 291: http://www.python.org/peps/pep-0291.html ====================================================================== `Re: [snake-farm] test test_slice failed -- [9, 7, 5, 3, 1] == [0]`__ ====================================================================== __ http://mail.python.org/pipermail/python-dev/2002-November/029941.html Once again this is not to meant to mention directly what as discussed in the thread but a point made. The bug that was discovered was an issue with 64-bit machines and the 32-bit limitation of lists and slicing. The 32-bit limit currently is hard-coded into the C code. Obviously some people would like to see this changed. Neal Norwitz laid down a rough outline on how one could go about changing this at http://mail.python.org/pipermail/python-dev/2002-November/029953.html . Guido then made a pronouncement later stating that he is willing to break binary compatibility once for Python 2.3 or 2.4 to get this done. He also mentioned some other things to make sure to do. 
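Going back to the generator idiom Just van Rossum pointed out in the metaclass thread above: a class can simply write its ``__iter__`` as a generator and let the returned generator object serve as the iterator. A minimal sketch with made-up names (on 2.2 the ``__future__`` import is required; on 2.3 it is harmless)::

    from __future__ import generators

    class Evens:
        """Iterate over the even numbers below n."""
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            # The generator returned here *is* the iterator: it supplies
            # next() and raises StopIteration by itself.
            for i in xrange(0, self.n, 2):
                yield i

    print [x for x in Evens(10)]    # [0, 2, 4, 6, 8]

No separate iterator class and no explicit ``StopIteration`` handling are needed.
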
As of this writing no one has stepped forward to take this on. ============================== `Reindenting unicodedata.c`__ ============================== __ http://mail.python.org/pipermail/python-dev/2002-November/029997.html While doing some work on `Modules/unicodedata.c`_ , Martin v. Loewis noticed that the indentation style didn't follow `PEP 7`_ and he wondered if it would be okay to re-indent the file. This brought up two points. One was that PEP 7 was not stringently followed. The PEP says to "Use single-tab indents, where a tab is worth 8 spaces". Now that goes against Python coding style where you are supposed to use 4-space indents. So the question of whether one should still use the tab style in new C code came up. Guido said he wished new C code would, but for files he doesn't touch very often he doesn't feel he can enforce it. Barry and Martin came up with a style of notation at the top of any file that uses a non-PEP style which can be found at http://mail.python.org/pipermail/python-dev/2002-November/030067.html . But following the PEP is still the "officially" supported style. But you should try to follow PEP 7 for all new C code and PEP 8 for Python code. The other point was when to re-indent. It was agreed upon to only do that when a major change in the code was occuring. And when you do re-indent, do it as a separate check-in for CVS. .. _Modules/unicodedata.c: http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Modules/unicodedata.c .. _PEP 7: http://www.python.org/peps/pep-0007.html ============================= `Printing and __unicode__`__ ============================= __ http://mail.python.org/pipermail/python-dev/2002-November/030098.html Martin v. Loewis brought up a point made by Henry Thompson on c.l.py asking why printing ignores ``__unicode__``. Martin thought it shouldn't and listed a bunch of options on how to make printing work with ``__unicode__``. The winner was to have "A file indicates "unicode-awareness" somehow. For a Unicode-aware file, it tries ``__unicode__``, ``__str__``, and ``__repr__``, in order". The agreed solution was to add an ``.encoding`` attribute. This attribute can be set to None when the stream is in Unicode and never converts to a byte-stream. But then M.A. Lemburg chimed in. He was fine with the addition of the ``.encoding``; "this attribute is already available on stream objects created with codecs.open()". What he didn't like was having ``.encoding`` set to None mean the stream would accept Unicode. Martin asked then if ``StringIO`` should have ``.encoding``. MAL replied that "``StringIO`` should be considered a non-Unicode aware stream, so it should not implement .encoding". He thought that if someone wanted ``StringIO`` to be Unicode-aware they could use "the tools in codecs.py [since they] can be used for this (basically by doing the same kind of wrapping as ``codecs.open()`` does)". But then Martin pointed out that StringIO is already Unicode-aware. This debate continued between MAL, Martin, and Guido. But then Guido just said he gave up since the current behavior was relied upon too much. =================== `Adopting Optik`__ =================== __ http://mail.python.org/pipermail/python-dev/2002-November/030108.html Splinter threads: - `Re: [getopt-sig] Adopting Optik `__ - `Re: Adopting Optik `__ - `Fw: [Python-Dev] Adopting Optik `__ - `[getopt-sig] Re: [Python-Dev] Adopting Optik `__ Guido sent an email saying that he would like to get Python 2.3a out the door by X-mas. 
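For anyone who has not seen Optik_ before, a script that uses it looks roughly like this; the options shown here are invented::

    from optparse import OptionParser   # previously distributed standalone as Optik

    parser = OptionParser()
    parser.add_option("-f", "--file", dest="filename",
                      help="write output to FILE", metavar="FILE")
    parser.add_option("-q", "--quiet", action="store_false",
                      dest="verbose", default=True,
                      help="don't print status messages")
    (options, args) = parser.parse_args()

``parse_args()`` returns the parsed option values plus any leftover positional arguments, and ``--help`` output is generated automatically.
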
This means trying to come up with a new name for Optik_ , Greg Ward's command-line options parser. The complaint about the current name is that it's too cute. Guido started it by suggesting the name ``options``. Peter Funk brought up two things. One was that Docutils already used the module under its Optik name. To this Guido responded that they could just change the name in the code. Peter also said that Greg liked ``OptionsParser``. There was also the point that module names are preferred to be short and lowercase. Ka-Ping Yee and myself brought up the point that ``options`` is very generic. Guido agreed with this point while also mentioning that many people already have their own modules name options.py. He then suggested ``optlib`` since "It's short, un-cute, and follows the \*lib pattern used all over the Python stdlib". Greg Ward liked this suggestion. I personally suggested ``ArgParser``, trying to get a tie-in for sys.argv. Greg Ewing built off of this and suggested ``argvparse``, similar to ``urlparse``. David Ascher preferred ``argparse`` since "The v is archaic and so silent it fades away =)". David Abrahams disagreed, stating how the "v" deals with any ambiguity. David Abrahams also said he would vote for ``argvparse`` to break a tie. Ka-Ping Yee suggested ``cmdline`` and ``cmdopts``. David Abrahams liked both. Steve Holden built off of Guido's ``optlib`` and pushed for ``optionlib``. But then Guido asked for a call to votes between ``optlib`` and ``argvparse``. I tallied the votes at one point with it kind of split between them. Then Guido announced that ``optparse`` won (and no, that is not a typo). .. _Optik: http://optik.sf.net/ =========================== `Killing off bdist_dumb`__ =========================== __ http://mail.python.org/pipermail/python-dev/2002-November/030139.html Splinter threads: - `RE: [Distutils] Killing off bdist_dumb `__ - `E: [Distutils] Killing off bdist_dumb `__ A.M. Kuchling asked if anyone would mind if bdist_dumb was removed from distutils (the conversation also happened with `distutils-sig`_ ). Apparently it is rather broken in terms of the paths of the files because it makes them all relative. M.A. Lemburg, though, spoke up stating that he uses bdist_dumb. It still wasn't fully resolved as of this writing. The point of being able to build bdist_wininst on non-Windows platforms came up during this discussion as well. It was decided to move the binary files needed for bdist_wininst into CVS (now filed as `bug #638595`_ ). .. _distutils-sig: http://www.python.org/sigs/distutils-sig/ .. _bug #638595: http://www.python.org/sf/638595 =============================================== `Python interface to attribute descriptors`__ =============================================== __ http://mail.python.org/pipermail/python-dev/2002-November/030118.html Splinter threads: - `Python interface to attribute descriptors `__ Paul Dubois decided to cause himself some grief and attempt to comprehend the miracle that is descriptors (more info can be found in the `2.2.2 What's New`_ doc and in `PEP 252`_). He was wondering if their was more documentation than the signatures of the functions for the C API. He also wanted to know if there was a way to play with them in the Python world since all of his attempts fell short of them being "first-class citizen from Python". Guido admitted that the docs were lacking. He then asked for some help in writing the docs. 
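Until that documentation exists, a tiny pure-Python data descriptor looks like this; it is only an illustration with invented names, not code from the thread::

    class Trace(object):
        """A data descriptor that reports every access to an attribute."""
        def __init__(self, name, default):
            self.name = name
            self.value = default
        def __get__(self, obj, objtype=None):
            print "reading", self.name
            return self.value
        def __set__(self, obj, value):
            print "writing", self.name, "=", value
            self.value = value        # note: shared by all Widget instances

    class Widget(object):
        size = Trace("size", 42)      # must live on a new-style class

    w = Widget()
    print w.size     # prints "reading size", then 42
    w.size = 7       # prints "writing size = 7"
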
He responded to Paul's second question by saying that he thought it should work as long as you avoided classic classes. Thanks to Guido saying that is should work, Paul realized that he misread the PEP. To be nice he emailed out an example of how a descriptor could know where it came from at http://mail.python.org/pipermail/python-dev/2002-November/030141.html . Phillip Ebey also chimed in on how to write a metaclass that said the name of the descriptor at http://mail.python.org/pipermail/python-dev/2002-November/030213.html . .. _2.2.2 What's New: http://www.python.org/doc/2.2.2/whatsnew/sect-rellinks.html#SECTION000320000000000000000 .. _PEP 252: http://www.python.org/peps/pep-0252.html ============================== `IDLE local scope cleanup`__ ============================== __ http://mail.python.org/pipermail/python-dev/2002-November/030124.html Patrick O'Brien (author of PyCrust_ ) wondered how IDLE managed to keep its local scope so clean once it finished launching. Guido said that IDLE (or at least the GRPC version) runs the shell in a subprocess that is careful not to pollute the namespace. Patrick thanked Guido and then presented his real question: how to handle not polluting the namespace with a way getting around a pickling issue. He "used to just pass a regular dictionary to code.InteractiveInterpreter, which worked well enough", but there was an issue with pickling. So then he tried passing ``sys.modules['__main__'].__dict__``, which worked but cluttered the namespace. Guido's response: "Remove the clutter". Guido said that this would most likely require a minimalistic main program that bootstrapped using ``__import__('run').main()`` which would run the code without adding to the namespace. .. _PyCrust: http://www.orbtech.com/wiki/PyCrust From skip@manatee.mojam.com Sun Nov 17 13:00:17 2002 From: skip@manatee.mojam.com (Skip Montanaro) Date: Sun, 17 Nov 2002 07:00:17 -0600 Subject: [Python-Dev] Weekly Python Bug/Patch Summary Message-ID: <200211171300.gAHD0HAr020265@manatee.mojam.com> Bug/Patch Summary ----------------- 305 open / 3051 total bugs (-6) 101 open / 1784 total patches (+2) New Bugs -------- --with-dl-dld section of the readme (2002-11-10) http://python.org/sf/636313 ftplib bails out on empty responses (2002-11-11) http://python.org/sf/636685 print to unicode stream should __unicode (2002-11-12) http://python.org/sf/637094 inspect.getargspec: None instead of () (2002-11-12) http://python.org/sf/637217 "u#" doesn't check object type (2002-11-12) http://python.org/sf/637547 logging package undocumented (2002-11-13) http://python.org/sf/637847 __getstate__() documentation is vague (2002-11-13) http://python.org/sf/637941 Move bdist_wininst source code into PC (2002-11-14) http://python.org/sf/638595 Instantiation init-less class with param (2002-11-14) http://python.org/sf/638610 optparse module undocumented (2002-11-14) http://python.org/sf/638703 Install script goes into infinite loop (2002-11-15) http://python.org/sf/639022 archiver should use zipfile before zip (2002-11-15) http://python.org/sf/639118 Tkinter sliently discards all Tcl errors (2002-11-15) http://python.org/sf/639266 urllib.basejoin() mishandles '' (2002-11-16) http://python.org/sf/639311 crash (SEGV) in Py_EndInterpreter() (2002-11-17) http://python.org/sf/639611 New Patches ----------- Fix for major rexec bugs (2002-11-11) http://python.org/sf/636769 Modulefinder doesn't handle PyXML (2002-11-13) http://python.org/sf/637835 Allow any file-like object on dis module (2002-11-13) 
http://python.org/sf/637906 SimpleSets (2002-11-13) http://python.org/sf/638095 LaTeX documentation for logging package (2002-11-14) http://python.org/sf/638299 Added Proxyauth command to imaplib (2002-11-14) http://python.org/sf/638673 Logging 0.4.7 & patches thereto (2002-11-15) http://python.org/sf/638825 _strptime fixes for None locale and tz (2002-11-15) http://python.org/sf/639112 Remove type-check from urllib2 (2002-11-15) http://python.org/sf/639139 new string method -- format (2002-11-16) http://python.org/sf/639307 Removal of FreeBSD 5.0 specific test (2002-11-16) http://python.org/sf/639371 New icon for .py files (2002-11-17) http://python.org/sf/639635 Closed Bugs ----------- PyImport_ExecCodeModule not correct (2001-05-14) http://python.org/sf/424106 list.sort crasher (2001-08-20) http://python.org/sf/453523 freeze doesn't like DOS files on Linux (2001-09-24) http://python.org/sf/464405 Upgrade TCL for windows installer (2001-11-22) http://python.org/sf/484715 Support for page & shortcut icons (2001-11-27) http://python.org/sf/486360 socket module fails to build on HPUX10 (2002-01-18) http://python.org/sf/505427 Version number handling (2002-04-29) http://python.org/sf/550364 ext module generation problem (2002-08-23) http://python.org/sf/599248 time.struct_time undocumented (2002-09-03) http://python.org/sf/604128 Use C3 MRO algorithm (2002-10-04) http://python.org/sf/618704 HTMLParser:endtag events in comments (2002-10-08) http://python.org/sf/620243 distutils mixed stdin/stdout output (2002-10-08) http://python.org/sf/620630 Broken link in local documentation. (2002-10-16) http://python.org/sf/624024 strptime() always returns 0 in dst field (2002-10-21) http://python.org/sf/626570 int("123123123123123123") doesn't work (2002-10-28) http://python.org/sf/629989 https via httplib trips over IIS defect (2002-10-31) http://python.org/sf/631683 cgi.py drops commandline arguments (2002-11-04) http://python.org/sf/633504 Don't define _XOPEN_SOURCE on OpenBSD (2002-11-07) http://python.org/sf/635034 Misleading description of \w in regexs (2002-11-08) http://python.org/sf/635595 2.2.2 build on Solaris (2002-11-09) http://python.org/sf/635929 No error "not all arguments converted" (2002-11-09) http://python.org/sf/635969 Closed Patches -------------- Fast-path for interned string compares (2001-11-08) http://python.org/sf/479615 hasattr catches only AttributeError (2002-01-16) http://python.org/sf/504714 PEP 282 Implementation (2002-07-07) http://python.org/sf/578494 py2texi.el update (2002-08-02) http://python.org/sf/590352 Bugfix: content-type header parsing (2002-09-23) http://python.org/sf/613605 C3 MRO algorithm implementation (2002-10-06) http://python.org/sf/619475 Bytecode copy bug in freeze (2002-10-24) http://python.org/sf/627900 Add a sample selection method to random.py (2002-10-27) http://python.org/sf/629637 os.tempnam behavior in Windows (2002-11-08) http://python.org/sf/635656 Typo in PEP249 (2002-11-10) http://python.org/sf/636159 From just@letterror.com Sun Nov 17 17:33:29 2002 From: just@letterror.com (Just van Rossum) Date: Sun, 17 Nov 2002 18:33:29 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib nntplib.py,1.30,1.31 In-Reply-To: Message-ID: Just van Rossum wrote: > esr@users.sourceforge.net wrote: > > > + # If no login/password was specified, try to get them from ~/.netrc > > + # Presume that if .netc has an entry, NNRP authentication is > required. 
> > + if not user: > > + import netrc > > + credentials = netrc.netrc() > > + auth = credentials.authenticators(host) > > + if auth: > > + user = auth[0] > > + password = auth[2] > > + # Perform NNRP authentication if needed. > > Erm, doesn't this make anonymous nntp access fail if there's no $HOME or no > ..netrc file in $HOME? Ok, since I didn't get a reply (I posted the above to python-checkins), I tried it, and yes it does break: [python:~] just% python2.3 Python 2.3a0 (#2, Nov 17 2002, 18:16:38) [GCC 3.1 20020420 (prerelease)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from nntplib import NNTP >>> s = NNTP('news') Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/nntplib.py", line 140, in __init__ credentials = netrc.netrc() File "/usr/local/lib/python2.3/netrc.py", line 29, in __init__ fp = open(file) IOError: [Errno 2] No such file or directory: '/Users/just/.netrc' >>> Just From esr@thyrsus.com Sun Nov 17 17:36:22 2002 From: esr@thyrsus.com (Eric S. Raymond) Date: Sun, 17 Nov 2002 12:36:22 -0500 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib nntplib.py,1.30,1.31 In-Reply-To: References: Message-ID: <20021117173622.GB29885@thyrsus.com> Just van Rossum : > Just van Rossum wrote: > > > esr@users.sourceforge.net wrote: > > > > > + # If no login/password was specified, try to get them from > ~/.netrc > > > + # Presume that if .netc has an entry, NNRP authentication is > > required. > > > + if not user: > > > + import netrc > > > + credentials = netrc.netrc() > > > + auth = credentials.authenticators(host) > > > + if auth: > > > + user = auth[0] > > > + password = auth[2] > > > + # Perform NNRP authentication if needed. > > > > Erm, doesn't this make anonymous nntp access fail if there's no $HOME or no > > ..netrc file in $HOME? > > Ok, since I didn't get a reply (I posted the above to python-checkins), I tried > it, and yes it does break: > > [python:~] just% python2.3 > Python 2.3a0 (#2, Nov 17 2002, 18:16:38) > [GCC 3.1 20020420 (prerelease)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> from nntplib import NNTP > >>> s = NNTP('news') > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.3/nntplib.py", line 140, in __init__ > credentials = netrc.netrc() > File "/usr/local/lib/python2.3/netrc.py", line 29, in __init__ > fp = open(file) > IOError: [Errno 2] No such file or directory: '/Users/just/.netrc' > >>> Sorry about the lack of reply. I'll put an appropriate guard around the .netrc code. -- Eric S. Raymond From mgilfix@eecs.tufts.edu Mon Nov 18 07:59:05 2002 From: mgilfix@eecs.tufts.edu (Michael Gilfix) Date: Mon, 18 Nov 2002 02:59:05 -0500 Subject: [Python-Dev] Anon CVS Message-ID: <20021118075905.GC372@eecs.tufts.edu> Just tried to anon out the python CVS tree off of SF and got: bash-2.05a$ cvs -d:pserver:anonymous@cvs.python.sourceforge.net:/cvsroot/python login Logging in to :pserver:anonymous@cvs.python.sourceforge.net:2401/cvsroot/python CVS password: cvs login: authorization failed: server cvs.python.sourceforge.net rejected access to /cvsroot/python for user anonymous Anybody else having this problem? Or is it just my end? 
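On the nntplib/.netrc breakage shown a little earlier in this thread, a guard along the following lines avoids the traceback when there is no ~/.netrc; this is only a sketch of the idea, not Eric's actual checkin:

    # inside nntplib.NNTP.__init__, roughly:
    if not user:
        try:
            import netrc
            credentials = netrc.netrc()
        except IOError:
            # no readable ~/.netrc (a missing $HOME needs similar handling)
            credentials = None
        if credentials:
            auth = credentials.authenticators(host)
            if auth:
                user = auth[0]
                password = auth[2]
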
-- Mike -- Michael Gilfix mgilfix@eecs.tufts.edu For my gpg public key: http://www.eecs.tufts.edu/~mgilfix/contact.html From aleax@aleax.it Mon Nov 18 08:51:20 2002 From: aleax@aleax.it (Alex Martelli) Date: Mon, 18 Nov 2002 09:51:20 +0100 Subject: [Python-Dev] Anon CVS In-Reply-To: <20021118075905.GC372@eecs.tufts.edu> References: <20021118075905.GC372@eecs.tufts.edu> Message-ID: On Monday 18 November 2002 08:59 am, Michael Gilfix wrote: > Just tried to anon out the python CVS tree off of SF and got: > > bash-2.05a$ cvs > -d:pserver:anonymous@cvs.python.sourceforge.net:/cvsroot/python login > Logging in to > :pserver:anonymous@cvs.python.sourceforge.net:2401/cvsroot/python CVS > password: > cvs login: authorization failed: server cvs.python.sourceforge.net rejected > access to /cvsroot/python for user anonymous > > Anybody else having this problem? Or is it just my end? Sourceforce had a scheduled outage, starting I believe at 10:00 Sunday, Pacific Time; it may be that some or all of SF's servers are still out and that this is causing the symptom you're observing. Alex From barry@python.org Mon Nov 18 14:53:23 2002 From: barry@python.org (Barry A. Warsaw) Date: Mon, 18 Nov 2002 09:53:23 -0500 Subject: [Python-Dev] Anon CVS References: <20021118075905.GC372@eecs.tufts.edu> Message-ID: <15832.65251.994061.492964@gargle.gargle.HOWL> >>>>> "AM" == Alex Martelli writes: AM> Sourceforce had a scheduled outage, starting I believe at AM> 10:00 Sunday, Pacific Time; it may be that some or all of SF's AM> servers are still out and that this is causing the symptom AM> you're observing. I was able to do an authenticated checkout at about 1am EST last night, but they're definitely mucking around with their cvs servers. I noticed the checkin message now seemed to come from bwarsaw@projects.sourceforge.net. I don't know if this is a permanent change, but it certainly mucked up the Mailman sender filters. :/ Checking now, it looks like non-anon cvs is working ok. -Barry From mal@lemburg.com Mon Nov 18 17:01:59 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 18 Nov 2002 18:01:59 +0100 Subject: [Python-Dev] Re: PyNumber_Check() References: Message-ID: <3DD91D07.3000704@lemburg.com> gvanrossum@projects.sourceforge.net wrote: > *** NEWS 18 Nov 2002 16:19:39 -0000 1.526 > --- NEWS 18 Nov 2002 16:27:16 -0000 1.527 > *************** > *** 694,698 **** > - PyNumber_Check() now returns true for string and unicode objects. > This is a result of these types having a partially defined > ! tp_as_number slot. > > - The string object's layout has changed: the pointer member > --- 694,700 ---- > - PyNumber_Check() now returns true for string and unicode objects. > This is a result of these types having a partially defined > ! tp_as_number slot. (This is not a feature, but an indication that > ! PyNumber_check() is not very useful to determine numeric behavior. > ! It may be deprecated.) Perhaps PyNumber_Check() should check that at least one of nb_int, nb_long, nb_float is available (in addition to the tp_as_number slot) ?! -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... 
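For the Python-level view of what is being debated here: ``operator.isNumberType()`` is a thin wrapper around ``PyNumber_Check()``, so the CVS change quoted in the NEWS entry above is visible from pure Python as well (behaviour as of that change; it may differ in other versions):

    >>> import operator
    >>> operator.isNumberType(3)
    True
    >>> operator.isNumberType(3.0)
    True
    >>> operator.isNumberType("3")   # str now has a partial tp_as_number slot
    True
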
Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From guido@python.org Mon Nov 18 17:24:42 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 18 Nov 2002 12:24:42 -0500 Subject: [Python-Dev] Re: PyNumber_Check() In-Reply-To: Your message of "Mon, 18 Nov 2002 18:01:59 +0100." <3DD91D07.3000704@lemburg.com> References: <3DD91D07.3000704@lemburg.com> Message-ID: <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> > gvanrossum@projects.sourceforge.net wrote: > > *** NEWS 18 Nov 2002 16:19:39 -0000 1.526 > > --- NEWS 18 Nov 2002 16:27:16 -0000 1.527 > > *************** > > *** 694,698 **** > > - PyNumber_Check() now returns true for string and unicode objects. > > This is a result of these types having a partially defined > > ! tp_as_number slot. > > > > - The string object's layout has changed: the pointer member > > --- 694,700 ---- > > - PyNumber_Check() now returns true for string and unicode objects. > > This is a result of these types having a partially defined > > ! tp_as_number slot. (This is not a feature, but an indication that > > ! PyNumber_check() is not very useful to determine numeric behavior. > > ! It may be deprecated.) > > Perhaps PyNumber_Check() should check that at least > one of nb_int, nb_long, nb_float is available (in addition to the > tp_as_number slot) ?! No, I think PyNumber_Check() should be deprecated. I don't think there's a valid use case for it. If you really want a fuzzy definition like "at least one of nb_int, nb_long, nb_float is available", you can test for that yourself -- IMO that doesn't really make it a number. PyNumber_Check() comes from an old era, when the presence or absence of the as_number "extension" to the type object was thought to be useful. If I had to do it over, I wouldn't provide PyNumber_Check() at all (nor PySequence_Check() nor PyMapping_Check()). --Guido van Rossum (home page: http://www.python.org/~guido/) From mal@lemburg.com Mon Nov 18 17:40:50 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 18 Nov 2002 18:40:50 +0100 Subject: [Python-Dev] Re: PyNumber_Check() References: <3DD91D07.3000704@lemburg.com> <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DD92622.90007@lemburg.com> Guido van Rossum wrote: >>gvanrossum@projects.sourceforge.net wrote: >> >>>*** NEWS 18 Nov 2002 16:19:39 -0000 1.526 >>>--- NEWS 18 Nov 2002 16:27:16 -0000 1.527 >>>*************** >>>*** 694,698 **** >>> - PyNumber_Check() now returns true for string and unicode objects. >>> This is a result of these types having a partially defined >>>! tp_as_number slot. >>> >>> - The string object's layout has changed: the pointer member >>>--- 694,700 ---- >>> - PyNumber_Check() now returns true for string and unicode objects. >>> This is a result of these types having a partially defined >>>! tp_as_number slot. (This is not a feature, but an indication that >>>! PyNumber_check() is not very useful to determine numeric behavior. >>>! It may be deprecated.) >> >>Perhaps PyNumber_Check() should check that at least >>one of nb_int, nb_long, nb_float is available (in addition to the >>tp_as_number slot) ?! > > > No, I think PyNumber_Check() should be deprecated. I don't think > there's a valid use case for it. If you really want a fuzzy > definition like "at least one of nb_int, nb_long, nb_float is > available", you can test for that yourself -- IMO that doesn't really > make it a number. Hmm, I use it in mxODBC to switch on Python object types. 
The above change causes the semantics to change as well: ... else if (PyNumber_Check(v)) { if (mxODBC_DateStructFromFloat(dbcs, v, param, row)) { if (!PyErr_Occurred()) PyErr_Format(PyExc_TypeError, "parameter %i in row %i must be a mxODBC date float", column, row); goto onError; } } else if (PyString_Check(v)) { param->data = (SQLPOINTER)(PyString_AS_STRING(v)); param->data_len = strlen(PyString_AS_STRING(v)); param->ctype = SQL_C_CHAR; } With the new semantics, the PyNumber_Check() test would succeed for strings, making the second test a no-op. I would expect that this kind of switching on types is not uncommon for code which works in polymorphic ways. > PyNumber_Check() comes from an old era, when the presence or absence > of the as_number "extension" to the type object was thought to be > useful. If I had to do it over, I wouldn't provide PyNumber_Check() > at all (nor PySequence_Check() nor PyMapping_Check()). Ok, but why not fix those APIs to mean something more useful than deprecating them ? E.g. I would expect that a number is usable as input to float(), int() or long() and that a mapping knows at least about __getitem__. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From guido@python.org Mon Nov 18 18:02:07 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 18 Nov 2002 13:02:07 -0500 Subject: [Python-Dev] Re: PyNumber_Check() In-Reply-To: Your message of "Mon, 18 Nov 2002 18:40:50 +0100." <3DD92622.90007@lemburg.com> References: <3DD91D07.3000704@lemburg.com> <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> <3DD92622.90007@lemburg.com> Message-ID: <200211181802.gAII28306798@pcp02138704pcs.reston01.va.comcast.net> > > No, I think PyNumber_Check() should be deprecated. I don't think > > there's a valid use case for it. If you really want a fuzzy > > definition like "at least one of nb_int, nb_long, nb_float is > > available", you can test for that yourself -- IMO that doesn't really > > make it a number. > > Hmm, I use it in mxODBC to switch on Python object types. > The above change causes the semantics to change as well: > > ... > else if (PyNumber_Check(v)) { When you pass a class instance that doesn't support __float__, you'll get an error from the code below: > if (mxODBC_DateStructFromFloat(dbcs, v, param, row)) { > if (!PyErr_Occurred()) > PyErr_Format(PyExc_TypeError, > "parameter %i in row %i must be a mxODBC date float", > column, row); > goto onError; > } > } > else if (PyString_Check(v)) { > param->data = (SQLPOINTER)(PyString_AS_STRING(v)); > param->data_len = strlen(PyString_AS_STRING(v)); > param->ctype = SQL_C_CHAR; > } > > With the new semantics, the PyNumber_Check() test would > succeed for strings, making the second test a no-op. > > I would expect that this kind of switching on types is > not uncommon for code which works in polymorphic ways. Alas, I agree with this expectation, even though I believe that such code is based on a misunderstanding. :-( > > PyNumber_Check() comes from an old era, when the presence or absence > > of the as_number "extension" to the type object was thought to be > > useful. If I had to do it over, I wouldn't provide PyNumber_Check() > > at all (nor PySequence_Check() nor PyMapping_Check()). 
> > Ok, but why not fix those APIs to mean something more > useful than deprecating them ? E.g. I would expect that > a number is usable as input to float(), int() or long() > and that a mapping knows at least about __getitem__. Maybe, as long as we all agree that that's *exactly* what they check for, and as long as we agree that there may be overlapping areas (where two or more of these will return True). PyMapping_Check() returns true for a variety of non-mappings like strings, lists, and all classic instances. --Guido van Rossum (home page: http://www.python.org/~guido/) From just@letterror.com Mon Nov 18 18:44:07 2002 From: just@letterror.com (Just van Rossum) Date: Mon, 18 Nov 2002 19:44:07 +0100 Subject: [Python-Dev] Plea: can modulefinder.py move to the library? Message-ID: Is there any reason not to move freeze's modulefinder.py to the library? It's a very useful module, and it's a shame it's not available in non-source distributions. It seems odd that utilities that use it (like py2exe) must ship it themselves, since some otherwise work perfectly with a binary distribution. I'm currently working on a py2exe-like tool for MacOSX and it would've been nice if I could have just done "import modulefinder"... Just From guido@python.org Mon Nov 18 18:47:25 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 18 Nov 2002 13:47:25 -0500 Subject: [Python-Dev] Plea: can modulefinder.py move to the library? In-Reply-To: Your message of "Mon, 18 Nov 2002 19:44:07 +0100." References: Message-ID: <200211181847.gAIIlPk21006@pcp02138704pcs.reston01.va.comcast.net> > Is there any reason not to move freeze's modulefinder.py to the > library? It's a very useful module, and it's a shame it's not > available in non-source distributions. It seems odd that utilities > that use it (like py2exe) must ship it themselves, since some > otherwise work perfectly with a binary distribution. I'm currently > working on a py2exe-like tool for MacOSX and it would've been nice > if I could have just done "import modulefinder"... IMO it needs work before it's suitable for the standad library: e.g. it contains a bunch of print statements for reporting that aren't always appropriate (but are fine in the context of freeze.py) and there's no documentation. --Guido van Rossum (home page: http://www.python.org/~guido/) From theller@python.net Mon Nov 18 19:20:40 2002 From: theller@python.net (Thomas Heller) Date: 18 Nov 2002 20:20:40 +0100 Subject: [Python-Dev] Plea: can modulefinder.py move to the library? In-Reply-To: <200211181847.gAIIlPk21006@pcp02138704pcs.reston01.va.comcast.net> References: <200211181847.gAIIlPk21006@pcp02138704pcs.reston01.va.comcast.net> Message-ID: > > Is there any reason not to move freeze's modulefinder.py to the > > library? It's a very useful module, and it's a shame it's not > > available in non-source distributions. It seems odd that utilities > > that use it (like py2exe) must ship it themselves, since some > > otherwise work perfectly with a binary distribution. I'm currently > > working on a py2exe-like tool for MacOSX and it would've been nice > > if I could have just done "import modulefinder"... > > IMO it needs work before it's suitable for the standad library: > e.g. it contains a bunch of print statements for reporting that aren't > always appropriate (but are fine in the context of freeze.py) and > there's no documentation. 
That doesn't sound so negative anymore ;-) Thomas From guido@python.org Mon Nov 18 19:23:02 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 18 Nov 2002 14:23:02 -0500 Subject: [Python-Dev] Plea: can modulefinder.py move to the library? In-Reply-To: Your message of "18 Nov 2002 20:20:40 +0100." References: <200211181847.gAIIlPk21006@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211181923.gAIJN2r32024@pcp02138704pcs.reston01.va.comcast.net> > > > Is there any reason not to move freeze's modulefinder.py to the > > > library? It's a very useful module, and it's a shame it's not > > > available in non-source distributions. It seems odd that utilities > > > that use it (like py2exe) must ship it themselves, since some > > > otherwise work perfectly with a binary distribution. I'm currently > > > working on a py2exe-like tool for MacOSX and it would've been nice > > > if I could have just done "import modulefinder"... > > > > IMO it needs work before it's suitable for the standad library: > > e.g. it contains a bunch of print statements for reporting that aren't > > always appropriate (but are fine in the context of freeze.py) and > > there's no documentation. > > That doesn't sound so negative anymore ;-) Well, the more people want this, the more I'm inclined to let them fix it. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From DavidA@ActiveState.com Sat Nov 16 21:00:09 2002 From: DavidA@ActiveState.com (David Ascher) Date: Sat, 16 Nov 2002 13:00:09 -0800 Subject: [Python-Dev] Sigh... Message-ID: <3DD6B1D9.1040106@ActiveState.com> I accidentally deleted a mail folder which includes all of the "important, unprocessed" emails. If there's something that you sent me that needs my attention or that includes important schedule or other information, please resend it. I'll be looking for a backup, but I'm not 100% sure that I'll find one. --david From just@letterror.com Mon Nov 18 22:52:51 2002 From: just@letterror.com (Just van Rossum) Date: Mon, 18 Nov 2002 23:52:51 +0100 Subject: [Python-Dev] Plea: can modulefinder.py move to the library? In-Reply-To: <200211181847.gAIIlPk21006@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum wrote: > IMO it needs work before it's suitable for the standad library: > e.g. it contains a bunch of print statements for reporting that aren't > always appropriate (but are fine in the context of freeze.py) and > there's no documentation. Documentation: Ok, that'll need to been done. Regarding the print statements: most are in ModuleFinder.msg() and ModuleFinder.report(), and then there are two in _try_registry(). Apart from the latter, if msg() and reprot() are not appropriate for an application, they can easily be overridden. But: by default, self.debug is set to zero, which means that no msg's are ever printed. I think the module as is is just fine (apart perhaps from the _try_registry print statements), it's a meta tool after all. If you don't agree: how should it be changed specifically? Just From guido@python.org Mon Nov 18 22:54:56 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 18 Nov 2002 17:54:56 -0500 Subject: [Python-Dev] Plea: can modulefinder.py move to the library? In-Reply-To: Your message of "Mon, 18 Nov 2002 23:52:51 +0100." References: Message-ID: <200211182254.gAIMsv700437@pcp02138704pcs.reston01.va.comcast.net> > Documentation: Ok, that'll need to been done. 
> > Regarding the print statements: most are in ModuleFinder.msg() and > ModuleFinder.report(), and then there are two in > _try_registry(). Apart from the latter, if msg() and reprot() are > not appropriate for an application, they can easily be overridden. > > But: by default, self.debug is set to zero, which means that no > msg's are ever printed. I think the module as is is just fine (apart > perhaps from the _try_registry print statements), it's a meta tool > after all. If you don't agree: how should it be changed > specifically? Ok, make it so! --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Tue Nov 19 08:22:52 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 19 Nov 2002 09:22:52 +0100 Subject: [Python-Dev] bsddb3 imported Message-ID: I have now imported bsddb3 3.4.0. Remember to cvs up -d to get Lib/bsddb. The original bsddb module is not built anymore automatically, if it was, it would compile into a module bsddb185. I haven't updated the Windows build process: I recommend that bsddbmodule.c is not built anymore. Instead, _bsddb.c must be compiled into _bsddb.pyd. Since this code does not work with Sleepycat 4.1, I recommend that Sleepycat 4.0 is used, available from http://www.sleepycat.com/update/snapshot/db-4.0.14.zip Barry wants the test suite to be incorporated also, this will happen after we decide on the specifics. I don't have plans to incorporate any of the documentation at the moment. Regards, Martin From barry@python.org Tue Nov 19 14:06:48 2002 From: barry@python.org (Barry A. Warsaw) Date: Tue, 19 Nov 2002 09:06:48 -0500 Subject: [Python-Dev] bsddb3 imported References: Message-ID: <15834.17784.35358.906103@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: MvL> I have now imported bsddb3 3.4.0. Awesome! I'll try to do some testing today. BTW, IIRC we might need to grab the cvs version of _db.c from pybsddb because there were some bugs in the released 3.4.0 version. MvL> Barry wants the test suite to be incorporated also, this will MvL> happen after we decide on the specifics. The pybsddb test suite is too thorough to just throw away. I was thinking we'd incorporate it, but run it only with a regrtest -u option. Way to go Martin! -Barry From martin@v.loewis.de Tue Nov 19 14:21:41 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 19 Nov 2002 15:21:41 +0100 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <15834.17784.35358.906103@gargle.gargle.HOWL> References: <15834.17784.35358.906103@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > MvL> Barry wants the test suite to be incorporated also, this will > MvL> happen after we decide on the specifics. > > The pybsddb test suite is too thorough to just throw away. I was > thinking we'd incorporate it, but run it only with a regrtest -u > option. That sounds find. The other question is how to technically incorporate the tests. They currently live in 14 files, plus test_all, plus unittest. How am I supposed to arrange them into the Python CVS? Make a single file? Make a subdirectory? If so, how will regrtest.py find them? Put them all along with the other tests? Regards, Martin From barry@python.org Tue Nov 19 14:32:06 2002 From: barry@python.org (Barry A. 
Warsaw) Date: Tue, 19 Nov 2002 09:32:06 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.17784.35358.906103@gargle.gargle.HOWL> Message-ID: <15834.19302.710499.876242@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: >> MvL> Barry wants the test suite to be incorporated also, this >> will MvL> happen after we decide on the specifics. >> The pybsddb test suite is too thorough to just throw away. I >> was thinking we'd incorporate it, but run it only with a >> regrtest -u option. MvL> That sounds find. The other question is how to technically MvL> incorporate the tests. They currently live in 14 files, plus MvL> test_all, plus unittest. How am I supposed to arrange them MvL> into the Python CVS? Make a single file? Make a subdirectory? MvL> If so, how will regrtest.py find them? Put them all along MvL> with the other tests? Off the top of my head (because I have to run in a moment), create a test/ subdir inside Lib/bsddb and drop the tests there. Hack Lib/test/test_bsddb.py to run Lib/bsddb/test/testall.py when -u bsddb is given. I do something similar (w/o the -u) for the email pkg tests. -Barry From skip@pobox.com Tue Nov 19 15:05:02 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 19 Nov 2002 09:05:02 -0600 Subject: [Python-Dev] bsddb3 imported In-Reply-To: References: Message-ID: <15834.21278.432029.154733@montanaro.dyndns.org> Martin> I have now imported bsddb3 3.4.0. I take it that database files created before the switch will still work as long as the same underlying version of the Sleepycat libraries is used, yes? Skip From barry@python.org Tue Nov 19 15:41:02 2002 From: barry@python.org (Barry A. Warsaw) Date: Tue, 19 Nov 2002 10:41:02 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.21278.432029.154733@montanaro.dyndns.org> Message-ID: <15834.23438.190916.311852@gargle.gargle.HOWL> >>>>> "SM" == Skip Montanaro writes: Martin> I have now imported bsddb3 3.4.0. SM> I take it that database files created before the switch will SM> still work as long as the same underlying version of the SM> Sleepycat libraries is used, yes? Depends. API version <> file format version. The good news is that Berkeley will complain loudly if you're incompatible, and there are tools for upgrading your database files. -Barry From martin@v.loewis.de Tue Nov 19 15:51:11 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Tue, 19 Nov 2002 16:51:11 +0100 Subject: [Python-Dev] bsddb3 imported References: <15834.21278.432029.154733@montanaro.dyndns.org> Message-ID: <002701c28fe3$7c0d74c0$4a1be8d9@mira> > I take it that database files created before the switch will still work as > long as the same underlying version of the Sleepycat libraries is used, yes? Correct. Likewise, code should continue to work unmodified, since bsddb3 is API compatible with the old bsddb module. Regards, Martin From martin@v.loewis.de Tue Nov 19 15:52:30 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Tue, 19 Nov 2002 16:52:30 +0100 Subject: [Python-Dev] bsddb3 imported References: <15834.21278.432029.154733@montanaro.dyndns.org> <15834.23438.190916.311852@gargle.gargle.HOWL> Message-ID: <003b01c28fe3$ab291660$4a1be8d9@mira> > SM> I take it that database files created before the switch will > SM> still work as long as the same underlying version of the > SM> Sleepycat libraries is used, yes? > > Depends. API version <> file format version. If it is the *very* same underlying version of the Sleepycat libraries, how could the file format be different? 
Regards, Martin From barry@python.org Tue Nov 19 15:54:00 2002 From: barry@python.org (Barry A. Warsaw) Date: Tue, 19 Nov 2002 10:54:00 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.21278.432029.154733@montanaro.dyndns.org> <15834.23438.190916.311852@gargle.gargle.HOWL> <003b01c28fe3$ab291660$4a1be8d9@mira> Message-ID: <15834.24216.109938.105356@gargle.gargle.HOWL> >>>>> "MvL" =3D=3D Martin v L=F6wis writes: >> SM> I take it that database files created before the switch >> will SM> still work as long as the same underlying version of >> the SM> Sleepycat libraries is used, yes? >> Depends. API version <> file format version. MvL> If it is the *very* same underlying version of the Sleepycat MvL> libraries, how could the file format be different? It can't. -Barry From dave@boost-consulting.com Tue Nov 19 15:33:09 2002 From: dave@boost-consulting.com (David Abrahams) Date: Tue, 19 Nov 2002 10:33:09 -0500 Subject: [Python-Dev] Licensing question Message-ID: Hi, Recently we've been trying to shore up copyrights in Bost code; some of our source files have no copyright notice at all. Boost.Python includes a modified version of Python.h to work around some C++ interoperability bugs in Python 2.2.1's header. Lawyers in companies that use Boost would feel a lot more comfortable if the file included a copyright notice. My usual practice is to write: // Copyright David Abrahams 2002. Permission to copy, use, // modify, sell and distribute this software is granted provided this // copyright notice appears in all copies. This software is provided // "as is" without express or implied warranty, and with no claim as // to its suitability for any purpose. At the top of every C++ source file. Is there any reason not to do that with our modified Python.h? If so, what should I put there? Thanks, Dave -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From skip@pobox.com Tue Nov 19 15:56:33 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 19 Nov 2002 09:56:33 -0600 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <15834.23438.190916.311852@gargle.gargle.HOWL> References: <15834.21278.432029.154733@montanaro.dyndns.org> <15834.23438.190916.311852@gargle.gargle.HOWL> Message-ID: <15834.24369.6657.739334@montanaro.dyndns.org> Martin> I have now imported bsddb3 3.4.0. SM> I take it that database files created before the switch will still SM> work as long as the same underlying version of the Sleepycat SM> libraries is used, yes? BAW> Depends. API version <> file format version. BAW> The good news is that Berkeley will complain loudly if you're BAW> incompatible, and there are tools for upgrading your database BAW> files. Yes, I realize API version != file version. What I was getting at was that if I did something like db = bsddb.hashopen("foo", "c") db["1"] = "1" db.close() under the old bsddb module using Sleepycat 4.0.14, can I be assured that db = bsddb.hashopen("foo") print db["1"] db.close() will work with the new bsddb module? There should be no surprises in the common case, yes? Skip From skip@pobox.com Tue Nov 19 15:57:29 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 19 Nov 2002 09:57:29 -0600 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <002701c28fe3$7c0d74c0$4a1be8d9@mira> References: <15834.21278.432029.154733@montanaro.dyndns.org> <002701c28fe3$7c0d74c0$4a1be8d9@mira> Message-ID: <15834.24425.439388.763126@montanaro.dyndns.org> >> ... 
database files created before the switch will still work ... ? Martin> Correct. Likewise, code should continue to work unmodified, Martin> since bsddb3 is API compatible with the old bsddb module. Excellent. Precisely what I wanted to hear. S From guido@python.org Tue Nov 19 16:12:58 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 19 Nov 2002 11:12:58 -0500 Subject: [Python-Dev] Licensing question In-Reply-To: Your message of "Tue, 19 Nov 2002 10:33:09 EST." References: Message-ID: <200211191612.gAJGCwW29559@odiug.zope.com> > Recently we've been trying to shore up copyrights in Bost code; some > of our source files have no copyright notice at all. Boost.Python > includes a modified version of Python.h to work around some C++ > interoperability bugs in Python 2.2.1's header. Lawyers in companies > that use Boost would feel a lot more comfortable if the file included > a copyright notice. My usual practice is to write: > > // Copyright David Abrahams 2002. Permission to copy, use, > // modify, sell and distribute this software is granted provided this > // copyright notice appears in all copies. This software is provided > // "as is" without express or implied warranty, and with no claim as > // to its suitability for any purpose. > > At the top of every C++ source file. Is there any reason not to do > that with our modified Python.h? If so, what should I put there? As a derived work, it is indeed your copyright, but the original is licensed to you under the conditions of the PSF license: http://www.python.org/2.2.2/license.html Relevant are in particular: """ 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 2.2.2 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved" are retained in Python 2.2.2 alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python 2.2.2 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 2.2.2. """ I'd say you need to include at least the PSF copyright notice quoted there and a comment explaining how your Python.h differs from the original. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim@zope.com Tue Nov 19 16:36:47 2002 From: tim@zope.com (Tim Peters) Date: Tue, 19 Nov 2002 11:36:47 -0500 Subject: [Python-Dev] bsddb broken on Windows Message-ID: Rev 1.38 of bsddbmodule.c changed the name of the module init function from initbsddb to initbsddb185 I imagine that's what causes >>> import bsddb Traceback (most recent call last): File "", line 1, in ? ImportError: dynamic module does not define init function (initbsddb) >>> on Windows. Who understands what was intended here? 
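Piecing this together with Martin's announcement earlier in the thread, the intended arrangement is roughly the following (module names as he describes them; the fallback at the end is illustrative, not an official recipe):

# Lib/bsddb/    -- new package; "import bsddb" should resolve to this
# _bsddb.pyd    -- new extension built from _bsddb.c, used by the package
# bsddb185.pyd  -- the old bsddbmodule.c, if it is built at all
#
# The traceback above means the import is still finding a bsddb.pyd built
# from the renamed bsddbmodule.c: a dynamic module must define init<name>
# for the name it is imported under, and rev 1.38 renamed that function
# to initbsddb185.
try:
    import bsddb                  # the new package
except ImportError:
    import bsddb185 as bsddb      # the legacy module under its new name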
From dave@boost-consulting.com Tue Nov 19 16:58:29 2002 From: dave@boost-consulting.com (David Abrahams) Date: Tue, 19 Nov 2002 11:58:29 -0500 Subject: [Python-Dev] Licensing question In-Reply-To: <200211191612.gAJGCwW29559@odiug.zope.com> (Guido van Rossum's message of "Tue, 19 Nov 2002 11:12:58 -0500") References: <200211191612.gAJGCwW29559@odiug.zope.com> Message-ID: Guido van Rossum writes: > I'd say you need to include at least the PSF copyright notice quoted > there and a comment explaining how your Python.h differs from the > original. Thanks! I don't think I need my own copyright on it. Here's what I did; does it look OK? http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/boost/boost/boost/python/detail/python22_fixed.h?rev=HEAD&content-type=text/vnd.viewcvs-markup Thanks again, Dave -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From guido@python.org Tue Nov 19 17:02:30 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 19 Nov 2002 12:02:30 -0500 Subject: [Python-Dev] Licensing question In-Reply-To: Your message of "Tue, 19 Nov 2002 11:58:29 EST." References: <200211191612.gAJGCwW29559@odiug.zope.com> Message-ID: <200211191702.gAJH2U130007@odiug.zope.com> > > I'd say you need to include at least the PSF copyright notice quoted > > there and a comment explaining how your Python.h differs from the > > original. > > Thanks! I don't think I need my own copyright on it. Here's what I > did; does it look OK? > > http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/boost/boost/boost/python/detail/python22_fixed.h?rev=HEAD&content-type=text/vnd.viewcvs-markup Sure. I presume someone is incorporating these needed changes into Python.h for later Python versions that 2.2.1? --Guido van Rossum (home page: http://www.python.org/~guido/) From dave@boost-consulting.com Tue Nov 19 17:05:51 2002 From: dave@boost-consulting.com (David Abrahams) Date: Tue, 19 Nov 2002 12:05:51 -0500 Subject: [Python-Dev] Licensing question In-Reply-To: <200211191702.gAJH2U130007@odiug.zope.com> (Guido van Rossum's message of "Tue, 19 Nov 2002 12:02:30 -0500") References: <200211191612.gAJGCwW29559@odiug.zope.com> <200211191702.gAJH2U130007@odiug.zope.com> Message-ID: Guido van Rossum writes: >> > I'd say you need to include at least the PSF copyright notice quoted >> > there and a comment explaining how your Python.h differs from the >> > original. >> >> Thanks! I don't think I need my own copyright on it. Here's what I >> did; does it look OK? >> >> http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/boost/boost/boost/python/detail/python22_fixed.h?rev=HEAD&content-type=text/vnd.viewcvs-markup > > Sure. > > I presume someone is incorporating these needed changes into Python.h > for later Python versions that 2.2.1? I think we did that already for 2.2.2 -- David Abrahams dave@boost-consulting.com * http://www.boost-consulting.com Boost support, enhancements, training, and commercial distribution From fredrik@pythonware.com Tue Nov 19 17:17:42 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Tue, 19 Nov 2002 18:17:42 +0100 Subject: [Python-Dev] Anon CVS References: <20021118075905.GC372@eecs.tufts.edu> <15832.65251.994061.492964@gargle.gargle.HOWL> Message-ID: <025601c28fef$94c26a00$ced241d5@hagrid> barry wrote: > I was able to do an authenticated checkout at about 1am EST last > night, but they're definitely mucking around with their cvs servers. 
> I noticed the checkin message now seemed to come from > bwarsaw@projects.sourceforge.net. I don't know if this is a permanent > change, but it certainly mucked up the Mailman sender filters. :/ > > Checking now, it looks like non-anon cvs is working ok. I haven't been able to connect since the last wednesday or so (cvs update just hangs until it times out). If this persists, someone else will have to check in my SRE patches... From neal@metaslash.com Tue Nov 19 17:25:26 2002 From: neal@metaslash.com (Neal Norwitz) Date: Tue, 19 Nov 2002 12:25:26 -0500 Subject: [Python-Dev] Anon CVS In-Reply-To: <025601c28fef$94c26a00$ced241d5@hagrid> References: <15832.65251.994061.492964@gargle.gargle.HOWL> <025601c28fef$94c26a00$ced241d5@hagrid> Message-ID: <20021119172526.GS17931@epoch.metaslash.com> On Tue, Nov 19, 2002 at 06:17:42PM +0100, Fredrik Lundh wrote: > barry wrote: > > > I was able to do an authenticated checkout at about 1am EST last > > night, but they're definitely mucking around with their cvs servers. > > I noticed the checkin message now seemed to come from > > bwarsaw@projects.sourceforge.net. I don't know if this is a permanent > > change, but it certainly mucked up the Mailman sender filters. :/ > > > > Checking now, it looks like non-anon cvs is working ok. > > I haven't been able to connect since the last wednesday or > so (cvs update just hangs until it times out). > > If this persists, someone else will have to check in my SRE > patches... You may need to force protocol version 1. You can do this by adding/modifying a stanza in your .ssh/config file: host cvs.python.sourceforge.net Protocol 1,2 Neal From martin@v.loewis.de Tue Nov 19 17:56:56 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 19 Nov 2002 18:56:56 +0100 Subject: [Python-Dev] bsddb broken on Windows In-Reply-To: References: Message-ID: "Tim Peters" writes: > Who understands what was intended here? See http://mail.python.org/pipermail/python-dev/2002-November/030247.html Martin From martin@v.loewis.de Tue Nov 19 18:03:24 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 19 Nov 2002 19:03:24 +0100 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <15834.19302.710499.876242@gargle.gargle.HOWL> References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > Off the top of my head (because I have to run in a moment), create a > test/ subdir inside Lib/bsddb and drop the tests there. Hack > Lib/test/test_bsddb.py to run Lib/bsddb/test/testall.py when -u bsddb > is given. > > I do something similar (w/o the -u) for the email pkg tests. Ok, done. The test suite produces a number of errors for me, namely multiple occurrences of /home/martin/work/pybsd/Lib/bsddb/test/test_compat.py:18: RuntimeWarning: mktemp is a potential security risk to your program self.filename = tempfile.mktemp() and of Exception bsddb3._db.DBError: (0, 'DBCursor object has been closed') in > ignored Exception exceptions.AttributeError: "bsdTableDB instance has no attribute 'db'" in > ignored Running the testsuite with -v gives additional errors (9 of 177 tests fail). These failures are for test01_both, test02_dbobj_dict_interface, test01, and others, and they all have their traceback end with DBNoSuchFileError: (2, 'No such file or directory -- db_home/__db.001: No such file or directory') Greg, can you please take a look? It may be that I made a mistake when incorporating bsddb3, or it may be an error in the package itself. 
Any insights appreciated. Regards, Martin From barry@python.org Tue Nov 19 18:12:39 2002 From: barry@python.org (Barry A. Warsaw) Date: Tue, 19 Nov 2002 13:12:39 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> Message-ID: <15834.32535.606078.681754@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: MvL> Exception bsddb3._db.DBError: (0, 'DBCursor object has been MvL> closed') in > MvL> ignored Exception exceptions.AttributeError: "bsdTableDB MvL> instance has no attribute 'db'" in bsdTableDB.__del__ of 0x4035096c>> ignored I think this one at least is fixed in pybsddb's cvs. Can you update to the latest cvs? -Barry From martin@v.loewis.de Tue Nov 19 18:22:39 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 19 Nov 2002 19:22:39 +0100 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <15834.32535.606078.681754@gargle.gargle.HOWL> References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> <15834.32535.606078.681754@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > I think this one at least is fixed in pybsddb's cvs. Can you update > to the latest cvs? Not if I can avoid it. If I have to merge now, how can I merge the next time? Copying over the files won't work (because of the few changes I had to make), so I will have to apply patches. But how can I produce patches if I have to diff against some CVS snapshot? I don't want to keep the snapshot around, because that would mean nobody but me could merge the code the next time. Of course, if there was a 3.4.1 release or some such, merging would be easier: I would now apply the diffs between 3.4.0 and 3.4.1, and the next time the diffs between 3.4.1, and the then-current release. Regards, Martin From skip@pobox.com Tue Nov 19 19:02:10 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 19 Nov 2002 13:02:10 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing Message-ID: <15834.35506.842761.101061@montanaro.dyndns.org> Awhile ago I volunteered to fix the use of OPT within configure. The problem is that OPT is overloaded with two different kinds of things, optimization/debug flags and other flags which are required to get code to compile. The problem is that if there are any flags which need to be set to simply get Python to compile, if you override OPT when running configure or make, you may wipe out compiler flags necessary to simply compile the interpreter. I finally got a round to finishing that off this morning. I split OPT into two variables, OPT and BASECFLAGS. OPT can be modified, while BASECFLAGS should be left alone. You can pass BASECFLAGS to configure. If it decides you need some other flags they just get tacked on. On the other hand, if you pass OPT to configure, it generally will prevent the configure script from doing anything else to it. (There is one bit of code I didn't understand too well which does set OPT unconditionally on BeOS. I left that alone, but I think it probably needs to be changed. As they say, "it works for me", however all I've tested it on is MacOSX, and only a Unix build (I still don't know how to do the fancy schmansy Mac builds). The following issues remain: * For some reason the flags which wind up in BASECFLAGS are getting duplicated on the gcc command line. 
I'm not sure where this is happening, but Makefile.pre and Makefile both wind up with these definitions: OPT= -Wno-long-double -no-cpp-precomp -g -Wall -Wstrict-prototypes BASECFLAGS= -Wno-long-double -no-cpp-precomp I don't merge BASECFLAGS into OPT anywhere, and the echo statements in configure report: Base CFLAGS: < -Wno-long-double -no-cpp-precomp> OPT: <-g -Wall -Wstrict-prototypes> * There are three XXX comments in configure.in which could use some investigation. Two are related to condensing two bits of code which seem to overlap a bit. The third is related to Monterey. This small block of code: # The current (beta) Monterey compiler dies with optimizations case $ac_sys_system in Monterey*) OPT="" ;; esac unconditionally wipes out OPT, but doesn't say what Monterey is or give any indication that this setting needs to be revisited. Is "Monterey" still in beta test? "cvs annotate" indicates that Trent Mick added this in August 2000, so I assume it's something to do with Win64 (the OS? MSVC?), but it would be nice to know if this is still needed. I can use some extra eyeballs if you have a few moments to spare. To save bandwidth on the group, I opened a patch at http://python.org/sf/640843 The uploaded file includes diffs for Makefile.pre.in, configure.in and configure, so you don't need to have a recent version of autoconf to try it out. I uploaded the context diffs using Internet Explorer on my Mac. I just downloaded it to my Linux laptop using Opera and noticed a bit of corruption (NULs at the start and the end). The file is fine on my Mac. Downloading on the Mac with IE generates an okay file, but Mac+Opera also results in corruption. If you can't get a clean file, let me know and I'll be happy to mail it to you. I have no idea what caused this but would appreciate some feedback on efforts to download it, even if you don't have time to test it more extensively. Skip From skip@pobox.com Tue Nov 19 19:10:34 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 19 Nov 2002 13:10:34 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15834.35506.842761.101061@montanaro.dyndns.org> References: <15834.35506.842761.101061@montanaro.dyndns.org> Message-ID: <15834.36010.859146.834474@montanaro.dyndns.org> Skip> The problem is ... The problem is ... Sorry for this editorial gaff. Didn't get enough sleep last night. Skip> As they say, "it works for me", however all I've tested it on is Skip> MacOSX, and only a Unix build (I still don't know how to do the Skip> fancy schmansy Mac builds). I've also tried it out on Mandrake Linux 8.1. Seems to work there as well. Skip From barry@python.org Tue Nov 19 20:59:44 2002 From: barry@python.org (Barry A. Warsaw) Date: Tue, 19 Nov 2002 15:59:44 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> <15834.32535.606078.681754@gargle.gargle.HOWL> Message-ID: <15834.42560.409640.287545@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: >> I think this one at least is fixed in pybsddb's cvs. Can you >> update to the latest cvs? MvL> Not if I can avoid it. If I have to merge now, how can I MvL> merge the next time? I thought we decided that once we get a blessed version in the Python cvs tree, we should /stop/ any development over in pybsddb.sf.net? If not, why did we merge? MvL> Copying over the files won't work (because of the few changes MvL> I had to make), so I will have to apply patches. 
But how can MvL> I produce patches if I have to diff against some CVS MvL> snapshot? I don't want to keep the snapshot around, because MvL> that would mean nobody but me could merge the code the next MvL> time. MvL> Of course, if there was a 3.4.1 release or some such, merging MvL> would be easier: I would now apply the diffs between 3.4.0 MvL> and 3.4.1, and the next time the diffs between 3.4.1, and the MvL> then-current release. Actually, what's in pybsddb's cvs reports itself as 3.4.2. I've been using that extensively and it's stable. Greg, perhaps you can make a 3.4.2 release and mark it as the last that is going to come out of pybsddb.sf.net. Then Martin can pull that into Python cvs and future development would out of the Python project. Pybsddb would stick around if you want to do future distutils releases (e.g. for BDB 4.1.24). Anyone who has commit privs to pybsddb but not python can probably be given access to the latter. This is basically how I run the mimelib/email stuff and it's not too painful. I can help with details if necessary. -Barry From martin@v.loewis.de Tue Nov 19 21:25:08 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Tue, 19 Nov 2002 22:25:08 +0100 Subject: [Python-Dev] bsddb3 imported References: <15834.17784.35358.906103@gargle.gargle.HOWL><15834.19302.710499.876242@gargle.gargle.HOWL><15834.32535.606078.681754@gargle.gargle.HOWL> <15834.42560.409640.287545@gargle.gargle.HOWL> Message-ID: <003501c29012$22d746e0$bf20e8d9@mira> > I thought we decided that once we get a blessed version in the Python > cvs tree, we should /stop/ any development over in pybsddb.sf.net? If > not, why did we merge? If that is what we decided, I'm happy to merge the current CVS. > Greg, perhaps you can make a 3.4.2 release and mark it as the last > that is going to come out of pybsddb.sf.net. I'll wait a few days to see whether a release is upcoming; if not, I just take the current CVS. > Then Martin can pull > that into Python cvs and future development would out of the Python > project. Pybsddb would stick around if you want to do future > distutils releases (e.g. for BDB 4.1.24). Anyone who has commit privs > to pybsddb but not python can probably be given access to the latter. That sounds good. Martin From barry@python.org Tue Nov 19 21:43:36 2002 From: barry@python.org (Barry A. Warsaw) Date: Tue, 19 Nov 2002 16:43:36 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> <15834.32535.606078.681754@gargle.gargle.HOWL> <15834.42560.409640.287545@gargle.gargle.HOWL> <003501c29012$22d746e0$bf20e8d9@mira> Message-ID: <15834.45192.680362.350038@gargle.gargle.HOWL> Cool. From tim.one@comcast.net Wed Nov 20 01:40:27 2002 From: tim.one@comcast.net (Tim Peters) Date: Tue, 19 Nov 2002 20:40:27 -0500 Subject: [Python-Dev] bsddb broken on Windows In-Reply-To: Message-ID: [martin@v.loewis.de] > See > > http://mail.python.org/pipermail/python-dev/2002-November/030247.html Thanks! I missed that the first time around -- I confess I'm skipping a lot of email lately. Let me know when the elves finish fixing it all on Windows <0.7 wink>. 
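For reference, the Lib/test hook Barry sketched earlier in the thread -- the thing that decides whether the new bsddb tests run at all -- would look roughly like this (a hypothetical sketch; the actual file and entry-point names may differ):

# Lib/test/test_bsddb3.py -- skipped unless "regrtest.py -u bsddb" is given
from test import test_support

test_support.requires('bsddb')

from bsddb.test import test_all       # driver for the pybsddb test files

def test_main():
    test_all.main()                   # hypothetical entry point

if __name__ == '__main__':
    test_main()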
From jdhunter@ace.bsd.uchicago.edu Wed Nov 20 01:48:33 2002 From: jdhunter@ace.bsd.uchicago.edu (John Hunter) Date: Tue, 19 Nov 2002 19:48:33 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15834.36010.859146.834474@montanaro.dyndns.org> (Skip Montanaro's message of "Tue, 19 Nov 2002 13:10:34 -0600") References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> Message-ID: >>>>> "Skip" == Skip Montanaro writes: Skip> I uploaded the context diffs using Internet Explorer on my Skip> Mac. I just downloaded it to my Linux laptop using Opera Skip> and noticed a bit of corruption (NULs at the start and the Skip> end). The file is fine on my Mac. Downloading on the Mac Skip> with IE generates an okay file, but Mac+Opera also results Skip> in corruption. If you can't get a clean file, let me know Skip> and I'll be happy to mail it to you. I got these too, both in opera and with wget. But I edited them out with emacs and the patch applied correctly. Skip> I can use some extra eyeballs if you have a few moments to Skip> spare. To save bandwidth on the group, I opened a patch at I compiled your patched CVS under a few systems I have available. I just did the default thing each time (./configure; make; make test). If there is something more strenuous you'd like me to try, I still have the build dirs. Everything went fairly smoothly: python built on every system, though there were some failed and/or skipped tests on each platform. I have the platform/OS/gcc version below, as well as the 'make test' summary. test_signal hangs on the sun solaris platform; I have no idea what is causing this. Tomorrow I may get a chance to test it on an old IRIX box. It'll probably take a day to compile though, if I'm lucky. John Hunter ==================================================================== SunOS ace 5.8 Generic_108528-09 sun4u sparc SUNW,Ultra-5_10 make test runs fine then hangs on test_signal # ./gcc --version 2.95.3 # ./make --version GNU Make version 3.79.1, by Richard Stallman and Roland McGrath. Built for sparc-sun-solaris2.8 ./configure; make ran normally ==================================================================== AMD Athlon XP with RHL 7.1 running kernel 2.4.2 make test 204 tests OK. 1 test failed: test_linuxaudiodev 14 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_email_codecs test_gl test_imgfile test_pep277 test_socket_ssl test_socketserver test_sunaudiodev test_winreg test_winsound 1 skip unexpected on linux2: test_bsddb3 make: *** [test] Error 1 [root@cruncher1 src]# uname -a Linux cruncher1.paradise.lost 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown [root@cruncher1 src]# gcc --version 2.96 [root@cruncher1 src]# make --version GNU Make version 3.79.1, by Richard Stallman and Roland McGrath. Built for i386-redhat-linux-gnu [root@cruncher1 /root]# cat /proc/cpuinfo processor : 0 vendor_id : AuthenticAMD cpu family : 6 model : 6 model name : AMD Athlon(tm) XP 1800+ stepping : 2 cpu MHz : 1541.261 cache size : 256 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow bogomips : 3073.63 ==================================================================== Redhat linux 8.0 on Pentium III with gcc 3.2 configure/make were normal make test terminated on the audiodev test. 
I am 99.99% sure this is all my fault since I installed the audio modules with -f against my kernel's pleading test_largefile test_linuxaudiodev make: *** [test] Floating point exception [root@video src]# gcc --version gcc (GCC) 3.2 20020903 (Red Hat Linux 8.0 3.2-7) Copyright (C) 2002 Free Software Foundation, Inc. [root@video src]# make --version GNU Make version 3.79.1, by Richard Stallman and Roland McGrath. Built for i386-redhat-linux-gnu [root@video src]# uname -a Linux video.paradise.lost 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 GNU/Linux [root@video src]# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 8 model name : Pentium III (Coppermine) stepping : 1 cpu MHz : 797.976 cache size : 256 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 mmx fxsr sse bogomips : 1586.32 ============================================================================== Pentium III running RHL 7.1 with gcc 3.2 configure/make were normal make test 204 tests OK. 15 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_email_codecs test_gl test_imgfile test_linuxaudiodev test_pep277 test_socket_ssl test_socketserver test_sunaudiodev test_winreg test_winsound 2 skips unexpected on linux2: test_bsddb3 test_linuxaudiodev mother:~/python/src/cvs/python/dist/src> gcc --version gcc (GCC) 3.2 mother:~/python/src/cvs/python/dist/src> make --version GNU Make version 3.79.1, by Richard Stallman and Roland McGrath. mother:~/python/src/cvs/python/dist/src> uname -a Linux mother.paradise.lost 2.4.9 #7 Fri Oct 12 15:20:49 CDT 2001 i686 mother:~/python/src/cvs/python/dist/src> cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 8 model name : Pentium III (Coppermine) stepping : 6 cpu MHz : 937.551 cache size : 256 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse bogomips : 1867.77 ============================================================================== From skip@pobox.com Wed Nov 20 02:20:19 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 19 Nov 2002 20:20:19 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> Message-ID: <15834.61795.217003.880856@montanaro.dyndns.org> John> I compiled your patched CVS under a few systems I have available. John> I just did the default thing each time (./configure; make; make John> test). If there is something more strenuous you'd like me to try, John> I still have the build dirs. Thanks, that's an excellent start. On most of the more popular systems, like Linux, I anticipate no problems. In the Makefile generated by configure you should two variables initialized, OPT and BASECFLAGS. On my Mac I see something like: OPT= -DNDEBUG -g BASECFLAGS= -Wno-long-double -no-cpp-precomp depending on how I initialize OPT in configure's environment. The BASECFLAGS are supposed to be the flags which are absolutely required to build Python. On my Mandrake Linux system, BASECFLAGS are always empty. The only bits in OPT would be stuff to affect optimization and debugging. I was free to fiddle with OPT at any time and in any way I wanted. 
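A quick way to see what any given build ended up with is to ask distutils for the generated Makefile variables -- a small sketch, assuming a tree built with the patch so that BASECFLAGS actually exists as a Makefile variable:

from distutils import sysconfig

print "OPT:        %s" % sysconfig.get_config_var("OPT")
print "BASECFLAGS: %s" % sysconfig.get_config_var("BASECFLAGS")  # None on unpatched trees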
On my Mac though, the -Wno-long-double -no-cpp-precomp are absolutely required to get things to build properly. Before I made these changes, fiddling with OPT during configure or make would blow them away and my build would fail if I wasn't careful to include those two crucial flags in OPT. When you're looking at the output of building on a particular system, the most important thing to look for is if OPT contains anything besides optimization or debugging stuff. If so, you would probably not have been able to casually modify OPT. Now, you should be able to. John> Everything went fairly smoothly: python built on every system, Comparing the old OPT with the new OPT+BASECFLAGS Makefile variables should give you a good idea if I got things right (if they add up to the same flags before and after, you're golden). I figured out why I was getting the flag duplication I mentioned in my earlier note. I simply forgot to run autoconf after my last change to configure.in. I will upload a new patch in a few minutes. John> Tomorrow I may get a chance to test it on an old IRIX box. It'll John> probably take a day to compile though, if I'm lucky. This should be an interesting build. Something like 1. configure 2. note the value of OPT in the generated Makefile 3. apply the patch 4. note the combined values of OPT & BASECFLAGS in the generated Makefile 5. build if there's still some time left in the day ;-) If someone has access to other unusual platforms (BeOS, AIX, SCO) that would be helpful as well. Skip From jdhunter@ace.bsd.uchicago.edu Wed Nov 20 02:48:11 2002 From: jdhunter@ace.bsd.uchicago.edu (John Hunter) Date: Tue, 19 Nov 2002 20:48:11 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15834.61795.217003.880856@montanaro.dyndns.org> (Skip Montanaro's message of "Tue, 19 Nov 2002 20:20:19 -0600") References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> Message-ID: >>>>> "Skip" == Skip Montanaro writes: Skip> Comparing the old OPT with the new OPT+BASECFLAGS Makefile Skip> variables should give you a good idea if I got things right Skip> (if they add up to the same flags before and after, you're Skip> golden). OK, now I have a better idea of what you need. On the SunOS 5.8 sparc SUNW,Ultra-5_10 platform with gcc 2.95.3, the two flags add up. Patched: OPT= -DNDEBUG -g -O3 -Wall -Wstrict-prototypes BASECFLAGS= Un-patched: OPT= -DNDEBUG -g -O3 -Wall -Wstrict-prototypes I'll let you know if I have any luck on the IRIX box. May god have mercy on my soul. John Hunter From mgilfix@eecs.tufts.edu Wed Nov 20 04:43:08 2002 From: mgilfix@eecs.tufts.edu (Michael Gilfix) Date: Tue, 19 Nov 2002 23:43:08 -0500 Subject: [Python-Dev] Anon CVS In-Reply-To: <20021119172526.GS17931@epoch.metaslash.com> References: <15832.65251.994061.492964@gargle.gargle.HOWL> <025601c28fef$94c26a00$ced241d5@hagrid> <20021119172526.GS17931@epoch.metaslash.com> Message-ID: <20021120044308.GA7041@eecs.tufts.edu> On Tue, Nov 19 @ 12:25, Neal Norwitz wrote: > On Tue, Nov 19, 2002 at 06:17:42PM +0100, Fredrik Lundh wrote: > > barry wrote: > > > > > I was able to do an authenticated checkout at about 1am EST last > > > night, but they're definitely mucking around with their cvs servers. > > > I noticed the checkin message now seemed to come from > > > bwarsaw@projects.sourceforge.net. I don't know if this is a permanent > > > change, but it certainly mucked up the Mailman sender filters. 
:/ > > > > > > Checking now, it looks like non-anon cvs is working ok. > > > > I haven't been able to connect since the last wednesday or > > so (cvs update just hangs until it times out). > > > > If this persists, someone else will have to check in my SRE > > patches... > > You may need to force protocol version 1. You can do this by > adding/modifying a stanza in your .ssh/config file: > > host cvs.python.sourceforge.net > Protocol 1,2 Yeah, I'm still having some anon troubles. Tried Neil's trick with no luck. -- Mike -- Michael Gilfix mgilfix@eecs.tufts.edu For my gpg public key: http://www.eecs.tufts.edu/~mgilfix/contact.html From martin@v.loewis.de Wed Nov 20 08:46:01 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 20 Nov 2002 09:46:01 +0100 Subject: [Python-Dev] bsddb broken on Windows In-Reply-To: References: Message-ID: Tim Peters writes: > Thanks! I missed that the first time around -- I confess I'm skipping a lot > of email lately. Let me know when the elves finish fixing it all on Windows > <0.7 wink>. For the remaining 0.3: I could arrange to update bsddb.dsp, however I'm not sure it would do much good, since you still have to obtain a Sleepycat installation. Regards, Martin From Jack.Jansen@cwi.nl Wed Nov 20 12:56:54 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Wed, 20 Nov 2002 13:56:54 +0100 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15834.35506.842761.101061@montanaro.dyndns.org> Message-ID: <8B7A6D4A-FC87-11D6-BBB5-0030655234CE@cwi.nl> On Tuesday, Nov 19, 2002, at 20:02 Europe/Amsterdam, Skip Montanaro wrote: > I uploaded the context diffs using Internet Explorer on my Mac. I just > downloaded it to my Linux laptop using Opera and noticed a bit of > corruption > (NULs at the start and the end). The file is fine on my Mac. My guess is that IE is uploading the file as MacBinary or AppleSingle format or some such. Repeat after me: IE is evil! IE is evil! IE is evil! OmniWeb is good! OmniWeb is good! Chimera is pretty cool too! Chimera is pretty cool too! -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From Raymond Hettinger" Have dictionaries support the repetition. Use the replication factor to provide a hint as to how large the dictionary is expected to be. [0] * n # allocate an n-length list {} * n # allocate an n-element dictionary If the a list already has k elements, the result is already n*k long; however, for a dictionary, since the keys are unique, the result says at n*1. Hence, the dictionary contents are unaffected by repetition. But the n-factor is useful for eliminating all but one of the re-size operations when growing a large dictionary, element by element. If k > n, then the hint is disregarded since the number of elements already exceeds the hint. Raymond Hettinger From aahz@pythoncraft.com Wed Nov 20 20:33:26 2002 From: aahz@pythoncraft.com (Aahz) Date: Wed, 20 Nov 2002 15:33:26 -0500 Subject: [Python-Dev] Dictionary Foolishness? In-Reply-To: <000d01c290d1$379a0300$125ffea9@oemcomputer> References: <000d01c290d1$379a0300$125ffea9@oemcomputer> Message-ID: <20021120203326.GA29688@panix.com> On Wed, Nov 20, 2002, Raymond Hettinger wrote: > > Have dictionaries support the repetition. Use the replication factor > to provide a hint as to how large the dictionary is expected to be. 
> > [0] * n # allocate an n-length list > {} * n # allocate an n-element dictionary > > If the a list already has k elements, the result is already n*k long; > however, for a dictionary, since the keys are unique, the result says > at n*1. Hence, the dictionary contents are unaffected by repetition. > But the n-factor is useful for eliminating all but one of the re-size > operations when growing a large dictionary, element by element. IIRC, dictionaries historically have only resized when *adding* elements, so it's common for dictionaries to resize *smaller* during an add, which would defeat the purpose. Also, dictionaries have always used exponential resize, so it's contant amortized time. I vote YAGNI -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "If you don't know what your program is supposed to do, you'd better not start writing it." --Dijkstra From guido@python.org Wed Nov 20 21:22:40 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 20 Nov 2002 16:22:40 -0500 Subject: [Python-Dev] Dictionary Foolishness? In-Reply-To: Your message of "Wed, 20 Nov 2002 15:12:56 EST." <000d01c290d1$379a0300$125ffea9@oemcomputer> References: <000d01c290d1$379a0300$125ffea9@oemcomputer> Message-ID: <200211202122.gAKLMf420133@pcp02138704pcs.reston01.va.comcast.net> > Have dictionaries support the repetition. > Use the replication factor to provide a hint > as to how large the dictionary is expected > to be. -1. This is too close to arbitrary magic by side effect. If you *really* want this functionality, propose a method. --Guido van Rossum (home page: http://www.python.org/~guido/) From jdhunter@ace.bsd.uchicago.edu Wed Nov 20 22:11:13 2002 From: jdhunter@ace.bsd.uchicago.edu (John Hunter) Date: Wed, 20 Nov 2002 16:11:13 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15834.61795.217003.880856@montanaro.dyndns.org> (Skip Montanaro's message of "Tue, 19 Nov 2002 20:20:19 -0600") References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> Message-ID: >>>>> "Skip" == Skip Montanaro writes: John> Tomorrow I may get a chance to test it on an old IRIX box. John> It'll probably take a day to compile though, if I'm lucky. Skip> This should be an interesting build. Something like godot:~/python/dist/src % uname -a IRIX godot 6.5 04191225 IP22 godot:/usr/local/bin % cc -version MIPSpro Compilers: Version 7.30 unpatched: OPT= -DNDEBUG -O -OPT:Olimit=0 patched: OPT= -OPT:Olimit=0 -DNDEBUG -O BASECFLAGS= -OPT:Olimit=0 (can't make it now -- the damned compiler license has expired. I'm downloading an IRIX 6.5 gcc binary now) John Hunter From skip@pobox.com Thu Nov 21 00:03:02 2002 From: skip@pobox.com (Skip Montanaro) Date: Wed, 20 Nov 2002 18:03:02 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> Message-ID: <15836.8886.491752.808317@montanaro.dyndns.org> John> unpatched: John> OPT= -DNDEBUG -O -OPT:Olimit=0 John> patched: John> OPT= -OPT:Olimit=0 -DNDEBUG -O John> BASECFLAGS= -OPT:Olimit=0 Thanks, this looks good. 
Skip From tim.one@comcast.net Thu Nov 21 01:55:17 2002 From: tim.one@comcast.net (Tim Peters) Date: Wed, 20 Nov 2002 20:55:17 -0500 Subject: [Python-Dev] bsddb broken on Windows In-Reply-To: Message-ID: [MvL] > For the remaining 0.3: I could arrange to update bsddb.dsp, however > I'm not sure it would do much good, since you still have to obtain a > Sleepycat installation. Barry and I will straighten this out ... sometime. In the meantime, everyone should expect all the dbmish tests to fail on Windows. From martin@v.loewis.de Thu Nov 21 08:06:36 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 21 Nov 2002 09:06:36 +0100 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15836.8886.491752.808317@montanaro.dyndns.org> References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> <15836.8886.491752.808317@montanaro.dyndns.org> Message-ID: Skip Montanaro writes: > John> OPT= -OPT:Olimit=0 -DNDEBUG -O > John> BASECFLAGS= -OPT:Olimit=0 > > Thanks, this looks good. Why is it good that -OPT:Olimit=0 gets duplicated? Regards, Martin From sjoerd@acm.org Thu Nov 21 09:06:29 2002 From: sjoerd@acm.org (Sjoerd Mullender) Date: Thu, 21 Nov 2002 10:06:29 +0100 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <15836.8886.491752.808317@montanaro.dyndns.org> References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> <15836.8886.491752.808317@montanaro.dyndns.org> Message-ID: <20021121090629.2A1F074B08@indus.ins.cwi.nl> On Wed, Nov 20 2002 Skip Montanaro wrote: > > John> unpatched: > > John> OPT= -DNDEBUG -O -OPT:Olimit=0 > > John> patched: > > John> OPT= -OPT:Olimit=0 -DNDEBUG -O > John> BASECFLAGS= -OPT:Olimit=0 > > Thanks, this looks good. Do you really want -DNDEBUG in OPT? It doesn't strike me as an optimization setting, so I would think it belongs in BASECFLAGS. -- Sjoerd Mullender From just@letterror.com Thu Nov 21 12:51:15 2002 From: just@letterror.com (Just van Rossum) Date: Thu, 21 Nov 2002 13:51:15 +0100 Subject: [Python-Dev] dict() enhancement idea? Message-ID: I sometimes use an idiom like def dictfromkeywords(**kwargs): return kwargs d = dictfromkeywords( akey = 12, anotherkey = "foo", ...etc. ) to conveniently build dicts with literal keys. I think I've seen Alex Martelli advertise it, too. Don't know how well known it is otherwise, but it is extremly handy. It just occured to me that the dict constructor could easily be overloaded with this behavior: it currently takes no keyword arguments[*], so the kwargs dict could simply be used to initialize the new dict. Has this been proposed before? Can't seem to find any reference to it. I could try to make a patch if people think this is a good idea. [*] caveat: dict_new() currently _does_ take one keyword argument "items", using it is the same as simply passing one arg to dict(). It seems to suggest a (key, value) list, though, but strangely it also works for dicts: >>> d = {1: 2, 3: 4} >>> dict(items=d) {1: 2, 3: 4} >>> However, this "items" keyword arg is not documented, and together with the above oddity I suggest to simply get rid of it. Just From just@letterror.com Thu Nov 21 13:08:14 2002 From: just@letterror.com (Just van Rossum) Date: Thu, 21 Nov 2002 14:08:14 +0100 Subject: [Python-Dev] dict() enhancement idea? 
Message-ID: --16563256-0-3246876498=:9003 Content-Type: text/plain; Charset=US-ASCII Content-Transfer-Encoding: 7bit Just van Rossum wrote: > I could try to make a patch if people think this is a good idea. Hm, that took a full 15 minutes ;-) See attached. Patching the docs is probably more work, as usual... Of course, I'll upload it to sf, but I'd like to get some feedback here, first. Just --16563256-0-3246876498=:9003 Content-Type: application/octet-stream; Name="dictobject.patch"; X-Mac-Type="54455854"; X-Mac-Creator="522A6368" Content-Transfer-Encoding: base64 Content-Disposition: attachment; Filename="dictobject.patch" --16563256-0-3246876498=:9003-- From oren-py-d@hishome.net Thu Nov 21 14:12:51 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Thu, 21 Nov 2002 09:12:51 -0500 Subject: [Python-Dev] dict() enhancement idea? In-Reply-To: References: Message-ID: <20021121141251.GA12640@hishome.net> On Thu, Nov 21, 2002 at 01:51:15PM +0100, Just van Rossum wrote: > I sometimes use an idiom like > > def dictfromkeywords(**kwargs): > return kwargs > > d = dictfromkeywords( > akey = 12, > anotherkey = "foo", > ...etc. > ) I assume that the motive is to get rid of the quotes around the key and conceptually treat it as a "symbol" rather than as a string. If that is the case it could apply to access as well as initialization. class record(dict): def __init__(self, __initfrom=(), **kw): self.__dict__ = self dict.__init__(self, __initfrom) self.update(kw) def __repr__(self): return "%s(%s)" % (self.__class__.__name__, ', '.join(['%s=%s' % (k, repr(v)) for k,v in self.items()])) Fields can be accessed as either items or attributes of a record object. Oren From just@letterror.com Thu Nov 21 15:16:29 2002 From: just@letterror.com (Just van Rossum) Date: Thu, 21 Nov 2002 16:16:29 +0100 Subject: [Python-Dev] dict() enhancement idea? In-Reply-To: <20021121141251.GA12640@hishome.net> Message-ID: Oren Tirosh wrote: > I assume that the motive is to get rid of the quotes around the key and > conceptually treat it as a "symbol" rather than as a string. If that is > the case it could apply to access as well as initialization. > > class record(dict): > def __init__(self, __initfrom=(), **kw): > self.__dict__ = self > dict.__init__(self, __initfrom) > self.update(kw) > > def __repr__(self): > return "%s(%s)" % (self.__class__.__name__, > ', '.join(['%s=%s' % (k, repr(v)) for k,v in self.items()])) > > Fields can be accessed as either items or attributes of a record object. (Neat! Would've never guessed that works... I actually wrote a class with this purpose the other day, I'll see whether I can use the above instead.) But: no, I simply find the {"key": "value"} syntax sometimes inappropriate. Consider the following example: template = """some elaborate template using %(name)s-style substitution""" # idiom 1 x = template % {"name1": foo(), "name2": baz()} # idiom 2 name1 = foo() name2 = foo() x = template % locals() # idiom 3 (after my patch, or with a homegrown function) x = template % dict(key1=foo(), key2=baz()) I find #3 more readable than #1. #2 ain't so bad, but I hate it that when you're quickly going over the code it looks like there are some unused variables. Just From barry@python.org Thu Nov 21 16:01:17 2002 From: barry@python.org (Barry A. Warsaw) Date: Thu, 21 Nov 2002 11:01:17 -0500 Subject: [Python-Dev] dict() enhancement idea? 
References: <20021121141251.GA12640@hishome.net> Message-ID: <15837.845.970249.469838@gargle.gargle.HOWL> >>>>> "JvR" == Just van Rossum writes: JvR> But: no, I simply find the {"key": "value"} syntax sometimes JvR> inappropriate. Consider the following example: Me too, and I have something similar in Mailman. It's fine that keys are limited to identifiers (although I did recently have one small related bug because of this in a mostly unused corner of the code). | # idiom 2 | name1 = foo() | name2 = foo() | x = template % locals() | # idiom 3 (after my patch, or with a homegrown function) | x = template % dict(key1=foo(), key2=baz()) JvR> I find #3 more readable than #1. #2 ain't so bad, but I hate JvR> it that when you're quickly going over the code it looks like JvR> there are some unused variables. I go even a step farther than #2 with my i18n idiom, e.g. name1 = foo() name2 = foo() x = _(template) Where _() does a sys._getframe() and automatically sucks the locals and globals for interpolation. Works great, but it gives pychecker fits. :) I like both the **kws addition as well as the keywords-as-attributes conveniences. -Barry From just@letterror.com Thu Nov 21 16:10:02 2002 From: just@letterror.com (Just van Rossum) Date: Thu, 21 Nov 2002 17:10:02 +0100 Subject: [Python-Dev] dict() enhancement idea? In-Reply-To: <15837.845.970249.469838@gargle.gargle.HOWL> Message-ID: Barry A. Warsaw wrote: > name1 = foo() > name2 = foo() > x = _(template) > > Where _() does a sys._getframe() and automatically sucks the locals > and globals for interpolation. Works great, but it gives pychecker > fits. :) You, my friend, are a sick man! But then again, I already knew that ;-) > I like both the **kws addition as well as the keywords-as-attributes > conveniences. I wouldn't propose the latter seriously as an enhancement to the dict object: imagine all the c.l.py posts we'll get of people who do >>> d = {} >>> d.keys = and wonder why it blows up later... What Oren posted is a great hack in cases where you more or less know the kind of keys you're going to expect, but it isn't as great as a general feature of dicts. Just From skip@pobox.com Thu Nov 21 16:11:07 2002 From: skip@pobox.com (Skip Montanaro) Date: Thu, 21 Nov 2002 10:11:07 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> <15836.8886.491752.808317@montanaro.dyndns.org> Message-ID: <15837.1435.875699.611589@montanaro.dyndns.org> >>>>> "Martin" == Martin v Loewis writes: Martin> Skip Montanaro writes: John> OPT= -OPT:Olimit=0 -DNDEBUG -O John> BASECFLAGS= -OPT:Olimit=0 >> >> Thanks, this looks good. Martin> Why is it good that -OPT:Olimit=0 gets duplicated? It actually doesn't anymore. I had forgotten to run autoconf after fixing that problem. John was almost certainly still using the first patch I uploaded instead of the second one. 
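As a side note for anyone checking what a given build actually ended up with, the Makefile variables can be read back through distutils. A minimal inspection sketch (BASECFLAGS is assumed to be present only on a tree that already carries the patch adding it):

    from distutils import sysconfig

    # Print the compiler-related Makefile variables being discussed.
    # get_config_var() returns None for variables the Makefile lacks,
    # e.g. BASECFLAGS on an unpatched tree.
    for name in ('CC', 'OPT', 'BASECFLAGS', 'CFLAGS'):
        print '%-10s = %s' % (name, sysconfig.get_config_var(name))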
Skip From skip@pobox.com Thu Nov 21 16:19:02 2002 From: skip@pobox.com (Skip Montanaro) Date: Thu, 21 Nov 2002 10:19:02 -0600 Subject: [Python-Dev] Need some configure eyeballs and testing In-Reply-To: <20021121090629.2A1F074B08@indus.ins.cwi.nl> References: <15834.35506.842761.101061@montanaro.dyndns.org> <15834.36010.859146.834474@montanaro.dyndns.org> <15834.61795.217003.880856@montanaro.dyndns.org> <15836.8886.491752.808317@montanaro.dyndns.org> <20021121090629.2A1F074B08@indus.ins.cwi.nl> Message-ID: <15837.1910.967591.4485@montanaro.dyndns.org> John> OPT= -OPT:Olimit=0 -DNDEBUG -O John> BASECFLAGS= -OPT:Olimit=0 Sjoerd> Do you really want -DNDEBUG in OPT? It doesn't strike me as an Sjoerd> optimization setting, so I would think it belongs in BASECFLAGS. I view it as a tossup. My intent was with BASECFLAGS was that if you messed with it you would likely break the build. Deleting OPT shouldn't have that effect. It might well compile differently, but it should still build. Skip From theller@python.net Thu Nov 21 16:35:15 2002 From: theller@python.net (Thomas Heller) Date: 21 Nov 2002 17:35:15 +0100 Subject: [Python-Dev] dict() enhancement idea? In-Reply-To: References: Message-ID: Just van Rossum writes: > I sometimes use an idiom like > > def dictfromkeywords(**kwargs): > return kwargs > > d = dictfromkeywords( > akey = 12, > anotherkey = "foo", > ...etc. > ) For me it's usually spelled a shorter way: def DICT(**kw): return kw d = DICT(akey=12, anotherkey="foo") Looks nicer than the long name, IMO, and complements 'dict'. > > It just occured to me that the dict constructor could easily be overloaded with > this behavior: it currently takes no keyword arguments[*], so the kwargs dict > could simply be used to initialize the new dict. Usually I do not create dictionaries by calling the constructor, because I never can remember which arguments I have to use. This change would make me change my mind again to use it again, so a +1. Thomas From mwh@python.net Thu Nov 21 17:01:58 2002 From: mwh@python.net (Michael Hudson) Date: 21 Nov 2002 17:01:58 +0000 Subject: [Python-Dev] --disable-unicode, again Message-ID: <2mptsyzwjd.fsf@starship.python.net> Once again the --disable-unicode build is completely shafted. Include/pyerrors.h, Python/exceptions.c are easy enough to fix. Python/codecs.c looks worse. Modules/posixmodule.c doesn't build either, haven't looked into this at all. Seems the PEP 293 work is the main culprit. I think I've asked this before, but does anyone use this build? Should I spend some time patching things up -- again -- or ripping --disable-unicode support out. *I* don't care about it. Cheers, M. -- Important data should not be entrusted to Pinstripe, as it may eat it and make loud belching noises. -- from the announcement of the beta of "Pinstripe" aka. Redhat 7.0 From barry@python.org Thu Nov 21 17:36:14 2002 From: barry@python.org (Barry A. Warsaw) Date: Thu, 21 Nov 2002 12:36:14 -0500 Subject: [Python-Dev] dict() enhancement idea? References: <15837.845.970249.469838@gargle.gargle.HOWL> Message-ID: <15837.6542.894825.557511@gargle.gargle.HOWL> >>>>> "JvR" == Just van Rossum writes: >> Where _() does a sys._getframe() and automatically sucks the >> locals and globals for interpolation. Works great, but it >> gives pychecker fits. :) JvR> You, my friend, are a sick man! But then again, I already JvR> knew that ;-) You've read my poems. :) >> I like both the **kws addition as well as the >> keywords-as-attributes conveniences. 
JvR> I wouldn't propose the latter seriously as an enhancement to JvR> the dict object: imagine all the c.l.py posts we'll get of JvR> people who do >> d = {} d.keys = JvR> and wonder why it blows up later... What Oren posted is a JvR> great hack in cases where you more or less know the kind of JvR> keys you're going to expect, but it isn't as great as a JvR> general feature of dicts. Completely agreed! I tried to weasel word that in a way to say I like those for some applications, but not necessarily as part of dicts. -Barry From fredrik@pythonware.com Thu Nov 21 17:37:57 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Thu, 21 Nov 2002 18:37:57 +0100 Subject: [Python-Dev] --disable-unicode, again References: <2mptsyzwjd.fsf@starship.python.net> Message-ID: <004601c29184$bc91d660$ced241d5@hagrid> Michael Hudson wrote: > I think I've asked this before, but does anyone use this build? > Should I spend some time patching things up -- again -- or ripping > --disable-unicode support out. *I* don't care about it. you missed the third option: since you don't care, don't do anything. isn't the idea that "the first one who really needs this has to fix it"? (adding a comment along those lines to the right place might be a good idea, though...) From python@rcn.com Thu Nov 21 17:51:56 2002 From: python@rcn.com (Raymond Hettinger) Date: Thu, 21 Nov 2002 12:51:56 -0500 Subject: [Python-Dev] dict() enhancement idea? References: Message-ID: <005a01c29188$71bd3400$125ffea9@oemcomputer> > # idiom 3 (after my patch, or with a homegrown function) > x = template % dict(key1=foo(), key2=baz()) +1 This is much more readable, intuitive, and easy to type. From walter@livinglogic.de Thu Nov 21 17:50:39 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Thu, 21 Nov 2002 18:50:39 +0100 Subject: [Python-Dev] --disable-unicode, again In-Reply-To: <2mptsyzwjd.fsf@starship.python.net> References: <2mptsyzwjd.fsf@starship.python.net> Message-ID: <3DDD1CEF.8080000@livinglogic.de> Michael Hudson wrote: > Once again the --disable-unicode build is completely shafted. > > Include/pyerrors.h, Python/exceptions.c are easy enough to fix. > > Python/codecs.c looks worse. > > Modules/posixmodule.c doesn't build either, haven't looked into this > at all. > > Seems the PEP 293 work is the main culprit. OK, I'll try to fix this. Bye, Walter Dörwald From oren-py-d@hishome.net Thu Nov 21 18:58:42 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Thu, 21 Nov 2002 20:58:42 +0200 Subject: [Python-Dev] Yet another string formatting proposal Message-ID: <20021121205842.A13917@hishome.net> "\(a) + \(b) = \(a+b)\n" The expressions embedded in the string are parsed at compile time and any syntax errors in them are detected during compilation. The use of the backslash as introducer makes it unnecessary to add a new magic character ("$") along with a new escaping convention when this character needs to appear in the string ("$$") and a new string prefix (pep 215) or method (pep 292) to instruct the system to perform additional processing on this string. One advantage of using an operator, method or function over in-line formatting is that it enables the use of a template. A new string method can provide run-time evaluation of the same format: "\(a) + \(b) = \(a+b)\n" r"\(a) + \(b) = \(a+b)\n".cook() A raw string is used to defer the evaluation of all backslash escape sequences to some later time. The cook method evaluates backslash escapes in the string, including any embedded expressions. 
This runtime version may be used for internationalization, for example. By default, the cook method uses the global and local namespace of the calling scope, just like the built-in function eval(). Dictionary and/or named arguments may be used to override the namespace in which embedded expressions are evaluated: s = formatstring.cook(a=5, b=6) s = formatstring.cook(sys._getframe().f_locals) Security issues: Compile-time expression embedding should not have any special security concerns since there is no parsing of data from untrusted sources (if your SOURCE CODE is not trusted I can't help you there). In order to provide protection against evaluation of arbitrary code when an attacker has access to the format strings the cook() method could be limited to variable names only. A sparate cook_eval() method would support full expressions. The 'eval' in the method name should remind the programmer that it is potentially as dangerous as eval(). Drawbacks: Must use the full format "I like \(traffic) lights". There is no option for the shorter version "I like \traffic lights" because these combinations are already taken. May be considered an advantage: "There should be one-- and preferably only one --obvious way to do it." Not as familiar as $ for programmers from other languages. May also be considered an advantage :-) Oren From fredrik@pythonware.com Thu Nov 21 19:24:54 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Thu, 21 Nov 2002 20:24:54 +0100 Subject: [Python-Dev] Yet another string formatting proposal References: <20021121205842.A13917@hishome.net> Message-ID: <005401c29193$adb51120$ced241d5@hagrid> oren won't give up: > "\(a) + \(b) = \(a+b)\n" > > The expressions embedded in the string are parsed at compile time and > any syntax errors in them are detected during compilation. note that "\(" is commonly used to escape parentheses in regular expression strings. From martin@v.loewis.de Thu Nov 21 19:45:34 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 21 Nov 2002 20:45:34 +0100 Subject: [Python-Dev] --disable-unicode, again In-Reply-To: <004601c29184$bc91d660$ced241d5@hagrid> References: <2mptsyzwjd.fsf@starship.python.net> <004601c29184$bc91d660$ced241d5@hagrid> Message-ID: "Fredrik Lundh" writes: > isn't the idea that "the first one who really needs this has to fix it"? Exactly. It took some effort to put it in, so I would not like it to be taken out right now. If nobody complains that it is broken by 2.4, we might want to reconsider. For the moment, I can happily live with it being broken. Regards, Martin From oren-py-d@hishome.net Thu Nov 21 19:53:23 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Thu, 21 Nov 2002 21:53:23 +0200 Subject: [Python-Dev] Yet another string formatting proposal In-Reply-To: <005401c29193$adb51120$ced241d5@hagrid>; from fredrik@pythonware.com on Thu, Nov 21, 2002 at 08:24:54PM +0100 References: <20021121205842.A13917@hishome.net> <005401c29193$adb51120$ced241d5@hagrid> Message-ID: <20021121215323.A15502@hishome.net> On Thu, Nov 21, 2002 at 08:24:54PM +0100, Fredrik Lundh wrote: > oren won't give up: > > > > "\(a) + \(b) = \(a+b)\n" > > > > The expressions embedded in the string are parsed at compile time and > > any syntax errors in them are detected during compilation. > > note that "\(" is commonly used to escape parentheses in regular > expression strings. Yes, it might break some existing code that doesn't use proper \\ escaping or raw strings for regular expression. 
Note that such code is already broken in the sense that it uses an undefined escape. If this turns out to be a real problem a possible alternative is to use curly braces. There is a precedent for this in u"\N{UNICODE CHAR NAMES}" Braces are also more visually distinctive and less confusing when the expression itself contains parentheses: print "X=\{x}, y=\{calc_y(x)}" Oren From walter@livinglogic.de Thu Nov 21 20:41:09 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Thu, 21 Nov 2002 21:41:09 +0100 Subject: [Python-Dev] --disable-unicode, again In-Reply-To: <2mptsyzwjd.fsf@starship.python.net> References: <2mptsyzwjd.fsf@starship.python.net> Message-ID: <3DDD44E5.1070003@livinglogic.de> Michael Hudson wrote: > Once again the --disable-unicode build is completely shafted. > > Include/pyerrors.h, Python/exceptions.c are easy enough to fix. > > Python/codecs.c looks worse. Those are fixed now. > Modules/posixmodule.c doesn't build either, haven't looked into this > at all. This and Object/fileobject.c are still broken. Bye, Walter Dörwald From fredrik@pythonware.com Thu Nov 21 21:22:48 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Thu, 21 Nov 2002 22:22:48 +0100 Subject: [Python-Dev] Yet another string formatting proposal References: <20021121205842.A13917@hishome.net> <005401c29193$adb51120$ced241d5@hagrid> <20021121215323.A15502@hishome.net> Message-ID: <011801c291a4$271cfb30$ced241d5@hagrid> Oren Tirosh wrote: > Yes, it might break some existing code that doesn't use proper \\ escaping > or raw strings for regular expression. Note that such code is already > broken in the sense that it uses an undefined escape. "not proper"? "broken"? "undefined"? Please read the section on string escapes in the *Python* language reference, and try again. Start here: http://www.python.org/doc/current/ref/strings.html > If this turns out to be a real problem a possible alternative is to use > curly braces. There is a precedent for this in u"\N{UNICODE CHAR NAMES}" Last time I checked, the N in \N was a character, not a curly brace. From tim@multitalents.net Thu Nov 21 22:25:26 2002 From: tim@multitalents.net (Tim Rice) Date: Thu, 21 Nov 2002 14:25:26 -0800 (PST) Subject: [Python-Dev] release22-maint branch broken Message-ID: I pulled the release22-maint branch today and tried to build on UnixWare 7 It looks like changes to setup.py and Lib/distutils/sysconfig.py broke the build. At least if there is no pre existing python installed. Here is the CVS log. ... Backport fdrake's revision 1.88 of setup.py revision 1.46 of Lib/distutils/sysconfig.py When using a Python that has not been installed to build 3rd-party modules, distutils does not understand that the build version of the source tree is needed. This patch fixes distutils.sysconfig to understand that the running Python is part of the build tree and needs to use the appropriate "shape" of the tree. This does not assume anything about the current directory, so can be used to build 3rd-party modules using Python's build tree as well. This is useful since it allows us to use a non-installed debug-mode Python with 3rd-party modules for testing. It as the side-effect that set_python_build() is no longer needed (the hack which was added to allow distutils to be used to build the "standard" extension modules). This closes SF patch #547734. ... Here are the errors I get ... running build_ext Traceback (most recent call last): File "/opt/src/utils/python/python-2.2.2/src/setup.py", line 795, in ? 
main() File "/opt/src/utils/python/python-2.2.2/src/setup.py", line 790, in main scripts = ['Tools/scripts/pydoc'] File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/core.py", line 138, in setup dist.run_commands() File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/dist.py", line 902, in run_commands self.run_command(cmd) File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/dist.py", line 922, in run_command cmd_obj.run() File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/command/build.py", line 107, in run self.run_command(cmd_name) File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/cmd.py", line 330, in run_command self.distribution.run_command(command) File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/dist.py", line 922, in run_command cmd_obj.run() File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/command/build_ext.py", line 231, in run customize_compiler(self.compiler) File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/sysconfig.py", line 139, in customize_compiler (cc, opt, ccshared, ldshared, so_ext) = \ File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/sysconfig.py", line 421, in get_config_vars func() File "/opt/src/utils/python/python-2.2.2/src/Lib/distutils/sysconfig.py", line 326, in _init_posix raise DistutilsPlatformError(my_msg) distutils.errors.DistutilsPlatformError: invalid Python installation: unable to open /usr/local/lib/python2.2/config/Makefile (No such file or directory) gmake: *** [sharedmods] Error 1 ... There is no /usr/local/lib/python2.2/config/Makefile because python has not been built/installed on this machine yet. -- Tim Rice Multitalents (707) 887-1469 tim@multitalents.net From oren-py-d@hishome.net Thu Nov 21 22:46:18 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Thu, 21 Nov 2002 17:46:18 -0500 Subject: [Python-Dev] Yet another string formatting proposal In-Reply-To: <011801c291a4$271cfb30$ced241d5@hagrid> References: <20021121205842.A13917@hishome.net> <005401c29193$adb51120$ced241d5@hagrid> <20021121215323.A15502@hishome.net> <011801c291a4$271cfb30$ced241d5@hagrid> Message-ID: <20021121224618.GA78921@hishome.net> On Thu, Nov 21, 2002 at 10:22:48PM +0100, Fredrik Lundh wrote: > Oren Tirosh wrote: > > > Yes, it might break some existing code that doesn't use proper \\ escaping > > or raw strings for regular expression. Note that such code is already > > broken in the sense that it uses an undefined escape. > > "not proper"? "broken"? "undefined"? > > Please read the section on string escapes in the *Python* language > reference, and try again. Start here: > > http://www.python.org/doc/current/ref/strings.html Unlike Standard C, all UNRECOGNIZED escape sequences are left in the string unchanged, i.e., the backslash is left in the string. (This behavior is useful when deBUGging: if an escape sequence is MISTYPED, the resulting output is more easily recognized as BROKEN.) My mistake. I should have RTFM. There was no excuse for me calling such escape sequences "undefined" and not "proper" when the documentation describes escape sequences not listed in the table as merely "unrecognized" or possibly "mistyped". Sorry. 
Oren From oren-py-d@hishome.net Thu Nov 21 23:12:55 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Thu, 21 Nov 2002 18:12:55 -0500 Subject: [Python-Dev] Yet another string formatting proposal In-Reply-To: References: <20021121205842.A13917@hishome.net> Message-ID: <20021121231255.GA87139@hishome.net> On Thu, Nov 21, 2002 at 05:00:44PM -0500, Paul Svensson wrote: > On Thu, 21 Nov 2002, Oren Tirosh wrote: > > >One advantage of using an operator, method or function over in-line > >formatting is that it enables the use of a template. A new string method > >can provide run-time evaluation of the same format: > > > > "\(a) + \(b) = \(a+b)\n" > > r"\(a) + \(b) = \(a+b)\n".cook() > > > >A raw string is used to defer the evaluation of all backslash escape > >sequences to some later time. The cook method evaluates backslash > >escapes in the string, including any embedded expressions. This runtime > >version may be used for internationalization, for example. > > Why not call this method "eval", so we all will know to treat it with care ? If cook() is limited to variable names it will be pretty safe and no special care should be necessary. In that case the example above wouldn't work, though, because it contains (a+b). It will need to use cook_eval(). Since what it does is the opposite of "raw" I just had to call it "cook"! Oren From esr@thyrsus.com Fri Nov 22 00:31:43 2002 From: esr@thyrsus.com (Eric S. Raymond) Date: Thu, 21 Nov 2002 19:31:43 -0500 Subject: [Python-Dev] Expect in python Message-ID: <20021122003143.GA2242@thyrsus.com> Has anyone else here looked at Pexpect? It's a pure-Python module that uses ptys to support mmost of the capability of Tcl expect. http://pexpect.sourceforge.net/ This is an 0.94 beta, but the author appears to know what he is doing, and it would be an excellent boost to Python's capabilities for administrative scripting. Among other things, it would subsume most of what Tcl is actually used for. I'm thinking this is a very strong candidate to enter the Python library when it reaches 1.0 level. Comments? -- Eric S. Raymond From DavidA@ActiveState.com Fri Nov 22 01:22:31 2002 From: DavidA@ActiveState.com (David Ascher) Date: Thu, 21 Nov 2002 17:22:31 -0800 Subject: [Python-Dev] Expect in python In-Reply-To: <20021122003143.GA2242@thyrsus.com> References: <20021122003143.GA2242@thyrsus.com> Message-ID: <3DDD86D7.3010409@ActiveState.com> Eric S. Raymond wrote: > Has anyone else here looked at Pexpect? It's a pure-Python module that > uses ptys to support mmost of the capability of Tcl expect. > > http://pexpect.sourceforge.net/ > > This is an 0.94 beta, but the author appears to know what he is doing, and > it would be an excellent boost to Python's capabilities for administrative > scripting. Among other things, it would subsume most of what Tcl is actually > used for. > > I'm thinking this is a very strong candidate to enter the Python library when > it reaches 1.0 level. Comments? The thing that would make a true 'killer app' as compared to Tcl is if it worked well on Windows and had a model that extended to a more abstract API (not just character streams, which, IIRC, is central to expect). Given the pty support, that seems unlikely. Too bad. Have you used pexpect enough to explain why it's good, or is this an 'in principle' argument? I agree that expectish functionality would be nice. Something that I could grow into a Windows/not-just-pty's model would be even nicer. --david From esr@thyrsus.com Fri Nov 22 01:25:40 2002 From: esr@thyrsus.com (Eric S. 
Raymond) Date: Thu, 21 Nov 2002 20:25:40 -0500 Subject: [Python-Dev] Expect in python In-Reply-To: <3DDD86D7.3010409@ActiveState.com> References: <20021122003143.GA2242@thyrsus.com> <3DDD86D7.3010409@ActiveState.com> Message-ID: <20021122012540.GA2687@thyrsus.com> David Ascher : > The thing that would make a true 'killer app' as compared to Tcl is if it > worked well on Windows and had a model that extended to a more abstract API > (not just character streams, which, IIRC, is central to expect). Given the > pty support, that seems unlikely. Too bad. Granted, but let's not let (nonexistent) perfection be the enemy of the good. This module would be very useful to Unix sysadmins and others just as it is, and the API will be comfortable for anybody familiar with Tcl expect. That's not a trivial advantage. > Have you used pexpect enough to explain why it's good, or is this an 'in > principle' argument? I'm writing an application using it now, a program to prepare remote sites for ssh access. There are a lot of fiddly details to this, like verifying that the permissions on remote directories are correct. Users shouldn't have to sweat this sort of thing by hand. It seems to work, so far. It's well documented and the examples are helpful. > I agree that expectish functionality would be nice. > Something that I could grow into a Windows/not-just-pty's model would be > even nicer. Agreed. Your chances of this certainly won't be *decreased* if pexpect is included. -- Eric S. Raymond From bac@OCF.Berkeley.EDU Fri Nov 22 02:00:51 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Thu, 21 Nov 2002 18:00:51 -0800 (PST) Subject: [Python-Dev] dict() enhancement idea? In-Reply-To: Message-ID: [Just van Rossum] > I sometimes use an idiom like > > def dictfromkeywords(**kwargs): > return kwargs > > d = dictfromkeywords( > akey = 12, > anotherkey = "foo", > ...etc. > ) > +1 So far everyone who has responded to this idea has admitted they have code somewhere the operates like this. > to conveniently build dicts with literal keys. I think I've seen Alex Martelli > advertise it, too. Don't know how well known it is otherwise, but it is extremly > handy. > I think Alex has it in the Cookbook. -Brett From prabhu@aero.iitm.ernet.in Fri Nov 22 02:25:27 2002 From: prabhu@aero.iitm.ernet.in (Prabhu Ramachandran) Date: Fri, 22 Nov 2002 07:55:27 +0530 Subject: [Python-Dev] Expect in python In-Reply-To: <20021122012540.GA2687@thyrsus.com> References: <20021122003143.GA2242@thyrsus.com> <3DDD86D7.3010409@ActiveState.com> <20021122012540.GA2687@thyrsus.com> Message-ID: <15837.38295.922784.797912@monster.linux.in> >>>>> "ESR" == Eric S Raymond writes: [using pexpect] ESR> It seems to work, so far. It's well documented and the ESR> examples are helpful. I've been using it off and on to script some non-Python programs. I don't have the time to wrap my libraries to Python, so use pexpect to 'script' the application directly. This is inflexible but very handy. Actually, an interactive version of os.popen4 would be just as handy. What I mean is that you send in commands via the input stream and the output is sent out to the output stream without requiring that the input stream be closed. This is is like pexpect but does not force you to 'expect' anything specific. cheers, prabhu From martin@v.loewis.de Fri Nov 22 07:57:36 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 22 Nov 2002 08:57:36 +0100 Subject: [Python-Dev] Expect in python In-Reply-To: <20021122003143.GA2242@thyrsus.com> References: <20021122003143.GA2242@thyrsus.com> Message-ID: "Eric S. Raymond" writes: > I'm thinking this is a very strong candidate to enter the Python > library when it reaches 1.0 level. Comments? For that, we need a commitment of somebody to maintain it for us, preferably from the author of the package. Regards, Martin From aleax@aleax.it Fri Nov 22 08:07:30 2002 From: aleax@aleax.it (Alex Martelli) Date: Fri, 22 Nov 2002 09:07:30 +0100 Subject: [Python-Dev] dict() enhancement idea? In-Reply-To: References: Message-ID: On Friday 22 November 2002 03:00 am, Brett Cannon wrote: ... > > to conveniently build dicts with literal keys. I think I've seen Alex > > Martelli advertise it, too. Don't know how well known it is otherwise, > > but it is extremly handy. > > I think Alex has it in the Cookbook. Yes, originally from a contribution by Brent Burley: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52313 and in class clothing at: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52308 Alex From esr@thyrsus.com Fri Nov 22 08:22:16 2002 From: esr@thyrsus.com (Eric S. Raymond) Date: Fri, 22 Nov 2002 03:22:16 -0500 Subject: [Python-Dev] Expect in python In-Reply-To: References: <20021122003143.GA2242@thyrsus.com> Message-ID: <20021122082216.GA5519@thyrsus.com> Martin v. Loewis : > > I'm thinking this is a very strong candidate to enter the Python > > library when it reaches 1.0 level. Comments? > > For that, we need a commitment of somebody to maintain it for us, > preferably from the author of the package. I've exchanged email with him. I asked if he were aiming the package at the standard library. He indicated that the thought had occurred to him, but that he had no idea how to go about accomplishing that. His phrasing was such that I'm pretty sure he was thinking "Yeah! Cool!" :-) I told him that we don't have a formal process, but attracting the attention of someone on python-dev is usually where it starts. The rest of the conversation was about some minor problems with the module -- actually with ptys. Seems the Solaris pty implementation is prone to hangs and fd leakage. I got the impression that the guy is pretty dedicated to the code and more than willing to maintain it for the long haul. I can also report that I have finished about 3/4ths of the application I was working on and can evaluate pexpect in more detail now. The application is an ssh key installer for dummies; you give it a remote hostname and (optionally) the username to log in as. First, it walks the user through generating and populating a local .ssh with keypairs if they're not already present. It then handles all the fiddly bits of making sure there's a remote .ssh directory, checking permissions, copying public keys from the local .ssh to the remote one, etc. The idea is to allow a novice user to type ssh-installkeys remotehost@somewhere.com and have the Right Thing happen. This is worth automating because there are a bunch of obscure bugs you can run into if you get it even slightly wrong. ssh -d diagnostics deliberately don't pass back warnings about bad file and directory permissions, for example, because that might leak sensitive information about the remote system. I plan to give this script to the openssh guys for their distribution. No surprise that I'm using pexpect to execute the remote commands.
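For readers who have not seen pexpect, the remote-command side of such a script looks roughly like the sketch below; the host name, prompt pattern and password handling are invented for illustration, and real code would want timeout and error handling around every expect() call:

    import getpass
    import pexpect

    password = getpass.getpass('remote password: ')
    child = pexpect.spawn('ssh user@remotehost.example.com')
    i = child.expect(['password: ', 'continue connecting'])
    if i == 1:                       # first connection: accept the host key
        child.sendline('yes')
        child.expect('password: ')
    child.sendline(password)
    child.expect(r'\$ ')             # assumes a Bourne-style shell prompt
    child.sendline('ls -ld ~/.ssh')
    child.expect(r'\$ ')
    print child.before               # everything printed before the prompt
    child.sendline('exit')
    child.expect(pexpect.EOF)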
I can report that the interface seems pretty natural, I haven't run into any sharp edges yet, and it does the job. Anybody who has ever written an expect script will be productive with this puppy in 10 minutes flat. -- Eric S. Raymond From Raymond Hettinger: PEP 288 has been updated and undeferred. Comments are solicited. The old proposal for generator parameter passing with g.next(val) has been replaced with simply using attributes in about the same way as classes: def outputCaps(logfile): while True: line = __self__.data logfile.write(line.upper()) yield None outputCaps.data = "" # optional attribute initialization g = outputCaps(open('logfil.txt','w')) for line in open('myfile.txt'): g.data = line g.next() The separate proposal for generator exceptions is unmodified from before. From martin@v.loewis.de Fri Nov 22 09:38:14 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 22 Nov 2002 10:38:14 +0100 Subject: [Python-Dev] Expect in python In-Reply-To: <20021122082216.GA5519@thyrsus.com> References: <20021122003143.GA2242@thyrsus.com> <20021122082216.GA5519@thyrsus.com> Message-ID: "Eric S. Raymond" writes: > I told him that we don't have a formal process, but attracting the > attention of someone on python-dev is usually where it starts. Actually, we do: Write a library PEP, see PEP 2. This procedure is rarely executed completely, but I would at least insist on the maintenance part being clarified in advance - with a clear, up-front indication that the module *will* be removed if it ever becomes unmaintained (of course, if that ever happens, and the module has its usership, the threat of removing it might produce a different maintainer). As you can see, I don't care that much about usefulness of the module (although I'm sure others here will); to me, if users say it is useful, and maintenance is clear, and it does not come with a huge C library that you need to build, I'm fine with incorporating it - provided there is somebody I can assign bug reports to. Regards, Martin From esr@thyrsus.com Fri Nov 22 09:40:22 2002 From: esr@thyrsus.com (Eric S. Raymond) Date: Fri, 22 Nov 2002 04:40:22 -0500 Subject: [Python-Dev] Expect in python In-Reply-To: References: <20021122003143.GA2242@thyrsus.com> <20021122082216.GA5519@thyrsus.com> Message-ID: <20021122094022.GA6368@thyrsus.com> Martin v. Loewis : > > I told him that we don't have a formal process, but attracting the > > attention of someone on python-dev is usually where it starts. > > Actually, we do: Write a library PEP, see PEP 2. > > This procedure is rarely executed completely, but I would at least > insist on the maintenance part being clarified in advance Thanks for the correction. > As you can see, I don't care that much about usefulness of the module > (although I'm sure others here will); to me, if users say it is > useful, and maintenance is clear, and it does not come with a huge C > library that you need to build, I'm fine with incorporating it - > provided there is somebody I can assign bug reports to. No C. That's one of the good things about this implementation; it's pure Python, relying only on the existing pty module. I don't think it's going to need a lot of maintenance, frankly. Simple API, well-understood problem, and an author who (correctly) describes himself as "anal-retentive" :-). What it *is* likely to do is tickle a lot of bugs and edge cases in the pty code. -- Eric S. Raymond From martin@v.loewis.de Fri Nov 22 09:54:50 2002 From: martin@v.loewis.de (Martin v.
Loewis) Date: 22 Nov 2002 10:54:50 +0100 Subject: [Python-Dev] Returning objects in Tkinter Message-ID: I'd like to integrate python.org/sf/518625 in some form into Python 2.3. If it is unacceptable in its current form, could I apply it if I change the default to "not return objects"? Regards, Martin From Jack.Jansen@cwi.nl Fri Nov 22 10:35:28 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Fri, 22 Nov 2002 11:35:28 +0100 Subject: [Python-Dev] PEP 288: Generator Attributes In-Reply-To: <002b01c29203$a0b88680$125ffea9@oemcomputer> Message-ID: <1E8BAC24-FE06-11D6-9661-0030655234CE@cwi.nl> On Friday, Nov 22, 2002, at 09:46 Europe/Amsterdam, Raymond Hettinger wrote: > def outputCaps(logfile): > while True: > line = __self__.data > logfile.write(line.upper) > yield None > outputCaps.data = "" # optional attribute initialization > > g = outputCaps(open('logfil.txt','w')) > for line in open('myfile.txt'): > g.data = line > g.next() I don't like it, there's "Magic! Magic!" written all over it. Generators have always given me that feeling (you start reading them as a function, then 20 lines down you meet a "yield" and suddenly realize you have to start reading at the top again, keeping in mind that this is a persistent stack frame), but with the __self__ plus the fact that you local variables may not be what they appear to be makes it hairy. You basically cannot understand the code without knowing the code of the caller. There's also absolutely no way to get encapsulation. So count me in for a -1. Generators have to me always felt more "class-instance-like" than "function-like", and I guess this just goes to show it. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From mwh@python.net Fri Nov 22 10:37:49 2002 From: mwh@python.net (Michael Hudson) Date: 22 Nov 2002 10:37:49 +0000 Subject: [Python-Dev] --disable-unicode, again In-Reply-To: "Fredrik Lundh"'s message of "Thu, 21 Nov 2002 18:37:57 +0100" References: <2mptsyzwjd.fsf@starship.python.net> <004601c29184$bc91d660$ced241d5@hagrid> Message-ID: <2mn0o1zy82.fsf@starship.python.net> "Fredrik Lundh" writes: > Michael Hudson wrote: > > > I think I've asked this before, but does anyone use this build? > > Should I spend some time patching things up -- again -- or ripping > > --disable-unicode support out. *I* don't care about it. > > you missed the third option: since you don't care, don't do anything. That's what I'd been doing since it broke a couple of months back. > isn't the idea that "the first one who really needs this has to fix it"? Maybe, but having a configure option so conspicuously broken seemed a bit embarrassing. I also got tired of the failure reports from my cronjob every night -- got a couple of warnings last night, but it built. > (adding a comment along those lines to the right place might be a > good idea, though...) Where is the right place? It had occurred to me to stencil "don't forget the --disable-unicode build!" along the top of every Python developer's monitor, but that didn't seem practical. Cheers, M. -- I have a cat, so I know that when she digs her very sharp claws into my chest or stomach it's really a sign of affection, but I don't see any reason for programming languages to show affection with pain. 
-- Erik Naggum, comp.lang.lisp From mwh@python.net Fri Nov 22 10:41:05 2002 From: mwh@python.net (Michael Hudson) Date: 22 Nov 2002 10:41:05 +0000 Subject: [Python-Dev] release22-maint branch broken In-Reply-To: Tim Rice's message of "Thu, 21 Nov 2002 14:25:26 -0800 (PST)" References: Message-ID: <2mk7j5zy2m.fsf@starship.python.net> Tim Rice writes: > I pulled the release22-maint branch today and tried to build on UnixWare 7 > It looks like changes to setup.py and Lib/distutils/sysconfig.py > broke the build. Oops. > At least if there is no pre existing python installed. Well, if was reading the previous Python's config, that would be broken too. Can you work out why Lib/distutils/sysconfig.py isn't setting python_build? I suggest some debugging 'print's. Cheers, M. -- I've even been known to get Marmite *near* my mouth -- but never actually in it yet. Vegamite is right out. UnicodeError: ASCII unpalatable error: vegamite found, ham expected -- Tim Peters, comp.lang.python From martin@v.loewis.de Fri Nov 22 11:11:37 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 22 Nov 2002 12:11:37 +0100 Subject: [Python-Dev] --disable-unicode, again In-Reply-To: <2mn0o1zy82.fsf@starship.python.net> References: <2mptsyzwjd.fsf@starship.python.net> <004601c29184$bc91d660$ced241d5@hagrid> <2mn0o1zy82.fsf@starship.python.net> Message-ID: Michael Hudson writes: > Maybe, but having a configure option so conspicuously broken seemed a > bit embarrassing. Did you try --with-dl-dld lately ?-) > I also got tired of the failure reports from my cronjob every night -- > got a couple of warnings last night, but it built. So turn off the cronjob. It is actual little effort to keep this working. But checking it during the release cycle only seems more than enough to me - there is no need to have it permanently working. Regards, Martin From guido@python.org Fri Nov 22 12:06:55 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 22 Nov 2002 07:06:55 -0500 Subject: [Python-Dev] PEP 288: Generator Attributes In-Reply-To: Your message of "Fri, 22 Nov 2002 11:35:28 +0100." <1E8BAC24-FE06-11D6-9661-0030655234CE@cwi.nl> References: <1E8BAC24-FE06-11D6-9661-0030655234CE@cwi.nl> Message-ID: <200211221206.gAMC6tY06913@pcp02138704pcs.reston01.va.comcast.net> > On Friday, Nov 22, 2002, at 09:46 Europe/Amsterdam, Raymond Hettinger > wrote: > > def outputCaps(logfile): > > while True: > > line = __self__.data > > logfile.write(line.upper) > > yield None > > outputCaps.data = "" # optional attribute initialization > > > > g = outputCaps(open('logfil.txt','w')) > > for line in open('myfile.txt'): > > g.data = line > > g.next() [Jack] > I don't like it, there's "Magic! Magic!" written all over it. > Generators have always given me that feeling (you start reading them as > a function, then 20 lines down you meet a "yield" and suddenly realize > you have to start reading at the top again, keeping in mind that this > is a persistent stack frame), but with the __self__ plus the fact that > you local variables may not be what they appear to be makes it hairy. > You basically cannot understand the code without knowing the code of > the caller. There's also absolutely no way to get encapsulation. So > count me in for a -1. Ditto here. The PEP is way too thin on rationale. It has some examples but doesn't explain why this is better than what you woul ddo in current Python. 
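For comparison, here is the same example written without generator attributes, passing an ordinary holder object in explicitly. This is only a sketch of that alternative (class and file names invented; on 2.2 the usual 'from __future__ import generators' applies):

    class Holder:
        pass

    def outputCaps(logfile, box):
        # The caller owns 'box' and can update box.data between next() calls.
        while True:
            logfile.write(box.data.upper())
            yield None

    box = Holder()
    g = outputCaps(open('logfile.txt', 'w'), box)
    for line in open('myfile.txt'):
        box.data = line
        g.next()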
Since passing a simple objects as an extra argument to the generator is all that's needed to pass extra values into a generator between next() calls, I don't see the advantage. And __self__ is butt-ugly. > Generators have to me always felt more "class-instance-like" than > "function-like", and I guess this just goes to show it. --Guido van Rossum (home page: http://www.python.org/~guido/) From amk@amk.ca Fri Nov 22 13:30:08 2002 From: amk@amk.ca (A.M. Kuchling) Date: Fri, 22 Nov 2002 08:30:08 -0500 Subject: [Python-Dev] Re: release22-maint branch broken In-Reply-To: References: Message-ID: Tim Rice wrote: > raise DistutilsPlatformError(my_msg) > distutils.errors.DistutilsPlatformError: invalid Python installation: > unable to open /usr/local/lib/python2.2/config/Makefile (No such file > or directory) > gmake: *** [sharedmods] Error 1 The revised version of sysconfig.py figures out if it's in the build directory by looking for a landmark file; the landmark is Modules/Setup. Does that file exist? --amk From mchermside@ingdirect.com Fri Nov 22 14:00:30 2002 From: mchermside@ingdirect.com (Chermside, Michael) Date: Fri, 22 Nov 2002 09:00:30 -0500 Subject: [Python-Dev] Re: Yet another string formatting proposal Message-ID: <902A1E710FEAB740966EC991C3A38A8903C35B1B@INGDEXCHANGEC1.ingdirect.com> > [Oren proposes "\(a) + \(b)" string formatting.] I've been watching the string formatting proposals for a while. Each time I've thought... "well, it seems a LITTLE nicer than %s and %(name)s, but is it really ENOUGH better that it's worth replacing what we've got?". So far I've generally been unsure. But this one is good. It's best features are that it is _simple_, and _elegant_. I would prefer \{} over \() because it is more distinctive ("\(" being used in REs), and somehow the=20 {} seem more consistent with "execution" and () with "grouping" across a broad range of languages. I'm also bit dubious about the name "cook" (cute though!). But The design is clean and readable. The separation between cook() and cook_eval() is probably quite wise (although it will throw a couple of newbies, but in the process it'll teach them to beware of eval'ing arbitrary data). +1 (but, of course, that vote doesn't count until there's a PEP to refer to). -- Michael Chermside From fredrik@pythonware.com Fri Nov 22 14:00:27 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Fri, 22 Nov 2002 15:00:27 +0100 Subject: [Python-Dev] Anon CVS References: <15832.65251.994061.492964@gargle.gargle.HOWL> <025601c28fef$94c26a00$ced241d5@hagrid> <20021119172526.GS17931@epoch.metaslash.com> Message-ID: <04b301c2922f$8528ff90$0900a8c0@spiff> neal wrote: > > I haven't been able to connect since the last wednesday or > > so (cvs update just hangs until it times out). > >=20 > > If this persists, someone else will have to check in my SRE > > patches... >=20 > You may need to force protocol version 1. You can do this by > adding/modifying a stanza in your .ssh/config file: >=20 > host cvs.python.sourceforge.net > Protocol 1,2 unfortunately, that didn't help. at least not at once; things started = to work again after we rebooted the computer (for other reasons). linux gremlins, most likely. 
thanks /F From marc@informatik.uni-bremen.de Fri Nov 22 15:19:27 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: Fri, 22 Nov 2002 16:19:27 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.net> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> Message-ID: <75750000.1037978367@leeloo.intern.geht.de> >> > So it looks like PyOblect_Free() was called with 0x800 as an argument, >> > which is a bogus pointer value. Can you go up one stack level and see >> > what the value of k in function_call() is? >> 713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) { >> (gdb) up >> # 1 0x080dfef4 in function_call (func=0x826317c, arg=0x8256aac, >> kw=0x8269bdc) at Objects/funcobject.c:481 >> 481 PyMem_DEL(k); >> (gdb) p k >> $1 = (struct _object **) 0x800 > > Well, then maybe you can follow MvL's suggestion and find out how come > this value was returned by PyMem_NEW(PyObject *, 2*nk)??? The problem seems to be in the FreeBSD malloc implementation. malloc(0) returns 0x800 in all my tests. malloc(1) and up seems to work properly.. HTH, Marc From mal@lemburg.com Fri Nov 22 15:52:20 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 22 Nov 2002 16:52:20 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> Message-ID: <3DDE52B4.5050904@lemburg.com> arc Recht wrote: > >> > So it looks like PyOblect_Free() was called with 0x800 as an argument, > >> > which is a bogus pointer value. Can you go up one stack level and see > >> > what the value of k in function_call() is? > >> 713 if (ADDRESS_IN_RANGE(p, pool->arenaindex)) { > >> (gdb) up > >> # 1 0x080dfef4 in function_call (func=0x826317c, arg=0x8256aac, > >> kw=0x8269bdc) at Objects/funcobject.c:481 > >> 481 PyMem_DEL(k); > >> (gdb) p k > >> $1 = (struct _object **) 0x800 > > > > > > Well, then maybe you can follow MvL's suggestion and find out how come > > this value was returned by PyMem_NEW(PyObject *, 2*nk)??? > > The problem seems to be in the FreeBSD malloc implementation. malloc(0) > returns 0x800 in all my tests. malloc(1) and up seems to work properly.. Maybe we have to relax the configure test a bit and set the MALLOC_ZERO_RETURNS_NULL #define not only on NULL returns, but on returns < 0x1000 as well ?! (or add something to pyport.h along these lines specific to FreeBSD)... # check whether malloc(0) returns NULL or not AC_MSG_CHECKING(what malloc(0) returns) AC_CACHE_VAL(ac_cv_malloc_zero, [AC_TRY_RUN([#include #ifdef HAVE_STDLIB #include #else char *malloc(), *realloc(); int *free(); #endif main() { char *p; p = malloc(0); if ((unsigned long)p < 0x1000) exit(1); p = realloc(p, 0); if ((unsigned long)p < 0x1000) exit(1); free(p); exit(0); }], ac_cv_malloc_zero=nonnull, ac_cv_malloc_zero=null, ac_cv_malloc_zero=nonnull)]) # XXX arm cross-compile? AC_MSG_RESULT($ac_cv_malloc_zero) if test "$ac_cv_malloc_zero" = null then AC_DEFINE(MALLOC_ZERO_RETURNS_NULL, 1, [Define if malloc(0) returns a NULL pointer.]) fi -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... 
Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From tjreedy@udel.edu Fri Nov 22 16:21:49 2002 From: tjreedy@udel.edu (Terry Reedy) Date: Fri, 22 Nov 2002 11:21:49 -0500 Subject: [Python-Dev] Re: PEP 288: Generator Attributes References: <002b01c29203$a0b88680$125ffea9@oemcomputer> Message-ID: "Raymond Hettinger" wrote in message news:002b01c29203$a0b88680$125ffea9@oemcomputer... > PEP 288 has been updated and undeferred. > Comments are solicited. > > The old proposal for generator parameter passing with g.next(val) > has been replaced with simply using attributes in about the same > way as classes: > > def outputCaps(logfile): > while True: > line = __self__.data > logfile.write(line.upper) > yield None > outputCaps.data = "" # optional attribute initialization > > g = outputCaps(open('logfil.txt','w')) > for line in open('myfile.txt'): > g.data = line > g.next() -1 1. Having assignment of a dummy attribute to a (generator) function unlock assignment of an attribute of the same name to the otherwise read-only generator produced by that function is way too magical. It also breaks the illusion of simplicity and ignorance of detail allowed when generators are used in for loops. 2. Magical use of implicit '__self__' will likely increase questions about the self-object as the initial parameter of methods and requests that it be removed or worse, be made optional. Terry J. Reedy From marc@informatik.uni-bremen.de Fri Nov 22 16:11:39 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: Fri, 22 Nov 2002 17:11:39 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <3DDE52B4.5050904@lemburg.com> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> <3DDE52B4.5050904@lemburg.com> Message-ID: <109570000.1037981499@leeloo.intern.geht.de> >> The problem seems to be in the FreeBSD malloc implementation. malloc(0) >> returns 0x800 in all my tests. malloc(1) and up seems to work properly.. > > Maybe we have to relax the configure test a bit and set > the MALLOC_ZERO_RETURNS_NULL #define not only on NULL > returns, but on returns < 0x1000 as well ?! (or add > something to pyport.h along these lines specific to > FreeBSD)... I've just got an answer on the FreeBSD-current list: Feature in malloc and bug in third-party code. C99 says: If the size of the space requested is zero, the behavior is implimentation defined: either a null pointer is returned, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object. So, if it's correct, then this isn't a FreeBSD specific problem and 0x800 could possibly something else on ther systems. Even above 0x1000. Maybe the part that wraps malloc could be changed to return NULL for malloc(0) (without calling malloc). Regards, Marc From guido@python.org Fri Nov 22 16:18:34 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 22 Nov 2002 11:18:34 -0500 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: Your message of "Fri, 22 Nov 2002 17:11:39 +0100." 
<109570000.1037981499@leeloo.intern.geht.de> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> <3DDE52B4.5050904@lemburg.com> <109570000.1037981499@leeloo.intern.geht.de> Message-ID: <200211221618.gAMGIYU27657@odiug.zope.com> This can be solved (as MAL suggested) by fixing configure so that malloc(0) returning 0x800 is treated the same as malloc(0) returning NULL. That way, pymalloc's free code doesn't have to special-case this. --Guido van Rossum (home page: http://www.python.org/~guido/) From marc@informatik.uni-bremen.de Fri Nov 22 16:35:55 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: Fri, 22 Nov 2002 17:35:55 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <200211221618.gAMGIYU27657@odiug.zope.com> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> <3DDE52B4.5050904@lemburg.com> <109570000.1037981499@leeloo.intern.geht.de> <200211221618.gAMGIYU27657@odiug.zope.com> Message-ID: <123380000.1037982955@leeloo.intern.geht.de> > This can be solved (as MAL suggested) by fixing configure so that > malloc(0) returning 0x800 is treated the same as malloc(0) returning > NULL. That way, pymalloc's free code doesn't have to special-case > this. What about changing PyMem_MALLOC malloc to #define PyMem_MALLOC(n) n ? malloc(n) : NULL Regards, Marc From zack@codesourcery.com Fri Nov 22 17:10:32 2002 From: zack@codesourcery.com (Zack Weinberg) Date: Fri, 22 Nov 2002 09:10:32 -0800 Subject: [Python-Dev] Expect in python In-Reply-To: <20021122094022.GA6368@thyrsus.com> ("Eric S. Raymond"'s message of "Fri, 22 Nov 2002 04:40:22 -0500") References: <20021122003143.GA2242@thyrsus.com> <20021122082216.GA5519@thyrsus.com> <20021122094022.GA6368@thyrsus.com> Message-ID: <87wun5ttrr.fsf@egil.codesourcery.com> "Eric S. Raymond" writes: > Martin v. Loewis : >> As you can see, I don't care that much about usefulness of the >> module (although I'm sure others here will); to me, if users say it >> is useful, and maintenance is clear, and it does not come with a >> huge C library that you need to build, I'm fine with incorporating >> it - provided there is somebody I can assign bug reports to. I'd like to say here that this module would be quite useful to me. My employer is working on a replacement for DejaGNU (see http://www.codesourcery.com/qm/qmtest/) which is written in Python. Since DejaGNU uses expect, we're going to need equivalent functionality from somewhere. > No C. That's one of the good things about this implementation; > it's pure Python, relying only on the existing pty module. Which reminds me that the existing pty module needs some portability work; it only supports a subset of platforms. I will see if I can find the time to improve it. zw From tim.one@comcast.net Fri Nov 22 17:45:49 2002 From: tim.one@comcast.net (Tim Peters) Date: Fri, 22 Nov 2002 12:45:49 -0500 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) Message-ID: <15e94d1651fd.1651fd15e94d@icomcast.net> [Marc Recht] > What about changing > PyMem_MALLOC malloc > to > #define PyMem_MALLOC(n) n ? malloc(n) : NULL No, but expanding to malloc(n || 1) or malloc(n ? n : 1) would be fine. Code in Python uses a NULL return as an indication that a memory operation failed, so returning NULL is never appropriate for a PyMem_Malloc(0) call -- the Python API guarantees that its memory functions return NULL to mean out-of-memory, and that 0 is an OK argument. 
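As an aside for anyone who wants to see what their own platform does, the behaviour is easy to observe from Python with the ctypes module (standard library only from 2.5 onward, so this is strictly a latter-day illustration, and it assumes a Unix-style libc):

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library('c'))
    libc.malloc.restype = ctypes.c_void_p
    libc.malloc.argtypes = [ctypes.c_size_t]
    libc.free.argtypes = [ctypes.c_void_p]

    p = libc.malloc(0)
    if p is None:
        print 'malloc(0) returned NULL'
    else:
        print 'malloc(0) returned', hex(p)
    libc.free(p)          # free(NULL) is a no-op, so this is always safe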
The configuration cruft should go away here. It's proven itself too brittle too many times. That is, we should pretend that all platforms are insane, and never pass 0 to any platform's malloc or realloc. From martin@v.loewis.de Fri Nov 22 18:20:48 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 22 Nov 2002 19:20:48 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <200211221618.gAMGIYU27657@odiug.zope.com> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> <3DDE52B4.5050904@lemburg.com> <109570000.1037981499@leeloo.intern.geht.de> <200211221618.gAMGIYU27657@odiug.zope.com> Message-ID: Guido van Rossum writes: > This can be solved (as MAL suggested) by fixing configure so that > malloc(0) returning 0x800 is treated the same as malloc(0) returning > NULL. That way, pymalloc's free code doesn't have to special-case > this. This is nearly as bad as hard-coding the system on which it happens. If system developers come to like this trick, they may decide to return 0xFFFF0000 for malloc(0) (system developers, when confronted with a non-conformity in their implementation, always love to find a conforming but surprising implementation). Given that this is quite hard to debug if it happens, I'd rather like to see a better test. It's not easy to find one, though. One would be to do MALLOC_ZERO_RETURNS_ALWAYS_THE_SAME_THING, which would cover this and similar implementations (i.e. you test malloc(0) == malloc(0)). Another test would be MALLOC_ZERO_RETURNS_NO_MEMORY: malloc(0), round down to the page beginning, read a word there, expect a crash. This tests precisely the functionality that pymalloc needs. Regards, Martin From martin@v.loewis.de Fri Nov 22 18:23:39 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 22 Nov 2002 19:23:39 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <123380000.1037982955@leeloo.intern.geht.de> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> <3DDE52B4.5050904@lemburg.com> <109570000.1037981499@leeloo.intern.geht.de> <200211221618.gAMGIYU27657@odiug.zope.com> <123380000.1037982955@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > What about changing > PyMem_MALLOC malloc > to > #define PyMem_MALLOC(n) n ? malloc(n) : NULL No. Python requires that PyMem_MALLOC(0) returns a valid memory pointer (although I'm not certain where exactly it requires that). Regards, Martin From martin@v.loewis.de Fri Nov 22 18:24:47 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 22 Nov 2002 19:24:47 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <15e94d1651fd.1651fd15e94d@icomcast.net> References: <15e94d1651fd.1651fd15e94d@icomcast.net> Message-ID: Tim Peters writes: > The configuration cruft should go away here. It's proven itself too > brittle too many times. That is, we should pretend that all platforms > are insane, and never pass 0 to any platform's malloc or realloc. I agree, ignore my posting that proposes enhancements to the test. Regards, Martin From guido@python.org Fri Nov 22 18:26:16 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 22 Nov 2002 13:26:16 -0500 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: Your message of "22 Nov 2002 19:20:48 +0100." 
References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> <3DDE52B4.5050904@lemburg.com> <109570000.1037981499@leeloo.intern.geht.de> <200211221618.gAMGIYU27657@odiug.zope.com> Message-ID: <200211221826.gAMIQGF29615@odiug.zope.com> > Guido van Rossum writes: > > > This can be solved (as MAL suggested) by fixing configure so that > > malloc(0) returning 0x800 is treated the same as malloc(0) returning > > NULL. That way, pymalloc's free code doesn't have to special-case > > this. [MvL] > This is nearly as bad as hard-coding the system on which it > happens. If system developers come to like this trick, they may decide > to return 0xFFFF0000 for malloc(0) (system developers, when confronted > with a non-conformity in their implementation, always love to find a > conforming but surprising implementation). > > Given that this is quite hard to debug if it happens, I'd rather like > to see a better test. It's not easy to find one, though. > > One would be to do MALLOC_ZERO_RETURNS_ALWAYS_THE_SAME_THING, which > would cover this and similar implementations (i.e. you test malloc(0) > == malloc(0)). > > Another test would be MALLOC_ZERO_RETURNS_NO_MEMORY: malloc(0), round > down to the page beginning, read a word there, expect a crash. This > tests precisely the functionality that pymalloc needs. Yes, the test I proposed was naive. But I'd like to see this fixed in configure, not in pymalloc. (Tim seems to favor always ensuring that we never call malloc(0), but I can't see how that can be done without an extra test+jump. :-( ) --Guido van Rossum (home page: http://www.python.org/~guido/) From oren-py-d@hishome.net Fri Nov 22 18:29:19 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Fri, 22 Nov 2002 13:29:19 -0500 Subject: [Python-Dev] Re: Yet another string formatting proposal In-Reply-To: <902A1E710FEAB740966EC991C3A38A8903C35B1B@INGDEXCHANGEC1.ingdirect.com> References: <902A1E710FEAB740966EC991C3A38A8903C35B1B@INGDEXCHANGEC1.ingdirect.com> Message-ID: <20021122182919.GA86940@hishome.net> On Fri, Nov 22, 2002 at 09:00:30AM -0500, Chermside, Michael wrote: > But this one is good. It's best features are that it is > _simple_, and _elegant_. Thanks! > I would prefer \{} over \() because it > is more distinctive ("\(" being used in REs), and somehow the > {} seem more consistent with "execution" and () with "grouping" > across a broad range of languages. Yes, I'm starting to lean in the direction of \{}, too. > I'm also bit dubious about the name "cook" (cute though!). The rationale behind the name wasn't just "cuteness". The effect of this method is the exact opposite of the "raw" prefix so the name "cook" was the most natural and descriptive choice. Got any other ideas? > +1 (but, of course, that vote doesn't count until there's a PEP > to refer to). Just wanted to sample the water first. I bet everyone is already sick of this subject after two PEPs, several other proposals, endless discussions and no results. Oren From mchermside@ingdirect.com Fri Nov 22 18:49:05 2002 From: mchermside@ingdirect.com (Chermside, Michael) Date: Fri, 22 Nov 2002 13:49:05 -0500 Subject: [Python-Dev] Re: Yet another string formatting proposal Message-ID: <902A1E710FEAB740966EC991C3A38A8903C35B1C@INGDEXCHANGEC1.ingdirect.com> > > I'm also bit dubious about the name "cook" (cute though!).=20 >=20 > The rationale behind the name wasn't just "cuteness". 
The effect > of this method is the exact opposite of the "raw" prefix so the > name "cook" was the most natural and descriptive choice. Got any > other ideas? But I don't think that it IS the opposite of the "raw" prefix. In a case with just substitutions like this, it works: >>> x =3D 5 >>> s1 =3D ":\{x}:" >>> s1 ':5:' >>> s2 =3D r":\{x}:" >>> s2 ':\{x}:' >>> s2.cook() ':5:' But in a case where OTHER escapes are used, it wouldn't be: >>> x =3D 5 >>> s1 =3D "\t\{x}" >>> print s1 5 >>> s2 =3D r"\t\{x}" >>> print s2 \t# >>> print s2.cook() \t5 You could "fix" this by ALSO processing \ escapes in the cook() method, but that confuses two very different processes and would only make the feature MORE difficult to use. Of course, now that I knock your idea down, I'd better come up with a replacement of my own. I suggest ".sub()" from PEP 292 (after all... let's try to build on what was developed in the previous discussions). > > +1 (but, of course, that vote doesn't count until there's a PEP > > to refer to). >=20 > Just wanted to sample the water first. I bet everyone is already=20 > sick of this subject after two PEPs, several other proposals, endless=20 > discussions and no results. Yes, but the reason for so many discussions is that it's an issue which actually matters. So long as new ideas BUILD on existing ones, we're making progress. This is one of those cases where the implementation is trivial, but designing it right... ah, that's HARD. -- Michael Chermside From sholden@holdenweb.com Fri Nov 22 15:31:40 2002 From: sholden@holdenweb.com (Steve Holden) Date: Fri, 22 Nov 2002 10:31:40 -0500 Subject: [Python-Dev] Expect in python References: <20021122003143.GA2242@thyrsus.com> <3DDD86D7.3010409@ActiveState.com> <20021122012540.GA2687@thyrsus.com> Message-ID: <017701c29266$ac82d4d0$6300000a@holdenweb.com> > David Ascher : [ .. ] > > I agree that expectish functionality would be nice. > > Something that I could grow into a Windows/not-just-pty's model would be > > even nicer. [esr] > > Agreed. Your chances of this certainly won't be *decreased* if pexpect is > included. I'd quite happily settle for something that could drive cygwin bash shells under Windows, but clearly somebody cleverer than I will have to work some magic for such functionality to appear. Presumably manipulating command windows with cyg/bash inside them requires pretty much the same features as manipulating windows with the Windows command shell inside them. regards ----------------------------------------------------------------------- Steve Holden http://www.holdenweb.com/ Python Web Programming http://pydish.holdenweb.com/pwp/ Previous .sig file retired to www.homeforoldsigs.com ----------------------------------------------------------------------- From tim@multitalents.net Fri Nov 22 21:55:19 2002 From: tim@multitalents.net (Tim Rice) Date: Fri, 22 Nov 2002 13:55:19 -0800 (PST) Subject: [Python-Dev] Re: release22-maint branch broken In-Reply-To: Message-ID: On Fri, 22 Nov 2002, A.M. Kuchling wrote: > Tim Rice wrote: > > > raise DistutilsPlatformError(my_msg) > > distutils.errors.DistutilsPlatformError: invalid Python installation: > > unable to open /usr/local/lib/python2.2/config/Makefile (No such file > > or directory) > > gmake: *** [sharedmods] Error 1 > > The revised version of sysconfig.py figures out if it's in the build > directory by looking for a landmark file; the landmark is Modules/Setup. > Does that file exist? Yes it does. I put some prints in ... 
argv0_path = os.path.dirname(os.path.abspath(sys.executable)) print argv0_path landmark = os.path.join(argv0_path, "Modules", "Setup") print landmark if not os.path.isfile(landmark): python_build = 0 print "python_build = 0" elif os.path.isfile(os.path.join(argv0_path, "Lib", "os.py")): python_build = 1 print "python_build = 1" else: python_build = os.path.isfile(os.path.join(os.path.dirname(argv0_path), "Lib", "os.py")) print "else" print python_build del argv0_path, landmark ... And get ... /usr/local/src/utils/Python-2 /usr/local/src/utils/Python-2/Modules/Setup else 0 running build ... Could this breaking because I build outside of the source tree? > > --amk > -- Tim Rice Multitalents (707) 887-1469 tim@multitalents.net From Gareth.McCaughan@pobox.com Sat Nov 23 02:34:19 2002 From: Gareth.McCaughan@pobox.com (Gareth McCaughan) Date: Sat, 23 Nov 2002 02:34:19 +0000 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments Message-ID: <200211230234.19694@gjm> Since def f(a,b,*c): ... f(1,2,3,4,5) does (in effect) a=1, b=2, c=(3,4,5), I suggest that a,b,*c = 1,2,3,4,5 should do a=1, b=2, c=(3,4,5) too. Likewise, a,b,c,d,e = 1,2,*(3,4,5) should be parallel to def f(a,b,c,d,e): ... f(1,2,*(3,4,5)) I shall refrain from suggesting that the following should "work": a,b,c = 1,**{'b':2,'c':3} a,**d = (a=1, b=2, c=3) a,b,c = (b=2, c=3, a=1) though actually it's an amusing thought, and I wouldn't complain if they did. The second of them (in the special case where the LHS is just "**d") would provide an alternative to Just van Rossum's proposal to overload the dict constructor. (The two aren't exclusive.) Note that the parens would be necessary, to avoid ambiguity with "a=b=c=0". The motivating principle is that parameter passing is, or should be, just like assignment[1], so what works in one context should work in the other. I've moderately often actually wanted the a,b,*c=... notation, too, usually when using split() on strings containing an unknown number of fields; the most concise alternative goes like a,b,c = (line.split()+[None])[:3] which both looks and feels ugly. Of course, you can't take "parameter passing is just like assignment" too seriously here, because x = 1,2,3 a,b,c = x works, whereas def f(x): ... def g(a,b,c): ... f(1,2,3) g(x) doesn't. Still, the analogy is (to me) quite a compelling one. Am I nuts? [1] This principle matters more in languages like C++ and Eiffel where parameter passing and assignment can both cause funny (and potentially user-defined) copying operations to happen, not just slinging references. -- g From lellinghaus@yahoo.com Sat Nov 23 05:29:03 2002 From: lellinghaus@yahoo.com (Lance Ellinghaus) Date: Fri, 22 Nov 2002 21:29:03 -0800 (PST) Subject: [Python-Dev] Expect in python In-Reply-To: <20021122003143.GA2242@thyrsus.com> Message-ID: <20021123052903.74620.qmail@web20901.mail.yahoo.com> Yes. I have been using it for a while. It works very well, except to make it run on Solaris you have to make modifications to the posix module. I submitted the necessary changes, but they were denied since I made them Solaris specific. The module changes are in the submitted patches on SourceForge under the Python project. Lance Ellinghaus --- "Eric S. Raymond" wrote: > Has anyone else here looked at Pexpect? It's a pure-Python module > that > uses ptys to support mmost of the capability of Tcl expect. 
> > http://pexpect.sourceforge.net/ > > This is an 0.94 beta, but the author appears to know what he is > doing, and > it would be an excellent boost to Python's capabilities for > administrative > scripting. Among other things, it would subsume most of what Tcl is > actually > used for. > > I'm thinking this is a very strong candidate to enter the Python > library when > it reaches 1.0 level. Comments? > -- > Eric S. Raymond > > _______________________________________________ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev ===== -- Lance Ellinghaus __________________________________________________ Do you Yahoo!? Yahoo! Mail Plus – Powerful. Affordable. Sign up now. http://mailplus.yahoo.com From oren-py-d@hishome.net Sat Nov 23 07:46:59 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Sat, 23 Nov 2002 02:46:59 -0500 Subject: [Python-Dev] Re: Yet another string formatting proposal In-Reply-To: <902A1E710FEAB740966EC991C3A38A8903C35B1C@INGDEXCHANGEC1.ingdirect.com> References: <902A1E710FEAB740966EC991C3A38A8903C35B1C@INGDEXCHANGEC1.ingdirect.com> Message-ID: <20021123074659.GA89078@hishome.net> On Fri, Nov 22, 2002 at 01:49:05PM -0500, Chermside, Michael wrote: > > The rationale behind the name wasn't just "cuteness". The effect > > of this method is the exact opposite of the "raw" prefix so the > > name "cook" was the most natural and descriptive choice. > > But I don't think that it IS the opposite of the "raw" prefix. In ... > But in a case where OTHER escapes are used, it wouldn't be: ... > You could "fix" this by ALSO processing \ escapes in the cook() > method, but that confuses two very different processes and would > only make the feature MORE difficult to use. In case it wasn't clear from my posting this is exactly what I propose. Otherwise how would you convert the following run-time embedding to a compile-time template? u"Resistance = \{resistance} \N{GREEK CAPITAL LETTER OMEGA}\n" If cooking did not process ALL escape sequences you would not be able to just change the 'u' prefix to 'ur' and cook it later. I don't think it makes it more difficult to use. NOT processing escapes would be inconsistent and make it difficult to use. Oren From bac@OCF.Berkeley.EDU Sat Nov 23 08:17:01 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 00:17:01 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211230234.19694@gjm> Message-ID: [Gareth McCaughan] > Since > > def f(a,b,*c): ... > f(1,2,3,4,5) > > does (in effect) a=1, b=2, c=(3,4,5), I suggest that > > a,b,*c = 1,2,3,4,5 > > should do a=1, b=2, c=(3,4,5) too. Likewise, > > a,b,c,d,e = 1,2,*(3,4,5) > > should be parallel to > > def f(a,b,c,d,e): ... > f(1,2,*(3,4,5)) > This just strikes me as YAGNI. I have never felt the need to have this kind of assignment outside of parameter passing. If this kind of assignment is that big of a deal you can just use slicing. I realize the same argument can be used for parameter arguments, but the reason for having *args and **kwargs is to allow more general functions and I just don't feel this is needed for assignment. This might all seem rather weak and vague, but my gut is just saying this is not a good idea. So -1 from me unless this feeling is from indigestion. -Brett From brian@sweetapp.com Sat Nov 23 08:35:05 2002 From: brian@sweetapp.com (Brian Quinlan) Date: Sat, 23 Nov 2002 00:35:05 -0800 Subject: [Python-Dev] Half-baked proposal: * (and **?) 
in assignments In-Reply-To: Message-ID: <006d01c292cb$39986420$21795418@dell1700> Brett Cannon wrote: > > does (in effect) a=1, b=2, c=(3,4,5), I suggest that > > > > a,b,*c = 1,2,3,4,5 > > > > should do a=1, b=2, c=(3,4,5) too. Likewise, > > > > a,b,c,d,e = 1,2,*(3,4,5) > > > > should be parallel to > > > > def f(a,b,c,d,e): ... > > f(1,2,*(3,4,5)) > > > > This just strikes me as YAGNI. I have never felt the need to have > this kind of assignment outside of parameter passing. Need is a bit too strong. But which do you like better: year, month, day = time.localtime()[0:3] or year, month, day, *dummy = time.localtime() Cheers, Brian From bac@OCF.Berkeley.EDU Sat Nov 23 08:48:25 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 00:48:25 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <006d01c292cb$39986420$21795418@dell1700> Message-ID: [Brian Quinlan] > Need is a bit too strong. But which do you like better: > > year, month, day = time.localtime()[0:3] > > or > > year, month, day, *dummy = time.localtime() > In all honesty, the top one. The *dummy variable strikes me as cluttering the assignment variables. I would rather have clutter on the RHS of an assignment then on the LHS. Personally it takes me a little bit more effort to realize that *dummy is going to get assigned what ever is left over and that it will be unneeded. I would feel the need to delete the variable so as to not constantly remember what exactly what it is for or holding (yes, good naming can handle that but I don't want the bother of having to read a variable name just to realize that I don't want the variable). The reason this kind of thing is okay for parameter passing is that you might have the possibility of extra arguments, but you want that possibility. This example, though, does not have that benefit since you know you don't want what ``dummy`` gets. I guess I just like very precise, clean assignment and I don't see this ability helping with that keeping that view. -Brett From martin@v.loewis.de Sat Nov 23 09:15:27 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 10:15:27 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <75750000.1037978367@leeloo.intern.geht.de> References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> Message-ID: Marc Recht writes: > The problem seems to be in the FreeBSD malloc > implementation. malloc(0) returns 0x800 in all my tests. malloc(1) and > up seems to work properly.. Can you please try again? This should be fixed now. Regards, Martin From martin@v.loewis.de Sat Nov 23 09:25:35 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 10:25:35 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <006d01c292cb$39986420$21795418@dell1700> References: <006d01c292cb$39986420$21795418@dell1700> Message-ID: Brian Quinlan writes: > Need is a bit too strong. But which do you like better: > > year, month, day = time.localtime()[0:3] > > or > > year, month, day, *dummy = time.localtime() lttime = time.localtime() year = ltime.tm_year month = ltime.tm_mon day = ltime.tm_mday Readability counts. Actually, you may just avoid the assignments altogether, and just use the named fields. 
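For instance (an untested sketch, relying on the struct_time
attributes that 2.2 introduced):

    import time

    lt = time.localtime()
    # The result still unpacks like the old 9-tuple ...
    year, month, day = lt[:3]
    # ... but the named fields read without counting index positions.
    assert (year, month, day) == (lt.tm_year, lt.tm_mon, lt.tm_mday)
    print "%04d-%02d-%02d" % (lt.tm_year, lt.tm_mon, lt.tm_mday)
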
Regards, Martin From marc@informatik.uni-bremen.de Sat Nov 23 09:31:54 2002 From: marc@informatik.uni-bremen.de (Marc Recht) Date: Sat, 23 Nov 2002 10:31:54 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: References: <200211111438.gABEcKX15504@pcp02138704pcs.reston01.va.comcast.ne t> <75750000.1037978367@leeloo.intern.geht.de> Message-ID: <32660000.1038043914@leeloo.intern.geht.de> > Can you please try again? This should be fixed now. Works. Thanks! Regards, Marc From arigo@tunes.org Sat Nov 23 10:32:46 2002 From: arigo@tunes.org (Armin Rigo) Date: Sat, 23 Nov 2002 02:32:46 -0800 (PST) Subject: [Python-Dev] from tuples to immutable dicts Message-ID: <20021123103246.C6E724B27@bespin.org> Hello everybody, There are some (admittedly occasional) situations in which an immutable dictionary-like type would be handy, e.g. in the C core to implement switches or to speed up keyword argument matching. Here is a related suggestion: enhancing or subclassing tuples to allow items to be named. Example syntax: (1,2,c=3,d=4) would build the 4-tuple (1,2,3,4) with names on the last two items. I believe that it fills a hole between very small structures (e.g. (x,y) coordinates for points) and large structures naturally implemented as classes: it allows small structures to grow a bit without turning into an obscure 5- or 10-tuple. As a typical example, consider the os.stat() trick: it currently returns an object that behaves like a 10-tuple, but whose items can also be read via attributes for the sake of clarity. It only seems natural to be allowed to do: p = (x=2, y=3) print p.x # 2 print p[-1] # 3 print list(p) # [2,3] Of course, the syntax to build such tuples is chosen to match the call syntax. Conversely, tuple unpacking could use exactly the same syntax as function argument lists: x = (1, 2) (a, b, c=3) = x # set a=1, b=2, c=3 a, b, c = (b=2, c=3, a=1) # idem The notion might unify the two special * and ** arguments, which are set respectively to an _immutable_ list and a _mutable_ new dictionary. A single special argument (***?) might be used to get the extra arguments as a tuple with possibly named items. As a side effect the function can know in which order the keyword arguments were passed in, which may or may not be a good idea a priori but which I sometimes wish I could do. def f(x, ***rest): print rest f(1,2,3) # -> (2, 3) f(1,2,c=3,d=4) # -> (2, c=3, d=4) f(1,2,x=3,y=4) # -> TypeError ('x' gets two values) f(w=1,x=2,y=3) # -> (w=1, y=3) Questions: * use 'tuple' or a new subtype 'namedtuple' or 'structure'? * the suggested syntax emphasis the use of strings for the keys, but the constructor could be more general, taking arbitrary hashable values: t = namedtuple([(1,2), (3,4)]) t = namedtuple({1:2, 3:4}) dict(t) does not work: confusion with dict() of a sequence of pairs dict(**t) -> {1:2, 3:4} # based on Just's idea of extra keyword arguments to dict() * how do we read the key names? It seems impossible to add methods to namedtuples since all attributes should potentially be reserved for item reads. Some bad ideas: p = (1, 2, x=3, y=4) p.__keys__() -> [None, None, 'x', 'y'] # special method name p.__key__(2) -> 'x' or p.__keys__ -> (None, None, 'x', 'y') # special attribute or p % 2 -> 'x' # new operator (abusing % again) or iterkeys(p) -> iterable # global function (urgh) Note that dict(p) -> {'x': 3, 'y': 4} is a partial solution, but it makes inspection heavier and looses all info about key order and unnamed items. 
Besides, dict((1,2,3)) -> {} looks like a bad thing to do. * shoud name collisions be allowed inside a namedtuple? * what about * and ** call syntaxes? For compatibility we might have to do p = (1, 2, x=3, y=4) f(*p) # treat p as a tuple, always ignoring names (?) f(**p) # only uses the named items of p (??) f(***p) # same as f(1, 2, x=3, y=4) or maybe push forward a shorter form of *** and deprecate * and **...? * dissymetries between namedtuples and dicts: operations like 'in' and iteration operates on values in tuples, but on keys in dicts... Armin. From zack@codesourcery.com Sat Nov 23 10:39:26 2002 From: zack@codesourcery.com (Zack Weinberg) Date: Sat, 23 Nov 2002 02:39:26 -0800 Subject: [Python-Dev] Expect in python In-Reply-To: <20021123052903.74620.qmail@web20901.mail.yahoo.com> (Lance Ellinghaus's message of "Fri, 22 Nov 2002 21:29:03 -0800 (PST)") References: <20021123052903.74620.qmail@web20901.mail.yahoo.com> Message-ID: <87wun4muxt.fsf@egil.codesourcery.com> Lance Ellinghaus writes: > Yes. I have been using it for a while. It works very well, except > to make it run on Solaris you have to make modifications to the posix > module. I submitted the necessary changes, but they were denied since > I made them Solaris specific. The module changes are in the submitted > patches on SourceForge under the Python project. If these changes are what I think they are, I know how to implement them generically. What was the patch number? zw From martin@v.loewis.de Sat Nov 23 11:02:15 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 12:02:15 +0100 Subject: [Python-Dev] Expect in python In-Reply-To: <87wun4muxt.fsf@egil.codesourcery.com> References: <20021123052903.74620.qmail@web20901.mail.yahoo.com> <87wun4muxt.fsf@egil.codesourcery.com> Message-ID: Zack Weinberg writes: > If these changes are what I think they are, I know how to implement > them generically. What was the patch number? Patch #579433. All I'm asking for is that autoconf tests are added to test for the needed features instead of testing for defined(sun). Lance' other patch (#579435) also needs work, like documentation, setup.py and Setup.dist integration, perhaps some autoconf tests to avoid trying to build it when it can't be built, and perhaps tests to verify it works correctly. Maybe this should go into the pwd module in the first place. Regards, Martin From martin@v.loewis.de Sat Nov 23 11:19:05 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 12:19:05 +0100 Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: <20021123103246.C6E724B27@bespin.org> References: <20021123103246.C6E724B27@bespin.org> Message-ID: Armin Rigo writes: > Here is a related suggestion: enhancing or subclassing tuples to > allow items to be named. Example > syntax: > > (1,2,c=3,d=4) > > would build the 4-tuple (1,2,3,4) with names on the last two items. While I agree on the need for a struct-like type, I'm strongly opposed to adding new syntax, or even make it a builtin type (i.e. one that is referenced from __builtins__). Provide such a type using the standard language mechanisms. > I believe that it fills a hole between very small structures > (e.g. (x,y) coordinates for points) and large structures naturally > implemented as classes: it allows small structures to grow a bit > without turning into an obscure 5- or 10-tuple. 
As a typical > example, consider the os.stat() trick: it currently returns an > object that behaves like a 10-tuple, but whose items can also be > read via attributes for the sake of clarity. I think the structseq type is a good starting point. Fred once had a plan to expose structseqs to Python, to allow the creation of new structs in Python. I was suggesting that there should be a method new.struct_seq, which is called as struct_seq(name, doc, n_in_sequence, (fields)) where fields is a list of (name,doc) tuples. The resulting thing would be similar to os.stat_result: you need to call it with the mandatory fields in sequence, and can call it with the optional fields by keyword argument. > * use 'tuple' or a new subtype 'namedtuple' or 'structure'? No new builtins at all, please. > * the suggested syntax emphasis the use of strings for the keys, but > the constructor could be more general, taking arbitrary hashable > values: YAGNI. > * how do we read the key names? You can currently get them from the type's __dict__, although this contains more information than you want. However, what do you need the key names for? > * shoud name collisions be allowed inside a namedtuple? You mean, two fields with the same name? No. > * what about * and ** call syntaxes? Irrelevant, since syntax extensions are not acceptable. > * dissymetries between namedtuples and dicts: operations like 'in' > and iteration operates on values in tuples, but on keys in dicts... A struct really is a sequence, not a mapping. Regards, Martin From martin@v.loewis.de Sat Nov 23 11:31:55 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 12:31:55 +0100 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <15834.32535.606078.681754@gargle.gargle.HOWL> References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> <15834.32535.606078.681754@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > I think this one at least is fixed in pybsddb's cvs. Can you update > to the latest cvs? Done (updated 2002.11.23.10.42.36); I now get 10 failures on Linux. Can somebody please look into these? Greg, can you please indicate whether you will use the Python CVS as the master source from now on, or forward all your changes to the Python CVS? Regards, Martin From fredrik@pythonware.com Sat Nov 23 14:21:27 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Sat, 23 Nov 2002 15:21:27 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments References: <006d01c292cb$39986420$21795418@dell1700> Message-ID: <00f201c29308$c2b37f00$ced241d5@hagrid> Martin v. Loewis wrote: > day = ltime.tm_mday > > Readability counts. > > Actually, you may just avoid the assignments altogether, and just use > the named fields. with names like "tm_mday", doesn't your second statement contradict the first one? From barry@python.org Sat Nov 23 16:02:48 2002 From: barry@python.org (Barry A. Warsaw) Date: Sat, 23 Nov 2002 11:02:48 -0500 Subject: [Python-Dev] bsddb3 imported References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> <15834.32535.606078.681754@gargle.gargle.HOWL> Message-ID: <15839.42664.139738.974217@gargle.gargle.HOWL> >>>>> "MvL" == Martin v Loewis writes: >> I think this one at least is fixed in pybsddb's cvs. Can you >> update to the latest cvs? MvL> Done (updated 2002.11.23.10.42.36); I now get 10 failures on MvL> Linux. Can somebody please look into these? 
test_bsddb.py runs to successful completion on RH7.3. test_bsddb3.py is running now... Note that test_ucn.py crashes Python. MvL> Greg, can you please indicate whether you will use the Python MvL> CVS as the master source from now on, or forward all your MvL> changes to the Python CVS? We're working in a branch right now to try to support BDB 4.1. I'll take responsibility to merge that code into Python cvs when its ready. -Barry From martin@v.loewis.de Sat Nov 23 16:44:19 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 17:44:19 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <00f201c29308$c2b37f00$ced241d5@hagrid> References: <006d01c292cb$39986420$21795418@dell1700> <00f201c29308$c2b37f00$ced241d5@hagrid> Message-ID: "Fredrik Lundh" writes: > > Readability counts. > > > > Actually, you may just avoid the assignments altogether, and just use > > the named fields. > > with names like "tm_mday", doesn't your second statement > contradict the first one? It depends on the reader. A reader familiar with the time module will recognize tm_foo as a field of struct tm, and then figure out what field it is. That the field is called mday and not just day is surprising; if the reader is surprised enough, he will look up the documentation, and the rationale will become clear immediately. Regards, Martin From martin@v.loewis.de Sat Nov 23 17:12:10 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 18:12:10 +0100 Subject: [Python-Dev] bsddb3 imported In-Reply-To: <15839.42664.139738.974217@gargle.gargle.HOWL> References: <15834.17784.35358.906103@gargle.gargle.HOWL> <15834.19302.710499.876242@gargle.gargle.HOWL> <15834.32535.606078.681754@gargle.gargle.HOWL> <15839.42664.139738.974217@gargle.gargle.HOWL> Message-ID: barry@python.org (Barry A. Warsaw) writes: > Note that test_ucn.py crashes Python. Oops. Fixed. Regards, Martin From skip@pobox.com Sat Nov 23 17:12:45 2002 From: skip@pobox.com (Skip Montanaro) Date: Sat, 23 Nov 2002 11:12:45 -0600 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: References: <006d01c292cb$39986420$21795418@dell1700> Message-ID: <15839.46861.447588.825415@montanaro.dyndns.org> >> But which do you like better: >> >> year, month, day = time.localtime()[0:3] >> >> or >> >> year, month, day, *dummy = time.localtime() >> Brett> In all honesty, the top one. The *dummy variable strikes me as Brett> cluttering the assignment variables. >From what little ELisp programming I did in the dim dark past, I would prefer the second variant written as year, month, day, *rest = time.localtime() Brett> The reason this kind of thing is okay for parameter passing is Brett> that you might have the possibility of extra arguments, but you Brett> want that possibility. This example, though, does not have that Brett> benefit since you know you don't want what ``dummy`` gets. Yes you do. You know it gets the stuff you don't care about. ;-) Skip From sholden@holdenweb.com Sat Nov 23 17:09:52 2002 From: sholden@holdenweb.com (Steve Holden) Date: Sat, 23 Nov 2002 12:09:52 -0500 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments References: <200211230234.19694@gjm> Message-ID: <02d101c29313$23fc6380$6300000a@holdenweb.com> [Gareth] > Since > > def f(a,b,*c): ... 
> f(1,2,3,4,5) > > does (in effect) a=1, b=2, c=(3,4,5), I suggest that > > a,b,*c = 1,2,3,4,5 > [similar suggestions] > > The motivating principle is that parameter passing is, > or should be, just like assignment[1], so what works in > one context should work in the other. I've moderately > often actually wanted the a,b,*c=... notation, too, > usually when using split() on strings containing an > unknown number of fields; the most concise alternative > goes like > > a,b,c = (line.split()+[None])[:3] > > which both looks and feels ugly. > The advantage, of course, is the explicit nature. > Of course, you can't take "parameter passing is just like > assignment" too seriously here, because > > x = 1,2,3 > a,b,c = x > > works, whereas > > def f(x): ... > def g(a,b,c): ... > f(1,2,3) > g(x) > > doesn't. Still, the analogy is (to me) quite a compelling one. > > Am I nuts? > Not nuts, but perhaps striving too hard for a false generality. I would support this suggestion if it made the assignment code simpler, but if it complicates it then my feeling is YAGNI. It's a nice parallel, but n ot one that meets any crying need, IMO. regards ----------------------------------------------------------------------- Steve Holden http://www.holdenweb.com/ Python Web Programming http://pydish.holdenweb.com/pwp/ Previous .sig file retired to www.homeforoldsigs.com ----------------------------------------------------------------------- From nathan-nntp@geerbox.com Sat Nov 23 17:52:40 2002 From: nathan-nntp@geerbox.com (Nathan Clegg) Date: Sat, 23 Nov 2002 09:52:40 -0800 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <006d01c292cb$39986420$21795418@dell1700> References: <006d01c292cb$39986420$21795418@dell1700> Message-ID: <15839.49256.721749.856846@jin.int.geerbox.com> Brian Quinlan wrote: > year, month, day = time.localtime()[0:3] > > or > > year, month, day, *dummy = time.localtime() I think Gareth's split example is better. localtime returns a fixed-length tuple. Things are much more interesting when you don't know how many elements you're getting. Which do you like better for, say, parsing required and optional parameters to a command: (cmd, req1, req2, req3, opt_str) = input.split(None, 4) opts = opt_str.split() execute_cmd(cmd, req1, req2, req3, opts) or parts = input.split() (cmd, req1, req2, req3) = parts[:4] opts = parts[4:] execute_cmd(cmd, req1, req2, req3, opts) or (cmd, req1, req2, req3, *opts) = input.split() execute_cmd(cmd, req1, req2, req3, opts) A really interesting possibility is: (first, *middle, last) = input.split() This breaks the parallel to parameter passing, but would be very handy. Are we getting too perlish? -- Nathan Clegg GeerBox nathan-nntp@geerbox.com From python@rcn.com Sat Nov 23 17:57:50 2002 From: python@rcn.com (Raymond Hettinger) Date: Sat, 23 Nov 2002 12:57:50 -0500 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments References: <200211230234.19694@gjm> Message-ID: <001b01c29319$d7717e40$125ffea9@oemcomputer> > a,b,*c = 1,2,3,4,5 Hmm, a right-associative unary tuplizer that is only valid for the right mostvariable in the left-hand-side of a multiple-assignment. It sure doesn't sound like a simplication. 
Still, it looks like a handy tool for ML style head/tail separation: head, *tail = func_returning_a_sequence() And, it beats the existing alternative: seq = func_returning_a_sequence() head, tail = seq[0], seq[1:] > a,b,c,d,e = 1,2,*(3,4,5) In this, the exisiting alternative is better: a,b,(c,d,e) = 1,2,(3,4,5) Raymond Hettinger From tim.one@comcast.net Sat Nov 23 17:59:05 2002 From: tim.one@comcast.net (Tim Peters) Date: Sat, 23 Nov 2002 12:59:05 -0500 Subject: [Python-Dev] bsddb3 imported In-Reply-To: Message-ID: [Barry] > Note that test_ucn.py crashes Python. [MvL] > Oops. Fixed. Not here. In a debug build, it reliably crashes in the bowels of strcpy. My guess: strcpy(buffer, hangul_syllables[L][0]); can't always work because the hangul_syllables array contains NULL pointers in some entries instead of empty strings. It blows up for me when L is pointing at the { 0, "YI", "S" }, entry. I'm proceeding on "a fix" to see whether s/0/""/g cures it. From martin@v.loewis.de Sat Nov 23 18:13:24 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 23 Nov 2002 19:13:24 +0100 Subject: [Python-Dev] bsddb3 imported In-Reply-To: References: Message-ID: Tim Peters writes: > Not here. In a debug build, it reliably crashes in the bowels of strcpy. > My guess: > > strcpy(buffer, hangul_syllables[L][0]); > > can't always work because the hangul_syllables array contains NULL pointers > in some entries instead of empty strings. It blows up for me when L is > pointing at the > > { 0, "YI", "S" }, > > entry. I'm proceeding on "a fix" to see whether s/0/""/g cures it. Are you sure you are up-to-date? For that to happen, L must be 19. Now, L is SIndex / NCount, where NCount is 588. So SIndex must be atleast (588*19 =) 11172 (= SCount). However, SIndex is code-SBase, so code must be atleast SBase+SCount. In that case, the entire if statement should not be executed, because the if statement reads if (SBase <= code && code < SBase+SCount) { It so happens that the fields which are NULL are never accessed. Regards, Martin From tim.one@comcast.net Sat Nov 23 18:26:06 2002 From: tim.one@comcast.net (Tim Peters) Date: Sat, 23 Nov 2002 13:26:06 -0500 Subject: [Python-Dev] bsddb3 imported In-Reply-To: Message-ID: [Tim] > Not here. In a debug build, it [test_ucn] reliably crashes in the > bowels of strcpy. > My guess: > > strcpy(buffer, hangul_syllables[L][0]); > > can't always work because the hangul_syllables array contains > NULL pointers in some entries instead of empty strings. It blows > up for me when L is pointing at the > > { 0, "YI", "S" }, > > entry. I'm proceeding on "a fix" to see whether s/0/""/g cures it. [Martin] > Are you sure you are up-to-date? I was at the time. > For that to happen, L must be 19. It was. > Now, L is SIndex / NCount, where NCount is 588. So SIndex must be > atleast (588*19 =) 11172 (= SCount). However, SIndex is code-SBase, > so code must be at least SBase+SCount. In that case, the entire if > statement should not be executed, because the if statement reads > > if (SBase <= code && code < SBase+SCount) { > > It so happens that the fields which are NULL are never accessed. It was at the time. It would be a stretch to believe I pasted that line in by blind luck -- it's what the debugger was pointing at when the crash occurred, and hangul_syllables[L][0] was NULL at the time. Maybe that's been "fixed" in more than one way by now. From pobrien@orbtech.com Sat Nov 23 18:29:47 2002 From: pobrien@orbtech.com (Patrick K. 
O'Brien) Date: Sat, 23 Nov 2002 12:29:47 -0600 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211230234.19694@gjm> References: <200211230234.19694@gjm> Message-ID: <200211231229.47332.pobrien@orbtech.com> On Friday 22 November 2002 08:34 pm, Gareth McCaughan wrote: > Since > > def f(a,b,*c): ... > f(1,2,3,4,5) > > does (in effect) a=1, b=2, c=(3,4,5), I suggest that > > a,b,*c = 1,2,3,4,5 > > should do a=1, b=2, c=(3,4,5) too. Funny you should bring this up. Last Sunday I was demonstrating Python to some novice programmers and we were talking about tuple unpacking. Someone asked how you could unpack only the first couple of items from a tuple. I immediately decided to try the code that you suggested and was disappointed it didn't work (Not that I really expected it to work, but sometimes you surprise yourself.) Anyway, we then moved on to slicing, but I do like this approach that you suggest. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From tim.one@comcast.net Sat Nov 23 18:42:06 2002 From: tim.one@comcast.net (Tim Peters) Date: Sat, 23 Nov 2002 13:42:06 -0500 Subject: [Python-Dev] test_codeccallbacks failing on Windows Message-ID: ====================================================================== FAIL: test_uninamereplace (__main__.CodecCallbackTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "..\lib\test\test_codeccallbacks.py", line 74, in test_uninamereplace self.assertEqual(sin.encode("ascii", "test.uninamereplace"), sout) File "C:\CODE\PYTHON\lib\unittest.py", line 292, in failUnlessEqual raise self.failureException, \ AssertionError: '\x1b[1mNOT SIGN, ETHIOPIC SYLLABLE SEE, EURO SIGN, CJK UNIFIED IDEOGRAPH-8000\x1b[0m' != '\x1b[1mNOT SIGN, ETHIOPIC SYLLABLE SEE, EURO SIGN, 0x8000\x1b[0m' From zack@codesourcery.com Sat Nov 23 19:59:39 2002 From: zack@codesourcery.com (Zack Weinberg) Date: Sat, 23 Nov 2002 11:59:39 -0800 Subject: [Python-Dev] Expect in python In-Reply-To: (martin@v.loewis.de's message of "23 Nov 2002 12:02:15 +0100") References: <20021123052903.74620.qmail@web20901.mail.yahoo.com> <87wun4muxt.fsf@egil.codesourcery.com> Message-ID: <87u1i8m504.fsf@egil.codesourcery.com> martin@v.loewis.de (Martin v. Loewis) writes: > Zack Weinberg writes: > >> If these changes are what I think they are, I know how to implement >> them generically. What was the patch number? > > Patch #579433. All I'm asking for is that autoconf tests are added to > test for the needed features instead of testing for defined(sun). It does do what I thought it might. That is pretty much what I would do to it, except that I am tempted to yank it all out of posixmodule into its own thing, assuming compatibility can be arranged. > Lance' other patch (#579435) also needs work, like documentation, > setup.py and Setup.dist integration, perhaps some autoconf tests to > avoid trying to build it when it can't be built, and perhaps tests to > verify it works correctly. Maybe this should go into the pwd module in > the first place. I think it makes a lot of sense to put it in the pwd module. zw From martin@v.loewis.de Sat Nov 23 20:11:48 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 23 Nov 2002 21:11:48 +0100 Subject: [Python-Dev] Expect in python In-Reply-To: <87u1i8m504.fsf@egil.codesourcery.com> References: <20021123052903.74620.qmail@web20901.mail.yahoo.com> <87wun4muxt.fsf@egil.codesourcery.com> <87u1i8m504.fsf@egil.codesourcery.com> Message-ID: Zack Weinberg writes: > It does do what I thought it might. That is pretty much what I would > do to it, except that I am tempted to yank it all out of posixmodule > into its own thing, assuming compatibility can be arranged. I don't think that's worth it. The posix module collects "such things". Of course, if you can expose enough functions (grantpt, unlockpt), you might be able to implement the entire thing in pure Python. Regards, Martin From brian@sweetapp.com Sat Nov 23 20:27:38 2002 From: brian@sweetapp.com (Brian Quinlan) Date: Sat, 23 Nov 2002 12:27:38 -0800 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: Message-ID: <007301c2932e$c4533000$21795418@dell1700> Martin v. Loewis wrote: > It depends on the reader. A reader familiar with the time module will > recognize tm_foo as a field of struct tm, and then figure out what > field it is. A reader familiar with the time module might have the meaning of each element of the 9-tuple memorized. > That the field is called mday and not just day is > surprising; if the reader is surprised enough, he will look up the > documentation, and the rationale will become clear immediately. The meaning of the fields is documented in the development documentation but not in the current release documentation. I'm not sure why... And the day field being called "mday" is still a mystery to me. So is the "tm_" prefix. Cheers, Brian From bac@OCF.Berkeley.EDU Sat Nov 23 22:18:55 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 14:18:55 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <007301c2932e$c4533000$21795418@dell1700> Message-ID: [Brian Quinlan] > > That the field is called mday and not just day is > > surprising; if the reader is surprised enough, he will look up the > > documentation, and the rationale will become clear immediately. > > The meaning of the fields is documented in the development documentation > but not in the current release documentation. I'm not sure why... > The reason is that it was not meant to be exposed initially. But enough modules in the stdlib use it that it was felt documenting it was worth it. The documentation was written and added very recently (I think this month). -Brett From tdelaney@avaya.com Sat Nov 23 23:57:44 2002 From: tdelaney@avaya.com (Delaney, Timothy) Date: Sun, 24 Nov 2002 10:57:44 +1100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments Message-ID: > From: Gareth McCaughan [mailto:Gareth.McCaughan@pobox.com] > > a,b,*c = 1,2,3,4,5 FWIW, this has been proposed before, but no one ever bothered to make a PEP about it. Each time it's come up, my gut feeling has been that I would really like it. I've often been in the situation where I'd *like* to pass a variable-length tuple from a function, but the current way of splitting it up is too clunky. I hadn't thought of it before, but I do like the suggestion of allowing * in any location of the LHS. This would of course require that the LHS contains exactly zero or one * i.e. 
a, b, c = t *a, b, c = t # all leading into a, 2nd-last into b, last into c a, *b, c = t # first into a, all middle into b, last into c a, b, *c = t # first into a, second into b, all trailing into c would all be legal constructs, but *a, *b, c = t a, *b, *c = t *a, b, *c = t *a, *b, *c = t would all be illegal constructs. I think the addition of this would lead to a more functional style of programming. This may be a plus for some people, and a minus for others. For myself, the idea (including arbitrary placement of the *) is +1. However, I'm -1 on the idea of additional syntax on the RHS. Tim Delaney From bac@OCF.Berkeley.EDU Sun Nov 24 00:37:09 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 16:37:09 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: Message-ID: [Delaney, Timothy] > I think the addition of this would lead to a more functional style of > programming. This may be a plus for some people, and a minus for others. > MINUS! And yes, I have done my fair share of Scheme and Lisp programming so this is not an ignorant statement. Python is not a functional language, nor do I personally want to see it go in that direction. Guido has said before that he regrets including ``lambda``, ``map`` and the other built-in methods that come from the functional world (read his "Python Regrets" presentation from OSCON 2002). But I am sure there are people out there who disagree with my view. =) So I am still going to hold as my main argument that it does not promote concise code and I want to keep the LHS of the assignment simple. But then again that might be a bias coming from a right-handed person. =) -Brett From bac@OCF.Berkeley.EDU Sun Nov 24 00:55:44 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 16:55:44 -0800 (PST) Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: Message-ID: [Martin v. Loewis] > Fred once had a plan to expose structseqs to Python, to allow the > creation of new structs in Python. I was suggesting that there should > be a method new.struct_seq, which is called as > > struct_seq(name, doc, n_in_sequence, (fields)) > > where fields is a list of (name,doc) tuples. The resulting thing would > be similar to os.stat_result: you need to call it with the mandatory ^^^^^^^ You meant "can", right, Martin? > fields in sequence, and can call it with the optional fields by > keyword argument. > I think the idea is good if you can get it to tie directly into C code. That would get a +1 from me. If not, then +0. Kind of strikes me like a poor man's object with __getitem__ written to call the object's attributes assigned a numeric name. Or can be viewed as a tuple with attribute names for each index. If my Perl memory is not too rusty then I think this would be like Perl's arrays which might help get more people over from Perl (if we really want that =). -Brett From tdelaney@avaya.com Sun Nov 24 01:00:17 2002 From: tdelaney@avaya.com (Delaney, Timothy) Date: Sun, 24 Nov 2002 12:00:17 +1100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments Message-ID: > From: Brett Cannon [mailto:bac@OCF.Berkeley.EDU] > [Delaney, Timothy] > > > I think the addition of this would lead to a more > functional style of > > programming. This may be a plus for some people, and a > minus for others. > > > > MINUS! > > And yes, I have done my fair share of Scheme and Lisp > programming so this > is not an ignorant statement. Python is not a functional > language. 
This is true - and yet I find that I program in a more 'functional' way in Python than in any other language I use. I find that for many problems it is natural for me to think in a functional way, and I appreciate that Python allows me to (without imposing it on me). In particular, I tend to produce a lot of filtering functions (whether run through list comps, filter or just standalone) and this is one place where this syntax would be useful. Of course, that presupposes that at least *one* value passes the filter ... but surely a failure there would be a data error ;) > concise code and I want to keep the LHS of the assignment simple. But > then again that might be a bias coming from a right-handed person. =) Nah - I'm right-handed too, but I've got nothing against the LHS. The major point I have against it is that you would presumably have duplication in: a = t *a = t unless that were special-cased. I personally think it should be syntactially correct if the proposal were to be accepted, but with a strong suggestion in the documentation that it be avoided ;) Tim Delaney From nas@python.ca Sun Nov 24 01:08:35 2002 From: nas@python.ca (Neil Schemenauer) Date: Sat, 23 Nov 2002 17:08:35 -0800 Subject: [Python-Dev] test failures on Debian unstable Message-ID: <20021124010835.GA7017@glacier.arctrix.com> test test_anydbm crashed -- exceptions.AttributeError: 'module' object has no attribute 'error' test test_bsddb crashed -- exceptions.AttributeError: 'module' object has no attribute 'btopen' test test_strptime failed -- Traceback (most recent call last): File "/home/nas/Python/py_cvs/Lib/test/test_strptime.py", line 156, in test_returning_RE self.failUnless(_strptime.strptime("1999", strp_output), "Use or re object failed") File "/home/nas/Python/py_cvs/Lib/_strptime.py", line 399, in strptime if format.pattern.find(locale_time.lang) == -1: TypeError: expected a character buffer object test test_whichdb failed -- Traceback (most recent call last): File "/home/nas/Python/py_cvs/Lib/test/test_whichdb.py", line 48, in test_whichdb_name f = mod.open(_fname, 'c') AttributeError: 'module' object has no attribute 'open' 198 tests OK. 4 tests failed: test_anydbm test_bsddb test_strptime test_whichdb 18 tests skipped: test_al test_bsddb3 test_bz2 test_cd test_cl test_curses test_dbm test_email_codecs test_gl test_imgfile test_nis test_normalization test_pep277 test_socket_ssl test_socketserver test_sunaudiodev test_winreg test_winsound 4 skips unexpected on linux2: test_normalization test_dbm test_bz2 test_bsddb3 From bac@OCF.Berkeley.EDU Sun Nov 24 01:09:09 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 17:09:09 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: Message-ID: [Delaney, Timothy] > > From: Brett Cannon [mailto:bac@OCF.Berkeley.EDU] > > [Delaney, Timothy] > > In particular, I tend to produce a lot of filtering functions (whether run > through list comps, filter or just standalone) and this is one place where > this syntax would be useful. Of course, that presupposes that at least *one* > value passes the filter ... but surely a failure there would be a data error > ;) > But is it that big of a deal to just take the [1:] index of something instead of using this suggested addition? And yes, it is always a data error. =) > > concise code and I want to keep the LHS of the assignment simple. But > > then again that might be a bias coming from a right-handed person. 
=) > > Nah - I'm right-handed too, but I've got nothing against the LHS. > Damn. Would have been nice if that argument held up. > The major point I have against it is that you would presumably have > duplication in: > > a = t > *a = t > > unless that were special-cased. I personally think it should be syntactially > correct if the proposal were to be accepted, but with a strong suggestion in > the documentation that it be avoided ;) > Yes, it should be correct if the change is accepted. But what are the chances that people will actually read the note advising against it? =) -Brett From bac@OCF.Berkeley.EDU Sun Nov 24 01:10:59 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 17:10:59 -0800 (PST) Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: <20021124010835.GA7017@glacier.arctrix.com> Message-ID: [Neil Schemenauer] > test test_strptime failed -- Traceback (most recent call last): > File "/home/nas/Python/py_cvs/Lib/test/test_strptime.py", line 156, in > test_returning_RE > self.failUnless(_strptime.strptime("1999", strp_output), "Use or re > object failed") > File "/home/nas/Python/py_cvs/Lib/_strptime.py", line 399, in strptime > if format.pattern.find(locale_time.lang) == -1: > TypeError: expected a character buffer object Fixed and ready to be checked in from patch #639112. -Brett From tdelaney@avaya.com Sun Nov 24 01:27:06 2002 From: tdelaney@avaya.com (Delaney, Timothy) Date: Sun, 24 Nov 2002 12:27:06 +1100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments Message-ID: > From: Brett Cannon [mailto:bac@OCF.Berkeley.EDU] > > But is it that big of a deal to just take the [1:] index of something > instead of using this suggested addition? Actually, it can be. a = func() b, c = a[:1], a[1:] vs b, *c = func() The first version leads to namespace pollution, which you cannot safely ignore. The two options are: a = func() b, c = a[:1], a[1:] del a in which case you may at times fail to del a (probably not a big issue if an exception has been thrown ... unless a is global, but we don't do that ;) b = func() b, c = b[:1], b[1:] which again may potentially screw up. Additionally, the b, *c = func() version will work with any iterator on the RHS, unlike the slicing version. I see two options for this: 1. b is assigned the first value, c is assigned a tuple of the remaining values 2. b is assigned the first value, c is assigned the iterator. Hmm - on further thought, #2 may be the way to go - the value with a * is always assigned the result of calling iter() on the remainder. That would also produce a distinction between a = t # a is assigned a tuple *a = t # a is assigned a tuple iterator In that case, a, *b, c = t would have to perform the following shennanigans: 1. Extract all the values. 2. Assigned the first value to a, and the last value to c. 3. Create a tuple out of the rest, get its iterator and assign that to b. In the common case however of: a, b, *c = t that could be reduced to: 1. Get the iterator. 2. Assign the result of .next() to a. 3. Assign the result of .next() to b. 4. Assign the iterator to c. i.e. if the asterisk is attached to any but the last name of the LHS, there better not be an infinitly-long iterable in the RHS ;) Something further to think about. Tim Delaney From bac@OCF.Berkeley.EDU Sun Nov 24 02:00:29 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sat, 23 Nov 2002 18:00:29 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) 
in assignments In-Reply-To: Message-ID: [Delaney, Timothy] > > From: Brett Cannon [mailto:bac@OCF.Berkeley.EDU] > > > > But is it that big of a deal to just take the [1:] index of something > > instead of using this suggested addition? > > Actually, it can be. > > a = func() > b, c = a[:1], a[1:] > > vs > > b, *c = func() > > The first version leads to namespace pollution, which you cannot safely > ignore. The two options are: > > a = func() > b, c = a[:1], a[1:] > del a > > in which case you may at times fail to del a (probably not a big issue if an > exception has been thrown ... unless a is global, but we don't do that ;) > Which was the argument I used with the time.localtime() discussion. Negating each other's arguments is not helping. =) > b = func() > b, c = b[:1], b[1:] > > which again may potentially screw up. > > Additionally, the > > b, *c = func() > > version will work with any iterator on the RHS, unlike the slicing version. > I see two options for this: > > 1. b is assigned the first value, c is assigned a tuple of the remaining > values > > 2. b is assigned the first value, c is assigned the iterator. > > Hmm - on further thought, #2 may be the way to go - the value with a * is > always assigned the result of calling iter() on the remainder. > #2 is the better option. But it that complicated to write as a function:: def peel(iterator, arg_cnt): if not hasattr(iterator, 'next'): iterator = iter(iterator) for num in xrange(arg_cnt): yield iterator.next() else: yield iterator >>> a,b,c = peel((1,2,3,4,5), 2) >>> a 1 >>> b 2 >>> c >>> c.next() 3 Or you could just do it the old-fashioned way and just write out the two ``.next()`` calls. =) > That would also produce a distinction between > > a = t # a is assigned a tuple > *a = t # a is assigned a tuple iterator > > In that case, > > a, *b, c = t > > would have to perform the following shennanigans: > > 1. Extract all the values. > 2. Assigned the first value to a, and the last value to c. > 3. Create a tuple out of the rest, get its iterator and assign that to b. > > In the common case however of: > > a, b, *c = t > > that could be reduced to: > > 1. Get the iterator. > 2. Assign the result of .next() to a. > 3. Assign the result of .next() to b. > 4. Assign the iterator to c. > > i.e. if the asterisk is attached to any but the last name of the LHS, there > better not be an infinitly-long iterable in the RHS ;) > =) No. That would not be good. I still don't love this whole idea. But I will admit, though, that viewing this more as a syntactically easy way to unrolling an iterator makes it seem more reasonable. Ditch the possibility of having the arbitrary assignment variable anywhere but the end of the assignment and you might be able to get me to a -0 vote with some good examples of where someone could have really used this. But if you do it like this with iterators the whole initial suggestion of making parameter passing just like assignment goes out the door. 
-Brett From tim.one@comcast.net Sun Nov 24 03:02:10 2002 From: tim.one@comcast.net (Tim Peters) Date: Sat, 23 Nov 2002 22:02:10 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: <20021124010835.GA7017@glacier.arctrix.com> Message-ID: [Neil Schemenauer] > test test_anydbm crashed -- exceptions.AttributeError: 'module' > object has no attribute 'error' > test test_bsddb crashed -- exceptions.AttributeError: 'module' > object has no attribute 'btopen' > test test_strptime failed -- Traceback (most recent call last): > File "/home/nas/Python/py_cvs/Lib/test/test_strptime.py", line 156, in > test_returning_RE > self.failUnless(_strptime.strptime("1999", strp_output), "Use or re > object failed") > File "/home/nas/Python/py_cvs/Lib/_strptime.py", line 399, in strptime > if format.pattern.find(locale_time.lang) == -1: > TypeError: expected a character buffer object > test test_whichdb failed -- Traceback (most recent call last): > File "/home/nas/Python/py_cvs/Lib/test/test_whichdb.py", line 48, in > test_whichdb_name > f = mod.open(_fname, 'c') > AttributeError: 'module' object has no attribute 'open' > > 198 tests OK. > 4 tests failed: > test_anydbm test_bsddb test_strptime test_whichdb The db story is something of a mess right now. Upgrade to Windows and you can take advantage of my recent pain there (which hasn't all gone away: still getting link warnings, don't know whether they matter or how to stop them if they do, and can't make more time to stare at it). > 18 tests skipped: > test_al test_bsddb3 test_bz2 test_cd test_cl test_curses test_dbm > test_email_codecs test_gl test_imgfile test_nis test_normalization > test_pep277 test_socket_ssl test_socketserver test_sunaudiodev > test_winreg test_winsound > 4 skips unexpected on linux2: > test_normalization test_dbm test_bz2 test_bsddb3 test_normalization should probably be changed to a -u thing, and put in regrtest's "expected skip" list on all platforms (it requires a giant text file of test cases that isn't checked into the project). test_bz2 was likely skipped because the new bz2 stuff isn't being built on this box, and/or bunzip2 wasn't found on the PATH. test_bsddb3 is already in already a regrtest -u thingie, so should be in the expected-skip list on all boxes. From tim.one@comcast.net Sun Nov 24 04:20:42 2002 From: tim.one@comcast.net (Tim Peters) Date: Sat, 23 Nov 2002 23:20:42 -0500 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: <200211221826.gAMIQGF29615@odiug.zope.com> Message-ID: [Guido] > ... > But I'd like to see this fixed in configure, not in pymalloc. (Tim > seems to favor always ensuring that we never call malloc(0), Or realloc(0). > but I can't see how that can be done without an extra test+jump. :-( ) Probably can't be, but this is ideal for a conditional-move instruction, and more architectures are growing that. In the meantime, Martin made this change, and I haven't measured a slowdown on my box (it is an extra test-&-branch under MSVC, but the branch is over just one instruction so doesn't (I believe) flush the pipeline). From andymac@bullseye.apana.org.au Sun Nov 24 04:33:59 2002 From: andymac@bullseye.apana.org.au (Andrew MacIntyre) Date: Sun, 24 Nov 2002 14:33:59 +1000 (est) Subject: [Python-Dev] urllib performance issue on FreeBSD 4.x Message-ID: I've been following up a thread on python-list about lousy performance of urllib.urlopen(...).read() on FreeBSD 4.x comparted to using wget to retrieve the same file. 
I've determined that the following patch (against 2.2.2) makes an enormous difference in throughput: -----8<-----8<-----8<----- *** Lib/httplib.py.orig Mon Oct 7 11:18:17 2002 --- Lib/httplib.py Sun Nov 24 14:44:16 2002 *************** *** 210,216 **** # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. def __init__(self, sock, debuglevel=0, strict=0): ! self.fp = sock.makefile('rb', 0) self.debuglevel = debuglevel self.strict = strict --- 210,216 ---- # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. def __init__(self, sock, debuglevel=0, strict=0): ! self.fp = sock.makefile('rb', -1) self.debuglevel = debuglevel self.strict = strict -----8<-----8<-----8<----- Without this patch, d/l a 4MB file from localhost gets a bit over 110kB/s, with the patch gets 4-5.5MB/s on the same system (FBSD 4.4 SMP, 2xC300A, 128MB RAM, ATA66 HD). My question: - why is the socket.fp being set to unbuffered? I can't check the FBSD library source at the moment (and can't get to the RFC's mentioned above either at the moment for that matter), and can only speculate that fread() is resorting to reading from the socket a character at a time. So I'm not sure whether this should be treated as a FreeBSD issue or/and a Python issue. Another poster in the same thread mentions seeing somewhat similar performance problems on Win2k, although not nearly as bad. FWIW, my test script is -----8<-----8<-----8<----- import time import urllib t1 = time.time() u = urllib.urlopen("http://localhost/big_file").read() t2 = time.time() print 'throughput: %f kB/s' % (len(u) / (t2 - t1)) -----8<-----8<-----8<----- Reactions? -- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac@bullseye.apana.org.au | Snail: PO Box 370 andymac@pcug.org.au | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From tim.one@comcast.net Sun Nov 24 04:48:12 2002 From: tim.one@comcast.net (Tim Peters) Date: Sat, 23 Nov 2002 23:48:12 -0500 Subject: [Python-Dev] PEP 288: Generator Attributes In-Reply-To: <1E8BAC24-FE06-11D6-9661-0030655234CE@cwi.nl> Message-ID: [Jack Jansen] > I don't like it, there's "Magic! Magic!" written all over it. > Generators have always given me that feeling (you start reading them as > a function, then 20 lines down you meet a "yield" and suddenly realize > you have to start reading at the top again, keeping in mind that this > is a persistent stack frame), Except you don't need to do such a thing -- "yield" is much the same as "print" this way. Both have the same effect on the stack frame: none. So if you don't find print to be confusing wrt local state, you shouldn't find yield confusing wrt local state either. > ... > Generators have to me always felt more "class-instance-like" than > "function-like", I *exoect* you'll feel more the opposite the more you use them. Heck, they're so much like functions that Guido reused "def" for them . From aleax@aleax.it Sun Nov 24 08:03:20 2002 From: aleax@aleax.it (Alex Martelli) Date: Sun, 24 Nov 2002 09:03:20 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: References: Message-ID: <200211240903.20111.aleax@aleax.it> On Sunday 24 November 2002 03:00 am, Brett Cannon wrote: ... > #2 is the better option. 
But it that complicated to write as a function:: > > def peel(iterator, arg_cnt): > if not hasattr(iterator, 'next'): > iterator = iter(iterator) > for num in xrange(arg_cnt): > yield iterator.next() > else: > yield iterator I think this generator can be simplified without altering its behavior: def peel(iterator, arg_cnt=2): iterator = iter(iterator) for num in xrange(arg_cnt): yield iterator.next() yield iterator by exploiting the idempotence of iter on iterators and doing away with an unneeded else -- I also think defaulting arg_cnt to 2 is useful because I see myself using this most often for head/tail splits a la ML/Haskell, only occasionally for other purposes. Admittedly, these are minor details. I'm +0 on the ability to use "head, *tail = rhs" syntax as equivalent to "head, tail = peel(rhs)". It's nice and it has an impressionistic analogy with positional-arguments passing, but not quite the same semantics as the latter (if tail is to be an iterator rather than a sequence), which makes it a little bit trickier to teach, and could be seen as a somewhat gratuitous syntax addition/extension for modest practical gain. Much more might be gained by deciding on a small set of iterator-handling generators to go in a new standard library module -- right now every substantial user of generators is writing their own little helpers set. But maybe waiting until more collective experience on what goes in such sets is accumulated is wiser. There are other, less-glamorous situations in which the ability to express "whatever number of X's we need here" would help a lot. For example, the struct module lacks the ability to express this reasonably frequent need (while its Perl counterpart has it), so I find myself using string operations to construct data-format strings on the fly just for the purpose of ending them with the appropriate "Nx" or "Ns", or similar workarounds. Nothing terrible, but being able to use a 'count' of * for the last element of the format string would simplify this and make it less error-prone, I think. Alex From martin@v.loewis.de Sun Nov 24 08:15:58 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 09:15:58 +0100 Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: References: Message-ID: Brett Cannon writes: > > Fred once had a plan to expose structseqs to Python, to allow the > > creation of new structs in Python. I was suggesting that there should > > be a method new.struct_seq, which is called as > > > > struct_seq(name, doc, n_in_sequence, (fields)) > > > > where fields is a list of (name,doc) tuples. The resulting thing would > > be similar to os.stat_result: you need to call it with the mandatory > ^^^^^^^ > You meant "can", right, Martin? Probably an English-language issue: If you call it, the mandatory fields must be present as positional arguments (i.e. you can't call it and omit mandatory fields, or pass them as keyword arguments). > I think the idea is good if you can get it to tie directly into C code. > That would get a +1 from me. If not, then +0. What means to "tie into C code" here? Regards, Martin From martin@v.loewis.de Sun Nov 24 08:29:35 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 24 Nov 2002 09:29:35 +0100 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: <20021124010835.GA7017@glacier.arctrix.com> References: <20021124010835.GA7017@glacier.arctrix.com> Message-ID: Neil Schemenauer writes: > test test_anydbm crashed -- exceptions.AttributeError: 'module' > object has no attribute 'error' Part of the problem is that _bsddb is not built on Debian, see #590377. The other part is that the packages stays around after the module is gone; this is fixed now. Regards, Martin From oren-py-d@hishome.net Sun Nov 24 08:33:16 2002 From: oren-py-d@hishome.net (Oren Tirosh) Date: Sun, 24 Nov 2002 03:33:16 -0500 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211240903.20111.aleax@aleax.it> References: <200211240903.20111.aleax@aleax.it> Message-ID: <20021124083316.GA67930@hishome.net> On Sun, Nov 24, 2002 at 09:03:20AM +0100, Alex Martelli wrote: > def peel(iterator, arg_cnt=2): > iterator = iter(iterator) > for num in xrange(arg_cnt): > yield iterator.next() > yield iterator I like this function, but the argument name is misleading - it isn't necessarily an iterator. In the common use cases it will be a list or tuple. def peel(iterable, arg_cnt=2): iterator = iter(iterable) for num in xrange(arg_cnt): yield iterator.next() yield iterator The name 'iterable' is unambigous but a little awkward. I'd like to use the term 'sequence' but I'm afraid people already associate it with the built-in sequence types or with something indexable rather than iterable. Do you think of iterators as a sequences? Oren From martin@v.loewis.de Sun Nov 24 08:34:35 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 09:34:35 +0100 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: References: Message-ID: Tim Peters writes: > test_normalization should probably be changed to a -u thing, and put in > regrtest's "expected skip" list on all platforms (it requires a giant text > file of test cases that isn't checked into the project). What -u would you propose? > test_bsddb3 is already in already a regrtest -u thingie, so should be in the > expected-skip list on all boxes. I don't like the expected-skip mechanism at all. Why does being Linux (or HP-UX) has anything to do with whether we can run the test_normalization test; that test is completely platform-independent. Why is it expected that test_bz2 works on Linux? It won't work if you don't have the libraries. Instead, I would expect that test_bz2 works if the bz2 module is present, and fails to work if it was not built. Likewise for test_normalization: If a certain file is present, it should succeed, otherwise, it should fail. Regards, Martin From martin@v.loewis.de Sun Nov 24 08:36:41 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 09:36:41 +0100 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: References: Message-ID: Tim Peters writes: > Probably can't be, but this is ideal for a conditional-move instruction, and > more architectures are growing that. In the meantime, Martin made this > change, and I haven't measured a slowdown on my box (it is an extra > test-&-branch under MSVC, but the branch is over just one instruction so > doesn't (I believe) flush the pipeline). BTW, you managed to trick me into writing malloc((n) || 1). That took some time to figure out... Regards, Martin From martin@v.loewis.de Sun Nov 24 08:43:38 2002 From: martin@v.loewis.de (Martin v. 
Loewis) Date: 24 Nov 2002 09:43:38 +0100 Subject: [Python-Dev] urllib performance issue on FreeBSD 4.x In-Reply-To: References: Message-ID: Andrew MacIntyre writes: > - why is the socket.fp being set to unbuffered? I believe it prevents deadlocks. In HTTP/1.1, the server may not close the connection, but may refuse to send more data until it receives the next command. So you must be very careful to not read more data from the socket than the protocol guarantees you to be present. I believe stdio would not apply the necessary care: if it wants to fill the buffer, it will block. It won't see EOF because there is none, but there won't be any more data, because the server won't send any until we send the next command. We won't send the next command, since we are blocked. Regards, Martin From tim.one@comcast.net Sun Nov 24 09:21:48 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 24 Nov 2002 04:21:48 -0500 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: Message-ID: [martin@v.loewis.de] > BTW, you managed to trick me into writing malloc((n) || 1). That took > some time to figure out... I was thinking Python there. Guido picked up on it at once -- I guess he doesn't write enough Python . From bac@OCF.Berkeley.EDU Sun Nov 24 09:34:08 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sun, 24 Nov 2002 01:34:08 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211240903.20111.aleax@aleax.it> Message-ID: [Alex Martelli] > On Sunday 24 November 2002 03:00 am, Brett Cannon wrote: > ... > > #2 is the better option. But it that complicated to write as a function:: > > > > def peel(iterator, arg_cnt): > > if not hasattr(iterator, 'next'): > > iterator = iter(iterator) > > for num in xrange(arg_cnt): > > yield iterator.next() > > else: > > yield iterator > > I think this generator can be simplified without altering its behavior: > > def peel(iterator, arg_cnt=2): > iterator = iter(iterator) > for num in xrange(arg_cnt): > yield iterator.next() > yield iterator > > by exploiting the idempotence of iter on iterators and doing away with an > unneeded else -- I also think defaulting arg_cnt to 2 is useful because I see > myself using this most often for head/tail splits a la ML/Haskell, only > occasionally for other purposes. Admittedly, these are minor details. > Actually, with the way it is coded, you would want arg_cnt defaulting to 1; it is meant to represent the number of arguments sans the one the iterator is being assigned to. Didn't think (or realize) that raising an exception in the generator would kill the generator and not allow any more calls to ``.next()``. That was why I bothered with the ``else:`` statement. And I didn't know ``iter()`` just returned its argument if it was already an iterator. Should have, though, since all other constructor methods are like that (e.g. dict(), list(), etc.). Always learning something new. =) > I'm +0 on the ability to use "head, *tail = rhs" syntax as equivalent to > "head, tail = peel(rhs)". It's nice and it has an impressionistic analogy > with positional-arguments passing, but not quite the same semantics > as the latter (if tail is to be an iterator rather than a sequence), which > makes it a little bit trickier to teach, and could be seen as a somewhat > gratuitous syntax addition/extension for modest practical gain. 
Much > more might be gained by deciding on a small set of iterator-handling > generators to go in a new standard library module -- right now every > substantial user of generators is writing their own little helpers set. But > maybe waiting until more collective experience on what goes in such > sets is accumulated is wiser. > I think the question becomes whether we want to start adding modules that service specific language abilities like generators. We have the string module, but that is on its way out (right?). There is no module for list, dict, or tuple tricks or helper functions. There is also no module for iterators (although I have no clue what we would put in there). Raymond had his generator comprehensions with generator version of ``map()`` and family, but that was rejected. The question is do we want to move towards adding things like this, or should it stay relegated to places like the Python Cookbook (which reminds me I should probably put this sucker up on the site) and the Demo/ directory. Something to consider. -Brett From bac@OCF.Berkeley.EDU Sun Nov 24 09:40:45 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sun, 24 Nov 2002 01:40:45 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <20021124083316.GA67930@hishome.net> Message-ID: [Oren Tirosh] > I like this function, but the argument name is misleading - it isn't > necessarily an iterator. In the common use cases it will be a list or > tuple. > > def peel(iterable, arg_cnt=2): > iterator = iter(iterable) > for num in xrange(arg_cnt): > yield iterator.next() > yield iterator > > The name 'iterable' is unambigous but a little awkward. I'd like to > use the term 'sequence' but I'm afraid people already associate it > with the built-in sequence types or with something indexable rather > than iterable. Do you think of iterators as a sequences? > I know I think of them as a lazy sequence. But you are right, people tend to associate sequence with either lists or tuples, perhaps because you can do indexing on them. -Brett From tim.one@comcast.net Sun Nov 24 09:53:08 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 24 Nov 2002 04:53:08 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: Message-ID: [martin@v.loewis.de, on test_normalization] > What -u would you propose? Someone who knows what the test does should name the resource. >> test_bsddb3 is already in already a regrtest -u thingie, so >> should be in the expected-skip list on all boxes. > I don't like the expected-skip mechanism at all. I love it: it solves real problems on Windows. The latest example is that I never would have noticed that test_bsddb3 even existed if the mechanism hadn't told me it was an unexpected skip on Windows. Ditto test_normalization. Likewise Neil's report of expected skips on Debian pointed out oddities to him. On platforms where dozens of tests are skipped, just listing the tests that were skipped isn't informative enough -- the list is so big that you simply don't notice if a new test shows up there. Before this mechanism, entire new packages didn't make it into the Windows distro because *nobody* noticed that their test suite was getting skipped. That hasn't happened again. > Why does being Linux (or HP-UX) has anything to do with whether we > can run the test_normalization test; that test is completely > platform-independent. 
The expected-skip list doesn't really have to do with whether we *can* run a test, it has to do with whether it's a red flag when we find we can't run a test. > Why is it expected that test_bz2 works on Linux? I don't know that it is. > It won't work if you don't have the libraries. If so, put it in the expected-skip list for Linux. regrtest doesn't whine about expected-skip entries that *do* run to completion, it only points out tests that don't run to completion (due to ImportError or TestSkipped) that also aren't in the platform's expected-skip list. > Instead, I would expect that test_bz2 works if the bz2 module is > present, and fails to work if it was not built. Then it's an expected skip on platforms where that's a choice, in the sense the phrase is intended. "expected skip" may be a poor name for it, although it makes literal sense on Windows. > Likewise for test_normalization: If a certain file is present, it > should succeed, otherwise, it should fail. Ditto, but even more so in this case, because that "certain file" isn't present on any system unless the user specifically downloads it and places it in their current working directory before running test_normalization. That makes it so unlikely this test will run for users that it should probably be put into the expected-skip list on all boxes via the _ExpectedSkips constructor (maybe a good idea for all -u tests). From aleax@aleax.it Sun Nov 24 10:12:51 2002 From: aleax@aleax.it (Alex Martelli) Date: Sun, 24 Nov 2002 11:12:51 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: References: Message-ID: <200211241112.51330.aleax@aleax.it> On Sunday 24 November 2002 10:34 am, Brett Cannon wrote: ... > Actually, with the way it is coded, you would want arg_cnt defaulting to > 1; it is meant to represent the number of arguments sans the one the > iterator is being assigned to. Right -- sorry, I let the parameter's name fool me, without checking more carefully. Maybe renaming it to number_of_splits or something like that would help the somewhat-careless reader;-). > Didn't think (or realize) that raising an exception in the generator would > kill the generator and not allow any more calls to ``.next()``. That was > why I bothered with the ``else:`` statement. Transparently propagating a StopIteration is often the nicest way to end a generator, I think. > And I didn't know ``iter()`` just returned its argument if it was already > an iterator. Should have, though, since all other constructor methods are > like that (e.g. dict(), list(), etc.). Hmmm, not really: list(L) returns a list L1 such that L1 == L but id(L1) != id(L) -- i.e., list and dict give a *shallow copy* of their list and dict argument respectively. iter is different... > I think the question becomes whether we want to start adding modules that > service specific language abilities like generators. We have the string > module, but that is on its way out (right?). There is no module for list, > dict, or tuple tricks or helper functions. There is also no module for No, but that's because lists and dicts (and strings) package their functionality up as methods, so there's no need to have any supporting module. Tuples don't need anything special. Generic iterators do not offer pre-cooked rich functionality nor do they have methods usable to locate such functionality, yet clearly they could well use it, therefore a module appears to be exactly the right place to locate such functionality in. 
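A couple of concrete examples of the kind of helpers such a module might collect (the names and signatures here are made up for illustration, not taken from any existing library):

    def take(iterable, n):
        "Return a list of the first n items of iterable."
        it = iter(iterable)
        return [it.next() for dummy in xrange(n)]

    def head_tail(iterable):
        "Split an iterable into its first item and an iterator over the rest."
        it = iter(iterable)
        return it.next(), it

    head, rest = head_tail(xrange(10))
    print head, take(rest, 3)          # prints: 0 [1, 2, 3]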
There's no need to distinguish generators from other iterators in this respect; generators are just one neat implementation of iterators. > family, but that was rejected. The question is do we want to move towards > adding things like this, or should it stay relegated to places like the > Python Cookbook (which reminds me I should probably put this sucker up on > the site) and the Demo/ directory. Something to consider. I think that having everybody re-code the same powerful idioms all the time, when the idioms are well suited to being encapsulated in functions that the standard library might supply, is sub-optimal, even if the alternative is that the re-coding be mostly copying from the cookbook. Alex From martin@v.loewis.de Sun Nov 24 11:01:41 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 12:01:41 +0100 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: References: Message-ID: Tim Peters writes: > > I don't like the expected-skip mechanism at all. > > I love it: it solves real problems on Windows. It is unfortunate that it solves these problems *only* on Windows. > > Why is it expected that test_bz2 works on Linux? > > I don't know that it is. So should it be? If not: Why is it expected that test_bz2 fails on Linux? Whether the test passes or fails simply has nothing to what system you run it on. To solve the real problem on Windows, it seems that a list "tim_has_seen_this_test" would be sufficient. > > It won't work if you don't have the libraries. > > If so, put it in the expected-skip list for Linux. Sure, but that holds for nearly every test requiring extension modules: if some library isn't there, the module doesn't work. > Then it's an expected skip on platforms where that's a choice, in the sense > the phrase is intended. "expected skip" may be a poor name for it, although > it makes literal sense on Windows. Again: Unfortunately *only* on Windows. If the test is added to the skip list, the a potential problem will be hidden: it may be that the test ought to pass on a certain installation, but doesn't because of a real bug in Python. So adding the test into the skipped list on grounds of the library potentially unavailable hides real problems. Regards, Martin From guido@python.org Sun Nov 24 12:19:07 2002 From: guido@python.org (Guido van Rossum) Date: Sun, 24 Nov 2002 07:19:07 -0500 Subject: [Python-Dev] urllib performance issue on FreeBSD 4.x In-Reply-To: Your message of "Sun, 24 Nov 2002 14:33:59 +1000." References: Message-ID: <200211241219.gAOCJ7231015@pcp02138704pcs.reston01.va.comcast.net> > I've been following up a thread on python-list about lousy performance of > urllib.urlopen(...).read() on FreeBSD 4.x comparted to using wget to > retrieve the same file. > > I've determined that the following patch (against 2.2.2) makes an enormous > difference in throughput: > > -----8<-----8<-----8<----- > *** Lib/httplib.py.orig Mon Oct 7 11:18:17 2002 > --- Lib/httplib.py Sun Nov 24 14:44:16 2002 > *************** > *** 210,216 **** > # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. > > def __init__(self, sock, debuglevel=0, strict=0): > ! self.fp = sock.makefile('rb', 0) > self.debuglevel = debuglevel > self.strict = strict > > --- 210,216 ---- > # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. > > def __init__(self, sock, debuglevel=0, strict=0): > ! 
self.fp = sock.makefile('rb', -1) > self.debuglevel = debuglevel > self.strict = strict > > -----8<-----8<-----8<----- > > Without this patch, d/l a 4MB file from localhost gets a bit over 110kB/s, > with the patch gets 4-5.5MB/s on the same system (FBSD 4.4 SMP, 2xC300A, > 128MB RAM, ATA66 HD). > > My question: > > - why is the socket.fp being set to unbuffered? I can't make time for a full essay on the issue, but I believe that it must be unbuffered because some applications want to read until the end of the headers and then pass the file descriptor to a subprocess or to code that uses the socket directly. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik@pythonware.com Sun Nov 24 12:49:02 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Sun, 24 Nov 2002 13:49:02 +0100 Subject: [Python-Dev] urllib performance issue on FreeBSD 4.x References: <200211241219.gAOCJ7231015@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <017d01c293b7$debbc0e0$ced241d5@hagrid> Guido wrote: > > Without this patch, d/l a 4MB file from localhost gets a bit over 110kB/s, > > with the patch gets 4-5.5MB/s on the same system > > > > - why is the socket.fp being set to unbuffered? > > I can't make time for a full essay on the issue, but I believe that it > must be unbuffered because some applications want to read until the > end of the headers and then pass the file descriptor to a subprocess > or to code that uses the socket directly. sounds like it would be a good idea to provide a subclass (or option) for applications that don't need that feature. From guido@python.org Sun Nov 24 12:53:21 2002 From: guido@python.org (Guido van Rossum) Date: Sun, 24 Nov 2002 07:53:21 -0500 Subject: PyMem_MALLOC (was [Python-Dev] Snake farm) In-Reply-To: Your message of "Sun, 24 Nov 2002 04:21:48 EST." References: Message-ID: <200211241253.gAOCrLP11174@pcp02138704pcs.reston01.va.comcast.net> > [martin@v.loewis.de] > > BTW, you managed to trick me into writing malloc((n) || 1). That took > > some time to figure out... [Tim] > I was thinking Python there. Guido picked up on it at once -- I guess he > doesn't write enough Python . No, I was probably the first person ever to be bitten by this, back in 1991. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From skip@manatee.mojam.com Sun Nov 24 13:00:19 2002 From: skip@manatee.mojam.com (Skip Montanaro) Date: Sun, 24 Nov 2002 07:00:19 -0600 Subject: [Python-Dev] Weekly Python Bug/Patch Summary Message-ID: <200211241300.gAOD0JiV004032@manatee.mojam.com> Bug/Patch Summary ----------------- 308 open / 3075 total bugs (+3) 94 open / 1795 total patches (-7) New Bugs -------- Optional argument for dict.pop() method (2002-11-17) http://python.org/sf/639806 64-bit bug on AIX (2002-11-18) http://python.org/sf/639945 email.Header misparses mixed headers (2002-11-18) http://python.org/sf/640110 Discovered typo in zlib test. (2002-11-18) http://python.org/sf/640230 zlib.decompressobj under-described. 
(2002-11-18) http://python.org/sf/640236 Misuse of /usr/local/in setup.py (2002-11-19) http://python.org/sf/640553 Undocumented side effect of eval (2002-11-20) http://python.org/sf/641111 help() fails for some builtin topics (2002-11-21) http://python.org/sf/642168 pygettext should be installed (2002-11-22) http://python.org/sf/642309 metaclass causes __dict__ to be dict (2002-11-22) http://python.org/sf/642358 tempfile.mktemp() for directories (2002-11-22) http://python.org/sf/642391 better error reporting for MRO conflict (2002-11-22) http://python.org/sf/642489 Webbrowser (and ic) on MacOSX failure (2002-11-23) http://python.org/sf/642720 property() builtin not documented (2002-11-23) http://python.org/sf/642742 socket.inet_aton("255.255.255.255") (2002-11-24) http://python.org/sf/643005 New Patches ----------- Fix breakage caused when user sets OPT (2002-11-19) http://python.org/sf/640843 reST version of Lib/test/README (2002-11-20) http://python.org/sf/641170 Use .dylib for shared objects on MacOS X (2002-11-20) http://python.org/sf/641685 time and timetz for the datetime module (2002-11-21) http://python.org/sf/641958 optparse LaTeX docs (bug #638703) (2002-11-22) http://python.org/sf/642236 Expose PyImport_FrozenModules in imp (2002-11-22) http://python.org/sf/642578 logging SysLogHandler proto type wrong (2002-11-23) http://python.org/sf/642974 Closed Bugs ----------- Circular reference makes Py_Init crash (2002-03-13) http://python.org/sf/529750 odd index entries (2002-06-17) http://python.org/sf/570003 smtplib.SMTP.ehlo method esmtp_features (2002-07-13) http://python.org/sf/581165 CRAM-MD5 module (2002-08-24) http://python.org/sf/599679 No __mod__ on str subclass (2002-09-27) http://python.org/sf/615506 plat-linux2/IN.py FutureWarning's (2002-10-03) http://python.org/sf/618012 imaplib fails with literals in LIST resp (2002-10-07) http://python.org/sf/619732 int(1e200) should return long (2002-11-07) http://python.org/sf/635115 re.sub() coerces u'' to '' (2002-11-08) http://python.org/sf/635398 Install script goes into infinite loop (2002-11-15) http://python.org/sf/639022 archiver should use zipfile before zip (2002-11-15) http://python.org/sf/639118 Closed Patches -------------- makes doctest.testmod() to work (2001-11-28) http://python.org/sf/486438 bugfix and extension to Lib/asyncore.py (2001-11-30) http://python.org/sf/487458 Improvements for pygettext (2001-12-18) http://python.org/sf/494845 native win32 threading primitives (2002-01-07) http://python.org/sf/500447 2.2 patches for BSD/OS 5.0 (2002-03-26) http://python.org/sf/535335 SocketServer behavior (2002-04-30) http://python.org/sf/550765 Support Unicode normalization (2002-10-21) http://python.org/sf/626485 Support Hangul Syllable names (2002-10-21) http://python.org/sf/626548 Plural forms support for gettext (2002-11-04) http://python.org/sf/633547 Allow any file-like object on dis module (2002-11-13) http://python.org/sf/637906 Logging 0.4.7 & patches thereto (2002-11-15) http://python.org/sf/638825 new string method -- format (2002-11-16) http://python.org/sf/639307 Removal of FreeBSD 5.0 specific test (2002-11-16) http://python.org/sf/639371 New icon for .py files (2002-11-17) http://python.org/sf/639635 From arigo@tunes.org Sun Nov 24 14:05:34 2002 From: arigo@tunes.org (Armin Rigo) Date: Sun, 24 Nov 2002 06:05:34 -0800 (PST) Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: ; from martin@v.loewis.de on Sun, Nov 24, 2002 at 09:15:58AM +0100 References: Message-ID: 
<20021124140534.C26224B24@bespin.org> On Sun, Nov 24, 2002 at 09:15:58AM +0100, Martin v. Loewis wrote: > > > struct_seq(name, doc, n_in_sequence, (fields)) > > > > > > where fields is a list of (name,doc) tuples. The resulting thing would > > > be similar to os.stat_result: you need to call it with the mandatory This goes against the initial proposal, which was to have a lightweight and declaration-less way to build structures. When you feel a need to explicitely declare the type of a structure, then making it a class is just the way to go. I'm thinking about no-type small structures, the ones for which you'd almost use dictionaries: point = {'x': 5, 'y': 6} print point['x'] print point['y'] which looks reasonably if not quite entierely nice. The problem is that it is incompatible with tuples: you cannot smoothly go from tuples to dicts without changing your whole program. What about just allowing keyword parameters in 'tuple'? point = tuple(5, 6, color=RED, visible=False) As the order of the keyword parameters is lost, they cannot show up positionally (i.e. the length of the above tuple is 2). But this might be exactly what you want in some cases. The above tuple behaves like (5,6) in many situations. If desired you can store the values both positionally and as attributes -- since it's immutable, it doesn't matter beyond a slight duplication in the constructor: x = ... y = ... point = tuple(x, y, x=x, y=y) assert point[0] == point.x == x assert point[1] == point.y == y In this solution, 'point.__dict__' would return a read-only dict wrapper similar to '.__dict__'. For an efficient C implementation, 'tuple-with-attributes' should be a subtype of 'tuple', and not 'tuple' itself, to avoid 'sizeof(PyObject*)' extra bytes in all the tuples of the program. There is no need for the new type name to show up in __builtins__. Armin From arigo@tunes.org Sun Nov 24 14:05:38 2002 From: arigo@tunes.org (Armin Rigo) Date: Sun, 24 Nov 2002 06:05:38 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211241112.51330.aleax@aleax.it>; from aleax@aleax.it on Sun, Nov 24, 2002 at 11:12:51AM +0100 References: <200211241112.51330.aleax@aleax.it> Message-ID: <20021124140538.05381488D@bespin.org> On Sun, Nov 24, 2002 at 11:12:51AM +0100, Alex Martelli wrote: > > I think the question becomes whether we want to start adding modules that > > service specific language abilities like generators. We have the string > > module, but that is on its way out (right?). There is no module for list, > > dict, or tuple tricks or helper functions. There is also no module for > > No, but that's because lists and dicts (and strings) package their > functionality up as methods, so there's no need to have any supporting > module. I've always felt it counter-intuitive and even confusing that lists and dicts are built-in types with methods, whereas iterators are just an expected interface with no user-visible methods apart from 'next', which is also a magic name (a magic name with no __??). The lack of common implementational part between iterator forces us to use a module to collect useful operations, which goes against the current everything-as-a-method trend. I wish we would have a standard built-in 'iter' type with a standard set of methods, instead of the current 'iter(x)' which is essentially just 'x.__iter__()' and returns anything. If 'iter' were a built-in type it could provide additional methods, e.g. 
a, b, c = iter(x).peel(2) as an equivalent to the proposed iterator-returning 'a,b,*c=x'. A similar option would be to require that iterator objects be instances of subtypes of 'iter'. Well, it's probably too late to change that kind of thing anyway. Armin. From martin@v.loewis.de Sun Nov 24 17:10:54 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 18:10:54 +0100 Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: <20021124140534.C26224B24@bespin.org> References: <20021124140534.C26224B24@bespin.org> Message-ID: Armin Rigo writes: > This goes against the initial proposal, which was to have a > lightweight and declaration-less way to build structures. Yes. I never had the need for a lightweight and declaration-less way to build structures. What is the need? > point = {'x': 5, 'y': 6} > print point['x'] > print point['y'] > > which looks reasonably if not quite entierely nice. If looking reasonable, or even nice, is the goal, why not write class Point: def __init__(self, x, y): self.x, self.y = x, y point = Point(5, 6) #or point = Point(x=5, x=6) print point.x print point.y > The problem is that it is incompatible with tuples: you cannot > smoothly go from tuples to dicts without changing your whole > program. So you need to enhance class Point def __getitem__(self, index): if index == 0:return self.x if index == 1:return self.y raise IndexError > What about just allowing keyword parameters in 'tuple'? > > point = tuple(5, 6, color=RED, visible=False) I have to problems imagining such an extension: 1. I'm not sure this would be useful. 2. I can't imagine how to implement it, without compromising performance for tuples. 3. It can be easily implemented without any change to builtin types: class Tuple(tuple): pass def TUPLE(*args, **kw): res = Tuple(args) res.__dict__=kw return res point = TUPLE(5, 6, color="RED", visible=False) print point[0] print point.color Regards, Martin From niemeyer@conectiva.com Sun Nov 24 17:27:47 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Sun, 24 Nov 2002 15:27:47 -0200 Subject: [Python-Dev] Expect in python In-Reply-To: <20021122082216.GA5519@thyrsus.com> References: <20021122003143.GA2242@thyrsus.com> <20021122082216.GA5519@thyrsus.com> Message-ID: <20021124152747.A23037@ibook.distro.conectiva> [...] > The application is an ssh key installer for dummies; you give it a > remote hostname and (optionally) the username to log in as. First, it > It walks the user through generating and populating a local .ssh with > keypairs if they're not already presennt. It then handles all the > fiddly bits of making sure there's a remote .ssh directory, checking > permissions, copying public keys from the local .ssh to the remote > one, etc. > > The idea is to allow a novice user to type > > ssh-installkeys remotehost@somewhere.com > > and have the Right Thing happen. This is worth automating because > there are a bunch of obscure bugs you can run into if you get it > even slightly wrong. ssh -d diagnostics deliberately don't pass back > warnings about bad file and directory permissions, for example, because > that might leak sesnsitive information about the remote system. I plan > to give this script to the openssh guys for their distribution. [...] If you get anything better than the hackish script I made, please let me know (and include an attached copy ;-). 
[niemeyer@ibook ~]% cat ~/.shell/ssh-install-keys #!/bin/sh USERHOST=$1 RSA1KEY=~/.ssh/identity.pub RSA2KEY=~/.ssh/id_rsa.pub DSA2KEY=~/.ssh/id_dsa.pub SSHTMPDIR=~/.ssh-install-keys if [ -z "$USERHOST" ]; then echo "Usage: ssh-install-keys [@]" exit 1 fi KEYS="" echo "Searching for available keys..." for KEY in $RSA1KEY $RSA2KEY $DSA2KEY; do if [ -f $KEY ]; then KEYS="$KEYS $KEY" fi done echo "Creating temporary .ssh directory with found keys..." umask 077 rm -rf $SSHTMPDIR mkdir -p $SSHTMPDIR/.ssh cat $KEYS > $SSHTMPDIR/.ssh/authorized_keys echo "Copying temporary .ssh directory to $USERHOST..." scp -pqr $SSHTMPDIR/.ssh $USERHOST:~/ echo "Removing temporary .ssh directory..." rm -rf $SSHTMPDIR echo "Done!" -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From tim.one@comcast.net Sun Nov 24 17:27:08 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 24 Nov 2002 12:27:08 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: Message-ID: [martin@v.loewis.de] > ... > So should it be? If not: Why is it expected that test_bz2 fails on > Linux? I can only repeat that the expected-skip mechanism has nothing to do with tests *failing*. {pass, fail, skip} are the possible outcomes of running a test, they're mutually exclusive, and the expected-skips list has only to do with the skip outcome. If a test passes or fails, expected-skips doesn't enter into it. The appearance of a test name T in the expected-skips list doesn't imply that T *must* be skipped, either. If T isn't skipped, fine, the regrtest summary doesn't mention T at all if it passed; it does mention T if it failed. > Whether the test passes or fails simply has nothing to what system you > run it on. Nor does passing or failing have anything to do with expected-skips. > To solve the real problem on Windows, it seems that a list > "tim_has_seen_this_test" would be sufficient. I'm not sure exactly what that means, so can't guess whether it would be sufficient. I do know that keeping skip lists by hand was not sufficient, and that the current mechanism is sufficient for Windows. >>> It won't work if you don't have the libraries. >> If so, put it in the expected-skip list for Linux. > Sure, but that holds for nearly every test requiring extension > modules: if some library isn't there, the module doesn't work. > ... > If the test is added to the skip list, the a potential problem will be > hidden: it may be that the test ought to pass on a certain > installation, but doesn't because of a real bug in Python. So adding > the test into the skipped list on grounds of the library potentially > unavailable hides real problems. expected-skips doesn't hide test failures. If, wrt a specific test T, you want to say that the failure to import a thing is a failure *of* T, then the ImportError should be caught and transformed into an exception other than {ImportError, TestSkipped}, so that regrtest treats T as a failure instead of a skip. For example, raising TestFailed in that case would be thoroughly appropriate. And TestSkipped should never be raised for a test failure. If Unix weenies want a much fancier system, or want to exempt Unix entirely from this mechanism, that's fine by me, but I'm not going to do the work. But from what I've seen, there are still differing notions of "the default configuration" on various non-Windows platforms, and expected-skips does point to problems on them in real life. 
The {pass, fail, skip} partition of test outcomes has been there forever, and before the expected-skip mechanism was added too, users (including Python developers!) had no clue about whether the tests that skipped on their platforms were OK or were really errors. From what I've seen, introducing the expected-skips mechanism did more good than harm everywhere, although it did most good on Windows, Windows relatives, and Macs. If you exempt Linux from the mechanism, then everyone (except, presumably, you) who runs tests on Linux is going to see that test_normalization is skipped, and they're going to ask whether that's expected, or whether it's an error. It doesn't matter that the regrtest output will no longer say that the skip isn't expected on Linux, they'll ask about *every* test that got skipped. For goodness sake, they even used to ask whether it was expected than winsound got skipped on Linux -- and reasonably so, since "winsound" doesn't mean anything to most users. Neither does "normalization", for that matter. All tests are baffling to someone installing Python for the first time, and I wager most tests remain obscure to everyone except their author. The best that can be done with test_normalization under this framework is to be realistic: virtually nobody has the file it needs, so it's going to get skipped virtually everywhere. A clearer case of an expected skip is hard to imagine, so it should be added to the expected-skip list everywhere. The test should also be changed to avoid conflating the skip outcome with various possible file failure outcomes. That is, instead of try: data = open("NormalizationTest.txt", "r").readlines() except IOError: raise TestSkipped("NormalizationTest.txt not found, ..." ...) it should do something like if not os.path.exists("NormalizationTest.txt"): raise TestSkipped("NormalizationTest.txt not found, ..." ...) data = open("NormalizationTest.txt").readlines() A permission problem, or problem with reading the file, are errors here, and should be allowed to propagate back to regrtest so that they can be reported as test failure (!= test skipped). From martin@v.loewis.de Sun Nov 24 18:02:02 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 19:02:02 +0100 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: References: Message-ID: Tim Peters writes: > > Whether the test passes or fails simply has nothing to what system you > > run it on. > > Nor does passing or failing have anything to do with expected-skips. Right. I meant to make a different point, though: For many of the tests that are somtimes skipped, knowing the system does not tell you whether the test will should rightfully be skipped, on that system. > I'm not sure exactly what that means, so can't guess whether it would be > sufficient. I do know that keeping skip lists by hand was not sufficient, > and that the current mechanism is sufficient for Windows. Apparently, the only useful information drawn from the skip lists is "there is a new test that is skipped, should we update the windows build process?". In all other installations, the only action people take is to silence the message, which wouldn't be necessary if the message wasn't produced in the first place. > expected-skips doesn't hide test failures. 
If, wrt a specific test > T, you want to say that the failure to import a thing is a failure > *of* T, then the ImportError should be caught and transformed into > an exception other than {ImportError, TestSkipped}, so that regrtest > treats T as a failure instead of a skip. That is my point: I *cannot know* in advance whether a failure to import T is a failure of T, in the general case. You seem to think that one can tell by just looking at sys.platform whether import ought to succeed: a failed import is either expect (pass), or unexpected (fail, i.e. must investigate). This is not the case, not even on Windows - except for your build environment, and the Pythonlabs Windows distribution. For example, you expect to execute test_bz2. However, on somebody else's Windows distribution, test_bz2 might be skipped because the libraries where not all in place when Batch Build was invoked. So should we add test_bz2 to expected skips on Windows, as somebody might not have bz2 libraries in his build environment? > If Unix weenies want a much fancier system, or want to exempt Unix entirely > from this mechanism, that's fine by me, but I'm not going to do the work. > But from what I've seen, there are still differing notions of "the default > configuration" on various non-Windows platforms, and expected-skips does > point to problems on them in real life. There are certainly modules which will never work on certain systems; in those cases, the mechanism works as designed. In general, you need to know much more than just the system name to determine whether skipping a test is expected. > The {pass, fail, skip} partition of test outcomes has been there forever, > and before the expected-skip mechanism was added too, users (including > Python developers!) had no clue about whether the tests that skipped on > their platforms were OK or were really errors. They still don't have a clue. When they find a non-empty output, they submit a patch listing all the tests that were skipped, and claim that this list is correct for their platform. > From what I've seen, introducing the expected-skips mechanism did > more good than harm everywhere, although it did most good on > Windows, Windows relatives, and Macs. I disagree. It generates a constant flow of useless patches. > If you exempt Linux from the mechanism, then everyone (except, presumably, > you) who runs tests on Linux is going to see that test_normalization is > skipped, and they're going to ask whether that's expected, or whether it's > an error. In this specific case, the "skipped" message gives a clear indication of the problem. > It doesn't matter that the regrtest output will no longer say > that the skip isn't expected on Linux, they'll ask about *every* test that > got skipped. For goodness sake, they even used to ask whether it was > expected than winsound got skipped on Linux -- and reasonably so, since > "winsound" doesn't mean anything to most users. If that is the problem to solve, we can always perform print "Those skips are all expected." at the end of the run - no need to maintain explicit lists. Regards, Martin From tim.one@comcast.net Sun Nov 24 18:28:36 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 24 Nov 2002 13:28:36 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: Message-ID: If you want to change the skip mechanism on Linux, I've already said that's fine by me. Let's hear from other Linux users: is the mechanism useless to you? Counterproductive? I've got nothing new to say about it. 
I haven't seen a specific suggestion for what you'd *like* to do here, apart from what appear to be rhetorical devices. > ... > So should we add test_bz2 to expected skips on Windows, as somebody > might not have bz2 libraries in his build environment? Until this hypothetical becomes a reality, no. If it does, perhaps. > ... > There are certainly modules which will never work on certain systems; > in those cases, the mechanism works as designed. In general, you need > to know much more than just the system name to determine whether > skipping a test is expected. Perhaps "skip expected" should be split into "skip expected" and "no fuckin' clue" . A system that gives users no guidance about which skips are expected isn't attractive either (we've already done that; it didn't work; if the current system could be improved for the systems you run on, please feel free to improve it). From guido@python.org Sun Nov 24 18:55:52 2002 From: guido@python.org (Guido van Rossum) Date: Sun, 24 Nov 2002 13:55:52 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: Your message of "Sun, 24 Nov 2002 13:28:36 EST." References: Message-ID: <200211241855.gAOItqE20348@pcp02138704pcs.reston01.va.comcast.net> > If you want to change the skip mechanism on Linux, I've already said > that's fine by me. Let's hear from other Linux users: is the > mechanism useless to you? Counterproductive? I've got nothing new > to say about it. I haven't seen a specific suggestion for what > you'd *like* to do here, apart from what appear to be rhetorical > devices. I use two Linux boxes with somewhat different setups, and some tests are skipped on one box but not on the other. I'd be happier if I could tune the skip mechanism by editing a file that's not checked in. The following would probably work well: when the file doesn't exist, the internal table is used; but when the file exists, it replaces the internal table. (I thought about copying the internal table to the file when the file doesn't exist, but the problem with that is that if the internal evolves, you don't automatically get the updates, even if you don't edit the file.) --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@comcast.net Sun Nov 24 18:59:50 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 24 Nov 2002 13:59:50 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: Message-ID: FYI, I fiddled things so that test_normalization is expected to be skipped now if and only if NormalizationTest.txt doesn't exist (BTW, that's over 2M bytes, so I doubt many will bother to download it). This is one response to complaints that the current mechanism is too stupid: teach it what *you* know. In this case, it was easy to do so, just a matter of deciding to do so. I don't know how hard it may get in other cases. From esr@thyrsus.com Sun Nov 24 18:53:34 2002 From: esr@thyrsus.com (Eric S. Raymond) Date: Sun, 24 Nov 2002 13:53:34 -0500 Subject: [Python-Dev] Expect in python In-Reply-To: <20021124152747.A23037@ibook.distro.conectiva> References: <20021122003143.GA2242@thyrsus.com> <20021122082216.GA5519@thyrsus.com> <20021124152747.A23037@ibook.distro.conectiva> Message-ID: <20021124185334.GD2104@thyrsus.com> Gustavo Niemeyer : > If you get anything better than the hackish script I made, please let me > know (and include an attached copy ;-). It's better :-). But therre are a bunch of edge cases I want to test before I release it. -- Eric S. 
Raymond From pobrien@orbtech.com Sun Nov 24 19:04:39 2002 From: pobrien@orbtech.com (Patrick K. O'Brien) Date: Sun, 24 Nov 2002 13:04:39 -0600 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: References: Message-ID: <200211241304.39794.pobrien@orbtech.com> On Sunday 24 November 2002 12:28 pm, Tim Peters wrote: > If you want to change the skip mechanism on Linux, I've already said > that's fine by me. Let's hear from other Linux users: is the > mechanism useless to you? Counterproductive? I've got nothing new > to say about it. I haven't seen a specific suggestion for what you'd > *like* to do here, apart from what appear to be rhetorical devices. Here is my 2 cents worth. For some time I've been using the binary distro of Python on Windows. But I recently compiled Python for Linux, for the first time. When I saw the list of unexpected skips, it prompted me to download and install the necessary libraries and recompile Python. So I definitely found that aspect to be very helpful. -- Patrick K. O'Brien Orbtech http://www.orbtech.com/web/pobrien ----------------------------------------------- "Your source for Python programming expertise." ----------------------------------------------- From mal@lemburg.com Sun Nov 24 19:26:50 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Sun, 24 Nov 2002 20:26:50 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib/test test_normalization.py,1.2,1.3 In-Reply-To: References: Message-ID: <3DE127FA.3070801@lemburg.com> tim_one@users.sourceforge.net wrote: > Update of /cvsroot/python/python/dist/src/Lib/test > In directory sc8-pr-cvs1:/tmp/cvs-serv15172/Lib/test > > Modified Files: > test_normalization.py > Log Message: > Split long line. > XXX If NormalizationTest.txt is required to run this test, why isn't it > checked into the project? Have you had a look at the file size ? ;-) > --- 3,10 ---- > from unicodedata import normalize > try: > ! data = open("NormalizationTest.txt", "r").readlines() > except IOError: > ! raise TestSkipped("NormalizationTest.txt not found, download from " > ! "http://www.unicode.org/Public/UNIDATA/NormalizationTest.txt") > > class RangeError: -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From aahz@pythoncraft.com Sun Nov 24 21:05:36 2002 From: aahz@pythoncraft.com (Aahz) Date: Sun, 24 Nov 2002 16:05:36 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: <200211241304.39794.pobrien@orbtech.com> References: <200211241304.39794.pobrien@orbtech.com> Message-ID: <20021124210536.GB26352@panix.com> On Sun, Nov 24, 2002, Patrick K. O'Brien wrote: > > Here is my 2 cents worth. For some time I've been using the binary > distro of Python on Windows. But I recently compiled Python for Linux, > for the first time. When I saw the list of unexpected skips, it > prompted me to download and install the necessary libraries and > recompile Python. So I definitely found that aspect to be very helpful. +1 -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "If you don't know what your program is supposed to do, you'd better not start writing it." 
--Dijkstra From tim.one@comcast.net Sun Nov 24 21:16:20 2002 From: tim.one@comcast.net (Tim Peters) Date: Sun, 24 Nov 2002 16:16:20 -0500 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: <200211241304.39794.pobrien@orbtech.com> Message-ID: [Patrick K. O'Brien] > Here is my 2 cents worth. For some time I've been using the binary > distro of Python on Windows. But I recently compiled Python for Linux, > for the first time. When I saw the list of unexpected skips, it > prompted me to download and install the necessary libraries and > recompile Python. So I definitely found that aspect to be very helpful. That's an interesting and potentially useful meaning for a skip that isn't expected: Python is meant to be a "Batteries Included" offering, so if a battery is missing, it can be helpful if the test suite tells you so. Couple that with Guido's idea of supporting an override file, and my obvservation that software can be taught new things , and see what pops out. From martin@v.loewis.de Sun Nov 24 22:30:02 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 24 Nov 2002 23:30:02 +0100 Subject: [Python-Dev] test failures on Debian unstable In-Reply-To: References: Message-ID: Tim Peters writes: > That's an interesting and potentially useful meaning for a skip that isn't > expected: Python is meant to be a "Batteries Included" offering, so if a > battery is missing, it can be helpful if the test suite tells you so. > Couple that with Guido's idea of supporting an override file, and my > obvservation that software can be taught new things , and see what > pops out. That may be all well, but it still doesn't tell me how to proceed with patches that people submit making arbitrary changes to the _expectations dictionary (#551977, or the meanwhile rejected #535335). I don't care (primarily) about Linux here, but about all the other systems where regrtest.py gives the helpful message Ask someone to teach regrtest.py about which tests are expected to get skipped on foo. How is someone supposed to know? How can someone find out? Regards, Martin From lists@webcrunchers.com Sun Nov 24 22:34:38 2002 From: lists@webcrunchers.com (John D.) Date: Sun, 24 Nov 2002 14:34:38 -0800 Subject: [Python-Dev] Problems installing Python Message-ID: I just installed Python, and got messages something like "network not configured", so it wouldn't built certain modules (https, etc). What causes this? We got this message when we typed: make test I haven't got the time to go through more then 2500 mails from this list in the past 3 weeks to see if anyone else reported this problem. Can someone shed some light on this? John From bac@OCF.Berkeley.EDU Sun Nov 24 23:46:54 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sun, 24 Nov 2002 15:46:54 -0800 (PST) Subject: [Python-Dev] Problems installing Python In-Reply-To: Message-ID: [John D.] > I just installed Python, and got messages something like "network not configured", so it wouldn't built certain modules (https, etc). What causes this? > > We got this message when we typed: make test > I haven't got the time to go through more then 2500 mails from this list in the past 3 weeks to see if anyone else reported this problem. > > Can someone shed some light on this? > John, this is the wrong place to ask this type of question. python-dev is for discussing the design of Python. Tech support questions like this are best posted to comp.lang.python. -Brett C. 
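Picking up the skip-expectations thread above: Guido's override-file idea and Tim's "teach it what *you* know" remark are easy to sketch in a few lines. The following is purely illustrative -- the file name regrtest_skips.local, the helper name expected_skips() and the sample table entries are assumptions made for this example, not the real regrtest code.

    import os
    import sys

    # Built-in table: platform -> tests whose skips are considered expected.
    # The entries here are placeholders, not the real regrtest data.
    _expectations = {
        'linux2': 'test_winreg test_winsound',
        'win32':  'test_al test_crypt',
    }

    def expected_skips(platform=sys.platform):
        """Return a dict (used as a set) of tests expected to be skipped.

        If a local, unversioned override file exists, it replaces the
        built-in table entirely, as Guido suggests; otherwise the built-in
        table is used unchanged, so its updates keep reaching everyone who
        never created an override.
        """
        override = 'regrtest_skips.local'      # assumed file name
        if os.path.isfile(override):
            names = open(override).read().split()
        else:
            names = _expectations.get(platform, '').split()
        result = {}
        for name in names:
            result[name] = 1
        return result

A box whose skips differ from the stock table would then drop its own list of test names into the override file, without touching anything that is checked in.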
From bac@OCF.Berkeley.EDU Mon Nov 25 00:07:36 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sun, 24 Nov 2002 16:07:36 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211241112.51330.aleax@aleax.it> Message-ID: [Alex Martelli] > On Sunday 24 November 2002 10:34 am, Brett Cannon wrote: > ... > > Actually, with the way it is coded, you would want arg_cnt defaulting to > > 1; it is meant to represent the number of arguments sans the one the > > iterator is being assigned to. > > Right -- sorry, I let the parameter's name fool me, without checking more > carefully. Maybe renaming it to number_of_splits or something like that > would help the somewhat-careless reader;-). > =) This is what happens when you just code and post on a whim. =) > > > And I didn't know ``iter()`` just returned its argument if it was already > > an iterator. Should have, though, since all other constructor methods are > > like that (e.g. dict(), list(), etc.). > > Hmmm, not really: list(L) returns a list L1 such that L1 == L but id(L1) != > id(L) -- i.e., list and dict give a *shallow copy* of their list and dict > argument respectively. iter is different... > Oops. Rather cool, though, since that is a nice way to copy a list just like the [:] trick. > > > I think the question becomes whether we want to start adding modules that > > service specific language abilities like generators. We have the string > > module, but that is on its way out (right?). There is no module for list, > > dict, or tuple tricks or helper functions. There is also no module for > > No, but that's because lists and dicts (and strings) package their > functionality up as methods, so there's no need to have any supporting > module. Tuples don't need anything special. Generic iterators do not offer > pre-cooked rich functionality nor do they have methods usable to locate > such functionality, yet clearly they could well use it, therefore a module > appears to be exactly the right place to locate such functionality in. > True, but there are possible things that are handy but you don't want to have as a full-blown method (and of course because I have said this no example is coming to my mind). I guess I just have not come up with that many fancy tricks for iterators that would fill a module. The only other thing I have found handy is a wrapper function that keeps copies of what an iterator returns so you can index the result later; memoization for iterators. > There's no need to distinguish generators from other iterators in this > respect; generators are just one neat implementation of iterators. > I totally agree with that assessment. Generators have always been a handy way of writing ``__iter__()``. This is one of the "cool" features of Python that always just makes people I talk to about Python go "wow!". > > family, but that was rejected. The question is do we want to move towards > > adding things like this, or should it stay relegated to places like the > > Python Cookbook (which reminds me I should probably put this sucker up on > > the site) and the Demo/ directory. Something to consider. > > I think that having everybody re-code the same powerful idioms all the time, > when the idioms are well suited to being encapsulated in functions that the > standard library might supply, is sub-optimal, even if the alternative is that > the re-coding be mostly copying from the cookbook. > True. This does seem to go with the "batteries included" idea, although at a more fundamental coding level. 
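Brett's "memoization for iterators" wrapper is simple to write down; here is one hypothetical shape for it (an illustration for this discussion, not code from the thread): it caches everything the underlying iterator yields so that earlier results can be indexed later.

    class MemoIter:
        """Wrap an iterator and remember everything it has produced so far."""

        def __init__(self, iterable):
            self._it = iter(iterable)
            self._cache = []                 # items already yielded

        def __iter__(self):
            return self

        def next(self):
            item = self._it.next()           # StopIteration propagates normally
            self._cache.append(item)
            return item

        def __getitem__(self, index):
            # Consume the underlying iterator until the requested index exists.
            while index >= len(self._cache):
                self.next()
            return self._cache[index]

    words = MemoIter(['alpha', 'beta', 'gamma', 'delta'])
    print words[2]       # gamma -- forces consumption of the first three items
    print words.next()   # delta -- iteration resumes past the cached items

This is also the kind of helper that would naturally live in the iterator-support module being discussed.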
-Brett From bac@OCF.Berkeley.EDU Mon Nov 25 00:14:16 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sun, 24 Nov 2002 16:14:16 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <20021124140538.05381488D@bespin.org> Message-ID: [Armin Rigo] > I've always felt it counter-intuitive and even confusing that lists and dicts are built-in types with > methods, whereas iterators are just an expected interface with no user-visible methods apart from > 'next', which is also a magic name (a magic name with no __??). > What have people found with newbies and their comprehension of iterators? I ran into one person who has recently taught himself Python and didn't know what an iterator was. How much promotion of iterators is occuring out in the community? > The lack of common implementational part between iterator forces us to use a module to collect useful > operations, which goes against the current everything-as-a-method trend. > > I wish we would have a standard built-in 'iter' type with a standard set of methods, instead of the > current 'iter(x)' which is essentially just 'x.__iter__()' and returns anything. If 'iter' were a > built-in type it could provide additional methods, e.g. > > a, b, c = iter(x).peel(2) > > as an equivalent to the proposed iterator-returning 'a,b,*c=x'. > Interesting thought. One reason I can think of it not being considered a built-in type is that iterators (at least to me) are wrappers around an existing type. I view built-in types as fundamental and as bare-bones as you care to go; iterators sit at a level above that. > A similar option would be to require that iterator objects be instances of subtypes of 'iter'. Well, > it's probably too late to change that kind of thing anyway. > Not necessarily. Something like this could be done with this iterator module idea. Could have a class in there that iterators could subclass. Problem with that is that iterators can be generators and those are not classes at all (I take the "generators are pausable functions" view). There is always Python 3. =) -Brett From bac@OCF.Berkeley.EDU Mon Nov 25 00:52:33 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Sun, 24 Nov 2002 16:52:33 -0800 (PST) Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: Message-ID: [Martin v. Loewis] > Brett Cannon writes: > > > > where fields is a list of (name,doc) tuples. The resulting thing would > > > be similar to os.stat_result: you need to call it with the mandatory > > ^^^^^^^ > > You meant "can", right, Martin? > > Probably an English-language issue: If you call it, the mandatory > fields must be present as positional arguments (i.e. you can't call it > and omit mandatory fields, or pass them as keyword arguments). > Yeah, that is what I thought you meant. > > I think the idea is good if you can get it to tie directly into C code. > > That would get a +1 from me. If not, then +0. > > What means to "tie into C code" here? > Somehow being able to use this setup with C extensions; possibly as a direct hook into actual C structs with minimal Python objection conversion fuss. So setting values in the structseq could somehow alter the underlying C struct directly. That is what I took away from one of your earlier comments. 
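Returning to Armin's iter(x).peel(2) idea above: it could equally be spelled as a plain function in such a module, with no new built-in type at all. A rough sketch, under the assumption that returning the leftover iterator as the last value is the desired behaviour:

    def peel(iterable, n):
        """Return the first n items individually, plus an iterator over
        the rest -- roughly what 'a, b, c = iter(x).peel(2)' or the
        proposed 'a, b, *c = x' would mean for n == 2."""
        it = iter(iterable)
        head = []
        for i in range(n):
            head.append(it.next())       # too few items -> StopIteration
        return tuple(head) + (it,)

    a, b, rest = peel(range(10), 2)
    print a, b                           # 0 1
    print list(rest)                     # [2, 3, 4, 5, 6, 7, 8, 9]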
-Brett From python@rcn.com Mon Nov 25 07:02:55 2002 From: python@rcn.com (Raymond Hettinger) Date: Mon, 25 Nov 2002 02:02:55 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) Message-ID: <004101c29450$adfe1680$125ffea9@oemcomputer> Here's a write-up for a proposed dictionary update method. Comments are invited while I work on a patch, docs, tests, etc. Raymond Hettinger ------------------------------------------------------------------------ METHOD SPECIFICATION def sequpdate(self, iterable, value=True): for elem in iterable: self[elem] = value return self USE CASES # Fast Membership testing termwords = dict('End Quit Stop Abort'.split()) . . . if lexeme in termwords: sys.exit(0) # Removing duplicates seq = dict().sequpdate(seq).keys() # Initializing or resetting mapped accumulators absences = dict('Tom Dick Harry'.split(), 0) for name, date in classlog: absences[name] += 1 # Intentionally raises KeyError for invalid names RATIONALE Examining code in the library, the cookbook, and the vaults of parnassus, the most common practice for membership testing is a sequential search of a list, tuple, or string. Where the writers were sophisticated and cared about performance, they used in-line versions of the python fragment shown above. I have not found a single case where people took the time to "roll their own" pure python function as shown above. Even if they had, the construction time would take twice as long as with a C coded method. IOW, practices are tending toward slow or verbose approaches in the absence of a quick, convenient, standardized tool for moving sequences into dictionaries. The problem of removing duplicates from a sequence is most frequently solved with inline use of a code fragment similar to the one shown above. Again, you almost never see people taking the time to build their own uniq function, factoring all the effort into a single, tested, reusable code fragment. The exception is Tim's wonderful, general purpose recipe for uniquifying anything -- unfortunately, I don't think you ever see his code used in practice. Also, since all containers support __iter__, they can be fed to list(); however, in the absence of the above method, they cannot be fed to dict() without shenanigans for building item lists. The proposed method makes todict() as doable as tolist() and encourages switching to appropriate data structures. OBJECTIONS TO THE PRIOR IDEA OF EXPANDING THE DICT CONSTRUCTOR (SF 575224) 1. The constructor was already overloaded. 2. Weird syntax was needed to fit with other constructor syntax. 3. The sets module was going to meet all needs. The first two comments led to this revision in method form. The Sets module met several needs centering around set mathematics; however, for membership testing, it is so slow that it is almost always preferable to use dictionaries instead (even without this proposed method). The slowness is intrinsic because of the time to search for the __contains__ method in the class and the time to setup a try/except to handle mutable elements. Another reason to prefer dictionaries is that there is one less thing to import and expect readers to understand. My experiences applying the Sets module indicates that it will *never* replace dictionaries for membership testing and will have only infrequent use for uniquification. From mal@lemburg.com Mon Nov 25 08:14:30 2002 From: mal@lemburg.com (M.-A. 
Lemburg) Date: Mon, 25 Nov 2002 09:14:30 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: <004101c29450$adfe1680$125ffea9@oemcomputer> References: <004101c29450$adfe1680$125ffea9@oemcomputer> Message-ID: <3DE1DBE6.4090100@lemburg.com> aymond Hettinger wrote: > Here's a write-up for a proposed dictionary update method. > Comments are invited while I work on a patch, docs, tests, etc. Just as data point: you may want to have a look at the setdict() API in mxTools. It does pretty much what you are suggesting. If you want to add this functionality as method, I'd also suggest to add invdict(): setdict(sequence,value=None) Constructs a dictionary from the given sequence. The sequence must contain hashable objects which are used as keys. The values are all set to value. Multiple keys are silently ignored. The function comes in handy whenever you need to work with a sequence in a set based context (e.g. to determine the set of used values). invdict(dictionary) Constructs a new dictionary from the given one with inverted mappings. Keys become values and vice versa. Note that no exception is raised if the values are not unique. The result is undefined in this case (there is a value:key entry, but it is not defined which key gets used). Other useful extensions would be extract() and iremove() as method on lists: extract(object,indices[,defaults]) Builds a list with entries object[index] for each index in the sequence indices. If a lookup fails and the sequence defaults is given, then defaults[nth_index] is used, where nth_index is the index of index in indices (confused ? it works as expected !). defaults should have the same length as indices. If you need the indices as well, try the irange function. The function raises an IndexError in case it can't find an entry in indices or defaults. iremove(object,indices) Removes the items indexed by indices from object. This changes the object in place and thus is only possible for mutable types. For sequences the index list must be sorted ascending; an IndexError will be raised otherwise (and the object left in an undefined state). (Note that the above work on all kinds of sequences/mappings, not just lists or dictionaries.) > > > Raymond Hettinger > > > ------------------------------------------------------------------------ > METHOD SPECIFICATION > > def sequpdate(self, iterable, value=True): > for elem in iterable: > self[elem] = value > return self > > > USE CASES > > # Fast Membership testing > termwords = dict('End Quit Stop Abort'.split()) > . . . > if lexeme in termwords: sys.exit(0) > > # Removing duplicates > seq = dict().sequpdate(seq).keys() > > # Initializing or resetting mapped accumulators > absences = dict('Tom Dick Harry'.split(), 0) > for name, date in classlog: > absences[name] += 1 # Intentionally raises KeyError for invalid names > > > RATIONALE > > Examining code in the library, the cookbook, and the vaults of > parnassus, the most common practice for membership testing is > a sequential search of a list, tuple, or string. Where the > writers were sophisticated and cared about performance, they > used in-line versions of the python fragment shown above. I have > not found a single case where people took the time to "roll their > own" pure python function as shown above. Even if they had, the > construction time would take twice as long as with a C coded method. 
> IOW, practices are tending toward slow or verbose approaches in the > absence of a quick, convenient, standardized tool for moving sequences > into dictionaries. > > The problem of removing duplicates from a sequence is most frequently > solved with inline use of a code fragment similar to the one shown > above. Again, you almost never see people taking the time to build > their own uniq function, factoring all the effort into a single, > tested, reusable code fragment. The exception is Tim's wonderful, > general purpose recipe for uniquifying anything -- unfortunately, I > don't think you ever see his code used in practice. > > Also, since all containers support __iter__, they can be fed to list(); > however, in the absence of the above method, they cannot be fed to dict() > without shenanigans for building item lists. The proposed method makes > todict() as doable as tolist() and encourages switching to appropriate > data structures. > > > OBJECTIONS TO THE PRIOR IDEA OF EXPANDING THE DICT CONSTRUCTOR (SF 575224) > > 1. The constructor was already overloaded. > 2. Weird syntax was needed to fit with other constructor syntax. > 3. The sets module was going to meet all needs. > > The first two comments led to this revision in method form. > > The Sets module met several needs centering around set mathematics; > however, for membership testing, it is so slow that it is almost > always preferable to use dictionaries instead (even without this > proposed method). The slowness is intrinsic because of the time > to search for the __contains__ method in the class and the time > to setup a try/except to handle mutable elements. Another reason > to prefer dictionaries is that there is one less thing to import > and expect readers to understand. My experiences applying the > Sets module indicates that it will *never* replace dictionaries for > membership testing and will have only infrequent use for uniquification. > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From python@rcn.com Mon Nov 25 07:35:31 2002 From: python@rcn.com (Raymond Hettinger) Date: Mon, 25 Nov 2002 02:35:31 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) References: <004101c29450$adfe1680$125ffea9@oemcomputer> Message-ID: <004b01c29455$3c63ce20$125ffea9@oemcomputer> Fix typos in the use cases section: ---------------------------------- # Fast Membership testing termwords = {}.sequpdate('End Quit Stop Abort'.split()) . . . if lexeme in termwords: sys.exit(0) # Removing duplicates seq = {}.sequpdate(seq).keys() # Initializing or resetting value accumulators absences = {}.sequpdate('Tom Dick Harry'.split()) for name, date in classlog: absences[name] += 1 # Intentionally raises KeyError for invalid names From martin@v.loewis.de Mon Nov 25 07:51:17 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 25 Nov 2002 08:51:17 +0100 Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: References: Message-ID: Brett Cannon writes: > Somehow being able to use this setup with C extensions; possibly as a > direct hook into actual C structs with minimal Python objection conversion > fuss. 
So setting values in the structseq could somehow alter the > underlying C struct directly. That is what I took away from one of your > earlier comments. It is certainly possible to use a structseq in a C extension, after all, this is what the posix and time modules do. However, there is no attempt to directly use the layout of the underlying struct. The struct module would probably be a better starting point for that. Regards, Martin From aleax@aleax.it Mon Nov 25 07:52:48 2002 From: aleax@aleax.it (Alex Martelli) Date: Mon, 25 Nov 2002 08:52:48 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: References: Message-ID: <200211250852.48735.aleax@aleax.it> On Monday 25 November 2002 01:14 am, Brett Cannon wrote: ... > Interesting thought. One reason I can think of it not being considered a > built-in type is that iterators (at least to me) are wrappers around an > existing type. I view built-in types as fundamental and as bare-bones as > you care to go; iterators sit at a level above that. list and dict are full of helpful "convenience" methods, some of which supply handier ways to perform operations for which there would also be other ways, e.g. alist.extend(another) as a readable equivalent of alist[len(alist):]=another and so on. Thus I don't understand what you mean by "fundamental and bare-bones" in this context. > > A similar option would be to require that iterator objects be instances > > of subtypes of 'iter'. Well, it's probably too late to change that kind > > of thing anyway. > > Not necessarily. Something like this could be done with this iterator > module idea. Could have a class in there that iterators could subclass. > Problem with that is that iterators can be generators and those are not > classes at all (I take the "generators are pausable functions" view). I don't see any problem (besides the SMOP of making it so) with ensuring that a generator, when called, returns an instance of (whatever subclass of) iter. Being "not classes at all" is a strange objection. After: x = somegenerator() x is an object of a system-determined type (and type==class, in perspective), so why coulnd't the system choose to have that type be a subtype of iter? What WOULD be intolerable, it appears to me, would be to *require* that user-coded iterators (classes exposing currently-suitable __iter__ and next methods) MUST subclass iter. That would break existing, running code, and go against the grain of Python, which nowhere else imposes such requirements. Having a (mix-in?) class that iterators COULD subclass (as Brent suggests) is one thing; for Python to REQUIRE such subtyping (as Armin appears to wish could be done) is quite another. Currently Python tends to supply extra optional functionality in modules, not packaged up as mix-in classes but rather as functions. For example, lists don't subclass a "Bsearchable" mix-in to get binary search capabilities; rather, module bisect supplies useful polymorphic functions. There is no need to subclass any "Shuffleable" class for a sequence to be subjected to shuffling: rather, module random supplies the useful polymorphic function shuffle. I like this general approach and I don't see why it should be abandoned for supplying useful polymorphic functionality on iterators. Alex From aleax@aleax.it Mon Nov 25 08:17:09 2002 From: aleax@aleax.it (Alex Martelli) Date: Mon, 25 Nov 2002 09:17:09 +0100 Subject: [Python-Dev] Half-baked proposal: * (and **?) 
in assignments In-Reply-To: References: Message-ID: <200211250917.09043.aleax@aleax.it> On Monday 25 November 2002 01:07 am, Brett Cannon wrote: ... > > Hmmm, not really: list(L) returns a list L1 such that L1 == L but id(L1) > > != id(L) -- i.e., list and dict give a *shallow copy* of their list and > > dict argument respectively. iter is different... > > Oops. Rather cool, though, since that is a nice way to copy a list just > like the [:] trick. Not quite "just like", which is good, because there is not a total overlap of functionality but rather two useful idioms with different semantics: x = y[:] binds x to a sequence of the same type as sequence y (tuple, list, string, ...) -- whatever (if anything) now happens to y, x remains bound to that, and y's type is preserved as the type of x; x = list(y) binds x to a LIST with the same items as iterable y (tuple, list, string, file, dict, ...) -- you can now alter x with any or all list operations, and y will not be affected in any way. > > No, but that's because lists and dicts (and strings) package their > > functionality up as methods, so there's no need to have any supporting > > module. Tuples don't need anything special. Generic iterators do not > > offer pre-cooked rich functionality nor do they have methods usable to > > locate such functionality, yet clearly they could well use it, therefore > > a module appears to be exactly the right place to locate such > > functionality in. > > True, but there are possible things that are handy but you don't want to > have as a full-blown method (and of course because I have said this no > example is coming to my mind). Easy examples for lists include random.shuffle and bisect.insert -- even though lists offer lots of functionality as methods, Python ALSO offers supporting modules with functions for more specialized needs. > I guess I just have not come up with that many fancy tricks for iterators > that would fill a module. The only other thing I have found handy is a > wrapper function that keeps copies of what an iterator returns so you can > index the result later; memoization for iterators. That is one example of a wrapper that could well use support from the underlying iterator it is wrapping -- if the underlying iterator already has e.g. a list, it WOULD be nice for the memoizer to be able to get to it rather than storing stuff yet once again, e.g. by the iterator exposing an optional special method for the purpose. Most "tricks" (not really tricky nor fancy) are on a less fundamental level, indeed quite analogous to the functionality of the bisect module. For example, merging 2 or N iterators, each of which returns items in sorted (i.e. increasing) order, into one iterator which iterleaves said items and also ensures it returns them in sorted (increasing) order. That's easy to code, just as binary-search is easy to code, but frequently useful, and silly to have people recode a hundred times with some risk of silly coding bugs. Other things, mostly even less fancy, are strongly suggested by languages which have a long tradition of lazy (non-strict) lists, such as Haskell, for example take and drop. Often I want for example to work on (e.g.) lines 3 to 7 included of a file, and "for line in take(5, drop(2, open("thefile"))):" is the first approach that comes to mind -- but as take and drop are not in the standard library I confess I most often don't bother coding them up and end up with less-clear and slightly more complicated code as a result. 
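The take and drop helpers Alex describes are a few lines each as generators. This is only a sketch of the idea (the names follow the Haskell functions he mentions; nothing like this exists in the standard library at this point):

    from __future__ import generators    # needed on Python 2.2

    def drop(n, iterable):
        """Skip the first n items of iterable, then yield the rest."""
        it = iter(iterable)
        for i in range(n):
            try:
                it.next()
            except StopIteration:
                return
        for item in it:
            yield item

    def take(n, iterable):
        """Yield at most the first n items of iterable."""
        it = iter(iterable)
        for i in range(n):
            try:
                item = it.next()
            except StopIteration:
                return
            yield item

    # Alex's example -- lines 3 to 7 (inclusive) of a file:
    # for line in take(5, drop(2, open("thefile"))):
    #     print line,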
Nothing Earth-shaking, of course, whence my musing that delaying this until some larger slice of the Python community gets used to iterator operations and some consensus emerges on them may be preferable. Alex From bac@OCF.Berkeley.EDU Mon Nov 25 09:16:54 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Mon, 25 Nov 2002 01:16:54 -0800 (PST) Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: <200211250852.48735.aleax@aleax.it> Message-ID: [Alex Martelli] > On Monday 25 November 2002 01:14 am, Brett Cannon wrote: > ... > > Interesting thought. One reason I can think of it not being considered a > > built-in type is that iterators (at least to me) are wrappers around an > > existing type. I view built-in types as fundamental and as bare-bones as > > you care to go; iterators sit at a level above that. > > list and dict are full of helpful "convenience" methods, some of which supply > handier ways to perform operations for which there would also be other ways, > e.g. alist.extend(another) as a readable equivalent of > alist[len(alist):]=another and so on. Thus I don't understand what you mean > by "fundamental and bare-bones" in this context. > In other words the data structures is not built off of something else. You can't get much simpler than a list, tuple, or string (yes, you don't need dicts, but they are wonderful things and basic to almost any major program). Iterators, though, are built off of something, and thus are not "bare-bones". It isn't about the methods as much as what is used to create the data structure. > > > > A similar option would be to require that iterator objects be instances > > > of subtypes of 'iter'. Well, it's probably too late to change that kind > > > of thing anyway. > > > > Not necessarily. Something like this could be done with this iterator > > module idea. Could have a class in there that iterators could subclass. > > Problem with that is that iterators can be generators and those are not > > classes at all (I take the "generators are pausable functions" view). > > I don't see any problem (besides the SMOP of making it so) with ensuring that > a generator, when called, returns an instance of (whatever subclass of) iter. > Being "not classes at all" is a strange objection. After: > x = somegenerator() > x is an object of a system-determined type (and type==class, in perspective), > so why coulnd't the system choose to have that type be a subtype of iter? > I guess you could do that, but as you point out below, I don't want to require subclassing by having generators turn into some special iter type. > What WOULD be intolerable, it appears to me, would be to *require* that > user-coded iterators (classes exposing currently-suitable __iter__ and next > methods) MUST subclass iter. That would break existing, running code, and go > against the grain of Python, which nowhere else imposes such requirements. > Having a (mix-in?) class that iterators COULD subclass (as Brent suggests) is > one thing; for Python to REQUIRE such subtyping (as Armin appears to wish > could be done) is quite another. > Here here! I am slowly being sold on this iterator module idea. > Currently Python tends to supply extra optional functionality in modules, not > packaged up as mix-in classes but rather as functions. For example, lists > don't subclass a "Bsearchable" mix-in to get binary search capabilities; > rather, module bisect supplies useful polymorphic functions. 
There is no > need to subclass any "Shuffleable" class for a sequence to be subjected to > shuffling: rather, module random supplies the useful polymorphic function > shuffle. I like this general approach and I don't see why it should be > abandoned for supplying useful polymorphic functionality on iterators. > OK, so it is starting to sound like a module that collects well-coded iterator code is a good thing. Perhaps a deferred PEP could be started that collected the suggested code to be included in the module? Or perhaps a patch on SF (or even a project)? That way this idea doesn't slip through the cracks for any reason and the rest of the Python community will hear about this and start thinking about it. People would also get the benefit of the code now instead of having to wait until Python 2.(>=4) to get the code. -Brett From guido@python.org Mon Nov 25 13:10:35 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 25 Nov 2002 08:10:35 -0500 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: Your message of "Mon, 25 Nov 2002 09:17:09 +0100." <200211250917.09043.aleax@aleax.it> References: <200211250917.09043.aleax@aleax.it> Message-ID: <200211251310.gAPDAao10052@pcp02138704pcs.reston01.va.comcast.net> I have no time to follow this discussion any more, but I'm worried that you all are wasting your time. I don't think that there's a sufficient need to add new syntax, and the idea of making it store an interator reduces its chances. --Guido van Rossum (home page: http://www.python.org/~guido/) From just@letterror.com Mon Nov 25 13:28:22 2002 From: just@letterror.com (Just van Rossum) Date: Mon, 25 Nov 2002 14:28:22 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: <004b01c29455$3c63ce20$125ffea9@oemcomputer> Message-ID: Raymond Hettinger wrote: > # Fast Membership testing > termwords = {}.sequpdate('End Quit Stop Abort'.split()) d.update(x) returns None. I would expect d.sequpdate() to do the same. A classmethods would be a nice solution here: >>> dict.fromseq('End Quit Stop Abort'.split()) {'End': True, 'Quit': True, 'Stop': True, 'Abort': True} >>> Classmethods rule as alternative constructors. Just From mwh@python.net Mon Nov 25 13:56:20 2002 From: mwh@python.net (Michael Hudson) Date: 25 Nov 2002 13:56:20 +0000 Subject: [Python-Dev] Re: release22-maint branch broken In-Reply-To: Tim Rice's message of "Fri, 22 Nov 2002 13:55:19 -0800 (PST)" References: Message-ID: <2msmxpda7v.fsf@starship.python.net> Tim Rice writes: > On Fri, 22 Nov 2002, A.M. Kuchling wrote: > > > Tim Rice wrote: > > > > > raise DistutilsPlatformError(my_msg) > > > distutils.errors.DistutilsPlatformError: invalid Python installation: > > > unable to open /usr/local/lib/python2.2/config/Makefile (No such file > > > or directory) > > > gmake: *** [sharedmods] Error 1 > > > > The revised version of sysconfig.py figures out if it's in the build > > directory by looking for a landmark file; the landmark is Modules/Setup. > > Does that file exist? > > Yes it does. > > I put some prints in > ... 
> argv0_path = os.path.dirname(os.path.abspath(sys.executable)) > print argv0_path > landmark = os.path.join(argv0_path, "Modules", "Setup") > print landmark > if not os.path.isfile(landmark): > python_build = 0 > print "python_build = 0" > elif os.path.isfile(os.path.join(argv0_path, "Lib", "os.py")): > python_build = 1 > print "python_build = 1" > else: > python_build = os.path.isfile(os.path.join(os.path.dirname(argv0_path), > "Lib", "os.py")) > print "else" > print python_build > del argv0_path, landmark > ... > > And get > ... > /usr/local/src/utils/Python-2 > /usr/local/src/utils/Python-2/Modules/Setup > else > 0 > running build > ... > > Could this breaking because I build outside of the source tree? I guess so. Where are you building? Why is sys.executable /usr/local/src/utils/Python-2/python? At least it looks like that's what's happened. Can you try (in your build directory) $ ./python ... >>> print sys.executable and see if that looks reasonable? Cheers, M. -- Monte Carlo sampling is no way to understand code. -- Gordon McMillan, comp.lang.python From guido@python.org Mon Nov 25 16:04:39 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 25 Nov 2002 11:04:39 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: Your message of "Mon, 25 Nov 2002 02:02:55 EST." <004101c29450$adfe1680$125ffea9@oemcomputer> References: <004101c29450$adfe1680$125ffea9@oemcomputer> Message-ID: <200211251604.gAPG4dn08814@odiug.zope.com> > def sequpdate(self, iterable, value=True): > for elem in iterable: > self[elem] = value > return self I'm strongly against the "return self" part. All your examples use x = {}.sequpdate(...) which suggests that you're really looking for a different constructor as a class method. --Guido van Rossum (home page: http://www.python.org/~guido/) From python@rcn.com Mon Nov 25 16:16:10 2002 From: python@rcn.com (Raymond Hettinger) Date: Mon, 25 Nov 2002 11:16:10 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) References: <004101c29450$adfe1680$125ffea9@oemcomputer> <200211251604.gAPG4dn08814@odiug.zope.com> Message-ID: <006f01c2949d$f8edb720$125ffea9@oemcomputer> > x = {}.sequpdate(...) > > which suggests that you're really looking for a different constructor > as a class method. Yes! Will revise the patch accordingly and use Just's suggested name, fromseq(). Do you prefer the default value to be None or True? Earlier discussions on python-dev showed that True is more meaningful to some in the context of membership testing. OTOH, dict.setvalue and dict.get both use None. Raymond Hettinger From guido@python.org Mon Nov 25 16:32:34 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 25 Nov 2002 11:32:34 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: Your message of "Mon, 25 Nov 2002 11:16:10 EST." <006f01c2949d$f8edb720$125ffea9@oemcomputer> References: <004101c29450$adfe1680$125ffea9@oemcomputer> <200211251604.gAPG4dn08814@odiug.zope.com> <006f01c2949d$f8edb720$125ffea9@oemcomputer> Message-ID: <200211251632.gAPGWYb08901@odiug.zope.com> > > x = {}.sequpdate(...) > > > > which suggests that you're really looking for a different constructor > > as a class method. > > Yes! > > Will revise the patch accordingly > and use Just's suggested name, fromseq(). > > Do you prefer the default value to be None or True? 
> Earlier discussions on python-dev showed that > True is more meaningful to some in the context of > membership testing. OTOH, dict.setvalue and > dict.get both use None. I think it should be None -- let's be explicit when we want True. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik@pythonware.com Mon Nov 25 17:25:42 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Mon, 25 Nov 2002 18:25:42 +0100 Subject: [Python-Dev] [#527371] sre bug/patch References: <20021105200259.A28375@ibook.distro.conectiva> <200211052207.gA5M7Mk24781@odiug.zope.com> <20021108191219.C11511@ibook.distro.conectiva> <200211082117.gA8LH6917726@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <097a01c294a7$af816fa0$0900a8c0@spiff> guido wrote: > > I've just noticed that sre_*.py were changed to include True/False, > > "foo in dict" and other recent conventions. Don't they break PEP = 291? > > Should we fallback those changes? Or perhaps PEP 291 applies just to > > C code? >=20 > Ask Fredrik Lundh. It looks like I made a mistake there. I'll sort this one out when I get there. (given that SRE hasn't built cleanly on 1.5.2 for a year or so without anyone complaining, it should be safe to bump the baseline to 2.1) From just@letterror.com Mon Nov 25 17:56:39 2002 From: just@letterror.com (Just van Rossum) Date: Mon, 25 Nov 2002 18:56:39 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: <200211251632.gAPGWYb08901@odiug.zope.com> Message-ID: Guido van Rossum wrote: > > Do you prefer the default value to be None or True? > > Earlier discussions on python-dev showed that > > True is more meaningful to some in the context of > > membership testing. OTOH, dict.setvalue and > > dict.get both use None. > > I think it should be None -- let's be explicit when we want True. I agree. Also: whatever the value is, it can't be any more meaningful than any other, as I don't think the value has _any_ meaning in this context ;-) Just From theller@python.net Mon Nov 25 18:56:54 2002 From: theller@python.net (Thomas Heller) Date: 25 Nov 2002 19:56:54 +0100 Subject: [Python-Dev] refactoring and documenting Modulefinder Message-ID: I 've just opened SF patch # 643711: refactoring and documenting ModuleFinder. The ultimate goal of this patch is to eventually move modulefinder into the standard library. It is planned that Just and I use this as a workspace for coordinating and sharing the work to be done, but anyone is invited to jump in with comments or other stuff as well. Thomas From fredrik@pythonware.com Mon Nov 25 20:22:13 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Mon, 25 Nov 2002 21:22:13 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) References: Message-ID: <003901c294c0$5a187860$ced241d5@hagrid> Just van Rossum wrote: > I agree. Also: whatever the value is, it can't be any more meaningful than any > other, as I don't think the value has _any_ meaning in this context ;-) if the value has no meaning, why not use a set? how many ways do we need to do the same thing? From skip@pobox.com Mon Nov 25 20:39:01 2002 From: skip@pobox.com (Skip Montanaro) Date: Mon, 25 Nov 2002 14:39:01 -0600 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib webbrowser.py,1.34,1.35 In-Reply-To: References: Message-ID: <15842.35429.245708.48343@montanaro.dyndns.org> Gustavo> Also, included skipstone support, as suggested by Fred in the Gustavo> mentioned bug. Is this such a wise idea? 
I have nothing against skipstone (never heard of it), but should we be in the business of trying to track every possible browser out there? Skip From tim@multitalents.net Mon Nov 25 20:45:48 2002 From: tim@multitalents.net (Tim Rice) Date: Mon, 25 Nov 2002 12:45:48 -0800 (PST) Subject: [Python-Dev] Re: release22-maint branch broken In-Reply-To: <2msmxpda7v.fsf@starship.python.net> Message-ID: On 25 Nov 2002, Michael Hudson wrote: > Tim Rice writes: > > > On Fri, 22 Nov 2002, A.M. Kuchling wrote: > > > > > Tim Rice wrote: > > > > > > > raise DistutilsPlatformError(my_msg) > > > > distutils.errors.DistutilsPlatformError: invalid Python installation: > > > > unable to open /usr/local/lib/python2.2/config/Makefile (No such file > > > > or directory) > > > > gmake: *** [sharedmods] Error 1 > > > > > > The revised version of sysconfig.py figures out if it's in the build > > > directory by looking for a landmark file; the landmark is Modules/Setup. > > > Does that file exist? > > > > Yes it does. > > > > I put some prints in > > ... > > argv0_path = os.path.dirname(os.path.abspath(sys.executable)) > > print argv0_path > > landmark = os.path.join(argv0_path, "Modules", "Setup") > > print landmark > > if not os.path.isfile(landmark): > > python_build = 0 > > print "python_build = 0" > > elif os.path.isfile(os.path.join(argv0_path, "Lib", "os.py")): > > python_build = 1 > > print "python_build = 1" > > else: > > python_build = os.path.isfile(os.path.join(os.path.dirname(argv0_path), > > "Lib", "os.py")) > > print "else" > > print python_build > > del argv0_path, landmark > > ... > > > > And get > > ... > > /usr/local/src/utils/Python-2 > > /usr/local/src/utils/Python-2/Modules/Setup > > else > > 0 > > running build > > ... > > > > Could this breaking because I build outside of the source tree? > > I guess so. > > Where are you building? Why is sys.executable > /usr/local/src/utils/Python-2/python? At least it looks like that's > what's happened. Yes, I'm building in /usr/local/src/utils/Python-2 > > Can you try (in your build directory) > > $ ./python > ... > >>> print sys.executable > > and see if that looks reasonable? .... tim@uw711 65% pwd /usr/local/src/utils/Python-2 tim@uw711 66% ./python Python 2.2.2 (#1, Nov 22 2002, 12:59:39) [C] on unixware7 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> print sys.executable /usr/local/src/utils/Python-2/python >>> .... > > Cheers, > M. > > -- Tim Rice Multitalents (707) 887-1469 tim@multitalents.net From niemeyer@conectiva.com Mon Nov 25 20:55:31 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Mon, 25 Nov 2002 18:55:31 -0200 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib webbrowser.py,1.34,1.35 In-Reply-To: <15842.35429.245708.48343@montanaro.dyndns.org> References: <15842.35429.245708.48343@montanaro.dyndns.org> Message-ID: <20021125185531.A5805@ibook.distro.conectiva> > Gustavo> Also, included skipstone support, as suggested by Fred in the > Gustavo> mentioned bug. > > Is this such a wise idea? I have nothing against skipstone (never heard of > it), but should we be in the business of trying to track every possible > browser out there? Considering the code already there, and Fred's comments, it looked like that was the idea. Something to take into account is that unlike other operating systems, in unix-like systems the browser preference is really spread, so it looks like a good idea to support "well-known" browsers. 
For example, I'd like to see Opera supported (I've heard it even has some Windows audience out there). I don't have a strong feeling about it, though. I'll accept peacefully whatever is decided. :-) -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From jrw@pobox.com Mon Nov 25 20:58:01 2002 From: jrw@pobox.com (John Williams) Date: Mon, 25 Nov 2002 14:58:01 -0600 Subject: [Python-Dev] Half-baked proposal: * (and **?) in assignments In-Reply-To: References: <200211250852.48735.aleax@aleax.it> Message-ID: <3DE28ED9.5020706@pobox.com> Alex Martelli wrote: > What WOULD be intolerable, it appears to me, would be to *require* > that user-coded iterators (classes exposing currently-suitable > __iter__ and next methods) MUST subclass iter. That would break > existing, running code, and go against the grain of Python, which > nowhere else imposes such requirements. Having a (mix-in?) class that > iterators COULD subclass (as Brent suggests) is one thing; for Python > to REQUIRE such subtyping (as Armin appears to wish could be done) is > quite another. What if you turn this around and place the burden on the Python system? Make "iter" a class rather than a function, and ensure that iter.__new__ always returns a subclass of "iter" like this (untested code): class iter(object): def __new__(cls, iterable): userIterator = iterable.__iter__() if isinstance(userIterator, iter): # Just like today's "iter" function. return userIterator else: # Build a wrapper. wrapper = object.__new__(iter) wrapper.next = userIterator.next if hasattr(userIterator, "__iter__"): wrapper.__iter__ = userIterator.__iter__ return wrapper def next(self): raise NotImplementedError def __iter__(self): return self # arbitrary new convenience methods here --jw From mhammond@skippinet.com.au Mon Nov 25 21:52:39 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Tue, 26 Nov 2002 08:52:39 +1100 Subject: [Python-Dev] Windows builders? Message-ID: I'd like some help from a Windows builder. I have a patch to add SSL support, and while I have completed the patch, I would like some help and feedback on how well I did ;) Basically I will just ask you to try my patches as they stand, and ensure you get a nice clean failure building SSL. Then install SSL itself, and make sure my patch then finds and builds it before finally building the .pyd. Mail me if you can help. It shouldn't take too long. Thanks, Mark. From python@rcn.com Mon Nov 25 23:07:05 2002 From: python@rcn.com (Raymond Hettinger) Date: Mon, 25 Nov 2002 18:07:05 -0500 Subject: [Python-Dev] Classmethod Help Message-ID: <018a01c294d7$ad2110a0$125ffea9@oemcomputer> GvR pointed me to you guys for help in the C implementation of the patch for a dictionary class method: class dict: def fromseq(cls, iterable, value=None): """Return a new dictionary with keys from iterable and values equal to value.""" result = {} for elem in iterable: result[elem] = value return result fromseq = classmethod(fromseq) print dict.fromseq('collaborative') print dict().fromseq('associative') I've already C code the fromseq() as defined as above. The question is how to make it a class method and attach it to the dictionary type object. If you can help, please send me a note. Thanks in advance. BTW, I know this is ordinarily not the place to ask for help, but there are only a handful of people who understand descriptors at the C level (and most of them are currently 100% consumed by Zope priorities). 
Raymond Hettinger From just@letterror.com Tue Nov 26 00:17:31 2002 From: just@letterror.com (Just van Rossum) Date: Tue, 26 Nov 2002 01:17:31 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: <003901c294c0$5a187860$ced241d5@hagrid> Message-ID: Fredrik Lundh wrote: > if the value has no meaning, why not use a set? how many ways > do we need to do the same thing? True, very true. This basically kills the idea for me. Just From pedronis@bluewin.ch Tue Nov 26 00:11:59 2002 From: pedronis@bluewin.ch (Samuele Pedroni) Date: Tue, 26 Nov 2002 01:11:59 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) References: Message-ID: <055701c294e0$70412d60$6d94fea9@newmexico> From: "Just van Rossum" > Fredrik Lundh wrote: > > > if the value has no meaning, why not use a set? how many ways > > do we need to do the same thing? > > True, very true. This basically kills the idea for me. > If I understood correctly the not-so-veiled consideration is that sets are slower and always will be. "The Sets module met several needs centering around set mathematics; however, for membership testing, it is so slow that it is almost always preferable to use dictionaries instead (even without this proposed method). The slowness is intrinsic because of the time to search for the __contains__ method in the class and the time to setup a try/except to handle mutable elements. Another reason to prefer dictionaries is that there is one less thing to import and expect readers to understand. My experiences applying the Sets module indicates that it will *never* replace dictionaries for membership testing and will have only infrequent use for uniquification." So the purist solution would be to work long-term on improving set speed. regards. From guido@python.org Tue Nov 26 01:12:54 2002 From: guido@python.org (Guido van Rossum) Date: Mon, 25 Nov 2002 20:12:54 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: Your message of "Tue, 26 Nov 2002 01:11:59 +0100." <055701c294e0$70412d60$6d94fea9@newmexico> References: <055701c294e0$70412d60$6d94fea9@newmexico> Message-ID: <200211260112.gAQ1Cse21075@pcp02138704pcs.reston01.va.comcast.net> > If I understood correctly the not-so-veiled consideration is that sets are > slower and always will be. > > "The Sets module met several needs centering around set mathematics; > however, for membership testing, it is so slow that it is almost > always preferable to use dictionaries instead (even without this > proposed method). The slowness is intrinsic because of the time > to search for the __contains__ method in the class and the time > to setup a try/except to handle mutable elements. Another reason > to prefer dictionaries is that there is one less thing to import > and expect readers to understand. My experiences applying the > Sets module indicates that it will *never* replace dictionaries for > membership testing and will have only infrequent use for uniquification." > > So the purist solution would be to work long-term on improving set speed. There seems to be a misunderstanding about the status of the sets module. It is an attempt to prototype the set API without adding new C code. Once sets are accepted as a useful datatype, and we've settled upon the API, they should be reimplemented in C. Perhaps the current set implementation could be made faster by limiting it somewhat more? 
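For readers following the speed argument, the two membership-testing idioms being compared look like this (illustrative only; sets.Set is the prototype module under discussion, and print-statement syntax matches the Python of the day):

    # Dict-as-set: membership goes straight to the C-level __contains__.
    termwords = {}
    for word in 'End Quit Stop Abort'.split():
        termwords[word] = 1
    print 'Quit' in termwords            # prints a true value

    # sets.Set: clearer intent, but its __contains__ runs Python-level
    # code, which is where the speed concern comes from.
    from sets import Set
    termset = Set('End Quit Stop Abort'.split())
    print 'Quit' in termset              # prints a true value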
The current API attempts to be fast *and* flexible, but tends to favor correctness over speed where a trade-off has to be made. But maybe that's a poor way of selling a new built-in data type, and we would do better by having a truly fast implementation that is more limited? It's easier to remove limitations than to add them. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@comcast.net Tue Nov 26 02:07:39 2002 From: tim.one@comcast.net (Tim Peters) Date: Mon, 25 Nov 2002 21:07:39 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: <200211260112.gAQ1Cse21075@pcp02138704pcs.reston01.va.comcast.net> Message-ID: [Guido] > ... > Perhaps the current set implementation could be made faster by > limiting it somewhat more? The current API attempts to be fast *and* > flexible, but tends to favor correctness over speed where a trade-off > has to be made. But maybe that's a poor way of selling a new built-in > data type, and we would do better by having a truly fast > implementation that is more limited? It's easier to remove > limitations than to add them. I don't think you *can* get "fast" membership testing unless sets inherit the C-level dict.__contains__ directly. The time burden of going thru a Python __contains__ method can't be overcome. For all the rest, it's plenty fast enough for me. Note that the spambayes project uses sets freely, mostly for uniquification, but also for convenience in manipulating usually-small sets of email.Message objects. If you're going to *do* something with the set elements (which that project does), the time to uniquify the original sequence is likely (as it is in that project) trivial compared to the rest. For contexts requiring heavy-duty repeated membership testing, though, spambayes code still used a straight dict with values 1. premature-etc-ly y'rs - tim From lists@webcrunchers.com Tue Nov 26 03:23:01 2002 From: lists@webcrunchers.com (John D.) Date: Mon, 25 Nov 2002 19:23:01 -0800 Subject: [Python-Dev] Some bugs in Python? Is this reportable? Message-ID: This Python "feature" has been here for so long, I assumed it was supposed to work this way... Problem is when a control-c is typed, it does a CORE DUMP! OpenBSD 3.2, 3.1, 3.0, 2.9, and beyond. i386 build. Standard PC. Python 2.2.2, 2.2.1, 2.1, and beyond. Standard build. Python 2.2.2 (#1, Nov 20 2002, 02:13:00) [GCC 2.95.3 20010125 (prerelease)] on openbsd3 Type "help", "copyright", "credits" or "license" for more information. >>> >>> pid 1234: Fatal error '_pq_insert_tail: Already in priority queue' at line 196 in file /usr/src/lib/libc_r/uthread/uthread_priority_queue.c (errno = 4) Abort (core dumped) John From niemeyer@conectiva.com Tue Nov 26 04:33:36 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 26 Nov 2002 02:33:36 -0200 Subject: [Python-Dev] Dictionary evaluation order Message-ID: <20021126023336.A9339@ibook.distro.conectiva> I was just looking at the bug [#448679] Left to right It mentions that code like that {f1():f2(), f3():f4()} Will call these functions in the order f2, f1, f4, f3. What should we do about it? Tim mentions that "When [Tim] asked Guido about that some years ago, he agreed it was a bug.". Is it too late to fix it, or is it still a desirable fix? 
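To watch the behaviour Gustavo describes, a tiny self-contained demonstration can record the call order (the f1..f4 here are stand-ins defined just for this example, not functions from the bug report):

    order = []

    def make(n):
        def func():
            order.append(n)
            return n
        return func

    f1, f2, f3, f4 = make(1), make(2), make(3), make(4)
    d = {f1(): f2(), f3(): f4()}
    print order      # CPython at the time: [2, 1, 4, 3]; strict
                     # left-to-right evaluation would give [1, 2, 3, 4]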
The fix should be as easy as that: diff -u -r2.264 compile.c --- Python/compile.c 3 Oct 2002 09:50:47 -0000 2.264 +++ Python/compile.c 26 Nov 2002 04:02:27 -0000 @@ -1527,9 +1527,9 @@ It wants the stack to look like (value) (dict) (key) */ com_addbyte(c, DUP_TOP); com_push(c, 1); - com_node(c, CHILD(n, i+2)); /* value */ - com_addbyte(c, ROT_TWO); com_node(c, CHILD(n, i)); /* key */ + com_node(c, CHILD(n, i+2)); /* value */ + com_addbyte(c, ROT_THREE); com_addbyte(c, STORE_SUBSCR); com_pop(c, 3); } (compiler module should be fixed as well, as it mimicks that behavior) So I belive it just a matter of deciding what should be done. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From jeremy@alum.mit.edu Tue Nov 26 04:52:34 2002 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Mon, 25 Nov 2002 23:52:34 -0500 Subject: [Python-Dev] Dictionary evaluation order In-Reply-To: <20021126023336.A9339@ibook.distro.conectiva> References: <20021126023336.A9339@ibook.distro.conectiva> Message-ID: <15842.65042.741466.922454@slothrop.zope.com> I think the language shouldn't specify what the order of evaluation here. The current implementation seems just as valid as the proposed change. No one should write code that depends on this order. Jeremy From andymac@bullseye.apana.org.au Mon Nov 25 21:37:57 2002 From: andymac@bullseye.apana.org.au (Andrew MacIntyre) Date: Tue, 26 Nov 2002 07:37:57 +1000 (est) Subject: [Python-Dev] urllib performance issue on FreeBSD 4.x In-Reply-To: <017d01c293b7$debbc0e0$ced241d5@hagrid> Message-ID: On Sun, 24 Nov 2002, Fredrik Lundh wrote: > > > Without this patch, d/l a 4MB file from localhost gets a bit over 110kB/s, > > > with the patch gets 4-5.5MB/s on the same system > > > > > > - why is the socket.fp being set to unbuffered? > > > > I can't make time for a full essay on the issue, but I believe that it > > must be unbuffered because some applications want to read until the > > end of the headers and then pass the file descriptor to a subprocess > > or to code that uses the socket directly. > > sounds like it would be a good idea to provide a subclass (or option) > for applications that don't need that feature. Thanks for the info. I'll add preparing a patch for this to my projects list... -- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac@bullseye.apana.org.au | Snail: PO Box 370 andymac@pcug.org.au | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From guido@python.org Tue Nov 26 06:40:06 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 01:40:06 -0500 Subject: [Python-Dev] Dictionary evaluation order In-Reply-To: Your message of "Tue, 26 Nov 2002 02:33:36 -0200." <20021126023336.A9339@ibook.distro.conectiva> References: <20021126023336.A9339@ibook.distro.conectiva> Message-ID: <200211260640.gAQ6e6916831@pcp02138704pcs.reston01.va.comcast.net> > I was just looking at the bug > > [#448679] Left to right > > It mentions that code like that > > {f1():f2(), f3():f4()} > > Will call these functions in the order f2, f1, f4, f3. What should we > do about it? Tim mentions that "When [Tim] asked Guido about that > some years ago, he agreed it was a bug.". Is it too late to fix it, or > is it still a desirable fix? Hm, there are other situations where it's not so easy to get strict L2R evaluation, e.g. a = {} a[f1()] = f2() It's rather natural to evaluate the RHS first in assignments. Since the dict display is a thinly veiled assignment, I'm not sure it's worth fixing this in the language definition. 
What does Jython do? --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Tue Nov 26 08:45:17 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 26 Nov 2002 09:45:17 +0100 Subject: [Python-Dev] Some bugs in Python? Is this reportable? In-Reply-To: References: Message-ID: "John D." writes: > This Python "feature" has been here for so long, I assumed it was > supposed to work this way... Problem is when a control-c is typed, > it does a CORE DUMP! OpenBSD 3.2, 3.1, 3.0, 2.9, and beyond. i386 > build. Standard PC. Python 2.2.2, 2.2.1, 2.1, and beyond. Standard > build. You can report it, but not here. Please use sf.net/projects/python to report bugs. > >>> pid 1234: Fatal error '_pq_insert_tail: Already in priority queue' at line 196 in file /usr/src/lib/libc_r/uthread/uthread_priority_queue.c (errno = 4) > Abort (core dumped) Notice that this is a failed assertion in the C library of your operating system. It should not be possible for applications (Python or other) to trigger assertions in the C library, so when this happens, it indicates a bug in the system. Python might be using the C library incorrectly, though; please add -D_THREADSAFE to your Python compilation and see whether this changes anything. If this really is a bug in the system, you should configure Python with --disable-threads. Regards, Martin From mwh@python.net Tue Nov 26 09:43:06 2002 From: mwh@python.net (Michael Hudson) Date: Tue, 26 Nov 2002 09:43:06 +0000 (GMT) Subject: [Python-Dev] Re: release22-maint branch broken In-Reply-To: Message-ID: On Mon, 25 Nov 2002, Tim Rice wrote: > Yes, I'm building in /usr/local/src/utils/Python-2 Ack! I backported Fred's code to sysconfig.py, but I forgot to backport my own (!) fix to Fred's code to support this kind of build. What can I say? Oops. Sorry. Can you try CVS? Cheers, M. From martin@v.loewis.de Tue Nov 26 10:02:07 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 26 Nov 2002 11:02:07 +0100 Subject: [Python-Dev] Removing support for little used platforms Message-ID: I'm going to add error messages for all platforms that become unsupported in Python 2.3 really soon. If you have any corrections to the list in PEP 11, please let me know. Regards, Martin From walter@livinglogic.de Tue Nov 26 10:49:35 2002 From: walter@livinglogic.de (=?ISO-8859-15?Q?Walter_D=F6rwald?=) Date: Tue, 26 Nov 2002 11:49:35 +0100 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: <004101c29450$adfe1680$125ffea9@oemcomputer> References: <004101c29450$adfe1680$125ffea9@oemcomputer> <200211251604.gAPG4dn08814@odiug.zope.com> <006f01c2949d$f8edb720$125ffea9@oemcomputer> <200211251632.gAPGWYb08901@odiug.zope.com> Message-ID: <3DE351BF.7070309@livinglogic.de> Guido van Rossum wrote: > >> x = {}.sequpdate(...) > >> > >>which suggests that you're really looking for a different constructor > >>as a class method. > > > >Yes! > > > >Will revise the patch accordingly > >and use Just's suggested name, fromseq(). > > > >Do you prefer the default value to be None or True? > >Earlier discussions on python-dev showed that > >True is more meaningful to some in the context of > >membership testing. OTOH, dict.setvalue and > >dict.get both use None. > > > I think it should be None -- let's be explicit when we want True. And maybe the method should be named fromkeyseq, because there is another constructor that creates the dict from a sequence of items. (Maybe this constructor should be made into a class method fromitemseq?) 
Bye, Walter Dörwald From bckfnn@worldonline.dk Tue Nov 26 11:14:31 2002 From: bckfnn@worldonline.dk (Finn Bock) Date: Tue, 26 Nov 2002 12:14:31 +0100 Subject: [Python-Dev] Dictionary evaluation order References: <20021126023336.A9339@ibook.distro.conectiva> <200211260640.gAQ6e6916831@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DE35797.7020408@worldonline.dk> [Gustavo Niemeyer] >>I was just looking at the bug >> >> [#448679] Left to right >> >>It mentions that code like that >> >> {f1():f2(), f3():f4()} >> >>Will call these functions in the order f2, f1, f4, f3. What should we >>do about it? Tim mentions that "When [Tim] asked Guido about that >>some years ago, he agreed it was a bug.". Is it too late to fix it, or >>is it still a desirable fix? [Guido van Rossum] > ... > What does Jython do? Jython happens to evaluate in order of f1, f2, f3, f4. An accident, I'm sure, of the way the dictionary constructor is called with a sequence of (key, value, key, value, ...). regards, finn From akuchlin@mems-exchange.org Tue Nov 26 12:40:25 2002 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 26 Nov 2002 07:40:25 -0500 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Misc NEWS,1.542,1.543 In-Reply-To: References: Message-ID: <20021126124025.GA24133@ute.mems-exchange.org> On Tue, Nov 26, 2002 at 01:28:07AM -0800, loewis@users.sourceforge.net wrote: >+ - _tkinter now returns Tcl objects, instead of strings. Objects which >+ have Python equivalents are converted to Python objects, other objects >+ are wrapped. This can be configured through the wantobjects method, >+ or Tkinter.want_objects. Will it be confusing to have the method be named wantobjects but the module variable named want_objects? (Glancing at the C code, it seems to use only want_objects as a variable name, so the method name is the only inconsistent name.) --amk (www.amk.ca) EDGAR: The worst is not, so long as we can say "This is the worst." -- _King Lear_, IV, i From guido@python.org Tue Nov 26 13:32:43 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 08:32:43 -0500 Subject: [Python-Dev] Removing support for little used platforms In-Reply-To: Your message of "26 Nov 2002 11:02:07 +0100." References: Message-ID: <200211261332.gAQDWhY08327@pcp02138704pcs.reston01.va.comcast.net> > I'm going to add error messages for all platforms that become > unsupported in Python 2.3 really soon. If you have any corrections to > the list in PEP 11, please let me know. Are any of those platforms listed here? http://www.python.org/download/download_other.html Maybe a note on the website should be made, asking for volunteers to maintain specific platforms? --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 26 13:45:26 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 08:45:26 -0500 Subject: [Python-Dev] Dictionary evaluation order In-Reply-To: Your message of "Tue, 26 Nov 2002 12:14:31 +0100." <3DE35797.7020408@worldonline.dk> References: <20021126023336.A9339@ibook.distro.conectiva> <200211260640.gAQ6e6916831@pcp02138704pcs.reston01.va.comcast.net> <3DE35797.7020408@worldonline.dk> Message-ID: <200211261345.gAQDjQu12129@pcp02138704pcs.reston01.va.comcast.net> > >>It mentions that code like that > >> > >> {f1():f2(), f3():f4()} > >> > >>Will call these functions in the order f2, f1, f4, f3. What should we > >>do about it? Tim mentions that "When [Tim] asked Guido about that > >>some years ago, he agreed it was a bug.". 
Is it too late to fix it, or > >>is it still a desirable fix? > > [Guido van Rossum] > > > ... > > What does Jython do? > > Jython happens to evaluate in order of f1, f2, f3, f4. An accident, I'm > sure, of the way the dictionary constructor is called with a sequence of > (key, value, key, value, ...). In that case, I see no reason to block the fix for this particular case. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Tue Nov 26 14:04:35 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Tue, 26 Nov 2002 15:04:35 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Misc NEWS,1.542,1.543 References: <20021126124025.GA24133@ute.mems-exchange.org> Message-ID: <000601c29554$c0a97e00$fe26e8d9@mira> > Will it be confusing to have the method be named wantobjects but the > module variable named want_objects? (Glancing at the C code, it seems > to use only want_objects as a variable name, so the method name is the only > inconsistent name.) That might well be. I was trying to be consistent with "createfilehandler", "dooneevent", etc. Should I break this consistency, or remove the underscores everywhere else? I really have no bias either way. Regards, Martin From martin@v.loewis.de Tue Nov 26 14:10:28 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Tue, 26 Nov 2002 15:10:28 +0100 Subject: [Python-Dev] Removing support for little used platforms References: <200211261332.gAQDWhY08327@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <001601c29555$92c2bd70$fe26e8d9@mira> > Are any of those platforms listed here? No (unless some of these platforms rely on the unsupported Posix threads drafts, which I doubt). > Maybe a note on the website should be made, asking for volunteers to > maintain specific platforms? I'd really suggest to follow the strategy of PEP 11 here: Trying to build Python on one of these systems produces a build error, which can be easily removed by uncommenting it. People can then decide to not build Python, or offer to volunteer as maintainers for the platform. If nobody volunteers, the code will be removed in Python 2.4. I believe this is more effective than posting a note on the website. Regards, Martin From mwh@python.net Tue Nov 26 14:32:07 2002 From: mwh@python.net (Michael Hudson) Date: 26 Nov 2002 14:32:07 +0000 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib/test test_descr.py,1.161,1.162 In-Reply-To: gvanrossum@users.sourceforge.net's message of "Mon, 25 Nov 2002 13:39:08 -0800" References: Message-ID: <2my97g1jx4.fsf@starship.python.net> gvanrossum@users.sourceforge.net writes: > Update of /cvsroot/python/python/dist/src/Lib/test > In directory sc8-pr-cvs1:/tmp/cvs-serv18947 > > Modified Files: > test_descr.py > Log Message: > A tweaked version of Jeremy's patch #642489, to produce better error > messages about MRO conflicts. (Tweaks here: don't print the message, > but compare it with an expected string.) [...] > + raises(TypeError, "MRO conflict among bases B, A", > + type, "X", (A, B), {}) Unfortunately, the order of the bases in the error message depends on the order they come out of a dict. As we all know, this is not a recipe for happiness... Not sure what to do about this; I'm just going to comment this test out in my checkout so I can get on with my stuff. Shout at me if I check the test in like that! Cheers, M. -- There are two kinds of large software systems: those that evolved from small systems and those that don't work. 
-- Seen on slashdot.org, then quoted by amk From mwh@python.net Tue Nov 26 14:42:30 2002 From: mwh@python.net (Michael Hudson) Date: 26 Nov 2002 14:42:30 +0000 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Modules _localemodule.c,2.35,2.36 In-Reply-To: loewis@users.sourceforge.net's message of "Tue, 26 Nov 2002 01:05:38 -0800" References: Message-ID: <2mu1i4crzd.fsf@starship.python.net> loewis@users.sourceforge.net writes: > Update of /cvsroot/python/python/dist/src/Modules > In directory sc8-pr-cvs1:/tmp/cvs-serv20873/Modules > > Modified Files: > _localemodule.c > Log Message: > Patch #632973: Implement _getdefaultlocale for OS X. I think this patch broke test_strptime: test test_strptime failed -- Traceback (most recent call last): File "/Users/mwh/Source/python/dist/src/Lib/test/test_strptime.py", line 156, in test_returning_RE self.failUnless(_strptime.strptime("1999", strp_output), "Use or re object failed") File "/Users/mwh/Source/python/dist/src/Lib/_strptime.py", line 399, in strptime if format.pattern.find(locale_time.lang) == -1: TypeError: expected a character buffer object locale_time.lang is None. Dunno who's at fault here -- strptime? the test? the locale stuff? -- just being the bearer of bad news. I also have a bunch of mods in my tree, but I'm not expecting this to be them. Cheers, M. -- $ head -n 2 src/bash/bash-2.04/unwind_prot.c /* I can't stand it anymore! Please can't we just write the whole Unix system in lisp or something? */ -- spotted by Rich van der Hoff From tim.one@comcast.net Tue Nov 26 15:04:48 2002 From: tim.one@comcast.net (Tim Peters) Date: Tue, 26 Nov 2002 10:04:48 -0500 Subject: [Python-Dev] _tkinter.c no longer compiles on Windows In-Reply-To: Message-ID: > Modified Files: > _tkinter.c > Log Message: > Patch #518625: Return objects in Tkinter. > > > Index: _tkinter.c > =================================================================== > RCS file: /cvsroot/python/python/dist/src/Modules/_tkinter.c,v > retrieving revision 1.130 > retrieving revision 1.131 > diff -C2 -d -r1.130 -r1.131 > *** _tkinter.c 1 Oct 2002 18:50:56 -0000 1.130 > --- _tkinter.c 26 Nov 2002 09:28:05 -0000 1.131 > *************** > *** 51,57 **** > --- 51,59 ---- > #ifdef TK_FRAMEWORK > #include > + #include > #include > #else > #include > + #include > #include > #endif > *************** tclInt.h doesn't exist in the Tcl/Tk install's Include directory, in either the 8.3.2 or 8.4.1 versions, so _tkinter no longer compiles on Windows. I don't know what the intent is here, so it would be better if someone who does tried to fix this. The only files in the 8.3.2 Include directory are tcl.h tclDecls.h tk.h tkDecls.h tkIntXlibDecls.h 8.4.1 adds two more to that set, which I expect are meant not to be used directly: tclPlatDecls.h tkPlatDecls.h The release Include directories in 8.3.2 and 8.4.1 also contain an X11 subdirectory, but that appears irrelevant. From mwh@python.net Tue Nov 26 15:06:38 2002 From: mwh@python.net (Michael Hudson) Date: 26 Nov 2002 15:06:38 +0000 Subject: [Python-Dev] assigning to new-style-class.__name__ Message-ID: <2mznrwcqv5.fsf@starship.python.net> My (very) recent patch #635933 allows assignment to both __name__ and __bases__ of new-style classes. Given that the code for __bases__ is much more complicated, it's a little odd that __name__ is the one still giving me headaches. It's all to do with dots. An extension type like (e.g.) 
time.struct_time is created with a tp_name of 'time.struct_time' which the accessors for __module__ and __name__ translate thusly: >>> time.struct_time.__name__ 'struct_time' >>> time.struct_time.__module__ 'time' User defined new-style classes _seem_ to behave similary: >>> class C(object): ... pass ... >>> C.__name__ 'C' >>> C.__module__ '__main__' but under the hood it's quite different: tp_name is just "C" and '__module__' is a key in C.__dict__. This shows up when in: >>> C.__name__ = 'C.D' >>> C.__name__ 'D' >>> C.__module__ 'C' which isn't really what I would have expected. What I'd like to do is treat heap types and not-heap types distinctly: For non-heap types, do as we do today: everything in tp_name up to the first dot is __module__, the rest is __name__. You can't change anything here, no worries about that. For heap types, __module__ is always __dict__['__module__'], __name__ is always tp_name (or rather ((etype*)type)->name). Comments? I think this is fine, so long as there aren't heap types that are created by some wierd means that leaves them without "'__modules__' in t.__dict__". (If someone does del t.__dict__['__modules__'] they deserve to lose, but we shouldn't crash. I don't expect this to be a problem). Cheers, M. -- First time I've gotten a programming job that required a drug test. I was worried they were going to say "you don't have enough LSD in your system to do Unix programming". -- Paul Tomblin -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html From guido@python.org Tue Nov 26 15:13:41 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 10:13:41 -0500 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Lib/test test_descr.py,1.161,1.162 In-Reply-To: Your message of "26 Nov 2002 14:32:07 GMT." <2my97g1jx4.fsf@starship.python.net> References: <2my97g1jx4.fsf@starship.python.net> Message-ID: <200211261513.gAQFDfi01846@odiug.zope.com> > > Modified Files: > > test_descr.py > > Log Message: > > A tweaked version of Jeremy's patch #642489, to produce better error > > messages about MRO conflicts. (Tweaks here: don't print the message, > > but compare it with an expected string.) > [...] > > + raises(TypeError, "MRO conflict among bases B, A", > > + type, "X", (A, B), {}) > > Unfortunately, the order of the bases in the error message depends on > the order they come out of a dict. As we all know, this is not a > recipe for happiness... Oh, shit. I didn't review the code well enough. I'll change the test to check that we get *some* message starting with "MRO conflict among bases". --Guido van Rossum (home page: http://www.python.org/~guido/) From tim@zope.com Tue Nov 26 15:23:10 2002 From: tim@zope.com (Tim Peters) Date: Tue, 26 Nov 2002 10:23:10 -0500 Subject: [Python-Dev] Dictionary evaluation order In-Reply-To: <200211261345.gAQDjQu12129@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Note that the primary point of the referenced bug report Left to right is that the Ref Man doesn't really address Python's evaluation order. Guido has said he intended left-to-right, in which case the dict example would be an endcase glitch. I don't believe the language definition currently requires or forbids any specific eval order here, though. 
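As a concrete illustration of the glitch (a minimal tracing sketch; the helper name is invented and the observed order depends on the interpreter and version):

    calls = []

    def note(name, result):
        calls.append(name)
        return result

    d = {note("f1", "k1"): note("f2", "v1"),
         note("f3", "k2"): note("f4", "v2")}
    # An implementation that evaluates each value before its key yields
    # ["f2", "f1", "f4", "f3"]; strict left-to-right evaluation (and,
    # per Finn Bock, Jython at the time) yields ["f1", "f2", "f3", "f4"].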
From tim@multitalents.net Tue Nov 26 15:35:37 2002 From: tim@multitalents.net (Tim Rice) Date: Tue, 26 Nov 2002 07:35:37 -0800 (PST) Subject: [Python-Dev] Re: release22-maint branch broken In-Reply-To: Message-ID: On Tue, 26 Nov 2002, Michael Hudson wrote: > On Mon, 25 Nov 2002, Tim Rice wrote: > > > Yes, I'm building in /usr/local/src/utils/Python-2 > > Ack! I backported Fred's code to sysconfig.py, but I forgot to backport > my own (!) fix to Fred's code to support this kind of build. > > What can I say? Oops. Sorry. These things happen. :-) > > Can you try CVS? That works. > > Cheers, > M. > -- Tim Rice Multitalents (707) 887-1469 tim@multitalents.net From akuchlin@mems-exchange.org Tue Nov 26 15:50:00 2002 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 26 Nov 2002 10:50:00 -0500 Subject: [Python-Dev] Sanity-check wanted: bug #641685 Message-ID: I have a patch (setup.patch) attached to bug #641685 that removes some code duplication in Python's setup.py, which has its own find_library_file() function instead of using the CCompiler.find_library_file() method offered by Distutils. Can someone please review the patch before I check it in? --amk (www.amk.ca) LEAR: Vengeance! plague! death! confusion! -- _King Lear_, II, iv From akuchlin@mems-exchange.org Tue Nov 26 15:52:58 2002 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 26 Nov 2002 10:52:58 -0500 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Misc NEWS,1.542,1.543 In-Reply-To: <000601c29554$c0a97e00$fe26e8d9@mira> References: <20021126124025.GA24133@ute.mems-exchange.org> <000601c29554$c0a97e00$fe26e8d9@mira> Message-ID: <20021126155258.GA24880@ute.mems-exchange.org> On Tue, Nov 26, 2002 at 03:04:35PM +0100, Martin v. L?wis wrote: >That might well be. I was trying to be consistent with >"createfilehandler", >"dooneevent", etc. Should I break this consistency, or remove the >underscores >everywhere else? I really have no bias either way. Ah, OK. Consistency with the rest of the Tkinter module is most important, so wantobjects() makes the most sense as the method name. --amk (www.amk.ca) HAMLET: O my prophetic soul! -- _Hamlet_, I, v From aahz@pythoncraft.com Tue Nov 26 16:02:46 2002 From: aahz@pythoncraft.com (Aahz) Date: Tue, 26 Nov 2002 11:02:46 -0500 Subject: [Python-Dev] Removing support for little used platforms In-Reply-To: <001601c29555$92c2bd70$fe26e8d9@mira> References: <200211261332.gAQDWhY08327@pcp02138704pcs.reston01.va.comcast.net> <001601c29555$92c2bd70$fe26e8d9@mira> Message-ID: <20021126160246.GA3256@panix.com> On Tue, Nov 26, 2002, Martin v. Löwis wrote: > > I'd really suggest to follow the strategy of PEP 11 here: Trying to > build Python on one of these systems produces a build error, which can > be easily removed by uncommenting it. People can then decide to not > build Python, or offer to volunteer as maintainers for the platform. > > If nobody volunteers, the code will be removed in Python 2.4. > > I believe this is more effective than posting a note on the website. +1 (speaking as one webmaster, but not speaking for all webmasters) -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "If you don't know what your program is supposed to do, you'd better not start writing it." 
--Dijkstra From niemeyer@conectiva.com Tue Nov 26 16:09:02 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Tue, 26 Nov 2002 14:09:02 -0200 Subject: [Python-Dev] Dictionary evaluation order In-Reply-To: References: <200211261345.gAQDjQu12129@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021126140902.C11820@ibook.distro.conectiva> > Note that the primary point of the referenced bug report > > Left to right > > is that the Ref Man doesn't really address Python's evaluation order. Yes, I'm aware about that. I'd just like to fix that, or to document the exception, together with the evaluation order documentation. > Guido has said he intended left-to-right, in which case the dict > example would be an endcase glitch. I don't believe the language > definition currently requires or forbids any specific eval order here, > though. Ok. I can take the following conclusions then. Please, correct me if I'm wrong. - everyone seems to agree that the current behavior is not set in stone; - no one should expect the current behavior, as it is not documented, and not good to expect such language behavior anyway; - having the L2R order where possible would be good and was originaly intended; - Jython already uses L2R in that case as well; Based on that, I'll write a suggested solution including documentation, and post for review. Thanks everyone. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From guido@python.org Tue Nov 26 16:41:06 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 11:41:06 -0500 Subject: [Python-Dev] Currently baking idea for dict.sequpdate(iterable, value=True) In-Reply-To: Your message of "Tue, 26 Nov 2002 11:49:35 +0100." <3DE351BF.7070309@livinglogic.de> References: <004101c29450$adfe1680$125ffea9@oemcomputer> <200211251604.gAPG4dn08814@odiug.zope.com> <006f01c2949d$f8edb720$125ffea9@oemcomputer> <200211251632.gAPGWYb08901@odiug.zope.com> <3DE351BF.7070309@livinglogic.de> Message-ID: <200211261641.gAQGf6L13412@odiug.zope.com> > And maybe the method should be named fromkeyseq, because there > is another constructor that creates the dict from a sequence > of items. +1 > (Maybe this constructor should be made into a > class method fromitemseq?) Too late -- it's already dict() in Python 2.2. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Tue Nov 26 16:43:33 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 11:43:33 -0500 Subject: [Python-Dev] assigning to new-style-class.__name__ In-Reply-To: Your message of "26 Nov 2002 15:06:38 GMT." <2mznrwcqv5.fsf@starship.python.net> References: <2mznrwcqv5.fsf@starship.python.net> Message-ID: <200211261643.gAQGhXF13853@odiug.zope.com> > >>> C.__name__ = 'C.D' > >>> C.__name__ > 'D' > >>> C.__module__ > 'C' > > which isn't really what I would have expected. In fact, this makes the feature useless (its main use is to "fix" the name of nested classes). > What I'd like to do is treat heap types and not-heap types distinctly: > > For non-heap types, do as we do today: everything in tp_name up to the > first dot is __module__, the rest is __name__. You can't change > anything here, no worries about that. > > For heap types, __module__ is always __dict__['__module__'], __name__ > is always tp_name (or rather ((etype*)type)->name). +1 > Comments? I think this is fine, so long as there aren't heap types > that are created by some wierd means that leaves them without > "'__modules__' in t.__dict__". 
> > (If someone does > > del t.__dict__['__modules__'] > > they deserve to lose, but we shouldn't crash. I don't expect this to > be a problem). Correct. --Guido van Rossum (home page: http://www.python.org/~guido/) From noah@noah.org Tue Nov 26 18:06:28 2002 From: noah@noah.org (Noah Spurrier) Date: Tue, 26 Nov 2002 10:06:28 -0800 Subject: [Python-Dev] Expect in python Message-ID: <000301c29580$27a5e050$5901a8c0@hal> > Lance Ellinghaus writes: > > > Yes. I have been using it for a while. It works very well, except > > to make it run on Solaris you have to make modifications to the posix > > module. I submitted the necessary changes, but they were denied since > > I made them Solaris specific. The module changes are in the submitted > > patches on SourceForge under the Python project. > > If these changes are what I think they are, I know how to implement > them generically. What was the patch number? > > zw Keep me in the loop on these changes. I have been using Source Forge's Compile Farm to test Pexpect on different platforms. So far, the most troublesome platform has been Solaris. This is bad because I think that Solaris is strategically a very important platform to support. Lance, I tried a copy of Python patched with the changes you sent me a long time ago, but I ran into a lot of problems. It built fine and would allow me to use the pty module, but it raised lots of exceptions that I was not clueful enough to track down. Zack, I would be happy to work with you or Lance to test changes necessary to make the pty module work on Solaris. ... There were also some small problem on OS X, but I am not sure if these problems were similar to the problems I had with Solaris. I noted these problems in the BUGS section of the Pexpect page on sourceforge. Yours, Noah From martin@v.loewis.de Tue Nov 26 21:37:03 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 26 Nov 2002 22:37:03 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Modules _localemodule.c,2.35,2.36 In-Reply-To: <2mu1i4crzd.fsf@starship.python.net> References: <2mu1i4crzd.fsf@starship.python.net> Message-ID: Michael Hudson writes: > I think this patch broke test_strptime: Are you sure? If I back out this patch, it still fails. I cannot see how the test could have ever worked on OS X. Regards, Martin From guido@python.org Tue Nov 26 21:51:57 2002 From: guido@python.org (Guido van Rossum) Date: Tue, 26 Nov 2002 16:51:57 -0500 Subject: [Python-Dev] tclInt.h? Message-ID: <200211262151.gAQLpvI24569@odiug.zope.com> _tkinter.c now includes . I don't seem to have this file, even though I installed Tcl/Tk 8.4.1. Thus, the _tkinter won't build. How can this be? --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Tue Nov 26 22:15:04 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 26 Nov 2002 23:15:04 +0100 Subject: [Python-Dev] tclInt.h? In-Reply-To: <200211262151.gAQLpvI24569@odiug.zope.com> References: <200211262151.gAQLpvI24569@odiug.zope.com> Message-ID: Guido van Rossum writes: > _tkinter.c now includes . I don't seem to have this file, > even though I installed Tcl/Tk 8.4.1. Thus, the _tkinter won't > build. How can this be? On SuSE, it is installed as part of the tcl-devel package. It is an internal header, though, so it appears not to be universally available. I need it to implement the type checks, as it declares symbols like tclBooleanType. Unfortunately, Tcl developers appear to consider this symbol internal. 
Fortunately, they offer a lookup-by-name operation to find types, which is now used in _tkinter.c 1.133. For efficiency, I cache the lookup of the relevant types. Regards, Martin From martin@v.loewis.de Tue Nov 26 22:15:53 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 26 Nov 2002 23:15:53 +0100 Subject: [Python-Dev] _tkinter.c no longer compiles on Windows In-Reply-To: References: Message-ID: Tim Peters writes: > I don't know what the intent is here, so it would be better if someone who > does tried to fix this. The intent was to get tclBooleanType. I have now worked around the problem that tclInt.h is sometimes not installed. Regards, Martin From tim.one@comcast.net Tue Nov 26 23:09:46 2002 From: tim.one@comcast.net (Tim Peters) Date: Tue, 26 Nov 2002 18:09:46 -0500 Subject: [Python-Dev] _tkinter.c no longer compiles on Windows In-Reply-To: Message-ID: [MvL] > The intent was to get tclBooleanType. I have now worked around the > problem that tclInt.h is sometimes not installed. Thanks! The workaround worked around it here. Happy Thanksgiving! (That's a worldwide holiday, BTW -- if it's not celebrated in Germany, you should protest vigorously.) From aahz@pythoncraft.com Tue Nov 26 23:50:37 2002 From: aahz@pythoncraft.com (Aahz) Date: Tue, 26 Nov 2002 18:50:37 -0500 Subject: [Python-Dev] Talking Turkey In-Reply-To: References: Message-ID: <20021126235037.GA13452@panix.com> On Tue, Nov 26, 2002, Tim Peters wrote: > > Thanks! The workaround worked around it here. Happy Thanksgiving! > (That's a worldwide holiday, BTW -- if it's not celebrated in Germany, > you should protest vigorously.) It's celebrated in Canada -- just on a different day. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "If you don't know what your program is supposed to do, you'd better not start writing it." --Dijkstra From tdelaney@avaya.com Tue Nov 26 23:56:17 2002 From: tdelaney@avaya.com (Delaney, Timothy) Date: Wed, 27 Nov 2002 10:56:17 +1100 Subject: [Python-Dev] Talking Turkey Message-ID: > From: Aahz [mailto:aahz@pythoncraft.com] > > On Tue, Nov 26, 2002, Tim Peters wrote: > > > > Thanks! The workaround worked around it here. Happy Thanksgiving! > > (That's a worldwide holiday, BTW -- if it's not celebrated > in Germany, you should protest vigorously.) > > It's celebrated in Canada -- just on a different day. We'll start celebrating Thanksgiving the day you guys start celebrating Australia Day ... ;) Tim Delaney From greg@cosc.canterbury.ac.nz Wed Nov 27 00:01:39 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 27 Nov 2002 13:01:39 +1300 (NZDT) Subject: [Python-Dev] Talking Turkey In-Reply-To: <20021126235037.GA13452@panix.com> Message-ID: <200211270001.gAR01dv28395@kuku.cosc.canterbury.ac.nz> On Tue, Nov 26, 2002, Tim Peters wrote: > > Happy Thanksgiving! > (That's a worldwide holiday, BTW -- if it's not celebrated in Germany, > you should protest vigorously.) It's not celebrated in New Zealand. Are we being cheated? Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. 
| greg@cosc.canterbury.ac.nz +--------------------------------------+ From greg@cosc.canterbury.ac.nz Wed Nov 27 00:06:45 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 27 Nov 2002 13:06:45 +1300 (NZDT) Subject: [Python-Dev] Talking Turkey In-Reply-To: Message-ID: <200211270006.gAR06jw28400@kuku.cosc.canterbury.ac.nz> "Delaney, Timothy" : > We'll start celebrating Thanksgiving the day you guys start celebrating > Australia Day ... ;) Hmmm, on that basis, I suppose you'll have to start celebrating Waitangi Day before we get Thanksgiving. You probably wouldn't enjoy it... hoardes of Maori protesters marching on Washington DC would be a bit of a nuisance... Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+ From tim.one@comcast.net Wed Nov 27 00:22:14 2002 From: tim.one@comcast.net (Tim Peters) Date: Tue, 26 Nov 2002 19:22:14 -0500 Subject: [Python-Dev] Talking Turkey In-Reply-To: Message-ID: [Delaney, Timothy] > We'll start celebrating Thanksgiving the day you guys start celebrating > Australia Day ... ;) You should start the day before! A transplated friend from Israel told me, after about 10 years in the US, that Thanksgiving had become her favorite of all holidays, anywhere -- because it favors no nationality, religion, social class, culture, or, umm, programming language. Everyone is welcome at Thanksgiving! How many other holidays can say that? It's a day to sit around getting increasingly bitter, as you wonder why on Earth you should be thankful for the rotten hand life has dealt you. The great thing is that everyone else is thinking the same thing, and great relief follows after the group rips a turkey apart with their bare hands and (I hope this doesn't shock you) *eats* it! It's barbaric, I know, but all in all it's cheaper than bombing Iran or Iraq, and has much the same cathartic effect. Offhand, I doubt that eating an Australian instead would go over nearly as well here, so I think you'd best keep Australia Day where it belongs. From tdelaney@avaya.com Wed Nov 27 00:47:39 2002 From: tdelaney@avaya.com (Delaney, Timothy) Date: Wed, 27 Nov 2002 11:47:39 +1100 Subject: [Python-Dev] Talking Turkey Message-ID: > group rips a turkey apart with their bare hands and (I hope > this doesn't > shock you) *eats* it! It's barbaric, I know, but all in all I don't like turkey. Do you mind if I have a lamb roast instead? Tim Delaney From skip@pobox.com Wed Nov 27 01:20:40 2002 From: skip@pobox.com (Skip Montanaro) Date: Tue, 26 Nov 2002 19:20:40 -0600 Subject: [Python-Dev] Talking Turkey In-Reply-To: <200211270001.gAR01dv28395@kuku.cosc.canterbury.ac.nz> References: <20021126235037.GA13452@panix.com> <200211270001.gAR01dv28395@kuku.cosc.canterbury.ac.nz> Message-ID: <15844.7656.412999.169910@montanaro.dyndns.org> >> (That's a worldwide holiday, BTW -- if it's not celebrated in >> Germany, you should protest vigorously.) Greg> It's not celebrated in New Zealand. Are we being cheated? Yeah, if you enjoy consuming massive quantities of turkey, stuffing, cranberry sauce, pumpkin pie and lying about on the sofa like a beached whale watching American football. Did I forget to mention several days of turkey sandwiches afterwards? 
;-) -- Skip Montanaro - skip@pobox.com http://www.mojam.com/ http://www.musi-cal.com/ From niemeyer@conectiva.com Wed Nov 27 04:47:18 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Wed, 27 Nov 2002 02:47:18 -0200 Subject: [Python-Dev] Re: tarfile module (update) In-Reply-To: <200211270111.57651.lars@gustaebel.de> References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> Message-ID: <20021127024717.A2333@ibook.distro.conectiva> Hello again Lars! > There is a two-level public API to tarfile. The low-level API which is > the TarFile class with its __init__(), and the high-level API which is > the module-level open() function. [...] Ok, but I still belive that the default constructor should be the most common used interface, just like most (all) of the standard library. Even the default open is now an alias for the 'file' type constructor. If you want to offer a low level access scheme (what is nice), you could just move the current __init__() function to some other name (init, whatever), and leave for those advanced users to subclass TarFile, and change its interface. > The low-level API has not changed since the first days of tarfile and > is thought for those (possibly rare) users who know what's going on > and want to do very special things with the class, inconveniently but > as versatilely as possible. I'm a little bit sceptic about how many advanced users are already using the lowlevel interface offered by TarFile. Right now, I don't think this should block us from doing an enhancement. > The high-level API is remaining backwards compatible since at least > the last 4 months, although almost constantly features were added to > it and the internals were turned upside down several times. The > high-level API is there for all users who want a straight-forward API > that offers common solutions. The fact that there are several > classmethods now can IMO be ignored, as long as nothing has changed > for the user - he still has the choice between the two APIs. If the > TarFile.open constructor replaces the __init__ constructor, the choice > between the two is gone. I don't have anything against classmethods. I just don't see a need for them in that case. > The TarFile.open constructor itself is actually no constructor. > Depending on which compression it shall use or if it must figure out > the compression by itself, it decides which constructor to use. This > is IMO exactly one benefit that comes with classmethods - you can use > several different constructors for the same class. I don't see this as > a disadvantage, but rather as a nice example on what classmethods are > good for. There are certainly other possible solutions for this task, > but I think I chose a rather obvious one. > > BTW, I think the most used interface is this: > > import tarfile > tar = tarfile.open(...) > > One simply doesn't have to bother if tarfile.open is a function, a > class or a classmethod. I'm not sure, as "from tarfile import *" will probably be a common idiom as well, and tarfile.open can't be exported in that case, as discussed before. Lars, as that's just my opinion, I'm forwarding that message to python-dev so that people there can give their opinions about this as well. Perhaps I'm just nit picking. Thank you for discussing that. 
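The pattern under debate boils down to something like the following sketch (the class names are invented and this is not the real tarfile code; the point is only the module-level open() factory):

    class PlainTarArchive(object):
        def __init__(self, name, mode="r"):
            self.name, self.mode = name, mode

    class GzipTarArchive(PlainTarArchive):
        pass   # would wrap the plain reader in gzip decompression

    class Bzip2TarArchive(PlainTarArchive):
        pass   # would wrap the plain reader in bz2 decompression

    _IMPLEMENTATIONS = {"": PlainTarArchive,
                        "gz": GzipTarArchive,
                        "bz2": Bzip2TarArchive}

    def open(name, mode="r", compression=""):
        # Module-level factory: callers never need to know which
        # concrete class (or classmethod) does the work.
        return _IMPLEMENTATIONS[compression](name, mode)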
-- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From lomax@pumpichank.com Wed Nov 27 04:59:25 2002 From: lomax@pumpichank.com (Frank "Bird Buzz" Lomax) Date: Tue, 26 Nov 2002 23:59:25 -0500 Subject: [Python-Dev] Talking Turkey References: <20021126235037.GA13452@panix.com> <200211270001.gAR01dv28395@kuku.cosc.canterbury.ac.nz> <15844.7656.412999.169910@montanaro.dyndns.org> Message-ID: <15844.20781.252352.719568@gargle.gargle.HOWL> > Yeah, if you enjoy consuming massive quantities of turkey, > stuffing, cranberry sauce, pumpkin pie and lying about on the > sofa like a beached whale watching American football. Did I > forget to mention several days of turkey sandwiches afterwards? > ;-) I quiver with anticipation of this holiday all year, even though I know it's a Franklin Commission conspiracy to elevate the Disrespected Fowl to its rightful place in the pecking order of nationally important birds. Ben was right about the gorgeous turkey, dripping in its own goo, abnormally well-endowed but easily tricked into a gape-mouthed, rain-stare suicide. Unfortunately, Ben was still reeling from his kite "experiment" when he lost the slapfight to Tommy J. so we get the eagle. But Ben has had the last laugh, hasn't he? My one word of advice is to bring a small glass jar and an air-tight lid with you to your in-laws. Use a sliced paper straw to collect the tryptophan tears you will shed, belly taut, as you lament your favorite team's annual embarrassing loss in front of a similarly dazed and drooling national audience. Those tears, mixed with top eighth inch of coagulated sheen from the marshallow saturated yams, should be slow baked in Ron Popeil's greatest invention, the ST5000 Rotisserie and BBQ for 12 hours. This concentrated elixer will then provide haunting dreams of flightless squawking for months after the thrill of mayo, breast (turkey) and dough have subsided, and your body has become ever more craving of the "turkey trip". Two drops under your tongue, a cranberry up each nostril, and it's seven hours of non-stop slow motion gobbling. Purer joy cannot be known. pardon-me-ly y'rs, -frank From niemeyer@conectiva.com Wed Nov 27 04:59:50 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Wed, 27 Nov 2002 02:59:50 -0200 Subject: [Python-Dev] Re: tarfile module (update) In-Reply-To: <20021127024717.A2333@ibook.distro.conectiva> References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> <20021127024717.A2333@ibook.distro.conectiva> Message-ID: <20021127025950.A2583@ibook.distro.conectiva> > From: Gustavo Niemeyer > To: Lars@libretto.niemeyer.net, > Gustäbel @libretto.niemeyer.net > Cc: python-dev@python.org Argh.. mutt doesn't deal very well with accents in names. Sorry. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From bac@OCF.Berkeley.EDU Wed Nov 27 06:53:48 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Tue, 26 Nov 2002 22:53:48 -0800 (PST) Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Modules _localemodule.c,2.35,2.36 In-Reply-To: Message-ID: [Martin v. Loewis] > Michael Hudson writes: > > > I think this patch broke test_strptime: > > Are you sure? If I back out this patch, it still fails. I cannot see > how the test could have ever worked on OS X. 
> It actually did since I wrote the module (coded the thing under OS X)i; still does with my slightly old CVS checkout:: >>> locale.getdefaultlocale() ['en_US', 'ISO8859-1'] Beats me why it works (I get the test_locale failure just like everyone else). This bug was actually first reported back in the thread about FreeBSD 4.4 and most recently when Debian unstable's Python broke. Patch #639112 fixes this along with the other problem that FreeBSD 4.4 brought up (same timezone names; e.g. ('EST', 'EST')). So there is a fix and it is ready to be checked in. -Brett From martin@v.loewis.de Wed Nov 27 08:33:35 2002 From: martin@v.loewis.de (Martin v. Loewis) Date: 27 Nov 2002 09:33:35 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Modules _localemodule.c,2.35,2.36 In-Reply-To: References: Message-ID: Brett Cannon writes: > It actually did since I wrote the module (coded the thing under OS X)i; > still does with my slightly old CVS checkout:: > > >>> locale.getdefaultlocale() > ['en_US', 'ISO8859-1'] > > Beats me why it works (I get the test_locale failure just like everyone > else). You do need to set LANG to make this test pass, right? > This bug was actually first reported back in the thread about FreeBSD 4.4 > and most recently when Debian unstable's Python broke. Patch #639112 > fixes this along with the other problem that FreeBSD 4.4 brought up > (same timezone names; e.g. ('EST', 'EST')). So there is a fix and it is > ready to be checked in. Since it fixes it for OS X as well, I've applied this patch. Thanks! Martin From fredrik@pythonware.com Wed Nov 27 09:08:11 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Wed, 27 Nov 2002 10:08:11 +0100 Subject: [Python-Dev] Re: tarfile module (update) References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> <20021127024717.A2333@ibook.distro.conectiva> Message-ID: <0e1e01c295f4$83fe38b0$0900a8c0@spiff> Gustavo Niemeyer wrote: > > There is a two-level public API to tarfile. The low-level API which = is > > the TarFile class with its __init__(), and the high-level API which = is > > the module-level open() function. > [...] >=20 > Ok, but I still belive that the default constructor should be the most > common used interface, just like most (all) of the standard library. using factory functions to create objects representing external entities is an extremely common pattern. in the Pythobn library, this pattern is used in aifc, anydbm, audiodev, dbhash, dumbdbm (and all other dbm modules), fileinput, gettext, gopherlib, gzip, imghdr, optparse, popen2, shelve, sndhdr, socket, sunau, sunaudio, tempfile, tokenize, just to name a few. to figure out *why* it's a good idea to use a factory function, think as a user. or as a library maintainer. (yes, PIL's using it too) From mwh@python.net Wed Nov 27 10:51:44 2002 From: mwh@python.net (Michael Hudson) Date: Wed, 27 Nov 2002 10:51:44 +0000 (GMT) Subject: [Python-Dev] assigning to new-style-class.__name__ In-Reply-To: <200211261802.gAQI2sb17235@odiug.zope.com> Message-ID: I'm sending this to python-dev as well. On Tue, 26 Nov 2002, Guido van Rossum wrote: ["> >" and "> > >" are both me] > > > Thinking about it, I'm expecting an exception that gets set to stay set > > > for quite a long time, and that may not be justified. Can see a way past > > > that. > > > > This is still a valid point; I need to test a class with a metaclass with > > a raising .mro()... 
> > Yes, I don't understand why you don't simply drop out when you set > r = -1. Well, currently I'm striving for leaving the system in a state of minimal inconsistency. There are problems with what I have now, though. Possibly the best solution is if any .mro() fails to abandon the whole procedure and leave things exactly as they are. The difficulty with this is that I'd have to keep a list of which subclasses have had their mro's frobbed so I can unfrob them if a later subclasses .mro() fails. This is just a matter of programming though. How much I care about this would be influenced by an answer to the following problem: In the following inheritance diagram: ... ... ... \ / / \ / / C D \ / \ / E is it possible to rearrange the __bases__ of C in a way that doesn't create a conflict for C but does for E? I haven't thought about MRO calculations at all, I'm afraid. Cheers, M. From niemeyer@conectiva.com Wed Nov 27 12:17:12 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Wed, 27 Nov 2002 10:17:12 -0200 Subject: [Python-Dev] Re: tarfile module (update) In-Reply-To: <0e1e01c295f4$83fe38b0$0900a8c0@spiff> References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> <20021127024717.A2333@ibook.distro.conectiva> <0e1e01c295f4$83fe38b0$0900a8c0@spiff> Message-ID: <20021127101712.A3565@ibook.distro.conectiva> > using factory functions to create objects representing external > entities is an extremely common pattern. > > in the Pythobn library, this pattern is used in aifc, anydbm, audiodev, > dbhash, dumbdbm (and all other dbm modules), fileinput, gettext, > gopherlib, gzip, imghdr, optparse, popen2, shelve, sndhdr, socket, > sunau, sunaudio, tempfile, tokenize, just to name a few. Perhaps I haven't explained it right. I was trying to tell that using a default constructor would be more obvious than having a methodclass "constructor" TarFile.open() which will be used 99.9% of the time. I don't see this pattern in any of the modules you mention above. > to figure out *why* it's a good idea to use a factory function, think > as a user. or as a library maintainer. That helped a lot. Thank you. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From pedronis@bluewin.ch Wed Nov 27 12:23:48 2002 From: pedronis@bluewin.ch (Samuele Pedroni) Date: Wed, 27 Nov 2002 13:23:48 +0100 Subject: [Python-Dev] assigning to new-style-class.__name__ References: Message-ID: <001d01c2960f$ebcc3620$6d94fea9@newmexico> From: "Michael Hudson" > ... ... ... > \ / / > \ / / > C D > \ / > \ / > E > > is it possible to rearrange the __bases__ of C in a way that doesn't > create a conflict for C but does for E? I haven't thought about MRO > calculations at all, I'm afraid. > Yes, if A B (in this order in the mro) are bases of C, and also bases of D and you swap them in C (but not D) then E's mro will become subject to an order disagreement. I haven't looked at the code, but if it checks directly for the consistency of E's mro when you change C's bases, then there is no way to move from a hierarchy where A precedes B in the mros to one where the two are swapped, although the second would be legal if constructed anew piecewise from superclasses down to subclasses. So maybe the mros of the subclasses should be computed lazily when needed (e.g. onthe first - after the changes - dispatch), although this may produce inconsistences and errors at odd times. 
Maybe the code is already doing that? regards. From fredrik@pythonware.com Wed Nov 27 13:06:03 2002 From: fredrik@pythonware.com (Fredrik Lundh) Date: Wed, 27 Nov 2002 14:06:03 +0100 Subject: [Python-Dev] Re: tarfile module (update) References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> <20021127024717.A2333@ibook.distro.conectiva> <0e1e01c295f4$83fe38b0$0900a8c0@spiff> <20021127101712.A3565@ibook.distro.conectiva> Message-ID: <0f6801c29615$beca16f0$0900a8c0@spiff> Gustavo Niemeyer wrote: > > using factory functions to create objects representing external > > entities is an extremely common pattern. > >=20 > > in the Pythobn library, this pattern is used in aifc, anydbm, = audiodev, > > dbhash, dumbdbm (and all other dbm modules), fileinput, gettext, > > gopherlib, gzip, imghdr, optparse, popen2, shelve, sndhdr, socket, > > sunau, sunaudio, tempfile, tokenize, just to name a few. >=20 > Perhaps I haven't explained it right. I was trying to tell that using = a > default constructor would be more obvious than having a methodclass > "constructor" TarFile.open() which will be used 99.9% of the time. the mail you replied to talked about a module-level open() function, not a class method. > There is a two-level public API to tarfile. The low-level API = which is > the TarFile class with its __init__(), and the high-level API = which is > the module-level open() function From mwh@python.net Wed Nov 27 13:11:28 2002 From: mwh@python.net (Michael Hudson) Date: 27 Nov 2002 13:11:28 +0000 Subject: [Python-Dev] assigning to new-style-class.__name__ In-Reply-To: "Samuele Pedroni"'s message of "Wed, 27 Nov 2002 13:23:48 +0100" References: <001d01c2960f$ebcc3620$6d94fea9@newmexico> Message-ID: <2msmxntawv.fsf@starship.python.net> "Samuele Pedroni" writes: > From: "Michael Hudson" > > > > > ... ... ... > > \ / / > > \ / / > > C D > > \ / > > \ / > > E > > > > is it possible to rearrange the __bases__ of C in a way that doesn't > > create a conflict for C but does for E? I haven't thought about MRO > > calculations at all, I'm afraid. > > > > Yes, if A B (in this order in the mro) are bases of C, and also bases of D and > you swap them in C (but not D) then E's mro will become subject to an order > disagreement. Yes, that's quite obvious, isn't it? Fortunately: >>> class A(object): ... pass ... >>> class B(object): ... pass ... >>> class C(A,B): ... pass ... >>> class D(A,B): ... pass ... >>> class E(C, D): ... pass ... >>> C.__bases__ (, ) >>> C.__bases__ = (B, A) Traceback (most recent call last): File "", line 1, in ? TypeError: MRO conflict among bases A, B The error message isn't the greatest, but things seem to be behaving. > I haven't looked at the code, but if it checks directly for the consistency of > E's mro when you change C's bases, then there is no way to move from a > hierarchy where A precedes B in the mros to one where the two are swapped, > although the second would be legal if constructed anew piecewise from > superclasses down to subclasses. Hmm, I hadn't thought about that. > So maybe the mros of the subclasses should be computed lazily when needed (e.g. > onthe first - after the changes - dispatch), although this may produce > inconsistences and errors at odd times. This makes me feel queasy... currently (at least in my tree -- I need to write some tests before checkin) the code tries really quite hard to ensure that the system is always in a consistent state. 
Do you (or anyone else) know what CL or Dylan or other dynamic MI languages do about this? > Maybe the code is already doing that? No, and I'm unconvinced it should. We're allowing something that's not currently allowed so I feel I have the right to be restrictive. I would mention this restriction in the docs, if the were any... All *I* want assignment to __bases__ for is to swap out one class for another -- making instances of the old class instances of the new class, which was possible and making subclasses of the old subclasses of the new, which wasn't. In the fairly simple cases I'm envisioning I'm not going to run into mro conflicts. Cheers, M. -- 39. Re graphics: A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From mwh@python.net Wed Nov 27 13:20:02 2002 From: mwh@python.net (Michael Hudson) Date: 27 Nov 2002 13:20:02 +0000 Subject: [Python-Dev] assigning to new-style-class.__name__ In-Reply-To: Michael Hudson's message of "27 Nov 2002 13:11:28 +0000" References: <001d01c2960f$ebcc3620$6d94fea9@newmexico> <2msmxntawv.fsf@starship.python.net> Message-ID: <2mof8btail.fsf@starship.python.net> Michael Hudson writes: > "Samuele Pedroni" writes: > > > I haven't looked at the code, but if it checks directly for the consistency of > > E's mro when you change C's bases, then there is no way to move from a > > hierarchy where A precedes B in the mros to one where the two are swapped, Yes there is! With A thru E as in my previous mail: C.__bases__ = (A,) D.__bases__ = (B, A) C.__bases__ = (B, A) Now there are situations where this can probably cause difficulties, but that's always going to be possible... > Do you (or anyone else) know what CL or Dylan or other dynamic MI > languages do about this? It doesn't seem CL (even with the MOP) allows dynamic rearrangement of bases. So I can't nick their ideas :-/ Cheers, M. -- Famous remarks are very seldom quoted correctly. -- Simeon Strunsky From mwh@python.net Wed Nov 27 13:25:02 2002 From: mwh@python.net (Michael Hudson) Date: 27 Nov 2002 13:25:02 +0000 Subject: [Python-Dev] testing question Message-ID: <2mk7iztaa9.fsf@starship.python.net> There's a buglet in my assignable __bases__ code that can (in pretty obscure situations) lead to Python code getting executed with an exception pending. I have a fix, but I'd like a test -- can anyone think of a way of testing for this? Cheers, M. -- GAG: I think this is perfectly normal behaviour for a Vogon. ... VOGON: That is exactly what you always say. GAG: Well, I think that is probably perfectly normal behaviour for a psychiatrist. -- The Hitch-Hikers Guide to the Galaxy, Episode 9 From niemeyer@conectiva.com Wed Nov 27 13:30:15 2002 From: niemeyer@conectiva.com (Gustavo Niemeyer) Date: Wed, 27 Nov 2002 11:30:15 -0200 Subject: [Python-Dev] Re: tarfile module (update) In-Reply-To: <0f6801c29615$beca16f0$0900a8c0@spiff> References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> <20021127024717.A2333@ibook.distro.conectiva> <0e1e01c295f4$83fe38b0$0900a8c0@spiff> <20021127101712.A3565@ibook.distro.conectiva> <0f6801c29615$beca16f0$0900a8c0@spiff> Message-ID: <20021127113014.A4484@ibook.distro.conectiva> > the mail you replied to talked about a module-level open() function, > not a class method. You're right. 
Rereading my mail, I mention class methods, but it was not clear that I was talking about tarfile.TarFile.open(), not tarfile.open(). Sorry. -- Gustavo Niemeyer [ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ] From guido@python.org Wed Nov 27 13:33:19 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 27 Nov 2002 08:33:19 -0500 Subject: [Python-Dev] Re: tarfile module (update) In-Reply-To: Your message of "Wed, 27 Nov 2002 10:08:11 +0100." <0e1e01c295f4$83fe38b0$0900a8c0@spiff> References: <200211141847.34336.lars@gustaebel.de> <200211261656.08362.lars@gustaebel.de> <20021126143121.A14839@ibook.distro.conectiva> <200211270111.57651.lars@gustaebel.de> <20021127024717.A2333@ibook.distro.conectiva> <0e1e01c295f4$83fe38b0$0900a8c0@spiff> Message-ID: <200211271333.gARDXKM23617@pcp02138704pcs.reston01.va.comcast.net> > using factory functions to create objects representing external > entities is an extremely common pattern. > > in the Pythobn library, this pattern is used in aifc, anydbm, audiodev, > dbhash, dumbdbm (and all other dbm modules), fileinput, gettext, > gopherlib, gzip, imghdr, optparse, popen2, shelve, sndhdr, socket, > sunau, sunaudio, tempfile, tokenize, just to name a few. > > to figure out *why* it's a good idea to use a factory function, think > as a user. or as a library maintainer. Actually, most of those cases (not all) are factory functions because the type created is implemented in C, and until Python 2.2 the only way to create an instance of such a type was a factory function. Note that in 2.2, socket.socket changed from being a factory function to a type with a default constructor, and nobody's code broke. In fact, you can say that in Python, classes and types *are* factory functions (no "operator new" is required). --Guido van Rossum (home page: http://www.python.org/~guido/) From pedronis@bluewin.ch Wed Nov 27 13:22:45 2002 From: pedronis@bluewin.ch (Samuele Pedroni) Date: Wed, 27 Nov 2002 14:22:45 +0100 Subject: [Python-Dev] assigning to new-style-class.__name__ References: <001d01c2960f$ebcc3620$6d94fea9@newmexico> <2msmxntawv.fsf@starship.python.net> <2mof8btail.fsf@starship.python.net> Message-ID: <00e101c29618$132ea240$6d94fea9@newmexico> From: "Michael Hudson" To: Sent: Wednesday, November 27, 2002 2:20 PM Subject: Re: [Python-Dev] assigning to new-style-class.__name__ > Michael Hudson writes: > > > "Samuele Pedroni" writes: > > > > > I haven't looked at the code, but if it checks directly for the consistency of > > > E's mro when you change C's bases, then there is no way to move from a > > > hierarchy where A precedes B in the mros to one where the two are swapped, > > Yes there is! With A thru E as in my previous mail: > > C.__bases__ = (A,) > > D.__bases__ = (B, A) > > C.__bases__ = (B, A) > > Now there are situations where this can probably cause difficulties, > but that's always going to be possible... what about solid bases? e.g. B is list and A simply a subclass of object. regards. 
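A quick sketch of the restriction being asked about (exact rules and error messages vary between versions; purely illustrative):

    class A(object): pass
    class B(list): pass       # B carries list's instance layout
    class C(A, B): pass       # so C's layout comes from list

    try:
        C.__bases__ = (A,)    # would drop the list layout
    except TypeError:
        pass                  # rejected: the instance layout would change

    C.__bases__ = (B, A)      # allowed: the list layout is preserved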
From mwh@python.net Wed Nov 27 13:39:37 2002 From: mwh@python.net (Michael Hudson) Date: 27 Nov 2002 13:39:37 +0000 Subject: [Python-Dev] assigning to new-style-class.__name__ In-Reply-To: "Samuele Pedroni"'s message of "Wed, 27 Nov 2002 14:22:45 +0100" References: <001d01c2960f$ebcc3620$6d94fea9@newmexico> <2msmxntawv.fsf@starship.python.net> <2mof8btail.fsf@starship.python.net> <00e101c29618$132ea240$6d94fea9@newmexico> Message-ID: <2mhee3t9ly.fsf@starship.python.net> "Samuele Pedroni" writes: > > Michael Hudson writes: > > > > > "Samuele Pedroni" writes: > > > > > > > I haven't looked at the code, but if it checks directly for the > consistency of > > > > E's mro when you change C's bases, then there is no way to move from a > > > > hierarchy where A precedes B in the mros to one where the two are > swapped, > > > > Yes there is! With A thru E as in my previous mail: > > > > C.__bases__ = (A,) > > > > D.__bases__ = (B, A) > > > > C.__bases__ = (B, A) > > > > Now there are situations where this can probably cause difficulties, > > but that's always going to be possible... > > what about solid bases? e.g. B is list and A simply a subclass of object. Well, exactly. I don't really care -- it's not like a generic "swap the order of these bases" function is going to be terribly useful. In that case, just setting C.__bases__ to (B,) first works, doesn't it? Cheers, M. -- GAG: I think this is perfectly normal behaviour for a Vogon. ... VOGON: That is exactly what you always say. GAG: Well, I think that is probably perfectly normal behaviour for a psychiatrist. -- The Hitch-Hikers Guide to the Galaxy, Episode 9 From pedronis@bluewin.ch Wed Nov 27 13:34:42 2002 From: pedronis@bluewin.ch (Samuele Pedroni) Date: Wed, 27 Nov 2002 14:34:42 +0100 Subject: [Python-Dev] assigning to new-style-class.__name__ References: <001d01c2960f$ebcc3620$6d94fea9@newmexico> <2msmxntawv.fsf@starship.python.net> <2mof8btail.fsf@starship.python.net> <00e101c29618$132ea240$6d94fea9@newmexico> <2mhee3t9ly.fsf@starship.python.net> Message-ID: <010b01c29619$be62f3e0$6d94fea9@newmexico> From: "Michael Hudson" > "Samuele Pedroni" writes: > > > > Michael Hudson writes: > > > > > > > "Samuele Pedroni" writes: > > > > > > > > > I haven't looked at the code, but if it checks directly for the > > consistency of > > > > > E's mro when you change C's bases, then there is no way to move from a > > > > > hierarchy where A precedes B in the mros to one where the two are > > swapped, > > > > > > Yes there is! With A thru E as in my previous mail: > > > > > > C.__bases__ = (A,) > > > > > > D.__bases__ = (B, A) > > > > > > C.__bases__ = (B, A) > > > > > > Now there are situations where this can probably cause difficulties, > > > but that's always going to be possible... > > > > what about solid bases? e.g. B is list and A simply a subclass of object. > > Well, exactly. I don't really care -- it's not like a generic "swap > the order of these bases" function is going to be terribly useful. > > In that case, just setting C.__bases__ to (B,) first works, doesn't it? > You're right. In general one can reset all classes' bases to the solid base. And then perform __bases__ assignments corresponding to the class statements that would build the desired hierarchy. Happy end? regards. 
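A sketch of that recipe for the solid-base case Samuele raised (B is list, A a plain class); again the exact hierarchy is assumed, and whether each assignment is accepted depends on the consistency checks in the patch:

    class A(object): pass
    B = list                       # the solid base: it dictates instance layout
    class C(A, B): pass
    class D(A, B): pass
    class E(C, D): pass

    # Step one of the earlier sequence, C.__bases__ = (A,), is unavailable
    # here because it would change C's solid base from list to object.
    # Dropping back to the solid base and rebuilding still works:
    C.__bases__ = (B,)
    D.__bases__ = (B, A)
    C.__bases__ = (B, A)
    print [cls.__name__ for cls in E.__mro__]   # E, C, D, list, A, object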
From mwh@python.net Wed Nov 27 13:56:55 2002 From: mwh@python.net (Michael Hudson) Date: 27 Nov 2002 13:56:55 +0000 Subject: [Python-Dev] assigning to new-style-class.__name__ In-Reply-To: "Samuele Pedroni"'s message of "Wed, 27 Nov 2002 14:34:42 +0100" References: <001d01c2960f$ebcc3620$6d94fea9@newmexico> <2msmxntawv.fsf@starship.python.net> <2mof8btail.fsf@starship.python.net> <00e101c29618$132ea240$6d94fea9@newmexico> <2mhee3t9ly.fsf@starship.python.net> <010b01c29619$be62f3e0$6d94fea9@newmexico> Message-ID: <2mel97t8t4.fsf@starship.python.net> "Samuele Pedroni" writes: > You're right. In general one can reset all classes' bases to the solid base. > And then perform __bases__ assignments corresponding to the class statements > that would build the desired hierarchy. Happy end? I think so. It would be even happier if there was somewhere to document this, but never mind -- I'll be asking for the moon on a stick next. Cheers, M. -- : exploding like a turd Never had that happen to me, I have to admit. They do that often in your world? -- Eric The Read & Dave Brown, asr From skip@pobox.com Wed Nov 27 14:54:02 2002 From: skip@pobox.com (Skip Montanaro) Date: Wed, 27 Nov 2002 08:54:02 -0600 Subject: [Python-Dev] testing question In-Reply-To: <2mk7iztaa9.fsf@starship.python.net> References: <2mk7iztaa9.fsf@starship.python.net> Message-ID: <15844.56458.444607.635761@montanaro.dyndns.org> Michael> There's a buglet in my assignable __bases__ code that can (in Michael> pretty obscure situations) lead to Python code getting executed Michael> with an exception pending. I have a fix, but I'd like a test Michael> -- can anyone think of a way of testing for this? If it's too hard to implement in Python code, why not add something to Modules/_testcapimodule.c? Skip From guido@python.org Wed Nov 27 14:55:36 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 27 Nov 2002 09:55:36 -0500 Subject: [Python-Dev] testing question In-Reply-To: Your message of "27 Nov 2002 13:25:02 GMT." <2mk7iztaa9.fsf@starship.python.net> References: <2mk7iztaa9.fsf@starship.python.net> Message-ID: <200211271455.gAREtal28366@odiug.zope.com> > There's a buglet in my assignable __bases__ code that can (in pretty > obscure situations) lead to Python code getting executed with an > exception pending. I have a fix, but I'd like a test -- can anyone > think of a way of testing for this? I was going to suggest a __del__ method, except that doesn't work -- the code that calls __del__ is very careful to save and restore exceptions around the call. I think you're talking about the code near the end of type_set_bases(). I still don't understand why you can't just bail out? --Guido van Rossum (home page: http://www.python.org/~guido/) From mwh@python.net Wed Nov 27 15:12:43 2002 From: mwh@python.net (Michael Hudson) Date: 27 Nov 2002 15:12:43 +0000 Subject: [Python-Dev] testing question In-Reply-To: Skip Montanaro's message of "Wed, 27 Nov 2002 08:54:02 -0600" References: <2mk7iztaa9.fsf@starship.python.net> <15844.56458.444607.635761@montanaro.dyndns.org> Message-ID: <2mbs4bt5as.fsf@starship.python.net> Skip Montanaro writes: > Michael> There's a buglet in my assignable __bases__ code that can (in > Michael> pretty obscure situations) lead to Python code getting executed > Michael> with an exception pending. I have a fix, but I'd like a test > Michael> -- can anyone think of a way of testing for this? > > If it's too hard to implement in Python code, why not add something to > Modules/_testcapimodule.c?
That would be harder, I think... [Guido] > I think you're talking about the code near the end of > type_set_bases(). I still don't understand why you can't just bail > out? I'm going to. But I'd still like a test against the old behaviour. It's not a big deal. Cheers, M. -- I don't have any special knowledge of all this. In fact, I made all the above up, in the hope that it corresponds to reality. -- Mark Carroll, ucam.chat From mhammond@skippinet.com.au Wed Nov 27 22:48:03 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Thu, 28 Nov 2002 09:48:03 +1100 Subject: [Python-Dev] int/long FutureWarning Message-ID: I figured that once I started pasting and checking code like: """ if sys.version_info >= (2, 3): # sick of the new hex() warnings, and no time to digest what the # impact will be! import warnings warnings.filterwarnings("ignore", category=FutureWarning, append=1) """ into the Mozilla source tree, it was time to start digesting! Unfortunately, a simple answer seems to elude me whenever it is brought up here. So, to cut a long story short, I have lots and lots of script-generated, then often hand-edited source files with constants defined thus: SOMETHING = 0x80000000 Which generate a warning telling me that this may become a positive long in Python 2.4. All I really care about is how my C extension code, which does: PyArg_ParseTuple("ilhH", ...) // Take your pick is going to react to this change? (There are similar warnings for certain shift operations too, but I believe they will all boil down to the same issue) People using the win32all extensions are unlikely to be happy with the screenfuls of warnings generated. I know I'm not. But I don't know what to do. I know I can suppress the warning either using the code I have above, or simply by appending an L to each of the thousands of constants, or even converting them all to decimal. But if nothing is going to change from the POV of my C extensions, then changing all these constants just to suppress a warning seems overkill. Any suggestions for me? Thanks, Mark. From bac@OCF.Berkeley.EDU Wed Nov 27 23:04:58 2002 From: bac@OCF.Berkeley.EDU (Brett Cannon) Date: Wed, 27 Nov 2002 15:04:58 -0800 (PST) Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Modules _localemodule.c,2.35,2.36 In-Reply-To: Message-ID: [Martin v. Loewis] > Brett Cannon writes: > > > It actually did since I wrote the module (coded the thing under OS X); > > still does with my slightly old CVS checkout:: > > > > >>> locale.getdefaultlocale() > > ['en_US', 'ISO8859-1'] > > > > Beats me why it works (I get the test_locale failure just like everyone > > else). > > You do need to set LANG to make this test pass, right? > Well, I have the same crash with the error that locale._getdefaultlocale() is not set. But I actually never set it explicitly anywhere in my shell or anywhere else for that matter. > > This bug was actually first reported back in the thread about FreeBSD 4.4 > > and most recently when Debian unstable's Python broke. Patch #639112 > > fixes this along with the other problem that FreeBSD 4.4 brought up > > (same timezone names; e.g. ('EST', 'EST')). So there is a fix and it is > > ready to be checked in. > > Since it fixes it for OS X as well, I've applied this patch. Thanks! > I am just happy this got caught before 2.3 got out the door. -Brett From martin@v.loewis.de Wed Nov 27 23:14:36 2002 From: martin@v.loewis.de (Martin v.
Loewis) Date: 28 Nov 2002 00:14:36 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: References: Message-ID: "Mark Hammond" writes: > Any suggestions for me? There is no solution to this problem yet. If you add an L to the constant (or if it becomes positive in Python 2.4), you can't pass it to the "i" format to anymore, as it will cause an OverflowError. Suppressing the warning now will only defer the problem to the future. Regards, Martin From mhammond@skippinet.com.au Thu Nov 28 00:59:16 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Thu, 28 Nov 2002 11:59:16 +1100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: > There is no solution to this problem yet. ... > Suppressing the warning now will only defer the problem to the future. Well, it seems I have no option other than to defer the problem to the future, at least until a solution is known by *someone*. Glad-it-isn't-just-me ly, Mark. From guido@python.org Thu Nov 28 01:07:21 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 27 Nov 2002 20:07:21 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Your message of "28 Nov 2002 00:14:36 +0100." References: Message-ID: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> > "Mark Hammond" writes: > > > Any suggestions for me? [Martin] > There is no solution to this problem yet. If you add an L to the > constant (or if it becomes positive in Python 2.4), you can't pass it > to the "i" format to anymore, as it will cause an OverflowError. > > Suppressing the warning now will only defer the problem to the future. But in Mark's case (and in many other cases) there will be no problem in the future -- in Python 2.4, his C code will happily accept the positive Python longs that 0x80000000 and others will be then. I wonder if perhaps the warnings for hex/oct constants aren't so important, and we should use PendingDeprecationWarning or some similar warning that's not normally printed? They sure are a pest! BTW, this reminds me that I've long promised a set of new format codes for PyArg_ParseTuple() to specify taking the lower N bits (for N in 8, 16, 32, 64) and throwing the rest away, without range checks. If someone else can get to this first, that would be great -- I can't seem to make time for this, even though it is still my utmost desire and plan to have a 2.3a1 release ready before Xmas. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Thu Nov 28 04:16:02 2002 From: guido@python.org (Guido van Rossum) Date: Wed, 27 Nov 2002 23:16:02 -0500 Subject: [Python-Dev] Mac OSX issues Message-ID: <200211280416.gAS4G2221621@pcp02138704pcs.reston01.va.comcast.net> I have temporary access to a Mac OSX box. I find the following problems: - test_re crashes unless I do ulimit -s 2000 -- haven't tried other values, but the default of 512 (KB) is insufficient. I know the README file explains this, but now that I've experienced this myself, I wonder if we shouldn't hack main() to increase the stack size to 2 MB, inside an #ifdef darwin or something like that. - test_socket fails with errno 3, 'Unknown server error' on a socket.gethostbyaddr() call. Does anybody know what that is about? - test_locale fails with a complaint about "1,024" vs. "1024". Since this is probably a libc bug (though wouldn't this also occur on other BSD systems?) I can live with it. - test_largefile takes a *very* long time. 
Perhaps it actually creates a truly large file (the Mac OSX filesystem is case insensitive, so I suppose it may be so different that it doesn't support files with holes in them). Maybe the test should disable itself (or part of itself) unless a specific resource is requested? --Guido van Rossum (home page: http://www.python.org/~guido/) From mal@lemburg.com Thu Nov 28 09:18:58 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 28 Nov 2002 10:18:58 +0100 Subject: [Python-Dev] Re: PyNumber_Check() References: <3DD91D07.3000704@lemburg.com> <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> <3DD92622.90007@lemburg.com> <200211181802.gAII28306798@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DE5DF82.1020503@lemburg.com> [Changing PyNumber_Check() to return True for strings] Guido van Rossum wrote: >> [Example switching on object types where the PyNumber_Check() >> preceeds the PyString_Check()] >> >>With the new semantics, the PyNumber_Check() test would >>succeed for strings, making the second test a no-op. >> >>I would expect that this kind of switching on types is >>not uncommon for code which works in polymorphic ways. > > Alas, I agree with this expectation, even though I believe that such > code is based on a misunderstanding. :-( > >>>PyNumber_Check() comes from an old era, when the presence or absence >>>of the as_number "extension" to the type object was thought to be >>>useful. If I had to do it over, I wouldn't provide PyNumber_Check() >>>at all (nor PySequence_Check() nor PyMapping_Check()). >> >>Ok, but why not fix those APIs to mean something more >>useful than deprecating them ? E.g. I would expect that >>a number is usable as input to float(), int() or long() >>and that a mapping knows at least about __getitem__. > > Maybe, as long as we all agree that that's *exactly* what they check > for, and as long as we agree that there may be overlapping areas > (where two or more of these will return True). > > PyMapping_Check() returns true for a variety of non-mappings like > strings, lists, and all classic instances. Perhaps we should simply keep the existing semantics for those two APIs, that is, ensure that they return the same results for the standard builtin types as they did in Python 2.2 and below ?! This would mean that a special case would have to be added to PyNumber_Check() to have it return False for strings and Unicode. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From martin@v.loewis.de Thu Nov 28 09:32:47 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 28 Nov 2002 10:32:47 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> References: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > But in Mark's case (and in many other cases) there will be no problem > in the future -- in Python 2.4, his C code will happily accept the > positive Python longs that 0x80000000 and others will be then. Can you please explain how this will happen? If you do int x; PyArg_ParseTuple(args,"i",&x); and args is (0x80000000,), what will be the value of x? 
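For reference, the Python-level behaviour this question turns on, as it stands on a 32-bit build (it matches the summary Marc-Andre posts further down the thread):

    x = 0x80000000     # 2.2: the int -2147483648; 2.3: the same, plus the
                       # FutureWarning; 2.4 (planned): the long 2147483648
    print x < 0        # True on a 32-bit 2.2/2.3
    print 0x80000000L  # 2147483648 -- the suffixed form is already positive
    int(0x80000000L)   # raises OverflowError on 32-bit Python <= 2.2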
> BTW, this reminds me that I've long promised a set of new format codes > for PyArg_ParseTuple() to specify taking the lower N bits (for N in > 8, 16, 32, 64) and throwing the rest away, without range checks. Wouldn't Mark have to use these format codes? Regards, Martin From mal@lemburg.com Thu Nov 28 10:10:43 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 28 Nov 2002 11:10:43 +0100 Subject: [Python-Dev] int/long FutureWarning References: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <3DE5EBA3.2090409@lemburg.com> Martin v. L=F6wis wrote: > Guido van Rossum writes: >=20 >=20 >>But in Mark's case (and in many other cases) there will be no problem >>in the future -- in Python 2.4, his C code will happily accept the >>positive Python longs that 0x80000000 and others will be then. >=20 >=20 > Can you please explain how this will happen? If you do=20 >=20 > int x; > PyArg_ParseTuple(args,"i",&x); >=20 > and args is (0x80000000,), what will be the value of x? x should be 0x80000000. Whether that's a negative number in its decimal representation is really not all that important if you are interfacing to 32-bit bitmaps ;-) I honestly don't think that anyone would write x =3D 0x80000000 and then expect x < 0 to be True. People usually write hex representations when they are trying to do bit-level manipulations and these rarely deal with signed numeric data. >>BTW, this reminds me that I've long promised a set of new format codes >>for PyArg_ParseTuple() to specify taking the lower N bits (for N in >>8, 16, 32, 64) and throwing the rest away, without range checks.=20 >=20 > Wouldn't Mark have to use these format codes? --=20 Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From Jack.Jansen@oratrix.com Thu Nov 28 10:31:06 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Thu, 28 Nov 2002 11:31:06 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <80514E88-02BC-11D7-83AC-000A27B19B96@oratrix.com> On donderdag, nov 28, 2002, at 02:07 Europe/Amsterdam, Guido van Rossum wrote: > I wonder if perhaps the warnings for hex/oct constants aren't so > important, and we should use PendingDeprecationWarning or some similar > warning that's not normally printed? They sure are a pest! +100. I've been sick and tired of these warnings, especially since in 99.9% of the cases that you get the warning it is meaningless (as we are really taking about bitpatterns that have a special meaning in some C API). I personally haven't seen a single instance of the warning making sense. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From Jack.Jansen@oratrix.com Thu Nov 28 10:41:07 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Thu, 28 Nov 2002 11:41:07 +0100 Subject: [Python-Dev] Mac OSX issues In-Reply-To: <200211280416.gAS4G2221621@pcp02138704pcs.reston01.va.comcast.net> Message-ID: On donderdag, nov 28, 2002, at 05:16 Europe/Amsterdam, Guido van Rossum wrote: > - test_largefile takes a *very* long time. 
Perhaps it actually > creates a truly large file (the Mac OSX filesystem is case > insensitive, so I suppose it may be so different that it doesn't > support files with holes in them). Maybe the test should disable > itself (or part of itself) unless a specific resource is requested? Do you mean that on other systems it does *not* create these gigantic files??!? I've always wondered what the use was... Hmm, if this test test support for files with holes, how come that Windows then doesn't have the same problem as the Mac, it also doesn't support holes in files, or does it? -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From martin@v.loewis.de Thu Nov 28 10:44:12 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 28 Nov 2002 11:44:12 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <80514E88-02BC-11D7-83AC-000A27B19B96@oratrix.com> References: <80514E88-02BC-11D7-83AC-000A27B19B96@oratrix.com> Message-ID: Jack Jansen writes: > I've been sick and tired of these warnings, especially since in 99.9% > of the cases that you get the warning it is meaningless (as we are > really taking about bitpatterns that have a special meaning in some C > API). I personally haven't seen a single instance of the warning > making sense. I found that all those warnings are correct: in particular *when* the constant is a bit pattern in some C API. It means that your code *will* break in Python 2.4, unless you take corrective action (which you cannot take at the moment). It will break because ParseTuple will raise an OverflowError. Regards, Martin From martin@v.loewis.de Thu Nov 28 10:48:46 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 28 Nov 2002 11:48:46 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <3DE5EBA3.2090409@lemburg.com> References: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> <3DE5EBA3.2090409@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > > Can you please explain how this will happen? If you do int x; > > PyArg_ParseTuple(args,"i",&x); > > and args is (0x80000000,), what will be the value of x? > > x should be 0x80000000. Whether that's a negative number in > its decimal representation is really not all that important if > you are interfacing to 32-bit bitmaps ;-) This was not my question. I asked what the value *will* be, not what it *should* be. Can you answer the first question (how will this happen)? > I honestly don't think that anyone would write x = 0x80000000 > and then expect x < 0 to be True. People usually write hex > representations when they are trying to do bit-level manipulations > and these rarely deal with signed numeric data. I agree with all that. However, I cannot see how an actual implementation works that simultaneously meets all implied requirements: - integer constants are always positive; you get negative numbers only with a unary "-". - ParseTuple continues to raise OverflowErrors for values that are out of range. - The constant 0x80000000 has the same trailing 32 bits in the C int or long as it has in Python. Regards, Martin From martin@v.loewis.de Thu Nov 28 10:54:27 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 28 Nov 2002 11:54:27 +0100 Subject: [Python-Dev] Mac OSX issues In-Reply-To: References: Message-ID: Jack Jansen writes: > Do you mean that on other systems it does *not* create these gigantic > files??!? Certainly not. 
Being able to seek to some large file offset, then write one byte, without consuming huge amounts of disk space, has been a long-standing Unix feature. > I've always wondered what the use was... Hmm, if this test test > support for files with holes, how come that Windows then doesn't > have the same problem as the Mac, it also doesn't support holes in > files, or does it? Depends on the file system; NTFS surely does. However, on Windows, the test is not run unless the largefile resource is given to regrtest.py. Regards, Martin From nhodgson@bigpond.net.au Thu Nov 28 11:07:15 2002 From: nhodgson@bigpond.net.au (Neil Hodgson) Date: Thu, 28 Nov 2002 22:07:15 +1100 Subject: [Python-Dev] Mac OSX issues References: Message-ID: <008f01c296ce$4f511b20$3da48490@neil> Jack Jansen: > I've always wondered what the use was... Hmm, if this test test > support for files with holes, how come that Windows then doesn't > have the same problem as the Mac, it also doesn't support holes in > files, or does it? Martin v. Löwis: # Depends on the file system; NTFS surely does. # # However, on Windows, the test is not run unless the largefile resource # is given to regrtest.py. NTFS has only supported sparse files since Windows 2000 and then only if you set the sparse file attribute. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base /sparse_files.asp Neil From guido@python.org Thu Nov 28 13:57:04 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 28 Nov 2002 08:57:04 -0500 Subject: [Python-Dev] Re: PyNumber_Check() In-Reply-To: Your message of "Thu, 28 Nov 2002 10:18:58 +0100." <3DE5DF82.1020503@lemburg.com> References: <3DD91D07.3000704@lemburg.com> <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> <3DD92622.90007@lemburg.com> <200211181802.gAII28306798@pcp02138704pcs.reston01.va.comcast.net> <3DE5DF82.1020503@lemburg.com> Message-ID: <200211281357.gASDv4706281@pcp02138704pcs.reston01.va.comcast.net> > > Maybe, as long as we all agree that that's *exactly* what they check > > for, and as long as we agree that there may be overlapping areas > > (where two or more of these will return True). > > > > PyMapping_Check() returns true for a variety of non-mappings like > > strings, lists, and all classic instances. > > Perhaps we should simply keep the existing semantics for > those two APIs, that is, ensure that they return the same > results for the standard builtin types as they did in Python 2.2 > and below ?! That's a new way of defining their semantics, but I can agree with it! > This would mean that a special case would have to be added > to PyNumber_Check() to have it return False for strings > and Unicode. Somebody (MWH?) proposed to test for one or nb_int/nb_long/nb_float. That makes sense to me, and should do what you ask for. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Thu Nov 28 13:57:36 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 28 Nov 2002 08:57:36 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Your message of "28 Nov 2002 10:32:47 +0100." References: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <200211281357.gASDvao06292@pcp02138704pcs.reston01.va.comcast.net> > Guido van Rossum writes: > > > But in Mark's case (and in many other cases) there will be no problem > > in the future -- in Python 2.4, his C code will happily accept the > > positive Python longs that 0x80000000 and others will be then. > > Can you please explain how this will happen? 
If you do > > int x; > PyArg_ParseTuple(args,"i",&x); > > and args is (0x80000000,), what will be the value of x? > > > BTW, this reminds me that I've long promised a set of new format codes > > for PyArg_ParseTuple() to specify taking the lower N bits (for N in > > 8, 16, 32, 64) and throwing the rest away, without range checks. > > Wouldn't Mark have to use these format codes? That's what I meant. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Thu Nov 28 14:02:35 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 28 Nov 2002 09:02:35 -0500 Subject: [Python-Dev] Mac OSX issues In-Reply-To: Your message of "Thu, 28 Nov 2002 11:41:07 +0100." References: Message-ID: <200211281402.gASE2ZF06334@pcp02138704pcs.reston01.va.comcast.net> > > - test_largefile takes a *very* long time. Perhaps it actually > > creates a truly large file (the Mac OSX filesystem is case > > insensitive, so I suppose it may be so different that it doesn't > > support files with holes in them). Maybe the test should disable > > itself (or part of itself) unless a specific resource is requested? > > Do you mean that on other systems it does *not* create these gigantic > files??!? I've always wondered what the use was... Hmm, if this test > test support for files with holes, how come that Windows then doesn't > have the same problem as the Mac, it also doesn't support holes in > files, or does it? RTSL: # On Windows this test comsumes large resources; It takes a long time to build # the >2GB file and takes >2GB of disk space therefore the resource must be # enabled to run this test. If not, nothing after this line stanza will be # executed. if sys.platform[:3] == 'win': test_support.requires( 'largefile', 'test requires %s bytes and a long time to run' % str(size)) --Guido van Rossum (home page: http://www.python.org/~guido/) From mwh@python.net Thu Nov 28 15:41:32 2002 From: mwh@python.net (Michael Hudson) Date: 28 Nov 2002 15:41:32 +0000 Subject: [Python-Dev] Re: PyNumber_Check() In-Reply-To: Guido van Rossum's message of "Thu, 28 Nov 2002 08:57:04 -0500" References: <3DD91D07.3000704@lemburg.com> <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> <3DD92622.90007@lemburg.com> <200211181802.gAII28306798@pcp02138704pcs.reston01.va.comcast.net> <3DE5DF82.1020503@lemburg.com> <200211281357.gASDv4706281@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <2m1y554s7n.fsf@starship.python.net> Guido van Rossum writes: > Somebody (MWH?) proposed to test for one or > nb_int/nb_long/nb_float. That makes sense to me, and should do what > you ask for. Seems a reasonable idea so I don't mind being associated with it -- but it wasn't me :-) Cheers, M. -- [3] Modem speeds being what they are, large .avi files were generally downloaded to the shell server instead[4]. [4] Where they were usually found by the technical staff, and burned to CD. -- Carlfish, asr From tim_one@email.msn.com Thu Nov 28 16:18:59 2002 From: tim_one@email.msn.com (Tim Peters) Date: Thu, 28 Nov 2002 11:18:59 -0500 Subject: [Python-Dev] Mac OSX issues In-Reply-To: Message-ID: [Jack Jansen] > Do you mean that on other systems it does *not* create these gigantic > files??!? I've always wondered what the use was... Hmm, if this test > test support for files with holes, No, it's testing support for large files period. If the OS supports large files, and you've got enough disk space, it should pass regardless of whether the filesystem sticks holes in the middle. 
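For anyone who wants to see what their own filesystem does with holes, a small probe; st_blocks is POSIX-only and its 512-byte unit is a common convention rather than a guarantee, so treat this as illustrative:

    import os

    f = open('sparse.dat', 'wb')
    f.seek(10 * 1024 * 1024)       # seek well past the end of the empty file
    f.write('x')                   # a single real byte
    f.close()

    st = os.stat('sparse.dat')
    print st.st_size               # about 10MB, with or without holes
    print st.st_blocks * 512       # far less than st_size if holes were stored
    os.remove('sparse.dat')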
> how come that Windows then doesn't have the same problem as the Mac, it > also doesn't support holes in files, or does it? Holes are irrelevant to correct functioning. On Win2K the test passes and does indeed take a very long time. On Win9X the test passes but takes very little time. The difference seems to be that Win2K actually fills a file with gigabytes of NUL characters, but on Win9X you effectively get a window onto whatever happened to be sitting on disk in the blocks allocated for the file. But in neither case are there holes. Here on Win98SE: >>> f = file('temp.dat', 'wb') >>> f.seek(1000000) >>> f.write('abc') >>> f.close() >>> f = file('temp.dat', 'rb') >>> guts = f.read() >>> len(guts) 1000003 >>> guts[50000:50050] '|\x88\x08\x004\x0e6\x0e7\x0e8\x0e:\x0e;\x0eQ\x0e=\x0e>\x0e?\x0e@\x0eA\x0eB \x0eD\x0eE\x0eF\x0eH\x0eI\x0eJ\x0eQ\x0eL\x0eM\x0eQ\x0e' >>> You could very well find personal info in there! Win2K prevents that. From mal@lemburg.com Thu Nov 28 17:04:12 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 28 Nov 2002 18:04:12 +0100 Subject: [Python-Dev] Re: PyNumber_Check() References: <3DD91D07.3000704@lemburg.com> <200211181724.gAIHOgx27579@pcp02138704pcs.reston01.va.comcast.net> <3DD92622.90007@lemburg.com> <200211181802.gAII28306798@pcp02138704pcs.reston01.va.comcast.net> <3DE5DF82.1020503@lemburg.com> <200211281357.gASDv4706281@pcp02138704pcs.reston01.va.comcast.net> <2m1y554s7n.fsf@starship.python.net> Message-ID: <3DE64C8C.4000602@lemburg.com> Michael Hudson wrote: > Guido van Rossum writes: > > >>Somebody (MWH?) proposed to test for one or >>nb_int/nb_long/nb_float. That makes sense to me, and should do what >>you ask for. > > > Seems a reasonable idea so I don't mind being associated with it -- > but it wasn't me :-) That was me :-) -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From Jack.Jansen@oratrix.com Thu Nov 28 20:36:12 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Thu, 28 Nov 2002 21:36:12 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: <089D2DB4-0311-11D7-83AC-000A27B19B96@oratrix.com> On donderdag, nov 28, 2002, at 11:44 Europe/Amsterdam, Martin v. L=F6wis=20= wrote: > Jack Jansen writes: > >> I've been sick and tired of these warnings, especially since in 99.9% >> of the cases that you get the warning it is meaningless (as we are >> really taking about bitpatterns that have a special meaning in some C >> API). I personally haven't seen a single instance of the warning >> making sense. > > I found that all those warnings are correct: in particular *when* the > constant is a bit pattern in some C API. > > It means that your code *will* break in Python 2.4, unless you take > corrective action (which you cannot take at the moment). Well.... First of all, warning people about something without giving=20 them a way to do something about it isn't really good style. Second, it=20= will *not* break in 2.4, because I'm just going to add an O& formatter=20= PyMac_Parse32BitIntWithoutSillyComplaints, which will take any=20 reasonable type on the Python side and just return the lower 32 bits. 
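A rough Python-level model of the conversion Jack is describing -- keep the low 32 bits of whatever integer comes in and hand C the value those bits spell; the helper below is invented for illustration and is not his actual O& converter:

    def lower_32_bits(value):
        bits = long(value) & 0xFFFFFFFFL    # keep bits 0..31, drop the rest
        if bits >= 0x80000000L:             # what those bits mean as a C int
            return int(bits - 0x100000000L)
        return int(bits)

    print lower_32_bits(0x80000000L)        # -2147483648
    print lower_32_bits(-1)                 # -1
    print lower_32_bits(0x7FFFFFFF)         # 2147483647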
Actually, I could make that fix *now*, but I would still be stuck with=20= the stupid warnings:-( -- - Jack Jansen =20 http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma=20 Goldman - From Jack.Jansen@oratrix.com Thu Nov 28 20:37:54 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Thu, 28 Nov 2002 21:37:54 +0100 Subject: [Python-Dev] Mac OSX issues In-Reply-To: <200211281402.gASE2ZF06334@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <452ED368-0311-11D7-83AC-000A27B19B96@oratrix.com> On donderdag, nov 28, 2002, at 15:02 Europe/Amsterdam, Guido van Rossum wrote: > # On Windows this test comsumes large resources; It takes a long time > to build > # the >2GB file and takes >2GB of disk space therefore the resource > must be > # enabled to run this test. If not, nothing after this line stanza > will be > # executed. > if sys.platform[:3] == 'win': > test_support.requires( > 'largefile', > 'test requires %s bytes and a long time to run' % str(size)) Ok, I'll add mac and darwin to the platform tests, then. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From martin@v.loewis.de Thu Nov 28 20:51:48 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 28 Nov 2002 21:51:48 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <089D2DB4-0311-11D7-83AC-000A27B19B96@oratrix.com> References: <089D2DB4-0311-11D7-83AC-000A27B19B96@oratrix.com> Message-ID: Jack Jansen writes: > Well.... First of all, warning people about something without giving > them a way to do something about it isn't really good style. Whom are you directing this observation at? I was merely pointing out that the warning is factually correct and indicates a real future problem; I did not consider style at all. > Second, it will *not* break in 2.4, because I'm just going to add an > O& formatter PyMac_Parse32BitIntWithoutSillyComplaints, which will > take any reasonable type on the Python side and just return the > lower 32 bits. It would be better if you could just contribute a patch to add such an argument parser to the standard ParseTuple implementation, instead of coming up with proprietary Mac solutions. When I said "it will break", I meant "unless action is taken", of course. Regards, Martin From greg@cosc.canterbury.ac.nz Thu Nov 28 21:31:50 2002 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Fri, 29 Nov 2002 10:31:50 +1300 (NZDT) Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: <200211282131.gASLVoc29405@kuku.cosc.canterbury.ac.nz> martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=): > It means that your code *will* break in Python 2.4, unless you take > corrective action (which you cannot take at the moment). Pardon me, but... wouldn't it have been better to defer introducing these warnings until there *is* something that can be done about them? Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+ From martin@v.loewis.de Thu Nov 28 22:12:36 2002 From: martin@v.loewis.de (Martin v. 
=?iso-8859-15?q?L=F6wis?=) Date: 28 Nov 2002 23:12:36 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <200211282131.gASLVoc29405@kuku.cosc.canterbury.ac.nz> References: <200211282131.gASLVoc29405@kuku.cosc.canterbury.ac.nz> Message-ID: Greg Ewing writes: > > It means that your code *will* break in Python 2.4, unless you take > > corrective action (which you cannot take at the moment). > > Pardon me, but... wouldn't it have been better to defer > introducing these warnings until there *is* something > that can be done about them? That might be the case. At the time the warning was added, there was consensus that it is be easy to do something about each of them. It was only detected later that it is not easy in some cases (strictly speaking, you can correct all the warnings today, taking, for example, the approach that Jack would take, of adding a custom conversion function into your C modules). Today, I would rather hope that somebody contributes a patch to add the requested features instead of contributing a patch to disable the warning. Regards, Martin From mhammond@skippinet.com.au Thu Nov 28 23:26:26 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Fri, 29 Nov 2002 10:26:26 +1100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: > Greg Ewing writes: > > > > It means that your code *will* break in Python 2.4, unless you take > > > corrective action (which you cannot take at the moment). > > > > Pardon me, but... wouldn't it have been better to defer > > introducing these warnings until there *is* something > > that can be done about them? > > That might be the case. At the time the warning was added, there was > consensus that it is be easy to do something about each of them. That is true. To be fair, the warning is simply saying "this literal is an int today - later it will be a long". It is not supplying any context for this warning - ie, it is not saying "your C extensions using 'l' format may break" - it is left to us to deduce such impacts. > Today, I would rather hope that somebody contributes a patch to add > the requested features instead of contributing a patch to disable the > warning. Except, on the flip-side, let's say I am *happy* for such contants to become longs. I really don't want to see the warning for every hex literal once I understand the impact. So, maybe we simply need finer-grained warnings - such as in PyArg_ParseTuple, and any other places where the impact will actually be felt. Mark. From martin@v.loewis.de Thu Nov 28 23:41:40 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 29 Nov 2002 00:41:40 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: References: Message-ID: "Mark Hammond" writes: > Except, on the flip-side, let's say I am *happy* for such contants to become > longs. I really don't want to see the warning for every hex literal once I > understand the impact. Reducing the warnings to one warning per module might be reasonable. To silence the warning, you should really add "L" suffixes to all those literals - or we could provide a future statement, where you indicate that you want those literals to become positive longs *now*; you'ld need to add this statement once per module. However, when you do that, I believe you will get OverflowErrors once you pass the constants to your C API. > So, maybe we simply need finer-grained warnings - such as in > PyArg_ParseTuple, and any other places where the impact will > actually be felt. Feel free to propose specific patches. 
I believe this specific strategy is unimplementable: When the value goes to ParseTuple, it is not known anymore whether the integer literal was hex (or whether there was any integer literal at all). Regards, Martin From mhammond@skippinet.com.au Fri Nov 29 00:15:20 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Fri, 29 Nov 2002 11:15:20 +1100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <200211281357.gASDvao06292@pcp02138704pcs.reston01.va.comcast.net> Message-ID: [Martin] > > Guido van Rossum writes: > > > > > But in Mark's case (and in many other cases) there will be no problem > > > in the future -- in Python 2.4, his C code will happily accept the > > > positive Python longs that 0x80000000 and others will be then. ... > > Wouldn't Mark have to use these format codes? > > That's what I meant. Well, I am afraid it is slightly more than "no problem" for me if it means I need to read the doc for and potentially touch *every single* function in the Win32 extensions that accepts an integer as an input arg. I must be missing something, but I can't understand why the existing PyArg_ParseTuple codes can't take on a kind of "hybrid" approach for b/w compatibility. This could mean, assuming a 32 bit platform, that 'l': c_func_taking_int( 0x80000000L ) -> 0x8000000 c_func_taking_int( 0xFFFFFFFFL ) -> 0xFFFFFFFF c_func_taking_int( 0x100000000L ) -> OverflowError c_func_taking_int( -0x80000001L ) -> whatever we like The new format codes can be more precise in their handling of these objects and their sign. I can't see a real downside to this. There is no way someone can get (+-)0x80000001L into a C int via ParseTuple("l") now, so we are not breaking anything. When the arg is an int object or a long in the currently supported range, the semantics are identical. A few cases may exist where an OverflowError would have been thrown but no longer is, but that behaviour is unlikely to be relied upon. The only case I see this failing with is bitwise rotates. Today: c_func_taking_int( 0x80000000<<1 ) -> 0x0 but with my semantics above, it would yield c_func_taking_int( 0x80000000L<<1 ) -> OverflowError re-defining 'l' to mean "lower 'n' bits" would solve this, but I accept this is going too far. Certainly for all my "constant files", all bitwise rotate operations are "safe" in terms of losing bits - just not in changing sign - so I believe this would still work for me. I am sure I am missing something, but I can't see it. I am ready to feel foolish . It would certainly be easier for me to work on such a strategy than to convert all my extensions. Mark. From guido@python.org Fri Nov 29 03:48:10 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 28 Nov 2002 22:48:10 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Your message of "Fri, 29 Nov 2002 10:31:50 +1300." <200211282131.gASLVoc29405@kuku.cosc.canterbury.ac.nz> References: <200211282131.gASLVoc29405@kuku.cosc.canterbury.ac.nz> Message-ID: <200211290348.gAT3mAm07516@pcp02138704pcs.reston01.va.comcast.net> > Pardon me, but... wouldn't it have been better to defer > introducing these warnings until there *is* something > that can be done about them? We *will* let you do something about them before we release 2.3a1. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Fri Nov 29 03:50:14 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 28 Nov 2002 22:50:14 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Your message of "Fri, 29 Nov 2002 10:26:26 +1100." 
References: Message-ID: <200211290350.gAT3oEd07541@pcp02138704pcs.reston01.va.comcast.net> > So, maybe we simply need finer-grained warnings - such as in > PyArg_ParseTuple, and any other places where the impact will actually be > felt. But PyArg_ParseTuple doesn't know whether an argument came from a hex constant or not. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Fri Nov 29 04:02:13 2002 From: guido@python.org (Guido van Rossum) Date: Thu, 28 Nov 2002 23:02:13 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Your message of "Fri, 29 Nov 2002 11:15:20 +1100." References: Message-ID: <200211290402.gAT42Dx07634@pcp02138704pcs.reston01.va.comcast.net> > I must be missing something, but I can't understand why the existing > PyArg_ParseTuple codes can't take on a kind of "hybrid" approach for b/w > compatibility. This could mean, assuming a 32 bit platform, that 'l': > > c_func_taking_int( 0x80000000L ) -> 0x8000000 > c_func_taking_int( 0xFFFFFFFFL ) -> 0xFFFFFFFF > c_func_taking_int( 0x100000000L ) -> OverflowError > c_func_taking_int( -0x80000001L ) -> whatever we like We might as well not bother with range checking then, which would be backwards incompatible. :-( > The new format codes can be more precise in their handling of these objects > and their sign. > I can't see a real downside to this. There is no way someone can get > (+-)0x80000001L into a C int via ParseTuple("l") now, so we are not breaking > anything. When the arg is an int object or a long in the currently > supported range, the semantics are identical. A few cases may exist where > an OverflowError would have been thrown but no longer is, but that behaviour > is unlikely to be relied upon. > > The only case I see this failing with is bitwise rotates. Today: > c_func_taking_int( 0x80000000<<1 ) -> 0x0 > > but with my semantics above, it would yield > > c_func_taking_int( 0x80000000L<<1 ) -> OverflowError You get a slightly different warning for left shifts that actually lose bits, and I don't think that warning is causing you any pain. > re-defining 'l' to mean "lower 'n' bits" would solve this, but I accept this > is going too far. Certainly for all my "constant files", all bitwise rotate > operations are "safe" in terms of losing bits - just not in changing sign - > so I believe this would still work for me. > > I am sure I am missing something, but I can't see it. I am ready to feel > foolish . It would certainly be easier for me to work on such a > strategy than to convert all my extensions. I don't have enough time to think about this right now (and I'm dying to go to bed), but I agree that something needs to be done. Originally the PEP didn't prescribe warnings for this. Maybe we can just disable the warnings by default? Or maybe we can use the "__future__" suggestion, since it's the only way to signal to the parser not to issue the warnings. (One problem with the warning about hex constants is that because the entire module is parsed before it is executed, putting a warnings.filterwarnings() call in the module itself is ineffective; you must put it somewhere else. This is not a problem with the warnings about <<. A __future__ statement is a signal to the parser.) --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Fri Nov 29 08:17:29 2002 From: martin@v.loewis.de (Martin v. 
=?iso-8859-15?q?L=F6wis?=) Date: 29 Nov 2002 09:17:29 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: References: Message-ID: "Mark Hammond" writes: > I am sure I am missing something, but I can't see it. I am ready to feel > foolish . It would certainly be easier for me to work on such a > strategy than to convert all my extensions. It is a violation of the principle Errors should never pass silently. Unless explicitly silenced. If you have a function my_account.add_amount_of_money(0x80000000L) and the implementation is a C function with an "i" converter, this currently gives an indication of the error. Under your proposed change, it would remove money from the account. If we think that the case "large numbers are bitmaps" is more common than the case "large numbers are large numbers", we should drop the range check for the existing converters, and add new converters in case somebody wants large numbers instead of bitmaps, which then do the range checks. Regards, Martin From theller@python.net Fri Nov 29 08:58:36 2002 From: theller@python.net (Thomas Heller) Date: 29 Nov 2002 09:58:36 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Tools/freeze modulefinder.py,1.24,1.25 In-Reply-To: <200211261323.gAQDNLv05469@pcp02138704pcs.reston01.va.comcast.net> References: <200211261323.gAQDNLv05469@pcp02138704pcs.reston01.va.comcast.net> Message-ID: Guido van Rossum writes: > > Log Message: > > Don't look for modules in the registry any longer. > > > > Mark writes in private email: > > > > "Modules listed in the registry was a dumb idea. This whole scheme > > can die. AFAIK, no one in the world uses it (including win32all > > since the last build)." > > > > (See also SF #643711) > > Woo hoo! Hurray! > Actually this change broke modulefinder again, it doesn't find 'import pywintypes' any longer. I think I have to install a new win32all build... While I noticed that 'imp.find_module("pywintypes")' finds it, it doesn't find it in this case: 'imp.find_module("pywintypes", sys.path)'. Looking into import.c, PyWin_FindRegisteredModule() is only called if path is NULL. If I understood Mark correctly, this function will have to be removed. Is this correct? Thomas From Jack.Jansen@cwi.nl Fri Nov 29 09:19:18 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Fri, 29 Nov 2002 10:19:18 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: On Thursday, Nov 28, 2002, at 23:12 Europe/Amsterdam, Martin v. L=F6wis=20= wrote: >> Pardon me, but... wouldn't it have been better to defer >> introducing these warnings until there *is* something >> that can be done about them? > > That might be the case. At the time the warning was added, there was > consensus that it is be easy to do something about each of them. No, there was no consensus. I have screamed loudly about it, but things went ahead anyway. Note that I'm not complaining about this: that's the way it goes in a group project, but let's not try to rewrite history... > It > was only detected later that it is not easy in some cases (strictly > speaking, you can correct all the warnings today, taking, for example, > the approach that Jack would take, of adding a custom conversion > function into your C modules). No, that does *not* help (as I have explained umpteen times). As the warning is given by the parser there is absolutely nothing you can do about it with a PyArg_ParseTuple converter. 
There are of course workarounds (like adding, for 2.3 only, an "L" to all the constants in addition to the custom converter) but that would be silly because things will work again in 2.4. -- - Jack Jansen =20 http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma=20 Goldman - From Jack.Jansen@cwi.nl Fri Nov 29 09:35:27 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Fri, 29 Nov 2002 10:35:27 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: Replying to various postings at once: Mark Hammond wrote: > Well, I am afraid it is slightly more than "no problem" for me if it=20= > means I > need to read the doc for and potentially touch *every single* function=20= > in > the Win32 extensions that accepts an integer as an input arg. Exactly the same here. I have 2206 automatically generated Mac API=20 functions, and each of these would have to be inspected. Martin v. L=F6wis wrote: > To silence the warning, you should really add "L" suffixes to all > those literals - or we could provide a future statement, where you > indicate that you want those literals to become positive longs *now*; > you'ld need to add this statement once per module. The future statement would solve most of the headaches (combined with the 32 bit value parser that accepts both int and long). Martin again: > If we think that the case "large numbers are bitmaps" is more common > than the case "large numbers are large numbers", we should drop the > range check for the existing converters, and add new converters in > case somebody wants large numbers instead of bitmaps, which then do > the range checks. This would be ideal, of course. But note that for me changing the=20 format char is a minor issue, as long as the behavior of the new format char is as=20= expected (any 32 bit value is okay, and iff there is a way 32 bit constants can=20= become longs in the parser then those longs should be passed as 32 bit ints too). Those 2206 routines mentioned above are automatically generated. There=20= are a couple of tens of manually generated wrappers, but as I've written those=20 myself, mainly, there's a chance I will know what to do:-) But of course I don't know how this would be for Mark... -- - Jack Jansen =20 http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma=20 Goldman - From martin@v.loewis.de Fri Nov 29 09:59:51 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Fri, 29 Nov 2002 10:59:51 +0100 Subject: [Python-Dev] int/long FutureWarning References: Message-ID: <000b01c2978e$0fa53270$2f1de8d9@mira> > No, that does *not* help (as I have explained umpteen times). As the > warning is given by the parser there is absolutely nothing you can do > about it with a PyArg_ParseTuple converter. As I have explained twice as often, you need to add an L to the constants. > There are of course workarounds (like adding, for 2.3 only, an "L" > to all the constants in addition to the custom converter) but that would > be silly because things will work again in 2.4. No, they won't. If you just add the L, you will get OverflowErrors, unless you also modify the the ParseTuple calls. Regards, Martin From mal@lemburg.com Fri Nov 29 10:37:04 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 29 Nov 2002 11:37:04 +0100 Subject: [Python-Dev] int/long FutureWarning References: <000b01c2978e$0fa53270$2f1de8d9@mira> Message-ID: <3DE74350.80602@lemburg.com> Let me summarize this: 1. 
0x80000000 and similar large hex and octal constants will generate a parser warning in Python 2.3 which is only useful for developers 2. x = 0x80000000 gives a signed integer in Python <=2.2 and long(x) results in -2147483648L; int(0x80000000L) gives an OverflowError in Python <=2.2 3. x = 0x80000000L gives a long integer 2147483648L in Python >=2.4; int(0x80000000L) returns a long in Python >=2.4 or generates an OverflowError (no idea ?) 4. C extensions have typically used "i" to get at integer values and often don't care for the sign (that is, they use them as if they were unsigned ints) 5. new parser markers would not be available in older Python versions unless they are backported To me the picture looks as if the warnings should only be printed on request by a developer (not per default) and that new parser markers are the only way to get everyone satisfied. These should then be backported to at least Python 2.1 and 2.2 to ensure that extensions using these new parser markers continue to work with older Python versions. Since the new parser markers will need to mask bitmaps, I suggest to use "m#" as new marker with # being 1,2,3 or 4 representing the number of bytes to mask, i.e. "m1" gives ((unsigned long)value & 0xF), "m2" gives ((unsigned long)value & 0xFF), etc. The marker variable will have to reference an (unsigned int). We may also extend this to m5-8 for assigning to (unsigned LONG_LONG)s for those who need them. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From aahz@pythoncraft.com Fri Nov 29 11:07:28 2002 From: aahz@pythoncraft.com (Aahz) Date: Fri, 29 Nov 2002 06:07:28 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <3DE74350.80602@lemburg.com> References: <000b01c2978e$0fa53270$2f1de8d9@mira> <3DE74350.80602@lemburg.com> Message-ID: <20021129110728.GA2150@panix.com> On Fri, Nov 29, 2002, M.-A. Lemburg wrote: > > 2. x = 0x80000000 gives a signed integer in Python <=2.2 and long(x) > results in -2147483648L; int(0x80000000L) gives an OverflowError > in Python <=2.2 You sure about this? I thought the whole point was that it's *not* necessarily -2147483648L and OverflowError -- on a 64-bit platform. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "If you don't know what your program is supposed to do, you'd better not start writing it." --Dijkstra From arigo@tunes.org Fri Nov 29 11:50:41 2002 From: arigo@tunes.org (Armin Rigo) Date: Fri, 29 Nov 2002 03:50:41 -0800 (PST) Subject: [Python-Dev] Classmethod Help In-Reply-To: <018a01c294d7$ad2110a0$125ffea9@oemcomputer>; from python@rcn.com on Mon, Nov 25, 2002 at 06:07:05PM -0500 References: <018a01c294d7$ad2110a0$125ffea9@oemcomputer> Message-ID: <20021129115041.C784D4B5D@bespin.org> Hello Raymond, On Mon, Nov 25, 2002 at 06:07:05PM -0500, Raymond Hettinger wrote: > GvR pointed me to you guys for help in the C > implementation of the patch for a dictionary class method: There are METH_CLASS and METH_STATIC flags that you can set in the tp_methods table. By the way, in your example you don't use the 'cls' parameter, so this looks like a static method rather than a class method, right? 
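To illustrate the distinction Armin is drawing, a Python-level analogue; the method name and behaviour here are invented for the example and are not Raymond's actual patch:

    class MyDict(dict):
        def fromseq(cls, seq, value=None):
            # the class it was called on arrives as 'cls', so a subclass
            # gets back an instance of itself -- that is what makes this a
            # class method rather than a static method
            d = cls()
            for key in seq:
                d[key] = value
            return d
        fromseq = classmethod(fromseq)      # pre-decorator spelling

    print type(MyDict.fromseq("ab"))        # <class '__main__.MyDict'>
    # a staticmethod would receive no class at all and would have to
    # hard-wire the type it constructs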
Armin From arigo@tunes.org Fri Nov 29 11:50:44 2002 From: arigo@tunes.org (Armin Rigo) Date: Fri, 29 Nov 2002 03:50:44 -0800 (PST) Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net>; from guido@python.org on Wed, Nov 27, 2002 at 08:07:21PM -0500 References: <200211280107.gAS17LE27473@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021129115044.939404B54@bespin.org> Hello Guido, On Wed, Nov 27, 2002 at 08:07:21PM -0500, Guido van Rossum wrote: > But in Mark's case (and in many other cases) there will be no problem > in the future -- in Python 2.4, his C code will happily accept the > positive Python longs that 0x80000000 and others will be then. As it stands, there is no good solution for Python 2.3: you have to use 0x80000000 in your Python code because 0x80000000L will cause PyArg_ParseTuple("i") to fail, and if you use 0x80000000 you get warnings that you cannot silence unless you write -2147483648... So it seems that "i" must already accept longs up to 0xffffffffL in Python 2.3. I already feel a certain confusion about the various decoders of PyArg_ParseTuple(), and I fear that this will only add more, given that these codes are similar but not identical to what we have in the struct module -- and the latter is also pretty obscure about the exact range of accepted values. I'm volunteering to add format codes to PyArg_ParseTuple, hoping that we can find some common ground and consistently add them to the struct module as well. Sadly, it seems that the two sets of format codes are definitely incompatible. Indeed, 'b' is unsigned and 'B' signed in PyArg_ParseTuple, and the converse in struct... And 'B' and 'H' are not documented for PyArg_ParseTuple!? Armin From arigo@tunes.org Fri Nov 29 11:50:43 2002 From: arigo@tunes.org (Armin Rigo) Date: Fri, 29 Nov 2002 03:50:43 -0800 (PST) Subject: [Python-Dev] from tuples to immutable dicts In-Reply-To: ; from martin@v.loewis.de on Sun, Nov 24, 2002 at 06:10:54PM +0100 References: <20021124140534.C26224B24@bespin.org> Message-ID: <20021129115043.D513D4B5A@bespin.org> Hello Martin, On Sun, Nov 24, 2002 at 06:10:54PM +0100, Martin v. Loewis wrote: > > point = tuple(5, 6, color=RED, visible=False) > > I have to problems imagining such an extension: > > 1. I'm not sure this would be useful. > 2. I can't imagine how to implement it, without compromising performance > for tuples. By introducing a subtype of tuple, just as you do in Python. By the way, doing it in Python is a nice thing, but then we end up with two ways of making this kind of small structures: one easily available in Python but not in C, and one (structseq) for C. Having a C-based implementation of tuple-with-immutable-dict would unify the two. But well, I realize that I'm getting stuck with minor things here. An example that comes in mind where a C-based immutable dict would be handy is to implement table-based jumps in the bytecode (for switches), but again this discussion drifted in incompatible directions as we need non-string keys for this. Let's just forget it all. Armin From martin@v.loewis.de Fri Nov 29 12:17:17 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Fri, 29 Nov 2002 13:17:17 +0100 Subject: [Python-Dev] int/long FutureWarning References: <000b01c2978e$0fa53270$2f1de8d9@mira> <3DE74350.80602@lemburg.com> Message-ID: <00c201c297a1$428163e0$2f1de8d9@mira> > 1. 
0x80000000 and similar large hex and octal constants will > generate a parser warning in Python 2.3 which is only useful for > developers Correct. The same holds for any other warning, and for all uncaught exceptions: the message you get is only useful for the author of the software. > 2. x = 0x80000000 gives a signed integer in Python <=2.2 and long(x) > results in -2147483648L; int(0x80000000L) gives an OverflowError > in Python <=2.2 Correct. > 3. x = 0x80000000L gives a long integer 2147483648L in Python >=2.4; > int(0x80000000L) returns a long in Python >=2.4 or generates > an OverflowError (no idea ?) Correct, although you probably meant to omit the L in both cases. I think the intention is that int(x) is x if x is long. > 4. C extensions have typically used "i" to get at integer > values and often don't care for the sign (that is, they use > them as if they were unsigned ints) I'm not sure about the proportions, i.e. whether the "ints are bitmaps" case comes up more or less often than the "ints are ints" case. > 5. new parser markers would not be available in older Python > versions unless they are backported Correct. > To me the picture looks as if the warnings should only be printed > on request by a developer (not per default) and that new parser > markers are the only way to get everyone satisfied. The same is true for any other warning. The problem with not printing warnings by default is that it removes much of the application for the warning: To forcefully indicate that something will change. If we only wanted to tell those who want to hear, we could just write it all in the documentation - if you read the documentation, you will know, if you don't, you won't. This approach was heavily criticized in the past, and led to the introduction of warnings. > Since the new parser markers will need to mask bitmaps, I > suggest to use "m#" as new marker with # being 1,2,3 or 4 > representing the number of bytes to mask, i.e. "m1" gives > ((unsigned long)value & 0xF), "m2" gives > ((unsigned long)value & 0xFF), etc. The marker variable > will have to reference an (unsigned int). I'm not sure this would work for all cases. If you have things like fcntl operation codes which you generate from header files, the format argument depends on sizeof(long), so writing a portable fcntl module would be difficult with these format markers. Regards, Martin From mhammond@skippinet.com.au Fri Nov 29 12:36:59 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Fri, 29 Nov 2002 23:36:59 +1100 Subject: [Python-Dev] RE: [Python-checkins] python/dist/src/Tools/freeze modulefinder.py,1.24,1.25 In-Reply-To: Message-ID: > Actually this change broke modulefinder again, it doesn't find 'import > pywintypes' any longer. I think I have to install a new win32all > build... > > While I noticed that 'imp.find_module("pywintypes")' finds it, it > doesn't find it in this case: 'imp.find_module("pywintypes", sys.path)'. > > Looking into import.c, PyWin_FindRegisteredModule() is only called if > path is NULL. If I understood Mark correctly, this function will have > to be removed. Is this correct? The latest builds have a pywintypes.py/pythoncom.py which locate and import pywintypesxx.dll. They may not be immediately perfect for freeze/Installer etc, but they can be made so. I'm happy to mail these to you. So in the current CVS Python, PyWin_FindRegisteredModule() could be removed without breaking win32all, and AFAIK no one else uses it. Mark.
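(For readers who have not seen the stub approach Mark describes: a minimal sketch of how a pure-Python pywintypes.py can locate and load the versioned DLL. The helper name and search locations below are illustrative assumptions, not the actual win32all code.)

    # pywintypes.py -- hypothetical stub sketch, not the real win32all module
    import imp, os, sys

    def _load_pywin32_dll(modname):
        suffix = "%d%d" % sys.version_info[:2]        # e.g. "23" for Python 2.3
        filename = "%s%s.dll" % (modname, suffix)     # e.g. "pywintypes23.dll"
        for dirname in [sys.prefix] + sys.path:
            pathname = os.path.join(dirname, filename)
            if os.path.isfile(pathname):
                # load the real extension under the plain module name
                return imp.load_dynamic(modname, pathname)
        raise ImportError("cannot locate " + filename)

    _load_pywin32_dll("pywintypes")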
From just@letterror.com Fri Nov 29 12:53:57 2002 From: just@letterror.com (Just van Rossum) Date: Fri, 29 Nov 2002 13:53:57 +0100 Subject: [Python-Dev] Classmethod Help In-Reply-To: <20021129115041.C784D4B5D@bespin.org> Message-ID: Armin Rigo wrote: > On Mon, Nov 25, 2002 at 06:07:05PM -0500, Raymond Hettinger wrote: > > GvR pointed me to you guys for help in the C > > implementation of the patch for a dictionary class method: > > There are METH_CLASS and METH_STATIC flags that you can set in the > tp_methods table. > > By the way, in your example you don't use the 'cls' parameter, so this > looks like a static method rather than a class method, right? Seems you're looking at an earlier patch: it's already in CVS and it does use cls now. I have a different comment about this patch, though. It's currently possible to trigger a "SystemError: bad internal call" with Python code:

>>> from UserDict import UserDict
>>> class mydict(dict):
...     def __new__(cls, *args, **kwargs):
...         return UserDict(*args, **kwargs)
...
>>> set = mydict.fromkeys("a b c".split())
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
SystemError: ../Objects/dictobject.c:983: bad argument to internal function
>>>

It's not a particularly sane piece of code, and I'm not saying the code should _work_, but I'm not so sure a SystemError is appropriate here. Just From Jack.Jansen@cwi.nl Fri Nov 29 13:44:24 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Fri, 29 Nov 2002 14:44:24 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <00c201c297a1$428163e0$2f1de8d9@mira> Message-ID: On Friday, Nov 29, 2002, at 13:17 Europe/Amsterdam, Martin v. Löwis wrote: >> To me the picture looks as if the warnings should only be printed >> on request by a developer (not per default) and that new parser >> markers are the only way to get everyone satisfied. > > The same is true for any other warning. The problem with not printing > warnings by default is that it removes much of the application for the > warning: To forcefully indicate that something will change. But note that these long/int warnings are especially obnoxious: as they are given by the parser you cannot turn them off with a filterwarning in the module itself, you have to go out and find each and every module importing them. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From martin@v.loewis.de Fri Nov 29 13:45:32 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: 29 Nov 2002 14:45:32 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: References: Message-ID: Jack Jansen writes: > But note that these long/int warnings are especially obnoxious: as > they are given by the parser you cannot turn them off with a > filterwarning in the module itself, you have to go out and find each > and every module importing them. A patch contributing a future import statement would be appreciated. Regards, Martin From theller@python.net Fri Nov 29 14:09:51 2002 From: theller@python.net (Thomas Heller) Date: 29 Nov 2002 15:09:51 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Tools/freeze modulefinder.py,1.24,1.25 In-Reply-To: References: Message-ID: <8yzcsc0g.fsf@python.net> > The latest builds have a pywintypes.py/pythoncom.py which locate and import > pywintypesxx.dll. They may not be immediately perfect for freeze/Installer > etc, but they can be made so. I'm happy to mail these to you. I have got them from CVS, thanks.
> > So in the current CVS Python, PyWin_FindRegisteredModule() could be removed > without breaking win32all, and AFAIK no one else uses it. > Thomas From Jack.Jansen@cwi.nl Fri Nov 29 14:53:47 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Fri, 29 Nov 2002 15:53:47 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: <5D849FDA-03AA-11D7-97A4-0030655234CE@cwi.nl> On Friday, Nov 29, 2002, at 14:45 Europe/Amsterdam, Martin v. Löwis wrote: >> But note that these long/int warnings are especially obnoxious: as >> they are given by the parser you cannot turn them off with a >> filterwarning in the module itself, you have to go out and find each >> and every module importing them. > > A patch contributing a future import statement would be appreciated. Someone will have to give me a hand with this: I could probably figure out how normal future imports work (although I've never done one), but this is one of those hairy ones that needs a hook in the parser. And that's an area of Python that has always frightened me to no end... By the way, on the other part of the patch, the format specifiers: the discussion last July petered out before there was consensus on the format chars needed. One option was to add "k" to mean "uint32", and possibly, for completeness' sake, "Q" to mean uint64. Another option was to add "k1", "k2", "k4" and "k8", to mean uint8, uint16, uint32 and uint64. "k1" and "k2" would be synonyms for "B" and "H", but this would make the k-format-family consistent. -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From guido@python.org Fri Nov 29 14:56:01 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 29 Nov 2002 09:56:01 -0500 Subject: [Python-Dev] Classmethod Help In-Reply-To: Your message of "Fri, 29 Nov 2002 13:53:57 +0100." References: Message-ID: <200211291456.gATEu1808876@pcp02138704pcs.reston01.va.comcast.net> > It's not a particularly sane piece of code, and I'm not saying the > code should _work_, but I'm not so sure a SystemError is appropriate > here. Indeed. PyErr_BadInternalCall() should only be used for cases where it's certain that the bad argument must have been created by a broken piece of C code. Poor Python code should never be allowed to get this invoked. (There are more violations of this principle, but that's the principle nevertheless, and those violations are just that.) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Fri Nov 29 15:04:23 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 29 Nov 2002 10:04:23 -0500 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Your message of "29 Nov 2002 14:45:32 +0100." References: Message-ID: <200211291504.gATF4Ni08944@pcp02138704pcs.reston01.va.comcast.net> > A patch contributing a future import statement would be appreciated. If we don't have a solution that is satisfactory for Jack and Mark, we'll have to disable the warnings for the 2.3a1 release. I think it's not acceptable to issue warnings without providing a reasonable way to disable them. For the 2.3 final release (and even for 2.3b1) I want the warnings back on, but by then a solution for Jack's and Mark's problem will definitely have to be in place. Personally, I think there's nothing wrong with Jack & Mark regenerating their hex constants with a trailing 'L', even though that becomes redundant in 2.4. But who knows when 2.4 will be out.
Trailing 'L' won't become illegal until 3.0. It seems that that would silence the warnings at the cost of breaking the calls, since PyArg_ParseTuple still does range checking. At least Jack knows how to also regenerate the C code that gets called. But I also have no problem with a __future__ statement. Look in Python 2.2 for how the "yield" keyword support was done. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin@v.loewis.de Fri Nov 29 15:26:48 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Fri, 29 Nov 2002 16:26:48 +0100 Subject: [Python-Dev] int/long FutureWarning References: <200211291504.gATF4Ni08944@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <007101c297bb$bc262450$6d29e8d9@mira> > Personally, I think there's nothing wrong with Jack & Mark > regenerating their hex constants with a trailing 'L', even though that > becomes redundant in 2.4. I understand that this is Jack's primary complaint: Not that it is impossible to do that, but he doesn't like adding the L. Whether this objection is for aesthetic reasons, or because it is tedious to add the L, or whether he objects for other reasons, I don't know. Regards, Martin From martin@v.loewis.de Fri Nov 29 15:41:28 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Fri, 29 Nov 2002 16:41:28 +0100 Subject: [Python-Dev] int/long FutureWarning References: <5D849FDA-03AA-11D7-97A4-0030655234CE@cwi.nl> Message-ID: <007901c297bd$c8927610$6d29e8d9@mira> > Someone will have to give me a hand with this: I could probably figure > out how normal future imports work (although I've never done one), > but this is one of those hairy ones that needs a hook in the parser. > And that's an area of Python that has always frightened me to no end... I think Guido's recommendation of looking at the yield processing is not that helpful; yield is added as a keyword in the parser, whereas the FutureWarning occurs in compile.c. So you need to add the feature to compile.c:future_check_features, and then just check for the flag around the place where the FutureWarning is raised (and interpret the constant as long if the feature is set). > Another option was to add "k1", "k2", "k4" and "k8", to mean > uint8, uint16, uint32 and uint64. "k1" and "k2" would be synonyms > for "B" and "H", but this would make the k-format-family consistent. Since this is the proposal that MAL just came up with also, it seems to be a manifest idea (unless it was also MAL who proposed this the last time around, in which case it might only be manifest to him). If that meets all requirements, go for it - I'm just not sure how I would use it for plain int, since I don't know its size in advance. Regards, Martin From Jack.Jansen@cwi.nl Fri Nov 29 16:11:47 2002 From: Jack.Jansen@cwi.nl (Jack Jansen) Date: Fri, 29 Nov 2002 17:11:47 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <007901c297bd$c8927610$6d29e8d9@mira> Message-ID: <42807616-03B5-11D7-97A4-0030655234CE@cwi.nl> On Friday, Nov 29, 2002, at 16:41 Europe/Amsterdam, Martin v. Löwis wrote: >> Another option was to add "k1", "k2", "k4" and "k8", to mean >> uint8, uint16, uint32 and uint64. "k1" and "k2" would be synonyms >> for "B" and "H", but this would make the k-format-family consistent.
> > Since this is the proposal that MAL just came up with also, it seems to > be a manifest idea (unless it was also MAL who proposed this the last > time around, in which case it might only be manifest to him). If that > meets all requirements, go for it - I'm just not sure how I would use it > for plain int, since I don't know its size in advance. I grabbed it from the old discussion, so there's a good chance MAL came up with it at that time. But Guido mumbled something about "k1"=="B" and "k2"=="H" and that being overkill, and then the discussion stopped. Hence my question: is k1/k2/k4/k8 what it's going to be? -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From martin@v.loewis.de Fri Nov 29 16:19:00 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: Fri, 29 Nov 2002 17:19:00 +0100 Subject: [Python-Dev] int/long FutureWarning References: <42807616-03B5-11D7-97A4-0030655234CE@cwi.nl> Message-ID: <009a01c297c3$0739ca80$6d29e8d9@mira> > I grabbed it from the old discussion, so there's a good chance MAL came > up with it at that time. But Guido mumbled something about "k1"=="B" and > "k2"=="H" and that being overkill, and then the discussion stopped. > Hence my question: is k1/k2/k4/k8 what it's going to be? +1. I personally don't care about the duplication, since I expect that both the implementation and the documentation can be quite compact and treat all sizes uniformly, so having one format more or less doesn't matter. Regards, Martin From bkline@rksystems.com Fri Nov 29 16:48:03 2002 From: bkline@rksystems.com (Bob Kline) Date: Fri, 29 Nov 2002 11:48:03 -0500 (EST) Subject: [Python-Dev] Getting python-bz2 into 2.3 Message-ID: Earlier this month, Tim P. wrote: > Semi-unfortunately, the author of that [bzip2] has > > no idea if it actually works on 95/98/ME/NT/XP > > and in the docs for "3.8 Making a Windows DLL" > > I haven't tried any of this stuff myself, but it all looks > plausible. > > That means it will require some real work to build and test this > stuff on 6 flavors of Windows. Not a showstopper, but does raise > the bar for getting into the PLabs Windows distro. Sorry if I'm way off base here, but does the underlying bzip2 package have to be in a DLL, or can't that be built as a static library, which gets linked into the .pyd, which *is* a DLL? In either case, it doesn't seem like it would be very difficult to create whatever flavor library is needed. The code for bzip2 seems to be very portably written. The python-bz2 code, on the other hand, needed a little bit of tweaking to make it compile with Microsoft's compiler. I'll be happy to help in whatever way would be useful in dealing with the "raised bar," as the prospect of having Python support on all platforms for bz2 compression (and tarfiles) is very appealing. -- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From tim_one@email.msn.com Fri Nov 29 16:48:36 2002 From: tim_one@email.msn.com (Tim Peters) Date: Fri, 29 Nov 2002 11:48:36 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: Message-ID: [Bob Kline] > Earlier this month, Tim P. wrote: > >> Semi-unfortunately, the author of that [bzip2] has >> >> no idea if it actually works on 95/98/ME/NT/XP >> >> and in the docs for "3.8 Making a Windows DLL" >> >> I haven't tried any of this stuff myself, but it all looks >> plausible. >> >> That means it will require some real work to build and test this >> stuff on 6 flavors of Windows.
Not a showstopper, but does raise >> the bar for getting into the PLabs Windows distro. > Sorry if I'm way off base here, but does the underlying bzip2 package > have to be in a DLL, or can't that be built as a static library, which > gets linked into the .pyd, which *is* a DLL? In either case, it doesn't > seem like it would be very difficult to create whatever flavor library > is needed. The code for bzip2 seems to be very portably written. > > The python-bz2 code, on the other hand, needed a little bit of tweaking > to make it compile with Microsoft's compiler. > > I'll be happy to help in whatever way would be useful in dealing with > the "raised bar," as the prospect of having Python support on all > platforms for bz2 compression (and tarfiles) is very appealing. This work has already been completed; current CVS Python has full bz2 support for Windows. From bkline@rksystems.com Fri Nov 29 17:07:37 2002 From: bkline@rksystems.com (Bob Kline) Date: Fri, 29 Nov 2002 12:07:37 -0500 (EST) Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: Message-ID: On Fri, 29 Nov 2002, Tim Peters wrote: > > I'll be happy to help in whatever way would be useful in dealing with > > the "raised bar," as the prospect of having Python support on all > > platforms for bz2 compression (and tarfiles) is very appealing. > > This work has already been completed; current CVS Python has full > bz2 support for Windows. Excellent! Can't ask for better turnaround than that. Thanks! -- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From gward@python.net Fri Nov 29 18:27:39 2002 From: gward@python.net (Greg Ward) Date: Fri, 29 Nov 2002 13:27:39 -0500 Subject: [Python-Dev] Proposed changes to linuxaudiodev Message-ID: <20021129182739.GA23142@cthulhu.gerg.ca> Hi all -- I've been hacking sporadically on the linuxaudiodev module for the last couple of days. Initial results are in patch #645786 (www.python.org/sf/645786). The main goal of this patch is to make linuxaudiodev objects a thinner wrapper around the underlying OSS device driver, while still providing some highish-level convenience methods -- see my comments in the patch for details. There's a slight theoretical risk of backwards incompatibility, though. Since this module has never been documented, and since its current behaviour is such that doing anything really funky is severely curtailed, I very much doubt this will be a problem. Is anyone doing anything with linuxaudiodev more sophisticated than playing silly beep sounds? Should I ask around on python-list to be sure I won't ruin anyone's day with this change? Oh yeah: someone noted in the CVS history that the module is misnamed. This is true; it should probably have been called ossaudiodev -- OSS is the current standard audio API used by Linux and, I think, various BSDs. It's also available for a bunch of commercial Unices. Greg -- Greg Ward http://www.gerg.ca/ Just because you're paranoid doesn't mean they *aren't* out to get you. From Jack.Jansen@oratrix.com Fri Nov 29 21:42:10 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Fri, 29 Nov 2002 22:42:10 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <42807616-03B5-11D7-97A4-0030655234CE@cwi.nl> Message-ID: <6A306354-03E3-11D7-AC2C-000A27B19B96@oratrix.com> On vrijdag, nov 29, 2002, at 17:11 Europe/Amsterdam, Jack Jansen wrote: > I grabbed it from the old discussion, so there's a good chance MAL > came up with > it at that time. 
it at that time. But Guido mumbled something about "k1"=="B" and > "k2"=="H" and that being overkill, and then the discussion stopped. Hence my > question: is k1/k2/k4/k8 what it's going to be? Cycling home I realized that this may interfere with another plan I have: release an add-on MacPython-OSX distribution for Apple's /usr/bin/python. This would basically be a distribution of the stuff in the 2.3 Mac subtree, which would graft itself on Apple's Python 2.2. At the moment this is possible, as the 2.3 Mac subtree is compatible with a 2.2 core. And MacPython-OSX 2.2+ is important to me, as it is part of my strategy for World Domination (by Python, not by me:-). Mark, do you still distribute PythonWin for multiple base Python versions? Would using new PyArg_ParseTuple format specifiers interfere with that? If not, how would you solve it? -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From Jack.Jansen@oratrix.com Fri Nov 29 21:52:32 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Fri, 29 Nov 2002 22:52:32 +0100 Subject: [Python-Dev] Re: [Python-checkins] python/dist/src/Python import.c,2.210,2.211 In-Reply-To: Message-ID: On vrijdag, nov 29, 2002, at 21:47 Europe/Amsterdam, jvr@users.sourceforge.net wrote: > Update of /cvsroot/python/python/dist/src/Python > In directory sc8-pr-cvs1:/tmp/cvs-serv20813/Python > > Modified Files: > import.c > Log Message: > Slightly improved version of patch #642578: "Expose > PyImport_FrozenModules > in imp". This adds two functions to the imp module: get_frozenmodules() > and set_frozenmodules(). Something that's been bothering me about frozen modules in the classical sense (i.e. those that are stored in C static data structures) is that the memory used by them is gone without any chance at recovery. For big frozen Python programs that are to be run on small machines this is a waste of precious memory. With modules "frozen" with set_frozenmodules you could conceivably free the data again after it has been imported (similar to what MacPython-OS9 does with modules "frozen" in "PYC " resources). Would that be worth the added complexity? -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From mhammond@skippinet.com.au Fri Nov 29 22:28:53 2002 From: mhammond@skippinet.com.au (Mark Hammond) Date: Sat, 30 Nov 2002 09:28:53 +1100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: [Martin] > "Mark Hammond" writes: > > > I am sure I am missing something, but I can't see it. I am > > ready to feel foolish . It would certainly be easier > > for me to work on such a strategy than to convert all my > > extensions. > It is a violation of the principle > > Errors should never pass silently. > Unless explicitly silenced. Ah-ha - elementary now you point it out > If you have a function > > my_account.add_amount_of_money(0x80000000L) > > and the implementation is a C function with an "i" converter, this > currently gives an indication of the error. Under your proposed > change, it would remove money from the account. True. But can you come up with a reasonable example? A few of us have this problem, and almost everyone has agreed that hex constants are generally used in operations where the numeric value is not important, but the bit pattern is.
I haven't seen a real example of where your scenario actually exists - not to mention that any such code would have to be new (as that constant does not work with "i" today) so therefore the author should use the new correct type code! I'm not sure if you are playing devil's advocate, or your contempt for our problem is real? > If we think that the case "large numbers are bitmaps" is more common > than the case "large numbers are large numbers", we should drop the > range check for the existing converters, and add new converters in > case somebody wants large numbers instead of bitmaps, which then do > the range checks. No - no one is trying to say any of that at all. The problem is simply breaking existing code. That code was *not* broken in the first place. We understand the rationale - just we also have lots of existing code. [Jack] > This would be ideal, of course. But note that for me changing the > format char is a minor issue, as long as the behavior of the new > format char is as expected ... > But of course I don't know how this would be for Mark... Well, I wouldn't be happy, but it wouldn't kill me, but would have to be a semi-blind replace to keep me sane (Maybe I would just write a script to convert all my .py files with hex constants to decimal - the bitwise ops will still get me, but locating them may be OK.) Of course, someone will have to update PyXPCOM in the Mozilla tree, and anything else using hex constants in that way but not actively maintained may struggle and become unavailable for a little while. Plenty of extension authors not on python-dev (including plenty of home-grown extensions taking win32con constants, for example), and obviously I can't speak for them. And as Jack's latest mail points out, supporting multiple versions of Python becomes almost impossible. The cost is real, and not able to be borne only by people on this list. Ultimately though, the end result is likely to be everyone just mass-replaces "i" with "?" in any set of code where they *may* have the problem - but carefully avoiding code where hex constants are passed as currency values. Mark. From Jack.Jansen@oratrix.com Fri Nov 29 23:55:43 2002 From: Jack.Jansen@oratrix.com (Jack Jansen) Date: Sat, 30 Nov 2002 00:55:43 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: Message-ID: <1283719B-03F6-11D7-AC2C-000A27B19B96@oratrix.com> On vrijdag, nov 29, 2002, at 23:28 Europe/Amsterdam, Mark Hammond wrote: > [Jack] >> This would be ideal, of course. But note that for me changing the >> format char is a minor issue, [...] > And as Jack's > latest mail points out, supporting multiple versions of Python becomes > almost impossible. Please note that I now also think that changing the format char is *not* a minor issue (since I'm aware of the impossibility of backward compatibility). Survivable, okay, but definitely very bothersome. How about taking a completely different angle on this matter, and looking at PyArg_Parse itself? If we can keep PyArg_Parse 100% backward compatible (which would mean that its "i" format would take any IntObject or LongObject between -2**31 and 2**32-1) and introduce a new (preferred) way to parse arguments that not only does the right thing by being expressive enough to make a difference between "currency integers" and "bitmap integers", but also cleans up the incredible amount of cruft that PyArg_Parse has accumulated over the years?
We could get rid of the prefix and suffix characters in the format, of the fact that essential parts of the process are not in the format but hidden in the argument list (O& and friends), of the fact that you cannot represent the format in Python (hmm, that's more-or-less the same problem as the previous one), of the unicode conversion problems (by having a structure like x = PyArg_GetArgs(...) ... PyArg_ReleaseArgs(x)) and probably of many more problems... -- - Jack Jansen http://www.cwi.nl/~jack - - If I can't dance I don't want to be part of your revolution -- Emma Goldman - From martin@v.loewis.de Sat Nov 30 00:03:51 2002 From: martin@v.loewis.de (Martin v. =?iso-8859-15?q?L=F6wis?=) Date: 30 Nov 2002 01:03:51 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: References: Message-ID: "Mark Hammond" writes: > True. But can you come up with a reasonable example? A few of us have this > problem, and almost everyone has agreed that hex constants are generally > used in operations where the numeric value is not important, but the bit > pattern is. The problem is that you won't know whether an integer was hex. If I do f = open("/tmp/large","w") f.seek(4278190080) (which is 0xff000000l) then, under your proposed change, this might seek to a negative offset on some systems, instead of the positive offset. Likewise [None] * (1l << 32) now gives an OverflowError. With your proposed change, it gives [] (actually, all calls to __mul__ will change their behaviour). > I haven't seen a real example of where your scenario actually exists > - not to mention that any such code would have to be new (as that > constant does not work with "i" today) so therefore the author > should use the new correct type code! See, this is a central mistake: Under your proposed change, existing code *will* change. Whether or not the constant was original hexadecimal, or whether it was a constant at all, is not known at the time ParseTuple makes its decision. > I'm not sure if you are playing devil's advocate, or your contempt > for our problem is real? At the moment, I just want everybody to understand what the implications of proposed changes are. It seems we are still in disagreement about facts. I don't know whether dropping the OverflowError test would cause real problems, since existing code likely does not trigger these errors (or else it would not work correctly now). However, I do believe that extension authors *will* complain if their extension crashes if Python has failed to check a range, and converted a positive value to a negative one. Some underlying C library might react badly when it gets large negative numbers. > No - no one is trying to say any of that at all. The problem is > simply breaking existing code. That code was *not* broken in the > first place. We understand the rationale - just we also have lots > of existing code. The question is what the proposed correction should be. I'm in favour of using stricter semantics when the semantics changes (i.e. give more exceptions than before); authors of applications that can accept to drop some of the built-in tests then need to make explicit changes to drop them. You seem to favour the reverse policy: drop the stronger checks by default, and give authors who want them an option to add them back. The problem with this strategy is that authors might not know that they have to change anything. With my strategy, they will find out automatically. 
> Well, I wouldn't be happy, but it wouldn't kill me, but would have to be a > semi-blind replace to keep me sane (Maybe I would just write a script to > convert all my .py files with hex constants to decimal - the bitwise ops > will still get me, but locating them may be OK.) That's what I did with h2py.py. It now generates decimal constants to silence the warning, and a unary - to get the negative value, since I cannot guess whether the constant really was meant to be negative or not. > Of course, someone will have to update PyXPCOM in the Mozilla tree, > and anything else using hex constants in that way but not actively > maintained may struggle and become unavailable for a little while. That's why Python 2.3 adds the warning. There is no change to semantics, and over a year of time to make modifications (when you update to Python 2.4). > Plenty of extension authors not on python-dev (including plenty of > home-grown extensions taking win32con constants, for example), and > obviously I can't speak for them. And as Jack's latest mail points > out, supporting multiple versions of Python becomes almost > impossible. The cost is real, and not able to be borne only by > people on this list. If you can't afford to break code now, accept the warning. If you don't like the warning, silence it with a global filter. I estimate that Python users confronted with the warning will go through the same process as readers of this list: - What is this warning? - I want constants to be int. Can I have that back please? (answer: at the moment, they are still int. In the future, they will be long, and rightfully so). - Ok, what do I do? (answer: if you want negative constants, write a positive constant, and add a unary -. If you want a positive constant, add an L). - I don't care whether they are positive or negative. (answer: then add an L) - I don't want to add that many Ls. (answer: then use the future import) - This will break my C modules which will give OverflowErrors. (answer: use new formatting codes) - This will break backwards compatibility. (answer: this is what the warning is for. Breaking backwards compatibility cannot be avoided. The warning warns you that something *will* break at some point. It is your choice what breaks, and at what point. Reconsider whether you don't care whether the constants are positive or negative). When users truly understand the issues, and can make an educated decision about what to do, the warning has served its purpose. Regards, Martin From martin@v.loewis.de Sat Nov 30 00:26:57 2002 From: martin@v.loewis.de (Martin v. Löwis) Date: 30 Nov 2002 01:26:57 +0100 Subject: [Python-Dev] int/long FutureWarning In-Reply-To: <1283719B-03F6-11D7-AC2C-000A27B19B96@oratrix.com> References: <1283719B-03F6-11D7-AC2C-000A27B19B96@oratrix.com> Message-ID: Jack Jansen writes: > How about taking a completely different angle on this matter, and > looking at PyArg_Parse itself? If we can keep PyArg_Parse 100% > backward compatible (which would mean that its "i" format would take > any IntObject or LongObject between -2**31 and 2**32-1) and introduce a > new (preferred) way to parse arguments that not only does the right > thing by being expressive enough to make a difference between > "currency integers" and "bitmap integers", but also cleans up the > incredible amount of cruft that PyArg_Parse has accumulated over the > years? I had a similar idea, so I'd encourage you to spell out your proposal in more detail, or even in an implementation.
My idea was to provide a ParseTuple wrapper, which would be like

int PyArg_ParseTupleLenient(PyObject *args, char *format, ...)
{
    char format1[200];
    int retval, i;
    va_list va;

    /* copy the format string, mapping each strict format code to its
       lenient equivalent; lenient_format() is the mapping helper */
    for (i = 0; format[i]; i++)
        format1[i] = lenient_format(format[i]);
    format1[i] = '\0';    /* terminate the copied format string */

    va_start(va, format);
    retval = PyArg_VaParse(args, format1, va);
    va_end(va);
    return retval;
}

This would replace the "i" format with a "k" format, and perhaps make other changes to the format. Those of you needing to support older Python releases could #define PyArg_ParseTupleLenient PyArg_ParseTuple in your distribution, or provide other appropriate wrappers. Regards, Martin From guido@python.org Sat Nov 30 01:33:02 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 29 Nov 2002 20:33:02 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: Your message of "Fri, 29 Nov 2002 11:48:03 EST." References: Message-ID: <200211300133.gAU1X2u09904@pcp02138704pcs.reston01.va.comcast.net> > Earlier this month, Tim P. wrote: > > > Semi-unfortunately, the author of that [bzip2] has > > > > no idea if it actually works on 95/98/ME/NT/XP > > > > and in the docs for "3.8 Making a Windows DLL" > > > > I haven't tried any of this stuff myself, but it all looks > > plausible. > > > > That means it will require some real work to build and test this > > stuff on 6 flavors of Windows. Not a showstopper, but does raise > > the bar for getting into the PLabs Windows distro. Tim & I have different beliefs here. Tim believes that if something works on one flavor of Windows, that says nothing about the other flavors. I believe that when it works on one flavor, you can assume that it works on all, unless proof to the contrary is shown. I'm sure we both have experience to back this up. :-) > Sorry if I'm way off base here, but does the underlying bzip2 package > have to be in a DLL, or can't that be built as a static library, which > gets linked into the .pyd, which *is* a DLL? In either case, it doesn't > seem like it would be very difficult to create whatever flavor library > is needed. The code for bzip2 seems to be very portably written. > > The python-bz2 code, on the other hand, needed a little bit of tweaking > to make it compile with Microsoft's compiler. > > I'll be happy to help in whatever way would be useful in dealing with > the "raised bar," as the prospect of having Python support on all > platforms for bz2 compression (and tarfiles) is very appealing. If you could get Python from CVS, build it with MSVC 6.0 for Windows (elaborate instructions are in PCBuild/readme.txt!!!), and see if the bz2 module works on all flavors of Windows to which you have access, that would be tremendously helpful IMO. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@python.org Sat Nov 30 01:37:11 2002 From: guido@python.org (Guido van Rossum) Date: Fri, 29 Nov 2002 20:37:11 -0500 Subject: [Python-Dev] Proposed changes to linuxaudiodev In-Reply-To: Your message of "Fri, 29 Nov 2002 13:27:39 EST." <20021129182739.GA23142@cthulhu.gerg.ca> References: <20021129182739.GA23142@cthulhu.gerg.ca> Message-ID: <200211300137.gAU1bB209985@pcp02138704pcs.reston01.va.comcast.net> > I've been hacking sporadically on the linuxaudiodev module for the last > couple of days. Initial results are in patch #645786 > (www.python.org/sf/645786).
The main goal of this patch is to make > linuxaudiodev objects a thinner wrapper around the underlying OSS device > driver, while still providing some highish-level convenience methods -- > see my comments in the patch for details. > > There's a slight theoretical risk of backwards incompatibility, though. > Since this module has never been documented, and since its current > behaviour is such that doing anything really funky is severely > curtailed, I very much doubt this will be a problem. Is anyone doing > anything with linuxaudiodev more sophisticated than playing silly beep > sounds? Should I ask around on python-list to be sure I won't ruin > anyone's day with this change? > > Oh yeah: someone noted in the CVS history that the module is misnamed. > This is true; it should probably have been called ossaudiodev -- OSS is > the current standard audio API used by Linux and, I think, various BSDs. > It's also available for a bunch of commercial Unices. I recommend that you simply check in your new code as ossaudiodev and we'll mark the linuxaudiodev module as deprecated. --Guido van Rossum (home page: http://www.python.org/~guido/) From bkline@rksystems.com Sat Nov 30 04:32:01 2002 From: bkline@rksystems.com (Bob Kline) Date: Fri, 29 Nov 2002 23:32:01 -0500 (EST) Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <200211300133.gAU1X2u09904@pcp02138704pcs.reston01.va.comcast.net> Message-ID: On Fri, 29 Nov 2002, Guido van Rossum wrote: > > I'll be happy to help in whatever way would be useful in dealing with > > the "raised bar," as the prospect of having Python support on all > > platforms for bz2 compression (and tarfiles) is very appealing. > > If you could get Python from CVS, build it with MSVC 6.0 for Windows > (elaborate instructions are in PCBuild/readme.txt!!!), and see if > the bz2 module works on all flavors of Windows to which you have > access, that would be tremendously helpful IMO. If MSVC 7.0 isn't at least as good, I'll find a machine which still has 6.0 on it (or reinstall it on one of our development systems), but I pulled down the sources from CVS and built it with MSC++ 7 and the all 28 tests in the bz2 test suite pass on both Windows 2000 and Windows XP. Should I dig up a MSC6 system, or is this just as good? -- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From brett@python.org Sat Nov 30 04:17:35 2002 From: brett@python.org (Brett Cannon) Date: Fri, 29 Nov 2002 20:17:35 -0800 (PST) Subject: [Python-Dev] Can someone look at dummy_thread (#622537)? Message-ID: There is a slight chance some of you remember the discussion that Zack Weinberg brought up here back in September about his experiences in rewriting ``tempfile``. One of the things he brought up was having to write his own fake lock so that the code would work when ``thread`` was not available. Guido suggested writing a module called ``dummy_thread`` that was API compatible with ``thread`` but was available on all platforms; he said he would check it in if it was written. Well, I wrote it. But Guido has told me that he is too busy to check it in and so I am emailing you guys to see if someone can look at it and check it in. I would like to get it in before 2.3a so as to get the extra testing the alpha will get. The patch is #622537 ( http://www.python.org/sf/622537 ). It is complete to the best of my abilities. I have docs and the module and the tests and such. 
Should be pretty straight-forward to apply assuming I didn't muck up the docs (first attempt at doing any docs from scratch). The only thing beyond quality of code and documentation that might have to be dealt with is how far to integrate ``dummy_thread`` into the stdlib. I have a patch to have ``Queue``, ``threading``, and ``tempfile`` all use ``dummy_thread`` when ``thread`` is not available. Now I tested all of them with and without ``thread`` available and they all pass. But of course I wonder if this will break any code. The problem is that since ``dummy_thread`` basically just executes thread calls serially there is a problem when sommething like blocking I/O is used that blocks waiting for another thread (test_asynchat and test_socket do that, but they require ``thread`` so there is no issue there outright). So there is a chance that some code, when trying to run using ``threading`` will lock up. But I don't think this is an issue. I say in the docs that if you want to guarantee true threading to run ``import thread; del thread`` before importing ``threading``. But the main reason I don't think it will be an issue is that all other code that runs fine (but slowly) using ``dummy_thread`` now can be run where threads are not available. And those programs where deadlock would occur do not lose or gain anything since they couldn't run on a platform without ``thread`` anyway. The only issue is that if they don't check for ``thread`` someone trying to run the program will have a deadlocked program (although they shouldn't have been trying to run it in the first place). This is the only real questionable thing. Obviously the module can be checked in with no problem without touching the other modules (although ``Queue`` and ``tempfile`` should be patched regardless) if this seems to dicey to do. -Brett From bkline@rksystems.com Sat Nov 30 08:19:34 2002 From: bkline@rksystems.com (Bob Kline) Date: Sat, 30 Nov 2002 03:19:34 -0500 (EST) Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: Message-ID: On Fri, 29 Nov 2002, Bob Kline wrote: > On Fri, 29 Nov 2002, Guido van Rossum wrote: > > > > I'll be happy to help in whatever way would be useful in dealing with > > > the "raised bar," as the prospect of having Python support on all > > > platforms for bz2 compression (and tarfiles) is very appealing. > > > > If you could get Python from CVS, build it with MSVC 6.0 for Windows > > (elaborate instructions are in PCBuild/readme.txt!!!), and see if > > the bz2 module works on all flavors of Windows to which you have > > access, that would be tremendously helpful IMO. > > If MSVC 7.0 isn't at least as good, I'll find a machine which still > has 6.0 on it (or reinstall it on one of our development systems), > but I pulled down the sources from CVS and built it with MSC++ 7 and > the all 28 tests in the bz2 test suite pass on both Windows 2000 and > Windows XP. Should I dig up a MSC6 system, or is this just as good? I found a Windows 2000 Server that still had Visual Studio 6.0 on it. Pulled down the latest CVS sources, built Python (Release) with the bz2 module, and ran the test suite, which again passed. 
So far, therefore: W2K Pro + MSC7: passed all tests WXP Pro + MSC7: passed all tests W2K Server + MSC6: passed all tests -- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From tim_one@email.msn.com Sat Nov 30 12:54:37 2002 From: tim_one@email.msn.com (Tim Peters) Date: Sat, 30 Nov 2002 07:54:37 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: <200211300133.gAU1X2u09904@pcp02138704pcs.reston01.va.comcast.net> Message-ID: [Guido] > Tim & I have different beliefs here. Tim believes that if something > works on one flavor of Windows, that says nonthing about the other > flavors. I believe that when it works on one flavor, you can assume > that it works on all, unless proof to the contrary is shown. I'm sure > we both have experience to back this up. :-) It depends on whether "something" relies on behavior that's more due to the OS than to C. It's really the same as whether something that works on one flavor of Unix works on all other flavors of Unix. The latest Windows example was just last week, where someone's "clever" use of the new tempfile.NamedTemporaryFile in the Zope test suite worked fine on Win9X but died with an IOError on Win2K, and couldn't be fixed short of rewriting that part of the code from scratch. The bz2 code, and especially its test suite, had Unix-specific stuff at first, but I didn't see anything plausibly Windows-flavor-specific in it (or Unix-flavor-specific either). > ... > If you could get Python from CVS, build it with MSVC 6.0 for Windows > (elaborate instructions are in PCBuild/readme.txt!!!), and see if the > bz2 module works on all flavors of Windows to which you have access, > that would be tremendously helpful IMO. Yes it would. You can't know for sure until it's tried. Don't forget the French and German variants of Windows too . From guido@python.org Sat Nov 30 15:31:03 2002 From: guido@python.org (Guido van Rossum) Date: Sat, 30 Nov 2002 10:31:03 -0500 Subject: [Python-Dev] Getting python-bz2 into 2.3 In-Reply-To: Your message of "Fri, 29 Nov 2002 23:32:01 EST." References: Message-ID: <200211301531.gAUFV4T12368@pcp02138704pcs.reston01.va.comcast.net> > > > I'll be happy to help in whatever way would be useful in dealing with > > > the "raised bar," as the prospect of having Python support on all > > > platforms for bz2 compression (and tarfiles) is very appealing. > > > > If you could get Python from CVS, build it with MSVC 6.0 for Windows > > (elaborate instructions are in PCBuild/readme.txt!!!), and see if > > the bz2 module works on all flavors of Windows to which you have > > access, that would be tremendously helpful IMO. > > If MSVC 7.0 isn't at least as good, I'll find a machine which still has > 6.0 on it (or reinstall it on one of our development systems), but I > pulled down the sources from CVS and built it with MSC++ 7 and the all > 28 tests in the bz2 test suite pass on both Windows 2000 and Windows XP. > Should I dig up a MSC6 system, or is this just as good? More testing is always welcome; if you could test-drive the instructions in PCbuild/readme.txt for Tcl/Tk and Sleepycat Berkeley DB on multiple platforms that would be super. I have understood (from Tim, again) that we can't upgrade to MSVC 7.0, because the runtime is incompatible in some places, and we don't want to break existing add-on distributions that are compiled with 6.0 -- not every open source developer can afford to upgrade. 
--Guido van Rossum (home page: http://www.python.org/~guido/) From gward@python.net Sat Nov 30 16:48:18 2002 From: gward@python.net (Greg Ward) Date: Sat, 30 Nov 2002 11:48:18 -0500 Subject: [Python-Dev] Proposed changes to linuxaudiodev In-Reply-To: <200211300137.gAU1bB209985@pcp02138704pcs.reston01.va.comcast.net> References: <20021129182739.GA23142@cthulhu.gerg.ca> <200211300137.gAU1bB209985@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <20021130164818.GA25894@cthulhu.gerg.ca> On 29 November 2002, Guido van Rossum said: > I recommend that you simply check in your new code as ossaudiodev and > we'll mark the linuxaudiodev module as deprecated. Sounds fine to me. Should I worry about adding an official DeprecationWarning to linuxaudiodev.c? Or does that wait until 2.4? Greg -- Greg Ward http://www.gerg.ca/ Jesus Saves -- but Buddha gets the rebound -- he shoots -- he SCORES!!! From guido@python.org Sat Nov 30 19:58:37 2002 From: guido@python.org (Guido van Rossum) Date: Sat, 30 Nov 2002 14:58:37 -0500 Subject: [Python-Dev] off to Zope3 sprintathon Message-ID: <200211301958.gAUJwbr13732@pcp02138704pcs.reston01.va.comcast.net> I'm off with Jim Fulton to Rotterdam to celebrate Sinterklaas and participate in the Zope3 Sprintathon: http://dev.zope.org/Wikis/DevSite/Projects/ComponentArchitecture/InfraeSprintathon I will have internet access but little time to check on my email. I'll be back at work (busy as ever :-) on December 9th. --Guido van Rossum (home page: http://www.python.org/~guido/)