From noreply at sourceforge.net Tue Jun 1 02:36:58 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 02:37:16 2004 Subject: [ python-Bugs-942952 ] Weakness in tuple hash Message-ID: Bugs item #942952, was opened at 2004-04-27 06:41 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=942952&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 6 Submitted By: Steve Tregidgo (smst) Assigned to: Nobody/Anonymous (nobody) Summary: Weakness in tuple hash Initial Comment: I've encountered some performance problems when constructing dictionaries with keys of a particular form, and have pinned the problem down to the hashing function. I've reproduced the effect in Python 1.5.2, Python 2.2 and Python 2.3.3. I came across this when loading a marshalled dictionary with about 40000 entries. Loading other dictionaries of this size has always been fast, but in this case I killed the interpreter after a few minutes of CPU thrashing. The performance problem was caused because every key in the dictionary had the same hash value. The problem is as follows: for hashable values X and Y, hash( (X, (X, Y)) ) == hash(Y). This is always true (except in the corner case where hash((X, Y)) is internally calculated to be -1 (the error value) and so -2 is forced as the return value). With data in this form where X varies more than Y (Y is constant, or chosen from relatively few values compared to X) chances of collision become high as X is effectively ignored. The hash algorithm for tuples starts with a seed value, then generates a new value for each item in the tuple by multiplying the iteration's starting value by a constant (keeping the lowest 32 bits) and XORing with the hash of the item. The final value is then XORed with the tuple's length. In Python (ignoring the careful business with -1): # assume 'my_mul' would multiply two numbers and return the low 32 bits value = seed for item in tpl: value = my_mul(const, value) ^ hash(item) value = value ^ len(tpl) The tuple (X, Y) therefore has hash value: my_mul(const, my_mul(const, seed) ^ hash(X)) ^ hash(Y) ^ 2 ...and the tuple (X, (X, Y)) has hash value: my_mul(const, my_mul(const, seed) ^ hash(X)) ^ hash((X, Y)) ^ 2 The outer multiplication is repeated, and is XORed with itself (cancelling both of them). The XORed 2s cancel also, leaving just hash(Y). Note that this cancellation property also means that the same hash value is shared by (X, (X, (X, (X, Y)))), and (X, (X, (X, (X, (X, (X, Y)))))), and so on, and (X, Z, (X, Z, Y)) and so on. I realise that choosing a hash function is a difficult task, so it may be that the behaviour seen here is a tradeoff against other desirable properties -- I don't have the expertise to tell. My naive suggestion would be that an extra multiplication is necessary, which presumably has a performance impact (multiplication being more expensive than XOR) but would reduce the possibility of cancellation. On the other hand, perhaps this particular case is rare enough that it's not worth the effort. For my own application I'm fortunate in that I can probably rearrange the data structure to avoid this case. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-01 01:36 Message: Logged In: YES user_id=80475 Applied a small, slightly cheaper (no second multiply) variation on Tim's original.
It is not as scientific looking, but it does assure changing, odd multipliers for each position. When Tim moved the XOR before the multiply, that fixed the OP's original problem. The new variation survives Tim's torture test, which I augmented a bit and put in the test suite for the benefit of future generations. See: Objects\tupleobject.c 2.91 Lib\test\test_tuple.py 1.3 ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-20 02:16 Message: Logged In: YES user_id=1033539 Re Tim's concern about cache hits: The trade-off is a slight risk of slight performance degradation when hashing very long tuples, versus the risk of complete app failure in certain rare cases like Steve's. I think it is worthwhile to use more constants. Anecdotally, I did not experience any qualitative slowdown while exercising these things with the unit tests. OTOH, I didn't time anything, and anyway I am using a creaky old Celeron. Another alternative: use two constants: x = (len & 0x1 ? 0xd76aa471 : 0xe8c7b75f) * (++x ^ y); That passes all of the current unit tests, including Tim's set. However, with that I am less confident that this bug won't come up again in the future. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-19 18:24 Message: Logged In: YES user_id=1033539 I uploaded my current diff and some unit tests to patch #957122. Please look over the unit tests carefully. I am sure there is much more we can do. Note that besides the addition of the table containing the constants, only three lines of code are changed from the original function. By inspection, this code should run at essentially the same speed as the old code on all platforms (tests anyone?) Based on the results of the unit tests, I had to do some tweaking. First of all, Tim is correct: constants that are divisible by any small prime (e.g., 2) are a problem. To be on the safe side, I replaced each md5 constant with a nearby prime. I ran into a problem with empty tuples nested in tuples. The seed kept getting xored with itself. It turns out that no matter how nice the md5 constants are, they all produce the same value when multiplied by zero. Find "++x" in the code to see the fix for this. Finally, I moved the xor with the tuple length from the end of the calculation to the beginning. That amplifies the effect very nicely. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-19 11:15 Message: Logged In: YES user_id=31435 If 250,050 balls are tossed into 2**32 bins at random, the mean number of occupied bins is ~250042.7 (== expected number of collisions is ~7.3), with std deviation ~2.7. While we're happy to get "better than random" distributions, "astronomically worse than random" distributions usually predict more of the same kind of problem reported later. That said, I certainly agree that would be a major, easy, and cheap improvement over the status quo. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 10:10 Message: Logged In: YES user_id=4771 Tim: > Most simple changes to the current algorithm don't > do significantly better, and some do much worse. The simple change suggested by ygale (put the xor before the multiply) does a quite reasonable job in your test suite. The number of collisions drops from 66966 to 1466.
It also solves the original problem with (x, (x,y)), which should not be ignored because it could naturally occur if someone is using d.items() to compute a hash out of a dict built by d[x] = x,y. With this change it seems more difficult to find examples of regular families of tuples that all turn out to have the same hash value. Such examples do exist: (x, (1684537936,x)) yields 2000004 for any x. But the value 1684537936 has been carefully constructed for this purpose, it's the only number to misbehave like this :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-17 21:14 Message: Logged In: YES user_id=31435 Nothing wrong with being "better than random" -- it's not the purpose of Python's hash() to randomize, but to minimize hash collisions. That's why, e.g., hash(int) returns int unchanged. Then when indexing a dict by any contiguous range of ints (excepting -1), there are no collisions at all. That's a good thing. There's nothing in the structure of my example function geared toward the specific test instances used. It *should* do a good job of "spreading out" inputs that are "close together", and that's an important case, and the test inputs have a lot of that. It's a good thing that it avoids all collisions across them. About speed, the only answer is to time both, and across several major platforms. An extra mult may or may not cost more than blowing extra cache lines to suck up a table of ints. About the table of ints, it's always a dubious idea to multiply by an even number. Because the high bits of the product are thrown away, multiplying by an even number throws away information needlessly (it throws away as many useful bits as there are trailing zero bits in the multiplier). odd_number * 69069**i is always odd, so the multiplier in the table-free way is never even (or, worse, divisible by 4, 8, 16, etc). ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-17 08:39 Message: Logged In: YES user_id=1033539 Here is the Python code for the hash function that I am describing: def hx(x): if isinstance(x, tuple): result = 0x345678L for n, elt in enumerate(x): y = (tbl[n & 0x3f] + (n >> 6)) & 0xffffffffL result = (y * (result ^ hx(elt))) & 0xffffffffL result ^= len(x) else: result = hash(x) return result # 64 constants taken from MD5 # (see Modules/md5c.c in the Python sources) tbl = ( -0x28955b88, -0x173848aa, 0x242070db, -0x3e423112, -0xa83f051, 0x4787c62a, -0x57cfb9ed, -0x2b96aff, 0x698098d8, -0x74bb0851, -0xa44f, -0x76a32842, 0x6b901122, -0x2678e6d, -0x5986bc72, 0x49b40821, -0x9e1da9e, -0x3fbf4cc0, 0x265e5a51, -0x16493856, -0x29d0efa3, 0x2441453, -0x275e197f, -0x182c0438, 0x21e1cde6, -0x3cc8f82a, -0xb2af279, 0x455a14ed, -0x561c16fb, -0x3105c08, 0x676f02d9, -0x72d5b376, -0x5c6be, -0x788e097f, 0x6d9d6122, -0x21ac7f4, -0x5b4115bc, 0x4bdecfa9, -0x944b4a0, -0x41404390, 0x289b7ec6, -0x155ed806, -0x2b10cf7b, 0x4881d05, -0x262b2fc7, -0x1924661b, 0x1fa27cf8, -0x3b53a99b, -0xbd6ddbc, 0x432aff97, -0x546bdc59, -0x36c5fc7, 0x655b59c3, -0x70f3336e, -0x100b83, -0x7a7ba22f, 0x6fa87e4f, -0x1d31920, -0x5cfebcec, 0x4e0811a1, -0x8ac817e, -0x42c50dcb, 0x2ad7d2bb, -0x14792c6f ) ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-17 08:09 Message: Logged In: YES user_id=1033539 Tim, your test fixture is very nice. 
I verified your result that there are no collisions over the fixture set for the algorithm you specified. Based on that result, I would recommend that we NOT use your algorithm. The probability of no collisions is near zero for 250000 random samples taken from a set of 2^32 elements. So the result shows that your algorithm is far from a good hash function, though it is constructed to be great for that specific fixture set. I ran my algorithm on your fixture set using the table of 64 constants taken from MD5. I got 13 collisions, a reasonable result. It is not true that I repeat multipliers (very often) - note that I increment the elements of the table each time around, so the sequence repeats itself only after 2^38 tuple elements. Also, my algorithm runs at essentially the same speed as the current one: no additional multiplies, only a few adds and increments and such. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-16 16:54 Message: Logged In: YES user_id=31435 I can't make more time for this, so unassigned myself. This seems to make a good set of test inputs: N = whatever base = range(N) xp = [(i, j) for i in base for j in base] inps = base + [(i, j) for i in base for j in xp] + [(i, j) for i in xp for j in base] When N == 50, inps contains 250,050 distinct elements, and 66,966 of them generate a duplicated hash code. Most simple changes to the current algorithm don't do significantly better, and some do much worse. For example, just replacing the xor with an add zooms it to 119,666 duplicated hash codes across that set. Here's a hash function that yields no collisions across that set: def hx(x): if isinstance(x, tuple): result = 0x345678L mult = 1000003 for elt in x: y = hx(elt) result = (((result + y) & 0xffffffffL) * mult) & 0xffffffffL mult = (mult * 69069) & 0xffffffffL result ^= len(x) if result == -1: result = -2 else: result = hash(x) return result In C, none of the "& 0xffffffffL" clauses are needed; in Python code, they're needed to cut results back to 32 bits. 69069 is a well-known multiplier for a better-than-most pure multiplicative PRNG. This does add a multiply per loop trip over what we have today, but it can proceed in parallel with the existing multiply. Unlike a canned table, it won't repeat multipliers (unless the tuple has more than a billion elements). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-16 16:09 Message: Logged In: YES user_id=31435 Noting that (X, (Y, Z)) and (Y, (X, Z)) hash to the same thing today too (modulo -1 endcases). For example, >>> hash((3, (4, 8))) == hash((4, (3, 8))) True >>> ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-16 07:25 Message: Logged In: YES user_id=1033539 Hmm, you are correct. This appears to be an off-by-one problem: the original seed always gets multiplied by the constant (which is silly), and the last item in the tuple does not get multiplied (which causes the bug). The correct solution is to change: value = my_mul(const, value) ^ hash(item) in Steve's pseudo-code to: value = my_mul(const, value ^ hash(item)) Of course, you still get a lot more robustness for almost no cost if you vary "const" across the tuple via a table.
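For readers following the thread, a small pure-Python model may help make the "xor before the multiply" change concrete. This is only a sketch of the algorithm as described in the comments above (using the 0x345678 seed and 1000003 multiplier that appear in Tim's snippet, truncating to 32 bits), not CPython's actual C implementation, and the leaf values chosen are arbitrary:

    def old_tuple_hash(t, hash_leaf):
        # pre-fix behaviour: multiply first, then XOR in the item hash
        value = 0x345678
        for item in t:
            value = ((1000003 * value) & 0xFFFFFFFF) ^ hash_leaf(item)
        return (value ^ len(t)) & 0xFFFFFFFF

    def new_tuple_hash(t, hash_leaf):
        # the variant discussed above: XOR the item hash in before multiplying
        value = 0x345678
        for item in t:
            value = (1000003 * (value ^ hash_leaf(item))) & 0xFFFFFFFF
        return (value ^ len(t)) & 0xFFFFFFFF

    def h(obj, tuple_hash):
        # recurse into nested tuples; use the builtin hash for leaves
        if isinstance(obj, tuple):
            return tuple_hash(obj, lambda o: h(o, tuple_hash))
        return hash(obj) & 0xFFFFFFFF

    X, Y = 123456, 7
    # the cancellation reported by the OP: hash((X, (X, Y))) == hash(Y)
    assert h((X, (X, Y)), old_tuple_hash) == h(Y, old_tuple_hash)
    # with the xor moved before the multiply the systematic cancellation is
    # gone; this prints False for virtually any choice of X and Y
    print(h((X, (X, Y)), new_tuple_hash) == h(Y, new_tuple_hash))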
---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-05-03 08:40 Message: Logged In: YES user_id=764593 Note that in this case, the problem was because of nested tuples. X was the first element of both tuples, and both tuples had the same length -- so the X's would still have been multiplied by the same constant. (Not the same constant as Y, and hash(X, Y), but the same constant as each other.) A non-linear function might work, but would do bad things to other data. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-01 21:50 Message: Logged In: YES user_id=1033539 I suggest leaving the function as is, but replacing the constant with a value that varies as we step through the tuple. Take the values from a fixed table. When we reach the end of the table, start again from the beginning and mung the values slightly (e.g., increment them). True, there will always be the possibilities of collisions, but this will make it very unlikely except in very weird cases. Using a table of constants is a standard technique for hashes. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-04-28 12:45 Message: Logged In: YES user_id=764593 Wouldn't that just shift the pathological case from (x, (x, y)) to (x, (-x, y))? My point was that any hash function will act badly on *some* pattern of input, and if a pattern must be penalized, this might be a good pattern to choose. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-28 11:36 Message: Logged In: YES user_id=80475 The OP was not referring to "some collisions"; his app collapsed all entries to a single hash value. Changing XOR to + would partially eliminate the self cancelling property of this hash function. Also, I am concerned about the tuple hash using the same multiplier as the hash for other objects. In sets.py, a naive combination of the component hash values caused many distinct sets to collapse to a handful of possibilities -- while tuples do not have an identical issue, it does highlight the risks involved. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-04-28 09:50 Message: Logged In: YES user_id=764593 Any hash will have some collisions. If there is going to be predictably bad data, this is probably a good place to have it. The obvious alternatives are a more complicated hash (slows everything down), a different hash for embedded tuples (bad, since hash can't be cached then) or ignoring some elements when determining the hash (bad in the normal case of different data). I would also expect your workaround of data rearrangement to be sensible almost any time (X, (X, Y)) is really a common case. (The intuitive meaning for me is "X - then map X to Y", which could be done as (X, Y) or at least (X, (None, Y)), or perhaps d[X]=(X,Y).) ---------------------------------------------------------------------- Comment By: Steve Tregidgo (smst) Date: 2004-04-27 06:45 Message: Logged In: YES user_id=42335 I'll repeat the tuple hashing algorithm in a fixed-width font (with the first line following "for..." 
being the loop body): # assume 'my_mul' would multiply two numbers and return the low 32 bits value = seed for item in tpl: value = my_mul(const, value) ^ hash(item) value = value ^ len(tpl) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=942952&group_id=5470 From noreply at sourceforge.net Tue Jun 1 02:59:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 03:01:37 2004 Subject: [ python-Bugs-962918 ] 2.3.4 can not be installed over 2.3.3 Message-ID: Bugs item #962918, was opened at 2004-05-29 23:54 Message generated for change (Comment added) made by zgoda You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962918&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jarek Zgoda (zgoda) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.4 can not be installed over 2.3.3 Initial Comment: When trying to install newly-compiled 2.3.4 over existing 2.3.3 I ran into "libinstall error" on zipfile.py. After removing the whole /usr/lib/python2.3 and installing from scratch the installation was successful. I think it's some omission from the installer script, since I didn't have any problem when I upgraded 2.3.0 to 2.3.2 and then to 2.3.3. ---------------------------------------------------------------------- >Comment By: Jarek Zgoda (zgoda) Date: 2004-06-01 08:59 Message: Logged In: YES user_id=92222 System is Slackware Linux 9.1/2.6.6, Python 2.3.3 (the version that I tried to overwrite) was compiled by me on the same machine. Error message was as I wrote: "libinstall error". Command chain was very usual: ./configure --prefix=/usr --disable-ipv6 make make test make install Error was raised during the last step; the very last entry before the error occurred on the list of installed libraries was zipfile.py. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-05-31 23:09 Message: Logged In: YES user_id=21627 What operating system are you using? Can you report the precise wording of the error that you get, and the command that you have invoked to trigger this error? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962918&group_id=5470 From noreply at sourceforge.net Tue Jun 1 04:17:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 04:18:01 2004 Subject: [ python-Bugs-944394 ] No examples or usage docs for urllib2 Message-ID: Bugs item #944394, was opened at 2004-04-29 11:02 Message generated for change (Comment added) made by fresh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944394&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Chris Withers (fresh) Assigned to: Nobody/Anonymous (nobody) Summary: No examples or usage docs for urllib2 Initial Comment: Hi there, I'm sure I reported this before, but it's a couple of major releases later, and there's still no usage docs for urllib2. The examples given are too trivial to be helpful, but I'm guessing people are using the module so there must be some examples out there somewhere ;-) With a bit of help from Moshez, I found the docstring in the module source.
At the very least, it'd be handy if that appeared somewhere at: http://docs.python.org/lib/module-urllib2.html But really, more extensive and helpful documentation on this cool new module would be very handy. Chris ---------------------------------------------------------------------- >Comment By: Chris Withers (fresh) Date: 2004-06-01 08:17 Message: Logged In: YES user_id=24723 I'm certainly willing, but I am totally incapable :-S The reason I opened this issue is because it would seem that urllib2 is better than urllib, but seems to be severely underdocumented, and hence I don't understand how to use it and so can't provide examples. As I said in the original submission, including the module's docstring in the Python module documentation would be a start, but doesn't cover what appears to be the full potential of a great module... ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-05-31 21:15 Message: Logged In: YES user_id=21627 Are you willing to provide examples? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944394&group_id=5470 From noreply at sourceforge.net Tue Jun 1 06:10:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 06:10:27 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 00:28 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what holds on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this (in 2.3), when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen would be if the second instance first exited the loop with an exception, and then entered the loop again and exited normally.
But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-01 11:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 08:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 17:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 10:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 02:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Tue Jun 1 08:39:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 08:39:26 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 12:39 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Tue Jun 1 08:41:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 08:42:02 2004 Subject: [ python-Bugs-963825 ] Distutils should be able to produce Debian packages (.deb) Message-ID: Bugs item #963825, was opened at 2004-05-31 17:16 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963825&group_id=5470 Category: Distutils Group: Feature Request >Status: Closed >Resolution: Later Priority: 5 Submitted By: Kaleissin (kaleissin) >Assigned to: A.M. Kuchling (akuchling) Summary: Distutils should be able to produce Debian packages (.deb) Initial Comment: Distutils should be able to produce Debian packages (.deb), it's the Other[TM] large linux package-format after all. It might perhaps be done by using "alien" on an .rpm, but for the simple cases of pure Python modules and packages it ought to be handled internally. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-01 08:41 Message: Logged In: YES user_id=11375 Please see nondist/sandbox/Lib/bdist_dpkg.py in the Python CVS tree for a partial implementation. Help developing this would be gratefully accepted. Closing this bug report. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963825&group_id=5470 From noreply at sourceforge.net Tue Jun 1 08:59:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 08:59:57 2004 Subject: [ python-Bugs-962631 ] Typo in Lib/test/test_sax.py can confuse Message-ID: Bugs item #962631, was opened at 2004-05-29 03:32 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962631&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Bryan Blackburn (blb) >Assigned to: A.M. Kuchling (akuchling) Summary: Typo in Lib/test/test_sax.py can confuse Initial Comment: Not a major bug, but can be confusing...when test_sax.py is run verbosely, it'll say 'Failed' for passing tests. Patch to fix attached. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-01 08:59 Message: Logged In: YES user_id=11375 Applied to both CVS HEAD and the 2.3 branch. Thanks! 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962631&group_id=5470 From noreply at sourceforge.net Tue Jun 1 09:01:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 09:02:06 2004 Subject: [ python-Bugs-953599 ] asyncore misses socket closes when poll is used Message-ID: Bugs item #953599, was opened at 2004-05-13 17:47 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953599&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Shane Kerr (shane_kerr) >Assigned to: A.M. Kuchling (akuchling) Summary: asyncore misses socket closes when poll is used Initial Comment: Problem: If the loop() function of asyncore is invoked with poll rather than select, the function readwrite() is used when I/O is available on a socket. However, this function does not check for hangup - provided by POLLHUP. If a socket is attempting to write, then POLLOUT never gets set, so the socket hangs. Because poll() is returning immediately, but the return value is never used, asyncore busy-loops, consuming all available CPU. Possible solutions: The easy solution is to check for POLLHUP in the readwrite() function: if flags & (select.POLLOUT | select.POLLHUP): obj.handle_write_event() This makes the poll work exactly like the select - the application raises a socket.error set to EPIPE. An alternate solution - possibly more graceful - is to invoke the handle_close() method of the object: if flags & select.POLLHUP: obj.handle_close() else: if flags & select.POLLIN: obj.handle_read_event() if flags & select.POLLOUT: obj.handle_write_event() This is incompatible with the select model, but it means that the read and write logic is now the same for socket hangups - handle_close() is invoked. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953599&group_id=5470 From noreply at sourceforge.net Tue Jun 1 09:13:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 09:13:07 2004 Subject: [ python-Bugs-923315 ] AIX POLLNVAL definition causes problems Message-ID: Bugs item #923315, was opened at 2004-03-25 13:06 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=923315&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: John Marshall (john_marshall) >Assigned to: A.M. Kuchling (akuchling) Summary: AIX POLLNVAL definition causes problems Initial Comment: Under AIX (5.1 at least), POLLNVAL (from sys/poll.h) is 0x8000. This causes a problem because it is stored in a (signed) short (e.g., revents): ----- struct pollfd { int fd; /* file descriptor or file ptr */ short events; /* requested events */ short revents; /* returned events */ }; ----- As such, the following tests and results are given: ----- ashort (%hx) = 8000, ashort (%x) = ffff8000 POLLNVAL (%hx) = 8000, POLLNVAL (%x) = 8000 ashort == POLLNVAL => 0 ashort == (short)POLLNVAL => 1 ----- Note that the 'ashort == POLLNVAL' test is 0 rather than 1 because (I believe) POLLNVAL is treated as a signed integer, and the ashort value is then promoted to a signed integer, thus giving 0xffff8000, not 0x8000.
The problem arises because IBM has chosen to use a negative short value (0x8000) in a signed short variable. Neither Linux nor IRIX has this problem because they use POLLNVAL=0x20 which promotes to 0x20. This situation will cause test_poll to fail and will certainly be a gotcha for AIX users. I have added the following code to selectmodule.c to address the problem (mask out the upper 16 bits): -----~ line 513:selectmodule.c num = PyInt_FromLong(self->ufds[i].revents & 0xffff); ----- John ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=923315&group_id=5470 From noreply at sourceforge.net Tue Jun 1 09:58:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 09:58:42 2004 Subject: [ python-Bugs-964284 ] Cannot encode 5 digit unicode. Message-ID: Bugs item #964284, was opened at 2004-06-01 13:58 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964284&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Bugs Fly (mozbugbox) Assigned to: M.-A. Lemburg (lemburg) Summary: Cannot encode 5 digit unicode. Initial Comment: For this unicode point: http://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint=21484 If I do a: >>> u"\u21484" or >>> "\u21484".decode('unicode-escape') I got 2 characters: \u2148 and 4. outside BMP? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964284&group_id=5470 From noreply at sourceforge.net Tue Jun 1 10:04:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 10:04:26 2004 Subject: [ python-Bugs-964284 ] Cannot encode 5 digit unicode. Message-ID: Bugs item #964284, was opened at 2004-06-01 13:58 Message generated for change (Comment added) made by mozbugbox You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964284&group_id=5470 Category: Unicode Group: Python 2.3 >Status: Closed Resolution: None Priority: 5 Submitted By: Bugs Fly (mozbugbox) Assigned to: M.-A. Lemburg (lemburg) Summary: Cannot encode 5 digit unicode. Initial Comment: For this unicode point: http://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint=21484 If I do a: >>> u"\u21484" or >>> "\u21484".decode('unicode-escape') I got 2 characters: \u2148 and 4. outside BMP? ---------------------------------------------------------------------- >Comment By: Bugs Fly (mozbugbox) Date: 2004-06-01 14:04 Message: Logged In: YES user_id=1033842 It seems that I have to use \U instead of \u here.
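The distinction the reporter ran into is that \u consumes exactly four hex digits, so the trailing 4 is parsed as a literal character, while the eight-digit \U escape can reach code points outside the BMP. A quick check (the second length assumes a wide/UCS-4 build; a narrow build stores the non-BMP character as a surrogate pair):

    s1 = u"\u21484"      # \u takes exactly four hex digits: U+2148 followed by "4"
    s2 = u"\U00021484"   # the eight-digit escape reaches beyond the BMP
    print(len(s1))       # 2
    print(len(s2))       # 1 on a wide (UCS-4) build, 2 on a narrow build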
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964284&group_id=5470 From noreply at sourceforge.net Tue Jun 1 10:52:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 10:52:38 2004 Subject: [ python-Bugs-818006 ] ossaudiodev FileObject does not support closed const Message-ID: Bugs item #818006, was opened at 2003-10-05 02:30 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=818006&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Dave Cinege (dcinege) >Assigned to: Greg Ward (gward) Summary: ossaudiodev FileObject does not support closed const Initial Comment: fin = ossaudiodev.open(dspfile, 'r') if fin.closed == True: AttributeError: closed ---------------------------------------------------------------------- Comment By: Dave Cinege (dcinege) Date: 2003-10-05 16:32 Message: Logged In: YES user_id=314434 Please see: http://python.org/doc/current/lib/bltin-file-objects.html """ File objects also offer a number of other interesting attributes. These are not required for file-like objects, but should be implemented if they make sense for the particular object. "" "Should be" when they "make sense" is my rational for reporting this as a bug. I found this by trying to convert existing code from a normal open of /dev/dsp to ossaudiodev.open(), that IMO "should" have worked. : P Other attributes that "should be" implemented (mode and name) because they "make sense" may also be missing...I haven't checked. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2003-10-05 16:16 Message: Logged In: YES user_id=593130 >From Lib Ref 14.11 ossaudiodev "open( [device, ]mode) Open an audio device and return an OSS audio device object. " Checking http://python.org/doc/current/lib/ossaudio-device- objects.html 14.11.1 Audio Device Objects I can find no mention of closed attribute or indeed of any attributes other than methods. Why were you expecting such? If report is a mistake, please close. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=818006&group_id=5470 From noreply at sourceforge.net Tue Jun 1 13:08:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 13:09:01 2004 Subject: [ python-Bugs-955772 ] Nested generator terminates prematurely Message-ID: Bugs item #955772, was opened at 2004-05-18 06:02 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=955772&group_id=5470 Category: Python Interpreter Core Group: None Status: Closed Resolution: Invalid Priority: 5 Submitted By: Yitz Gale (ygale) Assigned to: Nobody/Anonymous (nobody) Summary: Nested generator terminates prematurely Initial Comment: def g(x, y): for i in x: for j in y: yield i, j r2 = (0, 1) [e for e in g(r2, g(r2, r2))] Expected result: [(0, (0, 0)), (0, (0, 1)), (0, (1, 0)), (0, (1, 1)), (1, (0, 0)), (1, (0, 1)), (1, (1, 0)), (1, (1, 1))] Actual result: [(0, (0, 0)), (0, (0, 1)), (0, (1, 0)), (0, (1, 1))] ---------------------------------------------------------------------- Comment By: Terry J. 
Reedy (tjreedy) Date: 2004-06-01 13:08 Message: Logged In: YES user_id=593130 Unless you are generating a very long list, you worked too hard. But not as cute. >>> def g(x, y): ... y = list(y) ... for i in x: ... for j in y: ... yield i, j ... >>> r2 = (0, 1) >>> [e for e in g(r2, g(r2, r2))] [(0, (0, 0)), (0, (0, 1)), (0, (1, 0)), (0, (1, 1)), (1, (0, 0)), (1, (0, 1)), (1, (1, 0)), (1, (1, 1))] ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-19 17:33 Message: Logged In: YES user_id=1033539 OK. I can get the semantics I want using the following: def g(x, y): for i in x: for j in y: yield i, j g = restartable(g) where I have defined: class restartable: def __init__(self, genfn): self.genfn = genfn def __call__(self, *args): return restartable_generator(self.genfn, *args) class restartable_generator: def __init__(self, genfn, *args): self.genfn = genfn self.args = args def __iter__(self): return self.genfn(*self.args) ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 12:36 Message: Logged In: YES user_id=80475 Marking this as invalid and closing. Sorry, non-re-iterability is documented fact of life in the world of generators and iterators. The work arounds include making the inner generator into a list or re-instantiating a new generator on every loop: def g(x, y): for i in x for j in g(x, y) yield i, j ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 08:55 Message: Logged In: YES user_id=4771 Your issue is that you only create a total of two generator instances. The 'inner' one is immediately exhausted. Afterwards, this same instance is used again on other 'for' loops but this has no effect, as it has already been exhausted. The difference is the same as between r2 and iter(r2). If you do that: it = iter(r2) for x in it: print x for x in it: print x the second loop will be empty beause the first loop has exhausted the iterator. Generators are iterators (like it) and not sequences (like r2). Using the same iterator on several 'for' loops is useful, though (e.g. if the first loop can be interrupted with 'break'), so there is no way your code could raise an exception, short of saying that it is not allowed to call next() on already-exhausted iterators -- this would be too big a change. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-19 07:59 Message: Logged In: YES user_id=1033539 Python functions can be called recursively in general. They know how to save their local namespace in a separate frame for each call. That includes arguments, since the arguments live in the local namespace of the function. Generator functions also seem to be supported, as my example shows. There is a restriction on a generator object that you may not call its next() method again while a previous call to next() is still running. But this is definitely not a case of that restriction - we have two separate generator instances, and each ought to have its own frame. If there is some other restriction, I think it ought to be documented. And if possible, it should raise an exception, like the other restriction. This smells like a bug to me, though. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 06:59 Message: Logged In: YES user_id=6656 Well, it's impossible in general. 
You'd have to store any arguments the generator took somewhere too, wouldn't you? What about things like: def foo(aList): while aList: yield aList.pop() ? ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-19 06:57 Message: Logged In: YES user_id=1033539 Too bad. What exactly is the restriction? I didn't find anything in the docs. And things like this often do work and are useful. For example: def primes(): yield 2 for n in count(3): for p in primes(): if p > sqrt(n): yield n break if n % p == 0: break ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-18 07:24 Message: Logged In: YES user_id=6656 Um. I think the answer to this is "generators are not reiterable". ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2004-05-18 06:04 Message: Logged In: YES user_id=1033539 Trying again to get the indentation correct: def g(x, y): for i in x: for j in y: yield i, j ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=955772&group_id=5470 From noreply at sourceforge.net Tue Jun 1 13:25:05 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 13:25:12 2004 Subject: [ python-Bugs-957003 ] RFE: Extend smtplib.py with support for LMTP Message-ID: Bugs item #957003, was opened at 2004-05-19 15:56 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957003&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Leif Hedstrom (zwoop) Assigned to: Nobody/Anonymous (nobody) Summary: RFE: Extend smtplib.py with support for LMTP Initial Comment: Hi, attached is a proposal to extend the existing smtplib.py module with support for LMTP (RFC2033). I find it very useful together with IMAP servers like Cyrus. Thanks, -- leif ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 13:25 Message: Logged In: YES user_id=593130 If you were merely requesting a feature enhancement, this would belong in the RFE list and not the bug list. Since you submitted a patch, and not just a proposal, this belongs in the patch list. However, as a patch submission, it also needs 1) a patch to the documentation for smtplib (at least the suggested new text if you can't do LaTeX) and 2) a patch to the test suite for smtplib (assuming there is one already). Suggestion: close this bug report as invalid and open a patch item with the additional material.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957003&group_id=5470 From noreply at sourceforge.net Tue Jun 1 13:45:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 13:45:49 2004 Subject: [ python-Bugs-951851 ] Crash when reading "import table" of certain windows DLLs Message-ID: Bugs item #951851, was opened at 2004-05-11 13:02 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 Category: Windows Group: Python 2.3 Status: Open Resolution: Accepted Priority: 7 Submitted By: Mark Hammond (mhammond) Assigned to: Mark Hammond (mhammond) Summary: Crash when reading "import table" of certain windows DLLs Initial Comment: As diagnosed by Thomas Heller, via the python-win32 list. On Windows 2000, if your sys.path includes the Windows system32 directory, 'import wmi' will crash Python. To reproduce, change to the system32 directory, start Python, and execute 'import wmi'. Note that Windows XP does not crash. The problem is in GetPythonImport(), in code that tries to walk the PE 'import table'. AFAIK, this is code that checks the correct Python version is used, but I've never seen this code before. I'm not sure why the code is actually crashing (ie, what assumptions made about the import table are wrong), but I have a patch that checks a the pointers are valid before they are de-referenced. After the patch is applied, the result is the correct: "ImportError: dynamic module does not define init function (initwmi)" exception. Assigning to thomas for his review, then I'm happy to check in. I propose this as a 2.3 bugfix. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-01 19:45 Message: Logged In: YES user_id=11105 The reason the current code crashed when Python tries to import Win2k's or XP's wmi.dll as extension is that the size of the import table in this dll is zero. The first patch 'dynload_win.c-1.patch' fixes this by returning NULL in that case. The code, however, doesn't do what is intended in a debug build of Python. It looks for imports of 'python23.dll', when it should look for 'python23_d.dll' instead. The second patch 'dynload_win.c-2.patch' fixes this also (and includes the first patch as well). ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 03:56 Message: Logged In: YES user_id=14198 Seeing as it was the de-referencing of 'import_name' that crashed, I think a better patch is to terminate the outer while look as soon as we hit a bad string. Otherwise, this outer loop continues, 20 bytes at a time, until the outer pointer ('import_data') also goes bad or happens to reference \0. Attaching a slightly different patch, with comments and sizeof() change. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 01:00 Message: Logged In: YES user_id=14198 OK - will change to 12+so(WORD) And yes, I had seen this code - I meant "familiar with" :) Tim: Note that the import failure is not due to a bad import table (but the crash was). This code is trying to determine if a different version of Python is used. 
We are effectively skipping that check, and landing directly in the "does it have an init function?" check, then failing normally - ie, the code is now correctly *not* finding other Python versions linked against it. Thus, a different error message doesn't make much sense to me. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:45 Message: Logged In: YES user_id=11105 Oh, we have to give the /all option to dumpbin ;-) ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:42 Message: Logged In: YES user_id=11105 Tim, I don't think the import table format has changed; instead, wmi.dll doesn't have an import table (for whatever reason). Maybe the code isn't able to handle that correctly. Since Python 2.3 as well as its extensions are still built with MSVC 6, I *think* we should be safe with this code. I'll attach the output of running MSVC.NET 2003's 'dumpbin.exe \windows\system32\wmi.dll' on my WinXP Pro SP1 for the curious. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-11 17:20 Message: Logged In: YES user_id=31435 Mark, while you may not have seen this code before, you checked it in. IIRC, though, the person who *created* the patch was never heard from again. I don't understand what the code thinks it's doing either, exactly. The obvious concern: if the import table format has changed, won't we also fail to import legit C extensions? I haven't seen that happen yet, but I haven't yet built any extensions using VC 7.1 either. In any case, I'd agree it's better to get a mysterious import failure than a crash. Maybe the detail in the ImportError could be changed to indicate when an import failure is due to a bad pointer. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 16:49 Message: Logged In: YES user_id=11105 IMO, IsBadReadPointer(import_data, 12 + sizeof(DWORD)) should be enough. Yes, please add a comment in the code. This is a little bit hackish, but it fixes the problem. And the real problem can always be fixed later, if needed. And, BTW, python 2.3.3 crashes on Windows XP as well. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-11 13:05 Message: Logged In: YES user_id=14198 Actually, I guess a comment regarding the pointer checks and referencing this bug would be nice :) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 From noreply at sourceforge.net Tue Jun 1 13:53:49 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 13:53:55 2004 Subject: [ python-Bugs-959379 ] Implicit close() should check for errors Message-ID: Bugs item #959379, was opened at 2004-05-24 07:32 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Peter Åstrand (astrand) Assigned to: Nobody/Anonymous (nobody) Summary: Implicit close() should check for errors Initial Comment: As we all know, the file object's destructor invokes the close() method automatically.
But, most people are not aware that errors from close() are silently ignored. This can lead to silent data loss. Consider this example: $ python -c 'open("foo", "w").write("aaa")' No traceback or warning message is printed, but the file is zero bytes large, because the close() system call returned EDQUOT. Another similar example is: $ python -c 'f=open("foo", "w"); f.write("aaa")' When using an explicit close(), you get a traceback: $ python -c 'f=open("foo", "w"); f.write("aaa"); f.close()' Traceback (most recent call last): File "<string>", line 1, in ? IOError: [Errno 122] Disk quota exceeded I'm aware that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message? ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 13:53 Message: Logged In: YES user_id=593130 I think there are two separate behavior issues: implicit file close and interpreter shutdown. What happens with $ python -c 'f=open("foo", "w"); f.write("aaa"); del f' which forces the implicit close *before* shutdown? As I recall, the ref manual says little about the shutdown process, which I believe is necessarily implementation/system dependent. There certainly is little that can be guaranteed once the interpreter is partly deconstructed itself. >I'm aware that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message? Is there already a runtime warning mechanism, or are you proposing that one be added? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470 From noreply at sourceforge.net Tue Jun 1 13:57:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 13:57:14 2004 Subject: [ python-Bugs-964433 ] email package uses \n to rebuild content of a message Message-ID: Bugs item #964433, was opened at 2004-06-01 19:57 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964433&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Marco Bizzarri (emmebi) Assigned to: Nobody/Anonymous (nobody) Summary: email package uses \n to rebuild content of a message Initial Comment: As stated, the email.Parser class uses '\n' to add the firstbodyline to the rest of the message. This is done *AFTER* splitlines() has been used to remove the first line from the body of a multipart message. Even though this is not a problem in many cases, it can be a great problem when you are dealing with signed files, as in my case. I do indeed have a multipart message where I have: a pdf file and a pkcs7 signature. If I use the parser to analyze the message, the pdf file is actually one byte shorter, because the original file was \r\n terminated, rather than \n. When the parser tries to parse, it splits the first line (containing the %PDF1.4\r\n), and translates it to %PDF1.4, and then it is joined to the rest of the PDF file using a simple \n. In this way, the file is exactly one byte shorter than the original file, and, therefore, the signature can't be verified. I think we could avoid this problem by using splitlines(1)[0][:-1], which would keep the original \r\n and remove only the \n, which can then be safely added back.
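A plain-string illustration of the off-by-one described above (the header bytes are only an example, and this models the behaviour as reported rather than the email.Parser internals themselves):

    body = "%PDF1.4\r\nrest of the payload"

    # what the report describes: splitlines() drops the whole \r\n,
    # and the line is rejoined with a bare \n
    first = body.splitlines()[0]                    # '%PDF1.4'
    rebuilt = first + "\n" + "rest of the payload"
    print(rebuilt == body)                          # False: one byte short

    # the suggested fix: keep the line ending, then strip only the \n
    first_keep = body.splitlines(1)[0][:-1]         # '%PDF1.4\r'
    rebuilt2 = first_keep + "\n" + "rest of the payload"
    print(rebuilt2 == body)                         # True: byte-for-byte identical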
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964433&group_id=5470 From noreply at sourceforge.net Tue Jun 1 13:58:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 13:58:24 2004 Subject: [ python-Bugs-960325 ] "require " configure option Message-ID: Bugs item #960325, was opened at 2004-05-25 15:07 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: "require " configure option Initial Comment: I'd like to be able to configure Python so that Configure or Make will fail if a particular feature is unavailable. Currently I'm concerned with SSL, which just gets a warning from Make: building '_ssl' extension *** WARNING: renaming "_ssl" since importing it failed: ld.so.1: ./python: fatal: libssl.so.0.9.8: open failed: No such file or directory Since that's buried in a lot of Make output, it's easy to miss. Besides, for semi-automatic builds it's in any case good to get a non-success exit status from the build process. Looking at the Make output, I see the bz2 extension is another example where this might be useful. Maybe the option would simply be '--enable-ssl', unless you want that to merely try to build with ssl. Or '--require=ssl,bz2,...'. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 13:58 Message: Logged In: YES user_id=593130 Are you claiming that there is an actual bug, or is this merely an RFE (Request For Enhancement) item? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 From noreply at sourceforge.net Tue Jun 1 14:05:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 14:05:07 2004 Subject: [ python-Bugs-964437 ] idle help is modal Message-ID: Bugs item #964437, was opened at 2004-06-01 18:05 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964437&group_id=5470 Category: IDLE Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: idle help is modal Initial Comment: [forwarded from http://bugs.debian.org/252130] the idle online help is unfortunately modal so that one cannot have the help window open and read it, and at the same time work in idle. One must close the help window before continuing in idle which is a nuisance. 
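For readers not familiar with Tkinter, a window only blocks the rest of the application when the code explicitly asks it to; a minimal sketch of modal versus non-modal behaviour (plain Tkinter, not IDLE's actual help code):

    import Tkinter as tk       # Python 2.x module name

    root = tk.Tk()

    def open_help(modal=False):
        win = tk.Toplevel(root)
        tk.Label(win, text="help text ...").pack()
        if modal:
            # These two calls are what make a dialog modal: all input is
            # grabbed and the caller blocks until the window is closed.
            win.grab_set()
            root.wait_window(win)
        # Without them the Toplevel stays open while the main window keeps
        # accepting input, which is the behaviour the reporter asks for.

    tk.Button(root, text="Help", command=open_help).pack()
    root.mainloop()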
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964437&group_id=5470 From noreply at sourceforge.net Tue Jun 1 14:13:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 14:13:12 2004 Subject: [ python-Bugs-960325 ] "require " configure option Message-ID: Bugs item #960325, was opened at 2004-05-25 21:07 Message generated for change (Comment added) made by hfuru You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: "require " configure option Initial Comment: I'd like to be able to configure Python so that Configure or Make will fail if a particular feature is unavailable. Currently I'm concerned with SSL, which just gets a warning from Make: building '_ssl' extension *** WARNING: renaming "_ssl" since importing it failed: ld.so.1: ./python: fatal: libssl.so.0.9.8: open failed: No such file or directory Since that's buried in a lot of Make output, it's easy to miss. Besides, for semi-automatic builds it's in any case good to get a non-success exit status from the build process. Looking at the Make output, I see the bz2 extension is another example where this might be useful. Maybe the option would simply be '--enable-ssl', unless you want that to merely try to build with ssl. Or '--require=ssl,bz2,...'. ---------------------------------------------------------------------- >Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-01 20:13 Message: Logged In: YES user_id=726647 I marked it with Group: Feature Request. Not a bug, but a quality of implementation issue. It seemed more proper here than as a PEP. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 19:58 Message: Logged In: YES user_id=593130 Are you claiming that there is an actual bug, or is this merely an RFE (Request For Enhancement) item? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 From noreply at sourceforge.net Tue Jun 1 14:16:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 14:16:33 2004 Subject: [ python-Bugs-959379 ] Implicit close() should check for errors Message-ID: Bugs item #959379, was opened at 2004-05-24 13:32 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Peter ?strand (astrand) Assigned to: Nobody/Anonymous (nobody) Summary: Implicit close() should check for errors Initial Comment: As we all know, the fileobjects destructor invokes the close() method automatically. But, most people are not aware of that errors from close() are silently ignored. This can lead to silent data loss. Consider this example: $ python -c 'open("foo", "w").write("aaa")' No traceback or warning message is printed, but the file is zero bytes large, because the close() system call returned EDQUOT. 
Another similiar example is: $ python -c 'f=open("foo", "w"); f.write("aaa")' When using an explicit close(), you get a traceback: $ python -c 'f=open("foo", "w"); f.write("aaa"); f.close()' Traceback (most recent call last): File "", line 1, in ? IOError: [Errno 122] Disk quota exceeded I'm aware of that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message? ---------------------------------------------------------------------- >Comment By: Peter ?strand (astrand) Date: 2004-06-01 20:16 Message: Logged In: YES user_id=344921 It has nothing to do with the interpreter shutdown; the same thing happens for long-lived processed, when the file object falls off a function end. For example, the code below fails silently: def foo(): f = open("foo", "w") f.write("bar") foo() time.sleep(1000) ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 19:53 Message: Logged In: YES user_id=593130 I think there are two separate behavior issues: implicit file close and interpreter shutdown. What happens with $ python -c 'f=open("foo", "w"); f.write("aaa"); del f' which forces the implicit close *before* shutdown. As I recall, the ref manual says little about the shutdown process, which I believe is necessarily implementation/system dependent. There certainly is little that can be guaranteed once the interpreter is partly deconstructed itself. >I'm aware of that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message? Is there already a runtime warning mechanism, or are you proposing that one be added? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470 From noreply at sourceforge.net Tue Jun 1 14:23:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 14:23:30 2004 Subject: [ python-Bugs-959379 ] Implicit close() should check for errors Message-ID: Bugs item #959379, was opened at 2004-05-24 07:32 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Peter ?strand (astrand) Assigned to: Nobody/Anonymous (nobody) Summary: Implicit close() should check for errors Initial Comment: As we all know, the fileobjects destructor invokes the close() method automatically. But, most people are not aware of that errors from close() are silently ignored. This can lead to silent data loss. Consider this example: $ python -c 'open("foo", "w").write("aaa")' No traceback or warning message is printed, but the file is zero bytes large, because the close() system call returned EDQUOT. Another similiar example is: $ python -c 'f=open("foo", "w"); f.write("aaa")' When using an explicit close(), you get a traceback: $ python -c 'f=open("foo", "w"); f.write("aaa"); f.close()' Traceback (most recent call last): File "", line 1, in ? IOError: [Errno 122] Disk quota exceeded I'm aware of that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message? 
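Until the implicit close reports errors, the usual defence is to close files explicitly so that a failing close() surfaces as an exception; a minimal sketch:

    def write_data(path, data):
        f = open(path, "w")
        try:
            f.write(data)
        finally:
            # An explicit close() checks the result of the underlying
            # system call and raises IOError (e.g. for EDQUOT or ENOSPC)
            # instead of losing data silently.
            f.close()

    write_data("foo", "aaa")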
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-01 14:23 Message: Logged In: YES user_id=31435 I think the issue here is mainly that an explicit file.close() maps to fileobject.c's file_close(), which checks the return value of the underlying C-level close call and raises an exception (or not) as appropriate; but file_dealloc(), which is called as part of recycling garbage fileobjects, does not look at the return value from the underlying C-level close call it makes (and, of course, then doesn't raise any exceptions either based on that return value). ---------------------------------------------------------------------- Comment By: Peter ?strand (astrand) Date: 2004-06-01 14:16 Message: Logged In: YES user_id=344921 It has nothing to do with the interpreter shutdown; the same thing happens for long-lived processed, when the file object falls off a function end. For example, the code below fails silently: def foo(): f = open("foo", "w") f.write("bar") foo() time.sleep(1000) ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 13:53 Message: Logged In: YES user_id=593130 I think there are two separate behavior issues: implicit file close and interpreter shutdown. What happens with $ python -c 'f=open("foo", "w"); f.write("aaa"); del f' which forces the implicit close *before* shutdown. As I recall, the ref manual says little about the shutdown process, which I believe is necessarily implementation/system dependent. There certainly is little that can be guaranteed once the interpreter is partly deconstructed itself. >I'm aware of that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message? Is there already a runtime warning mechanism, or are you proposing that one be added? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470 From noreply at sourceforge.net Tue Jun 1 14:35:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 14:35:25 2004 Subject: [ python-Bugs-951851 ] Crash when reading "import table" of certain windows DLLs Message-ID: Bugs item #951851, was opened at 2004-05-11 13:02 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 Category: Windows Group: Python 2.3 Status: Open >Resolution: None Priority: 7 Submitted By: Mark Hammond (mhammond) Assigned to: Mark Hammond (mhammond) Summary: Crash when reading "import table" of certain windows DLLs Initial Comment: As diagnosed by Thomas Heller, via the python-win32 list. On Windows 2000, if your sys.path includes the Windows system32 directory, 'import wmi' will crash Python. To reproduce, change to the system32 directory, start Python, and execute 'import wmi'. Note that Windows XP does not crash. The problem is in GetPythonImport(), in code that tries to walk the PE 'import table'. AFAIK, this is code that checks the correct Python version is used, but I've never seen this code before. I'm not sure why the code is actually crashing (ie, what assumptions made about the import table are wrong), but I have a patch that checks a the pointers are valid before they are de-referenced. 
After the patch is applied, the result is the correct: "ImportError: dynamic module does not define init function (initwmi)" exception. Assigning to thomas for his review, then I'm happy to check in. I propose this as a 2.3 bugfix. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-01 20:35 Message: Logged In: YES user_id=11105 This is not yet accepted. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-01 19:45 Message: Logged In: YES user_id=11105 The reason the current code crashed when Python tries to import Win2k's or XP's wmi.dll as extension is that the size of the import table in this dll is zero. The first patch 'dynload_win.c-1.patch' fixes this by returning NULL in that case. The code, however, doesn't do what is intended in a debug build of Python. It looks for imports of 'python23.dll', when it should look for 'python23_d.dll' instead. The second patch 'dynload_win.c-2.patch' fixes this also (and includes the first patch as well). ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 03:56 Message: Logged In: YES user_id=14198 Seeing as it was the de-referencing of 'import_name' that crashed, I think a better patch is to terminate the outer while look as soon as we hit a bad string. Otherwise, this outer loop continues, 20 bytes at a time, until the outer pointer ('import_data') also goes bad or happens to reference \0. Attaching a slightly different patch, with comments and sizeof() change. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 01:00 Message: Logged In: YES user_id=14198 OK - will change to 12+so(WORD) And yes, I had seen this code - I meant "familiar with" :) Tim: Note that the import failure is not due to a bad import table (but the crash was). This code is trying to determine if a different version of Python is used. We are effectively skipping that check, and landing directly in the "does it have an init function?", then faling normally - ie, the code is now correctly *not* finding other Python versions linked against it. Thus, a different error message doesn't make much sense to me. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:45 Message: Logged In: YES user_id=11105 Oh, we have to give the /all option to dumpbin ;-) ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:42 Message: Logged In: YES user_id=11105 Tim, I don't think the import table format has changed, instead wmi.dll doesn't have an import table (for whatever reason). Maybe the code isn't able to handle that correctly. Since Python 2.3 as well at it's extensions are still built with MSVC 6, I *think* we should be safe with this code. I'll attach the output of running MSVC.NET 2003's 'dumpbin.exe \windows\system32\wmi.dll' on my WinXP Pro SP1 for the curious. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-11 17:20 Message: Logged In: YES user_id=31435 Mark, while you may not have seen this code before, you checked it in . IIRC, though, the person who *created* the patch was never heard from again. I don't understand what the code thinks it's doing either, exactly. 
The obvious concern: if the import table format has changed, won't we also fail to import legit C extensions? I haven't seen that happen yet, but I haven't yet built any extensions using VC 7.1 either. In any case, I'd agree it's better to get a mysterious import failure than a crash. Maybe the detail in the ImportError could be changed to indicate whan an import failure is due to a bad pointer. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 16:49 Message: Logged In: YES user_id=11105 IMO, IsBadReadPointer(import_data, 12 + sizeof(DWORD)) should be enough. Yes, please add a comment in the code. This is a little bit hackish, but it fixes the problem. And the real problem can always be fixed later, if needed. And, BTW, python 2.3.3 crashes on Windows XP as well. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-11 13:05 Message: Logged In: YES user_id=14198 Actually, I guess a comment regarding the pointer checks and referencing this bug would be nice :) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 From noreply at sourceforge.net Tue Jun 1 14:37:51 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 14:37:57 2004 Subject: [ python-Bugs-960340 ] Poor documentation of new-style classes Message-ID: Bugs item #960340, was opened at 2004-05-25 21:30 Message generated for change (Comment added) made by hfuru You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960340&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: Poor documentation of new-style classes Initial Comment: The Python Reference Manual (info file python-ref) talks a lot about new-style classes, but does not say what they are, except in a brief note buried in node 'Coercion rules'. The library reference does say that object() creates such classes, that too lacks a way to look up 'new-style classes' and find object(). Also, since 'object' is a type, it seems strange that the Library Reference has it in the 'Built-in Functions' node instead of a node about (callable) types. The same applies to several other types. If you want to keep them there, at least add index entries for them in the Class-Exception-Object Index. This refers to the doc in info-2.3.3.tar.bz2 from . ---------------------------------------------------------------------- >Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-01 20:37 Message: Logged In: YES user_id=726647 Index to the language reference, yes. And I think the types of classes should be lifted to the header of a paragraph. Maybe just something like this, before the Programmer's note: New-style vs. old-style/classic classes: Subclasses of 'object' are called new-style classes, other classes are called old-style or classic classes. Note that all standard types such as 'int' and 'dict' are subclasses of 'object'. [If that latest part is true. It seems to be about right, anyway.] Maybe you had better also explain here or in section 3.1 (Objects, values and types) that not all objects are subclasses of 'object'. 
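For reference, a short interactive sketch of the distinction under discussion, as it behaves in Python 2.3:

    >>> class Classic:              # old-style ("classic") class
    ...     pass
    ...
    >>> class NewStyle(object):     # new-style class: derives from object
    ...     pass
    ...
    >>> type(Classic()), type(NewStyle())
    (<type 'instance'>, <class '__main__.NewStyle'>)
    >>> issubclass(NewStyle, object), issubclass(int, object), issubclass(dict, object)
    (True, True, True)
    >>> issubclass(Classic, object)     # classic classes do not derive from object
    False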
---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-05-26 15:48 Message: Logged In: YES user_id=764593 That's a start, but I do think "classic class", "old class", and "new-style class" should show up in the index to the language reference as well. One obvious (but perhaps not sufficient?) place is the programmers note at the bottom of 7.6, class definitions. Just change: "For new-style classes, descriptors ..." to: "For new-style classes (those inheriting from object), descriptors ..." The language lawyer reference also seems like the right place to list all the differences between classic and new classes, but I am less certain how to do that properly. (And it starts to be an Enhancement request.) ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-26 06:45 Message: Logged In: YES user_id=80475 The glossary in the tutorial was added to meet this need. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-05-26 00:13 Message: Logged In: YES user_id=764593 object() doesn't create a new-style class; it creates an instance of class object. Note that the definition of a new-style class is just a class inheriting from object, so object itself is a new-style class. That said, the distributed documentation should probably have something more about "new-style" vs "old-style" classes, and should have a reference in the index. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960340&group_id=5470 From noreply at sourceforge.net Tue Jun 1 16:22:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 16:23:08 2004 Subject: [ python-Bugs-964525 ] Boolean operations section includes lambda_form grammar rule Message-ID: Bugs item #964525, was opened at 2004-06-01 16:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964525&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Stefanus Du Toit (sdt) Assigned to: Nobody/Anonymous (nobody) Summary: Boolean operations section includes lambda_form grammar rule Initial Comment: http://docs.python.org/ref/Booleans.html includes th erule: lambda_form ::= "lambda" [parameter_list]: expression I imagine this belongs in http://docs.python.org/ref/lambdas.html instead. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964525&group_id=5470 From noreply at sourceforge.net Tue Jun 1 16:27:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 16:27:36 2004 Subject: [ python-Bugs-964437 ] idle help is modal Message-ID: Bugs item #964437, was opened at 2004-06-01 13:05 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964437&group_id=5470 Category: IDLE Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Matthias Klose (doko) >Assigned to: Kurt B. 
Kaiser (kbk) Summary: idle help is modal Initial Comment: [forwarded from http://bugs.debian.org/252130] the idle online help is unfortunately modal so that one cannot have the help window open and read it, and at the same time work in idle. One must close the help window before continuing in idle which is a nuisance. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964437&group_id=5470 From noreply at sourceforge.net Tue Jun 1 17:14:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 17:14:55 2004 Subject: [ python-Bugs-957003 ] RFE: Extend smtplib.py with support for LMTP Message-ID: Bugs item #957003, was opened at 2004-05-19 12:56 Message generated for change (Comment added) made by zwoop You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957003&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Leif Hedstrom (zwoop) Assigned to: Nobody/Anonymous (nobody) Summary: RFE: Extend smtplib.py with support for LMTP Initial Comment: Hi, attached is a proposal to extend the existing smtplib.py module with support for LMTP (RFC2033). I find it very useful together with IMAP servers like Cyrus. Thanks, -- leif ---------------------------------------------------------------------- >Comment By: Leif Hedstrom (zwoop) Date: 2004-06-01 14:14 Message: Logged In: YES user_id=480913 Is the documentation provided in the patch for the LMTP class not sufficient? I can extend on that if necessary, although bear in mind that LMTP is very, very similar to SMTP. The main difference is the support for Unix sockets. As for adding test code, I could do that, although I'm guessing most people will not have an LMTP capable server running. If you feel strongly on this, I'll add something that will check for port 2003, and submit the same test message through an LMTP instance. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 10:25 Message: Logged In: YES user_id=593130 If you were mere requesting a feature enhancement, this would belong in the RFE list and not the bug list. Since you submitted a patch, and not just a proposal, this belongs in the patch list. However, as a patch submission, it also needs 1) a patch to the documentation for smtplib (at least the suggested new text if you can't do Latex) and 2) a patch to the test suite for smtplib (assuming there is one already). Suggestion: close this bug report as invalid and open a patch item with the additional material. 
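The patch itself is attached to the tracker item and is not reproduced here. Purely as an illustration of the idea (RFC 2033: LHLO instead of HELO/EHLO, usually spoken to a local delivery agent, often over a Unix domain socket), a subclass along these lines is one plausible shape. The putcmd/getreply calls come from smtplib.SMTP; the Unix-socket convention and the port 2003 default are assumptions taken from the discussion, not from the submitted patch:

    import socket
    from smtplib import SMTP

    class LMTP(SMTP):
        """Rough sketch only; the real patch may differ in detail."""

        def connect(self, host='localhost', port=2003):
            # Treat a host that looks like a filesystem path as a Unix
            # domain socket (illustrative convention, not part of RFC 2033).
            if host.startswith('/'):
                self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
                self.sock.connect(host)
                return self.getreply()
            return SMTP.connect(self, host, port)

        def lhlo(self, name=''):
            # LHLO replaces HELO/EHLO in the LMTP dialogue.
            self.putcmd("lhlo", name or socket.getfqdn())
            (code, msg) = self.getreply()
            self.helo_resp = msg
            return (code, msg)

A test along the lines the submitter suggests could simply try to connect to port 2003 and skip the test when nothing is listening there.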
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957003&group_id=5470 From noreply at sourceforge.net Tue Jun 1 17:40:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 17:40:56 2004 Subject: [ python-Bugs-964592 ] 2.3.4 Language Reference Typo, Section 2.3.2 Message-ID: Bugs item #964592, was opened at 2004-06-02 09:40 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964592&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Chris Wood (gracefool) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.4 Language Reference Typo, Section 2.3.2 Initial Comment: In the Language Reference, documentation updated 20 May 2004, section 2.3.2: "Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled ***for*** to help avoid name clashes between ``private'' attributes of base and derived classes." I assume this should be "form", not "for". ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964592&group_id=5470 From noreply at sourceforge.net Tue Jun 1 21:57:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 21:57:39 2004 Subject: [ python-Bugs-964433 ] email package uses \n to rebuild content of a message Message-ID: Bugs item #964433, was opened at 2004-06-01 12:57 Message generated for change (Settings changed) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964433&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Marco Bizzarri (emmebi) >Assigned to: Barry A. Warsaw (bwarsaw) Summary: email package uses \n to rebuild content of a message Initial Comment: As stated, the email.Parser class uses '\n' to add the firstbodyline to the rest of the message. This is done *AFTER* the splitlines() have been used to remove the first line from the body of a multipart message. Even though this is not a problem in many cases, it can be a great problem when you are dealing with signed files, as in my case. I've indeed a multipart message where I have: a pdf file a pkcs7 signature If I use the parser to analyze the message, the pdf file is actually one byte less, because the original file was \r\n terminated, rather than \n. When the parser tries to parse, it splits the first line (containing the %PDF1.4\r\n), and translates it to %PDF1.4, and then it is joined to the rest of the PDF file using a simple \n. In this way, the file is exactly one byte less of the original file, and, therefore, the signature can't be verified. I think we could avoid this problem using a splitlines(1)[0][:-1] which would keep the original \r\n, remove the \n, which can then be safely added. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964433&group_id=5470 From noreply at sourceforge.net Tue Jun 1 22:07:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 22:07:17 2004 Subject: [ python-Bugs-960325 ] "require " configure option Message-ID: Bugs item #960325, was opened at 2004-05-25 15:07 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: "require " configure option Initial Comment: I'd like to be able to configure Python so that Configure or Make will fail if a particular feature is unavailable. Currently I'm concerned with SSL, which just gets a warning from Make: building '_ssl' extension *** WARNING: renaming "_ssl" since importing it failed: ld.so.1: ./python: fatal: libssl.so.0.9.8: open failed: No such file or directory Since that's buried in a lot of Make output, it's easy to miss. Besides, for semi-automatic builds it's in any case good to get a non-success exit status from the build process. Looking at the Make output, I see the bz2 extension is another example where this might be useful. Maybe the option would simply be '--enable-ssl', unless you want that to merely try to build with ssl. Or '--require=ssl,bz2,...'. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 22:07 Message: Logged In: YES user_id=593130 Yes, this is not a PEP item. I didn't notice Feature Reqest since it is redundant vis a vis the separate RFE list. ---------------------------------------------------------------------- Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-01 14:13 Message: Logged In: YES user_id=726647 I marked it with Group: Feature Request. Not a bug, but a quality of implementation issue. It seemed more proper here than as a PEP. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 13:58 Message: Logged In: YES user_id=593130 Are you claiming that there is an actual bug, or is this merely an RFE (Request For Enhancement) item? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 From noreply at sourceforge.net Tue Jun 1 22:14:20 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 22:14:23 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Nobody/Anonymous (nobody) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. 
Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Tue Jun 1 22:22:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 1 22:22:24 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None >Group: 3rd Party >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) >Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Wed Jun 2 00:26:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 00:26:41 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 00:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Wed Jun 2 01:00:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 01:00:35 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-02 01:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. That's why the RFE tracker was added. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 00:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Wed Jun 2 04:51:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 04:51:42 2004 Subject: [ python-Bugs-805194 ] Inappropriate error received using socket timeout Message-ID: Bugs item #805194, was opened at 2003-09-12 18:56 Message generated for change (Comment added) made by troels You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=805194&group_id=5470 Category: Windows Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Popov (evgeni_popov) Assigned to: Nobody/Anonymous (nobody) Summary: Inappropriate error received using socket timeout Initial Comment: When using the timeout option with a socket object (python 2.3), I don't get the same behaviour under Windows as under Linux / Mac. Specifically, if trying to connect to an unopened port on localhost, I get a timeout exception on Windows (tested under W2K Server), whereas I get a "111 - Connection Refused" exception on Linux and "22 - Invalid Argument" on Mac (OS X). Even if the error message under Mac is not really appropriate (someone told me he got the right 'Connection Refused' on his Mac), I think that the behaviour under Linux and Mac is the right one, in that it reports an error quickly instead of timing out. Note that when using blocking sockets the behaviour is OK under all platforms: they each quickly return a "Connection refused" error message (error codes differ depending on the platform: 61=Mac, 111=Linux, 10061=Windows). FYI, I don't use a firewall or similar program on my Windows box (but that should not matter, because it does work in blocking mode, so this can't be a firewall problem). I heard that the timeout option was implemented based on Timothy O'Malley's timeoutsocket.py. If so, the problem may come from the usage of select in the connection function: select is not asked to return exceptions in the returned triple, whereas I think some errors can be reported through this means under Windows (according to Tip 25 of Jon C. Snader's book "Effective TCP/IP Programming"). So, by not checking the returned exceptions, we would miss the "connection refused" error and get the timeout error instead... ---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-02 10:51 Message: Logged In: YES user_id=32863 The Windows part of this bug is a duplicate of bug #777597.
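A small sketch for reproducing the comparison (it assumes nothing is listening on port 1 of localhost):

    import socket

    def try_connect(timeout=None):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if timeout is not None:
            s.settimeout(timeout)           # switches connect() to the select()-based wait
        try:
            s.connect(('127.0.0.1', 1))     # assumed-closed port
        except socket.timeout:
            print 'timed out'               # the behaviour reported on Windows 2000
        except socket.error, e:
            print 'socket.error:', e        # ECONNREFUSED is the expected result
        s.close()

    try_connect()        # blocking mode: refused quickly on all platforms
    try_connect(5.0)     # timeout mode: should also be refused, not time out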
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=805194&group_id=5470 From noreply at sourceforge.net Wed Jun 2 05:02:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 05:02:51 2004 Subject: [ python-Bugs-964861 ] Cookie module does not parse date Message-ID: Bugs item #964861, was opened at 2004-06-02 09:02 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: Cookie module does not parse date Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. The standard Cookie module does not parse date string. Here is and example: >>> import Cookie >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; In the attached file I have reported the correct (I think) regex pattern. Thanks and Regards Manlio Perillo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 From noreply at sourceforge.net Wed Jun 2 05:12:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 05:12:43 2004 Subject: [ python-Bugs-964868 ] pickle protocol 2 is incompatible(?) with Cookie module Message-ID: Bugs item #964868, was opened at 2004-06-02 09:12 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: pickle protocol 2 is incompatible(?) with Cookie module Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I don't know if this is a bug of Cookie module or of pickle. When I dump a Cookie instance with protocol = 2, the data is 'corrupted'. With protocol = 1 there are no problems. 
Here is an example: >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; >>> buf = pickle.dumps(c, protocol = 2) >>> print pickle.loads(buf) Set-Cookie: key=Set-Cookie: key=value; expires=Fri,; Path=/;; >>> buf = pickle.dumps(c, protocol = 1) >>> print pickle.loads(buf) Set-Cookie: key=value; expires=Fri,; Path=/; Thanks and regards Manlio Perillo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 From noreply at sourceforge.net Wed Jun 2 05:15:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 05:15:17 2004 Subject: [ python-Bugs-964870 ] sys.getfilesystemencoding() Message-ID: Bugs item #964870, was opened at 2004-06-02 09:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.getfilesystemencoding() Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. In the documentation it is reported that: 'On Windows NT+, file names are Unicode natively, so no conversion is performed'. But: import sys >>> sys.getfilesystemencoding() 'mbcs' Thanks and regards Manlio Perillo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 From noreply at sourceforge.net Wed Jun 2 05:28:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 05:28:17 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 09:28 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. 
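Until the limitation is documented, the practical workaround is to special-case empty files before mapping them; a sketch (file_name is whatever path the caller supplies):

    import mmap, os

    def map_for_write(file_name):
        fd = os.open(file_name, os.O_RDWR | getattr(os, 'O_BINARY', 0))
        if os.fstat(fd).st_size == 0:
            # mmap.mmap(fd, 0) means "map the whole file"; on Windows an
            # empty file makes the underlying file-mapping call fail, so
            # handle that case before calling mmap at all.
            os.close(fd)
            return None
        return mmap.mmap(fd, 0, access=mmap.ACCESS_WRITE)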
Thanks and regards Manlio Perillo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Wed Jun 2 06:05:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 06:06:08 2004 Subject: [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-06 03:07 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Gary Herron (herron) Assigned to: Mark Hammond (mhammond) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 12:05 Message: Logged In: YES user_id=21627 This is fixed with Greg's patch. ---------------------------------------------------------------------- Comment By: Greg Chapman (glchapman) Date: 2004-05-14 20:02 Message: Logged In: YES user_id=86307 I took a stab at fixing this, see: www.python.org/sf/954115 ---------------------------------------------------------------------- Comment By: Greg Chapman (glchapman) Date: 2004-04-20 20:21 Message: Logged In: YES user_id=86307 I just ran into this bug. I checked the CVS and it appears that no patch has yet been committed for it. Does a patch exist? Am I correct that the suggested change is essentially: if (IsRootUNCName(path)) EnsureTrailingSlash(path); else if (!IsRootDir(path)) NukeTrailingSlashIfPresent(path); stat(path, st); ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-04-18 03:10 Message: Logged In: YES user_id=31435 Sounds good to me! I agree it shouldn't be all that hard to special-case UNC roots too -- what I wonder about is how many other forms of "root" syntax MS will make up out of thin air next year . ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2002-04-17 16:30 Message: Logged In: YES user_id=14198 I have done a little analysis of how we use stat and how it performs by instrumenting posixmodule.c. It seems that Tim's concern about Python starup/import is largely unfounded. While Python does call stat() repeatedly at startup, it does so from C rather than os.stat(). 
Thus, starting and stopping Python yields the following (with my instrumentation): Success: 9 in 1.47592ms, avg 0.164 Failure: 2 in 0.334504ms, avg 0.1673 (ie, os.stat() is called with a valid file 9 times, and invalid file twice. Average time for stat() is 0.16ms per call.) python -c "import os, string, httplib, urllib" shows the same results (ie, no extra stats for imports) However, this is not the typical case. The Python test suite (which takes ~110 seconds wall time on my PC) yields the following: Success: 383 in 84.3571ms, avg 0.2203 Failure: 1253 in 3805.52ms, avg 3.037 egads - 4 seconds spent in failed stat calls, averaging 3ms each!! Further instrumentation shows that stat() can be very slow on directories with many files. In this case, os.stat() in the %TEMP% directory for tempfiles() occasionally took extremely long. OK - so assuming this tempfile behaviour is also not "typical", I tried the COM test suite: Success: 972 in 303.856ms, avg 0.3126 Failure: 16 in 2.60549ms, avg 0.1628 (also with some extremely long times on files that did exist in a directory with many files) So - all this implies to me that: * stat() can be quite slow in some cases, success or failure * We probably shouldn't make this twice as long in every case that fails! So, I am moving back to trying to outguess the stat() implementation. Looking at it shows that indeed UNC roots are treated specially along with the root directory case already handled by Python (courtesy of Tim). Adding an extra check for a UNC root shouldn't be too hard, and can't possibly be as expensive as an extra stat() :) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-04-05 03:51 Message: Logged In: YES user_id=31435 Nice to see you, Mark! If you want to pursue this, the caution I had about my idea, but forgot to write down, is that Python does lots of stats during imports, and especially stats on things that usually don't exist (is it there with a .pyd suffix? a .dll suffix? a .py suffix? a .pyw suffix? a .pyc suffix?). If the idea has a bad effect on startup time, that may kill it; startup time is already a sore point for some. OTOH, on Windows we should really, say, be using FindFirstFile() with a wildcard extension for that purpose anyway. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2002-04-05 02:46 Message: Logged In: YES user_id=14198 Sorry - I missed this bug. It is not that I wasn't paying attention, but rather that SF's Tracker didn't get my attention :( Have I mentioned how much I have SF and love Bugzilla yet? :) I quite like Tim's algorithm. One extra stat in that case is OK IMO. I can't imagine too many speed sensitive bits of code that continuously check for a non-existent directory. Everyone still OK with that? ---------------------------------------------------------------------- Comment By: Trent Mick (tmick) Date: 2002-04-04 20:08 Message: Logged In: YES user_id=34892 I have struggled with this too. Currently I tend to use this _isdir(). Hopefully this is helpful. def _isdir(dirname): """os.path.isdir() doesn't work for UNC mount points. Fake it. # For an existing mount point # (want: _isdir() == 1) os.path.ismount(r"\crimper\apps") -> 1 os.path.exists(r"\crimper\apps") -> 0 os.path.isdir(r"\crimper\apps") -> 0 os.listdir(r"\crimper\apps") -> [...contents...] 
# For a non-existant mount point # (want: _isdir() == 0) os.path.ismount(r"\crimper\foo") -> 1 os.path.exists(r"\crimper\foo") -> 0 os.path.isdir(r"\crimper\foo") -> 0 os.listdir(r"\crimper\foo") -> WindowsError # For an existing dir under a mount point # (want: _isdir() == 1) os.path.mount(r"\crimper\apps\Komodo") -> 0 os.path.exists(r"\crimper\apps\Komodo") -> 1 os.path.isdir(r"\crimper\apps\Komodo") -> 1 os.listdir(r"\crimper\apps\Komodo") -> [...contents...] # For a non-existant dir/file under a mount point # (want: _isdir() == 0) os.path.ismount(r"\crimper\apps\foo") -> 0 os.path.exists(r"\crimper\apps\foo") -> 0 os.path.isdir(r"\crimper\apps\foo") -> 0 os.listdir(r"\crimper\apps\foo") -> [] # as if empty contents # For an existing file under a mount point # (want: _isdir() == 0) os.path.ismount(r"\crimper\apps\Komodo\exists.txt") -> 0 os.path.exists(r"\crimper\apps\Komodo\exists.txt") -> 1 os.path.isdir(r"\crimper\apps\Komodo\exists.txt") -> 0 os.listdir(r"\crimper\apps\Komodo\exists.txt") -> WindowsError """ if sys.platform[:3] == 'win' and dirname[:2] == r'\': if os.path.exists(dirname): return os.path.isdir(dirname) try: os.listdir(dirname) except WindowsError: return 0 else: return os.path.ismount(dirname) else: return os.path.isdir(dirname) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-03-10 10:03 Message: Logged In: YES user_id=31435 Gordon, none of those are UNC roots -- they follow the rules exactly as stated for non-UNC paths: MS stat() recognizes \ME\E\java if and only if there's no trailing backslash. That's why your first example succeeds. The complication is that Python removes one trailing backslash "by magic" unless the path "looks like a root", and none of these do. That's why your third example works. Your second and fourth examples fail because you specified two trailing backslashes in those, and Python only removes one of them by magic. An example of "a UNC root" would be \ME\E. The MS stat() recognizes a root directory if and only if it *does* have a trailing backslash, and Python's magical backslash removal doesn't know UNC roots from a Euro symbol. So the only way to get Python's isdir() (etc) to recognize \ME\E is to follow it with two backslashes, one because Python strips one away (due to not realizing "it looks like a root"), and another else MS stat() refuses to recognize it. Anyway, I'm unassigning this now, cuz MarkH isn't paying any attentino. If someone wants to write a pile of tedious code to "recognize a UNC root when it sees one", I'd accept the patch. I doubt I'll get it to it myself in this lifetime. ---------------------------------------------------------------------- Comment By: Gordon B. McMillan (gmcm) Date: 2002-03-07 16:31 Message: Logged In: YES user_id=4923 Data point: run on a win2k box, where \ME is an NT box Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 >>> os.path.isdir(r"\ME\E\java") 1 >>> os.path.isdir(r"\ME\E\java\") 0 >>> os.path.isdir("\\ME\E\java\") 1 >>> os.path.isdir("\\ME\E\java\\") 0 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 09:28 Message: Logged In: YES user_id=31435 Mark, what do you think about a different approach here? 1. Leave the string alone and *try* stat. If it succeeds, great, we're done. 2. Else if the string doesn't have a trailing (back)slash, append one and try again. Win or lose, that's the end. 3. 
3. Else the string does have a trailing (back)slash. If the string has more than one character, strip a trailing (back)slash and try again. Win or lose, that's the end.
4. Else the string is a single (back)slash, yet stat() failed. This shouldn't be possible.
It doubles the number of stats in cases where the file path doesn't correspond to anything that exists. OTOH, MS's (back)slash rules are undocumented and incomprehensible (read their implementation of stat() for the whole truth -- we're not out-thinking lots of it now, and the gimmick added after 1.5.2 to out-think part of it is at least breaking Gary's thoroughly sensible use). ---------------------------------------------------------------------- Comment By: Gary Herron (herron) Date: 2002-02-11 09:03 Message: Logged In: YES user_id=395736 Sorry, but I don't have much of an idea which versions I was referring to. I picked up the idea of an extra backslash in a FAQ from a web site, the search for which I can't seem to reproduce. It claimed one backslash was enough, but did not specify a Python version. It *might* have been old enough to be pre-1.5.2. The two versions I can test are 1.5.1 (where one backslash is enough) and 2.2 (where two are required). This seems to me to support (or at least not contradict) Tim's hypothesis. Gary ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-10 19:57 Message: Logged In: YES user_id=31435 Gary, exactly what do you mean by "older versions of Python"? That is, specifically which versions? The Microsoft stat() function is extremely picky about trailing (back)slashes. For example, if you have a directory c:/python, and pass "c:/python/" to the MS stat(), it claims no such thing exists. This isn't documented by MS, but that's how it works: a trailing (back)slash is required if and only if the path passed in "is a root". So MS stat() doesn't understand "/python/", and doesn't understand "d:" either. The former doesn't tolerate a (back)slash, while the latter requires one. This is impossible for people to keep straight, so after 1.5.2 Python started removing (back)slashes on its own to make MS stat() happy. The code currently leaves a trailing (back)slash alone if and only if one exists, and in addition one of these obtains: 1) The (back)slash is the only character in the path. or 2) The path has 3 characters, and the middle one is a colon. UNC roots don't fit either of those, so do get one (back)slash chopped off. However, just as for any other roots, the MS stat() refuses to recognize them as valid unless they do have a trailing (back)slash. Indeed, the last time I applied a contributed patch to this code, I added a /* XXX UNC root drives should also be exempted? */ comment there. However, this explanation doesn't make sense unless by "older versions of Python" you mean nothing more recent than 1.5.2. If I'm understanding the source of the problem, it should exist in all Pythons after 1.5.2. So if you don't see the same problem in 1.6, 2.0 or 2.1, I'm on the wrong track. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-09 00:33 Message: Logged In: YES user_id=31435 BTW, it occurs to me that this *may* be a consequence of whatever was done in 2.2 to encode/decode filename strings for system calls on Windows. I didn't follow that, and Mark may be the only one who fully understands the details.
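Below is a minimal, untested sketch (an editor's illustration, not code from this tracker item) of the four-step approach Tim proposes above: try stat() on the string as given, and if that fails retry exactly once with the trailing (back)slash toggled. The helper names are made up for the example.

    import os, stat

    def _stat_with_retry(path):
        # 1. Leave the string alone and try stat().
        try:
            return os.stat(path)
        except os.error:
            pass
        if not path.endswith('\\') and not path.endswith('/'):
            # 2. No trailing (back)slash: append one and try again.
            retry = path + '\\'
        elif len(path) > 1:
            # 3. Trailing (back)slash and more than one character: strip it and retry.
            retry = path[:-1]
        else:
            # 4. A single (back)slash that stat() rejects "shouldn't be possible".
            raise os.error, 'stat failed for ' + repr(path)
        return os.stat(retry)

    def _isdir(path):
        try:
            st = _stat_with_retry(path)
        except os.error:
            return 0
        return stat.S_ISDIR(st[stat.ST_MODE])

As Mark's measurements above show, the price is a second stat() whenever the first one fails, which is exactly the trade-off being debated in this thread.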
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-09 00:17 Message: Logged In: YES user_id=31435 Here's the implementation of Windows isdir():

def isdir(path):
    """Test whether a path is a directory"""
    try:
        st = os.stat(path)
    except os.error:
        return 0
    return stat.S_ISDIR(st[stat.ST_MODE])

That is, we return whatever Microsoft's stat() tells us, and our code is the same in 2.2 as in 2.1. I don't have Win2K here, and my Win98 box isn't on a Windows network so I can't even try real UNC paths here. Reassigning to MarkH in case he can do better on either count. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 23:05 Message: Logged In: YES user_id=6380 Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. :-( ) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply at sourceforge.net Wed Jun 2 06:48:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 06:48:45 2004 Subject: [ python-Bugs-964929 ] Unicode String formatting does not correctly handle objects Message-ID: Bugs item #964929, was opened at 2004-06-02 10:48 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964929&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Giles Antonio Radford (mewf) Assigned to: M.-A. Lemburg (lemburg) Summary: Unicode String formatting does not correctly handle objects Initial Comment: I have a problem with the way '%s' is handled in unicode strings when formatted. The Python Language Reference states that a unicode serialisation of an object should be in __unicode__, and I have seen Python break down if unicode data is returned in __str__. The problem is that there does not appear to be a way to interpolate the results from __unicode__ within a string:

class EuroHolder:
    def __init__(self, price):
        self._price = price
    def __str__(self):
        return "%.02f euro" % self._price
    def __unicode__(self):
        return u"%.02f\u20ac" % self._price

>>> class EuroHolder:
...     def __init__(self, price):
...         self._price = price
...     def __str__(self):
...         return "%.02f euro" % self._price
...     def __unicode__(self):
...         return u"%.02f\u20ac" % self._price
...
>>> e = EuroHolder(123.45)
>>> str(e)
'123.45 euro'
>>> unicode(e)
u'123.45\u20ac'
>>> "%s" % e
'123.45 euro'
>>> u"%s" % e  #this is wrong
u'123.45 euro'
>>> u"%s" % unicode(e)  # This is silly
u'123.45\u20ac'
>>>

The first case is wrong, as I actually could cope with unicode data in the string I was substituting into, and I should be able to request the unicode data be put in. The second case is silly, as the whole point of string substitution variables such as %s, %d and %f is to remove the need for coercion on the right of the %. Proposed solution #1: Make %s in unicode string substitution automatically check __unicode__() of the rvalue before trying __str__(). This is the most logical thing to expect of %s, if you insist on overloading it the way it currently does when a unicode object in the rvalue will ensure the result is unicode.
Proposed solution #2: Make a new string conversion operator, such as %S or %U, which will explicitly call __unicode__() on the rvalue even if the lvalue is a non-unicode string. Solution #2 has the advantage that it does not break any previous behaviour of %s, and also allows for explicit conversion to unicode of 8-bit strings in the lvalue. I prefer solution #1 as I feel that the current operation of %s is incorrect, and it's unlikely to break much, whereas the "advantage" of converting 8-bit strings in the lvalue to unicode which solution #2 advocates will just lead to encoding problems and sloppy code. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964929&group_id=5470 From noreply at sourceforge.net Wed Jun 2 07:17:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 07:17:53 2004 Subject: [ python-Bugs-964949 ] Ctrl-C causes odd behaviour Message-ID: Bugs item #964949, was opened at 2004-06-02 04:17 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964949&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Michael Bax (mrbax) Assigned to: Thomas Heller (theller) Summary: Ctrl-C causes odd behaviour Initial Comment: With various versions of console Python 2.3.x under Windows 2000, executed using the "Python (command-line)" Start Menu shortcut, I have noticed the following intermittent errors: 1. When pressing Ctrl-C at the prompt, Python terminates. 2. When pressing Ctrl-C during a raw_input, Python raises an EOFError instead of KeyboardInterrupt. I usually cannot duplicate this behaviour by repeatedly pressing Ctrl-C or repeating the steps that led to it. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964949&group_id=5470 From noreply at sourceforge.net Wed Jun 2 07:22:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 07:22:50 2004 Subject: [ python-Bugs-708927 ] socket timeouts produce wrong errors in win32 Message-ID: Bugs item #708927, was opened at 2003-03-24 18:59 Message generated for change (Comment added) made by troels You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=708927&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Greg Chapman (glchapman) Assigned to: Nobody/Anonymous (nobody) Summary: socket timeouts produce wrong errors in win32 Initial Comment: Here's a session: Python 2.3a2 (#39, Feb 19 2003, 17:58:58) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.setdefaulttimeout(0.01) >>> import urllib >>> urllib.urlopen('http://www.python.org') Traceback (most recent call last): File "<stdin>", line 1, in ?
File "c:\Python23\lib\urllib.py", line 76, in urlopen return opener.open(url) File "c:\Python23\lib\urllib.py", line 181, in open return getattr(self, name)(url) File "c:\Python23\lib\urllib.py", line 297, in open_http h.endheaders() File "c:\Python23\lib\httplib.py", line 705, in endheaders self._send_output() File "c:\Python23\lib\httplib.py", line 591, in _send_output self.send(msg) File "c:\Python23\lib\httplib.py", line 558, in send self.connect() File "c:\Python23\lib\httplib.py", line 798, in connect IOError: [Errno socket error] (2, 'No such file or directory') >>> urllib.urlopen('http://www.python.org') < SNIP > IOError: [Errno socket error] (0, 'Error') Looking at socketmodule.c, it appears internal_connect must be taking the path which (under MS_WINDOWS) calls select to see if there was a timeout. select must be returning 0 (to signal a timeout), but it apparently does not call WSASetLastError, so when set_error is called, WSAGetLastError returns 0, and the ultimate error generated comes from the call to PyErr_SetFromErrNo. Perhaps in this case internal_connect should itself call WSASetLastError (with WSAETIMEDOUT rather than WSAEWOULDBLOCK?). The reason I ran into this is I was planning to convert some code which used the timeoutsocket module under 2.2. That module raises a Timeout exception (which the code was catching) and I was trying to figure out what the equivalent exception would be from the normal socket module. Perhaps the socket module should define some sort of timeout exception class so it would be easier to detect timeouts as opposed to other socket errors. ---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-02 13:22 Message: Logged In: YES user_id=32863 I think this may be fixed. I wasn't able to reproduce the problem with Python 2.3.4 on Windows XP SP1. Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.setdefaulttimeout(0.01) >>> import urllib >>> urllib.urlopen('http://www.python.org') Traceback (most recent call last): File "", line 1, in ? File "C:\Program Files\Python23\lib\urllib.py", line 76, in urlopen return opener.open(url) File "C:\Program Files\Python23\lib\urllib.py", line 181, in open return getattr(self, name)(url) File "C:\Program Files\Python23\lib\urllib.py", line 297, in open_http h.endheaders() File "C:\Program Files\Python23\lib\httplib.py", line 712, in endheaders self._send_output() File "C:\Program Files\Python23\lib\httplib.py", line 597, in _send_output self.send(msg) File "C:\Program Files\Python23\lib\httplib.py", line 564, in send self.connect() File "C:\Program Files\Python23\lib\httplib.py", line 548, in connect raise socket.error, msg IOError: [Errno socket error] timed out Repeatedly calling the code below gives the same exception and backtrace every time. 
>>> urllib.urlopen('http://www.python.org') ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=708927&group_id=5470 From noreply at sourceforge.net Wed Jun 2 07:56:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 07:56:53 2004 Subject: [ python-Bugs-960325 ] "require " configure option Message-ID: Bugs item #960325, was opened at 2004-05-25 21:07 Message generated for change (Comment added) made by hfuru You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: "require " configure option Initial Comment: I'd like to be able to configure Python so that Configure or Make will fail if a particular feature is unavailable. Currently I'm concerned with SSL, which just gets a warning from Make: building '_ssl' extension *** WARNING: renaming "_ssl" since importing it failed: ld.so.1: ./python: fatal: libssl.so.0.9.8: open failed: No such file or directory Since that's buried in a lot of Make output, it's easy to miss. Besides, for semi-automatic builds it's in any case good to get a non-success exit status from the build process. Looking at the Make output, I see the bz2 extension is another example where this might be useful. Maybe the option would simply be '--enable-ssl', unless you want that to merely try to build with ssl. Or '--require=ssl,bz2,...'. ---------------------------------------------------------------------- >Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-02 13:56 Message: Logged In: YES user_id=726647 Ah, so that's what RFE means. You could rename that to 'Enhancement Requests'. Anyway, QoI issues tend to resemble bug issues more than enhancement issues, so '"bug" of type feature request' looks good to me. Though I'll resubmit as RFE if you ask. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 04:07 Message: Logged In: YES user_id=593130 Yes, this is not a PEP item. I didn't notice Feature Reqest since it is redundant vis a vis the separate RFE list. ---------------------------------------------------------------------- Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-01 20:13 Message: Logged In: YES user_id=726647 I marked it with Group: Feature Request. Not a bug, but a quality of implementation issue. It seemed more proper here than as a PEP. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 19:58 Message: Logged In: YES user_id=593130 Are you claiming that there is an actual bug, or is this merely an RFE (Request For Enhancement) item? 
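What the request in #960325 boils down to could be approximated today with a small post-build check. The sketch below is an editor's illustration, not anything shipped with Python; the module list and script name are only examples. It exits with a non-zero status if a wanted extension failed to build, which is the behaviour the submitter wants from configure/make.

    import sys

    REQUIRED = ["_ssl", "bz2"]   # example feature list, not a real configure interface

    missing = []
    for name in REQUIRED:
        try:
            __import__(name)
        except ImportError, exc:
            missing.append((name, str(exc)))

    if missing:
        for name, why in missing:
            sys.stderr.write("required module %s is unavailable: %s\n" % (name, why))
        sys.exit(1)

Run against the freshly built interpreter (e.g. ./python check_required.py), this turns the buried "*** WARNING: renaming ..." message into a hard failure that semi-automatic builds can detect.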
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 From noreply at sourceforge.net Wed Jun 2 08:37:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 08:37:50 2004 Subject: [ python-Bugs-924294 ] IPV6 not correctly ifdef'd in socketmodule.c Message-ID: Bugs item #924294, was opened at 2004-03-27 01:07 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924294&group_id=5470 Category: Build Group: Python 2.3 >Status: Closed >Resolution: Accepted Priority: 7 Submitted By: David Meleedy (dmeleedy) Assigned to: Martin v. L?wis (loewis) Summary: IPV6 not correctly ifdef'd in socketmodule.c Initial Comment: buckaroo-75: diff -c3 socketmodule.c-dist socketmodule.c *** socketmodule.c-dist Fri Mar 26 18:51:52 2004 --- socketmodule.c Fri Mar 26 18:52:47 2004 *************** *** 2971,2977 **** return NULL; } ! #ifndef ENABLE_IPV6 if(af == AF_INET6) { PyErr_SetString(socket_error, "can't use AF_INET6, IPv6 is disabled"); --- 2971,2977 ---- return NULL; } ! #ifdef ENABLE_IPV6 if(af == AF_INET6) { PyErr_SetString(socket_error, "can't use AF_INET6, IPv6 is disabled"); ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 14:37 Message: Logged In: YES user_id=21627 I have applied a similar patch as socketmodule.c 1.290, 1.271.6.8 NEWS 1.831.4.114 ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-05-06 03:58 Message: Logged In: YES user_id=21627 Can you please elaborate? Why should Python raise an exception that IPv6 is disabled if ENABLE_IPV6 is defined? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924294&group_id=5470 From noreply at sourceforge.net Wed Jun 2 08:40:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 08:40:47 2004 Subject: [ python-Bugs-964870 ] sys.getfilesystemencoding() Message-ID: Bugs item #964870, was opened at 2004-06-02 11:15 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.getfilesystemencoding() Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. In the documentation it is reported that: 'On Windows NT+, file names are Unicode natively, so no conversion is performed'. But: import sys >>> sys.getfilesystemencoding() 'mbcs' Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 14:40 Message: Logged In: YES user_id=21627 The documentation is correct. Filenames are not converted. 
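For what the distinction means in practice, here is a small editor's sketch (assuming a Windows NT+ box; the file name is made up): unicode file names are handed to the wide Win32 APIs unchanged, while sys.getfilesystemencoding() names the codec ('mbcs') used only when byte-string names have to be converted.

    import sys, os

    print sys.getfilesystemencoding()   # 'mbcs' on Windows

    name = u"caf\u00e9.txt"             # hypothetical Unicode file name
    open(name, "w").close()             # created under its Unicode name, no conversion
    print name in os.listdir(u".")      # a unicode listdir() round-trips the name -> True
    os.remove(name)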
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 From noreply at sourceforge.net Wed Jun 2 08:42:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 08:42:51 2004 Subject: [ python-Feature Requests-963845 ] There ought to be a way to uninstall Message-ID: Feature Requests item #963845, was opened at 2004-05-31 23:49 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=963845&group_id=5470 >Category: None >Group: None Status: Open Resolution: None Priority: 5 Submitted By: Kaleissin (kaleissin) Assigned to: Nobody/Anonymous (nobody) Summary: There ought to be a way to uninstall Initial Comment: There ought to be a way to uninstall, if for no other reason that it is polite behavior in a civilized society :) The usual way to uninstall something from a "package" seems to be to keep a list of all installed files somewhere. Doing the equivalent of a "find . -print" at the right point during "bdist/build" should do the trick, or maybe a command of its own, the filelist to be stored in the sdist-result? Such a list of files would also be useful for packagers, be it maintainers of .rpm, .deb or *BSD ports. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 14:42 Message: Logged In: YES user_id=21627 Moving to RFE tracker. Notice that binary packages (created by bdist_rpm or bdist_wininst) do support uninstall. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=963845&group_id=5470 From noreply at sourceforge.net Wed Jun 2 08:46:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 08:46:23 2004 Subject: [ python-Bugs-962471 ] PyModule_AddIntConstant documented to take an int, not a lon Message-ID: Bugs item #962471, was opened at 2004-05-28 23:46 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962471&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: PyModule_AddIntConstant documented to take an int, not a lon Initial Comment: [forwarded from http://bugs.debian.org/250826] In 7.5.4 (Module Objects) PyModule_AddIntConstant is documented to take an int as third parameter, however it takes an long. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 14:46 Message: Logged In: YES user_id=21627 Thanks for the report. Fixed in concrete.tex 1.41 and 1.25.10.8. 
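On the uninstall request in #963845 above: until such a command exists, a common workaround is the standard --record option of the distutils install command, which writes exactly the kind of file list the submitter describes. A rough sketch follows (editor's illustration; the file name and the cleanup loop are not part of distutils):

    # First:  python setup.py install --record installed-files.txt
    import os

    for line in open("installed-files.txt"):
        path = line.strip()
        if path and os.path.isfile(path):
            os.remove(path)

The record lists installed files only, so any package directories left empty afterwards still have to be removed by hand, which is roughly why the report asks for a real uninstall command.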
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962471&group_id=5470 From noreply at sourceforge.net Wed Jun 2 08:49:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 08:49:23 2004 Subject: [ python-Bugs-964592 ] 2.3.4 Language Reference Typo, Section 2.3.2 Message-ID: Bugs item #964592, was opened at 2004-06-01 23:40 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964592&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Chris Wood (gracefool) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.4 Language Reference Typo, Section 2.3.2 Initial Comment: In the Language Reference, documentation updated 20 May 2004, section 2.3.2: "Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled ***for*** to help avoid name clashes between ``private'' attributes of base and derived classes." I assume this should be "form", not "for". ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2004-06-02 14:49 Message: Logged In: YES user_id=21627 Thanks for the report. Fixed in ref2.tex 1.52 and 1.48.10.3. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964592&group_id=5470 From noreply at sourceforge.net Wed Jun 2 08:54:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 08:56:05 2004 Subject: [ python-Bugs-964525 ] Boolean operations section includes lambda_form grammar rule Message-ID: Bugs item #964525, was opened at 2004-06-01 22:22 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964525&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Stefanus Du Toit (sdt) Assigned to: Nobody/Anonymous (nobody) Summary: Boolean operations section includes lambda_form grammar rule Initial Comment: http://docs.python.org/ref/Booleans.html includes the rule: lambda_form ::= "lambda" [parameter_list]: expression I imagine this belongs in http://docs.python.org/ref/lambdas.html instead. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2004-06-02 14:54 Message: Logged In: YES user_id=21627 Thanks for the report. Fixed in ref5.tex 1.81, to appear with Python 2.4.
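The rule quoted above is only being moved, not changed; as a quick editor's illustration of what a lambda_form describes:

    add = lambda a, b=1: a + b      # [parameter_list] with a default value
    print add(2), add(2, 3)         # 3 5
    nop = lambda: None              # the parameter list is optional
    print nop()                     # None
    f = lambda x: x or "fallback"   # the expression body extends through the 'or'
    print f(0), f("hit")            # fallback hit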
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964525&group_id=5470 From noreply at sourceforge.net Wed Jun 2 09:01:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 09:01:07 2004 Subject: [ python-Bugs-960448 ] grammar for "class" inheritance production slightly wrong Message-ID: Bugs item #960448, was opened at 2004-05-26 00:20 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960448&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: grammar for "class" inheritance production slightly wrong Initial Comment: http://www.python.org/dev/doc/devel/ref/class.html lists the grammar for a class statement, including an optional inheritance production of "(" [expression_list] ")". expression_list is *not* optional. If the inheritance production (the parentheses) is present, then the expression list must be present (non-empty) as well. """ >>> class C(): SyntaxError: invalid syntax """ ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 15:01 Message: Logged In: YES user_id=21627 Thanks for the report. Fixed in ref7.tex 1.38 and 1.35.16.2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960448&group_id=5470 From noreply at sourceforge.net Wed Jun 2 09:50:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 09:50:31 2004 Subject: [ python-Bugs-965032 ] datetime.isoformat() contaiins 'T0' Message-ID: Bugs item #965032, was opened at 2004-06-02 09:50 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965032&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Doug Fort (dougfort) Assigned to: Nobody/Anonymous (nobody) Summary: datetime.isoformat() contaiins 'T0' Initial Comment: Python 2.3.4 (#1, May 27 2004, 13:38:36) [GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import datetime >>> datetime.datetime.today().isoformat() '2004-06-02T09:36:28.893992' As of 2.3.4 the datetime.isoformat() includes the characters 'T0' between the date and the time. I assume this is something to do with the time zone. 
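A short check (editor's note, reusing the timestamp from the report) shows this has nothing to do with time zones: ISO 8601 separates the date from the time with a literal 'T', and the '0' is simply the first digit of the two-digit hour field. datetime also lets you pick a different separator:

    import datetime

    d = datetime.datetime(2004, 6, 2, 9, 36, 28, 893992)
    print d.isoformat()        # 2004-06-02T09:36:28.893992
    print d.isoformat(' ')     # 2004-06-02 09:36:28.893992  (optional sep argument)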
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965032&group_id=5470 From noreply at sourceforge.net Wed Jun 2 09:59:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 10:00:05 2004 Subject: [ python-Bugs-777597 ] socketmodule.c connection handling incorrect on windows Message-ID: Bugs item #777597, was opened at 2003-07-25 17:01 Message generated for change (Comment added) made by troels You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777597&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Garth Bushell (garth42) Assigned to: Nobody/Anonymous (nobody) Summary: socketmodule.c connection handling incorrect on windows Initial Comment: The socketmodule.c code does not handle connection refused correctly. This is due to a difference in the operation of select on Windows. The offending code is in internal_connect in the MS_WINDOWS ifdef. The code should test exceptfds to check for connection refused. If so, it should call getsockopt(SOL_SOCKET, SO_ERROR, ...) to get the error status. (Source: Microsoft Platform SDK) The suggested fix is shown below (untested):

#ifdef MS_WINDOWS
	if (s->sock_timeout > 0.0) {
		if (res < 0 && WSAGetLastError() == WSAEWOULDBLOCK) {
			/* This is a mess. Best solution: trust select */
			fd_set exfds;
			struct timeval tv;
			tv.tv_sec = (int)s->sock_timeout;
			tv.tv_usec = (int)((s->sock_timeout - tv.tv_sec) * 1e6);
			FD_ZERO(&exfds);
			FD_SET(s->sock_fd, &exfds); /* Platform SDK says so */
			res = select(s->sock_fd+1, NULL, NULL, &exfds, &tv);
			if (res > 0) {
				if (FD_ISSET(s->sock_fd, &exfds)) {
					/* Get the real reason */
					int reslen = sizeof(res);
					getsockopt(s->sock_fd, SOL_SOCKET, SO_ERROR,
					           (char *)&res, &reslen);
				} else {
					/* God knows how we got here */
					res = 0;
				}
			} else if (res == 0) {
				res = WSAEWOULDBLOCK;
			} else {
				/* Not sure we should return the error from select? */
				res = WSAGetLastError();
			}
		}
	} else if (res < 0)
		res = WSAGetLastError();
#else

---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-02 15:59 Message: Logged In: YES user_id=32863 I have turned Garth's code into a patch versus Python 2.3.4. I don't believe the fix is correct and complete, but it should serve as a starting point. Patch is in http://python.org/sf/965036 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-07-29 00:00 Message: Logged In: YES user_id=33168 Garth could you produce a patch against 2.3c2 with your selected change and test it? It would help us a lot as we are all very overloaded. Thanks. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777597&group_id=5470 From noreply at sourceforge.net Wed Jun 2 10:36:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 10:36:13 2004 Subject: [ python-Bugs-965065 ] document docs.python.org in PEP-101 Message-ID: Bugs item #965065, was opened at 2004-06-03 00:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965065&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Fred L.
Drake, Jr. (fdrake) Summary: document docs.python.org in PEP-101 Initial Comment: We need documentation, ironically, for docs.python.org. The release PEP (101) needs to spell out what needs to be done when a new release is made. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965065&group_id=5470 From noreply at sourceforge.net Wed Jun 2 11:09:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 11:10:06 2004 Subject: [ python-Bugs-708927 ] socket timeouts produce wrong errors in win32 Message-ID: Bugs item #708927, was opened at 2003-03-24 08:59 Message generated for change (Comment added) made by glchapman You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=708927&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed Resolution: None Priority: 5 Submitted By: Greg Chapman (glchapman) Assigned to: Nobody/Anonymous (nobody) Summary: socket timeouts produce wrong errors in win32 Initial Comment: Here's a session: Python 2.3a2 (#39, Feb 19 2003, 17:58:58) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.setdefaulttimeout(0.01) >>> import urllib >>> urllib.urlopen('http://www.python.org') Traceback (most recent call last): File "", line 1, in ? File "c:\Python23\lib\urllib.py", line 76, in urlopen return opener.open(url) File "c:\Python23\lib\urllib.py", line 181, in open return getattr(self, name)(url) File "c:\Python23\lib\urllib.py", line 297, in open_http h.endheaders() File "c:\Python23\lib\httplib.py", line 705, in endheaders self._send_output() File "c:\Python23\lib\httplib.py", line 591, in _send_output self.send(msg) File "c:\Python23\lib\httplib.py", line 558, in send self.connect() File "c:\Python23\lib\httplib.py", line 798, in connect IOError: [Errno socket error] (2, 'No such file or directory') >>> urllib.urlopen('http://www.python.org') < SNIP > IOError: [Errno socket error] (0, 'Error') Looking at socketmodule.c, it appears internal_connect must be taking the path which (under MS_WINDOWS) calls select to see if there was a timeout. select must be returning 0 (to signal a timeout), but it apparently does not call WSASetLastError, so when set_error is called, WSAGetLastError returns 0, and the ultimate error generated comes from the call to PyErr_SetFromErrNo. Perhaps in this case internal_connect should itself call WSASetLastError (with WSAETIMEDOUT rather than WSAEWOULDBLOCK?). The reason I ran into this is I was planning to convert some code which used the timeoutsocket module under 2.2. That module raises a Timeout exception (which the code was catching) and I was trying to figure out what the equivalent exception would be from the normal socket module. Perhaps the socket module should define some sort of timeout exception class so it would be easier to detect timeouts as opposed to other socket errors. ---------------------------------------------------------------------- >Comment By: Greg Chapman (glchapman) Date: 2004-06-02 07:09 Message: Logged In: YES user_id=86307 I agree that it has been fixed. I think the timeout parameter to internal_connect was not there for 2.3a2 (but Sourceforge won't let me connect to the web CVS right now, so I'm not sure). Anyway, since internal_connect sets timeout to true in this case, the correct error is generated by sock_connect. 
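As a footnote to the thread above: the dedicated exception the original report asked for does exist as of Python 2.3, namely socket.timeout (a subclass of socket.error), so timeouts can be told apart from other socket errors. A small editor's example (with a 10ms limit the connect will usually, though not always, time out):

    import socket

    socket.setdefaulttimeout(0.01)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("www.python.org", 80))
    except socket.timeout:
        print "timed out"
    except socket.error, e:
        print "other socket error:", e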
---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-02 03:22 Message: Logged In: YES user_id=32863 I think this may be fixed. I wasn't able to reproduce the problem with Python 2.3.4 on Windows XP SP1. Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.setdefaulttimeout(0.01) >>> import urllib >>> urllib.urlopen('http://www.python.org') Traceback (most recent call last): File "", line 1, in ? File "C:\Program Files\Python23\lib\urllib.py", line 76, in urlopen return opener.open(url) File "C:\Program Files\Python23\lib\urllib.py", line 181, in open return getattr(self, name)(url) File "C:\Program Files\Python23\lib\urllib.py", line 297, in open_http h.endheaders() File "C:\Program Files\Python23\lib\httplib.py", line 712, in endheaders self._send_output() File "C:\Program Files\Python23\lib\httplib.py", line 597, in _send_output self.send(msg) File "C:\Program Files\Python23\lib\httplib.py", line 564, in send self.connect() File "C:\Program Files\Python23\lib\httplib.py", line 548, in connect raise socket.error, msg IOError: [Errno socket error] timed out Repeatedly calling the code below gives the same exception and backtrace every time. >>> urllib.urlopen('http://www.python.org') ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=708927&group_id=5470 From noreply at sourceforge.net Wed Jun 2 11:15:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 11:15:31 2004 Subject: [ python-Bugs-965032 ] datetime.isoformat() contaiins 'T0' Message-ID: Bugs item #965032, was opened at 2004-06-02 23:50 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965032&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Doug Fort (dougfort) Assigned to: Nobody/Anonymous (nobody) Summary: datetime.isoformat() contaiins 'T0' Initial Comment: Python 2.3.4 (#1, May 27 2004, 13:38:36) [GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import datetime >>> datetime.datetime.today().isoformat() '2004-06-02T09:36:28.893992' As of 2.3.4 the datetime.isoformat() includes the characters 'T0' between the date and the time. I assume this is something to do with the time zone. ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-03 01:15 Message: Logged In: YES user_id=29957 This isn't a bug. The 0 is the first digit of the hour, and the T between the date and the hour is an optional part of the standard (ISO 8601). 
The ISO standard itself isn't freely available, but these two links provide more information: http://www.w3.org/TR/NOTE-datetime http://www.cl.cam.ac.uk/~mgk25/iso-time.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965032&group_id=5470 From noreply at sourceforge.net Wed Jun 2 11:17:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 11:17:43 2004 Subject: [ python-Bugs-965032 ] datetime.isoformat() contaiins 'T0' Message-ID: Bugs item #965032, was opened at 2004-06-02 09:50 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965032&group_id=5470 Category: Python Library >Group: Not a Bug Status: Closed Resolution: Invalid Priority: 5 Submitted By: Doug Fort (dougfort) >Assigned to: Tim Peters (tim_one) Summary: datetime.isoformat() contaiins 'T0' Initial Comment: Python 2.3.4 (#1, May 27 2004, 13:38:36) [GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import datetime >>> datetime.datetime.today().isoformat() '2004-06-02T09:36:28.893992' As of 2.3.4 the datetime.isoformat() includes the characters 'T0' between the date and the time. I assume this is something to do with the time zone. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-02 11:17 Message: Logged In: YES user_id=31435 No, nothing to do with time zones. This is an ISO 8601 date+time format. ISO requires that a "T" separates the date portion from the time portion. The "0" is from "09"; ISO requires exactly two digits for the hour portion of the time (and for minute, second, month, and day -- ISO 8601 is a fixed-width format). There's no bug here, so closing. Use Google to find the 8601 standard (or one of the many summaries of it on the web). The method is called isoformat because of ISO . ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-02 11:15 Message: Logged In: YES user_id=29957 This isn't a bug. The 0 is the first digit of the hour, and the T between the date and the hour is an optional part of the standard (ISO 8601). The ISO standard itself isn't freely available, but these two links provide more information: http://www.w3.org/TR/NOTE-datetime http://www.cl.cam.ac.uk/~mgk25/iso-time.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965032&group_id=5470 From noreply at sourceforge.net Wed Jun 2 11:35:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 11:35:27 2004 Subject: [ python-Bugs-210832 ] urljoin() bug with odd no of '..' (PR#194) Message-ID: Bugs item #210832, was opened at 2000-08-01 16:13 Message generated for change (Comment added) made by jnelson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=210832&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: urljoin() bug with odd no of '..' 
(PR#194) Initial Comment: Jitterbug-Id: 194 Submitted-By: DrMalte@ddd.de Date: Sun, 30 Jan 2000 19:40:45 -0500 (EST) Version: 1.5.2 and 1.4 OS: Linux While playing with linbot I noticed some failed requests to 'http://xxx.xxx.xx/../img/xxx.gif' for a document in the root directory containing . The Reason is in urlparse.urljoin() urljoin() fails to remove an odd number of '../' from the path. Demonstration: from urlparse import urljoin print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) # gives 'http://127.0.0.1/../imgs/logo.gif' # should give 'http://127.0.0.1/imgs/logo.gif' print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) # gives 'http://127.0.0.1/imgs/logo.gif' # works # '../../imgs/logo.gif' gives 'http://127.0.0.1/../imgs/logo.gif' and so on The patch for 1.5.2 ( I'm not sure if it works generally, but tests with linbot looked good) *** /usr/local/lib/python1.5/urlparse.py Sat Jun 26 19:11:59 1999 --- urlparse.py Mon Jan 31 01:31:45 2000 *************** *** 170,175 **** --- 170,180 ---- segments[-1] = '' elif len(segments) >= 2 and segments[-1] == '..': segments[-2:] = [''] + + if segments[0] == '': + while segments[1] == '..': # remove all leading '..' + del segments[1] + return urlunparse((scheme, netloc, joinfields(segments, '/'), params, query, fragment)) ==================================================================== Audit trail: Mon Feb 07 12:35:35 2000 guido changed notes Mon Feb 07 12:35:35 2000 guido moved from incoming to request ---------------------------------------------------------------------- Comment By: Jon Nelson (jnelson) Date: 2004-06-02 10:35 Message: Logged In: YES user_id=8446 I'm not 100% sure, but as of Python 2.2.2 (#1, Feb 24 2003, 19:13:11) for RedHat, this is still a problem: >>> import urlparse >>> print urlparse.urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif >>> The patch above obviously no longer applies. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-01-04 23:59 Message: Lib/urlparse.py revision 1.27 conforms to all recommended practices from RFC 1808 which don't conflict with RFC 1630. Test cases have been added to ensure we don't lose this attribute. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-12-19 10:41 Message: Ok, confirmed. Reopening the bug until I get a chance to look at the proposed patch and can update the test suite. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-12-19 10:38 Message: OK, reopened. ---------------------------------------------------------------------- Comment By: Walter D?rwald (doerwalter) Date: 2000-12-19 10:30 Message: Section 5.2 of RFC 1808 states that in the context of the base URL <> = URLs that have more .. than the base has directory names, should be resolved in the following way: ../../../g = ../../../../g = i.e. they should be preserved, which urljoin does in the first example gives in the bug report: print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif but not in the second example: print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) http://127.0.0.1/imgs/logo.gif where the result should have been http://127.0.0.1/../../imgs/logo.gif ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. 
(fdrake) Date: 2000-08-23 23:22 Message: RFC 1808 gives examples of this form in section 5.2, "Abnormal Examples," and gives the current behavior as the desired treatment, stating that all parsers (urljoin() counts given the RFC's terminology) should treat the abnormal examples consistently. ---------------------------------------------------------------------- Comment By: Moshe Zadka (moshez) Date: 2000-08-13 03:36 Message: OK, Jeremy -- this one is yours. Either notabug it, or check in the relevant patch (101064 -- assigned to you) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-08-01 16:13 Message: Patch being considered. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-08-01 16:13 Message: From: Guido van Rossum Subject: Re: [Python-bugs-list] urljoin() bug with odd no of '..' (PR#194) Date: Mon, 31 Jan 2000 12:28:55 -0500 Thanks for your bug report and fix. I agree with your diagnosis. Would you please be so kind as to resend your patch with the legal disclaimer from http://www.python.org/1.5/bugrelease.html --Guido van Rossum (home page: http://www.python.org/~guido/) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=210832&group_id=5470 From noreply at sourceforge.net Wed Jun 2 11:49:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 11:49:54 2004 Subject: [ python-Bugs-210832 ] urljoin() bug with odd no of '..' (PR#194) Message-ID: Bugs item #210832, was opened at 2000-08-01 23:13 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=210832&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: urljoin() bug with odd no of '..' (PR#194) Initial Comment: Jitterbug-Id: 194 Submitted-By: DrMalte@ddd.de Date: Sun, 30 Jan 2000 19:40:45 -0500 (EST) Version: 1.5.2 and 1.4 OS: Linux While playing with linbot I noticed some failed requests to 'http://xxx.xxx.xx/../img/xxx.gif' for a document in the root directory containing . The Reason is in urlparse.urljoin() urljoin() fails to remove an odd number of '../' from the path. Demonstration: from urlparse import urljoin print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) # gives 'http://127.0.0.1/../imgs/logo.gif' # should give 'http://127.0.0.1/imgs/logo.gif' print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) # gives 'http://127.0.0.1/imgs/logo.gif' # works # '../../imgs/logo.gif' gives 'http://127.0.0.1/../imgs/logo.gif' and so on The patch for 1.5.2 ( I'm not sure if it works generally, but tests with linbot looked good) *** /usr/local/lib/python1.5/urlparse.py Sat Jun 26 19:11:59 1999 --- urlparse.py Mon Jan 31 01:31:45 2000 *************** *** 170,175 **** --- 170,180 ---- segments[-1] = '' elif len(segments) >= 2 and segments[-1] == '..': segments[-2:] = [''] + + if segments[0] == '': + while segments[1] == '..': # remove all leading '..' 
+ del segments[1] + return urlunparse((scheme, netloc, joinfields(segments, '/'), params, query, fragment)) ==================================================================== Audit trail: Mon Feb 07 12:35:35 2000 guido changed notes Mon Feb 07 12:35:35 2000 guido moved from incoming to request ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 17:49 Message: Logged In: YES user_id=89016 This is the same behaviour as in Python 2.3.4 and exactly what RFC 1808 specifies (see http://www.ietf.org/rfc/rfc1808.txt and scroll down to section 5.2. "Abnormal Examples"). Why do you think this is a problem? ---------------------------------------------------------------------- Comment By: Jon Nelson (jnelson) Date: 2004-06-02 17:35 Message: Logged In: YES user_id=8446 I'm not 100% sure, but as of Python 2.2.2 (#1, Feb 24 2003, 19:13:11) for RedHat, this is still a problem: >>> import urlparse >>> print urlparse.urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif >>> The patch above obviously no longer applies. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-01-05 06:59 Message: Lib/urlparse.py revision 1.27 conforms to all recommended practices from RFC 1808 which don't conflict with RFC 1630. Test cases have been added to ensure we don't lose this attribute. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-12-19 17:41 Message: Ok, confirmed. Reopening the bug until I get a chance to look at the proposed patch and can update the test suite. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-12-19 17:38 Message: OK, reopened. ---------------------------------------------------------------------- Comment By: Walter D?rwald (doerwalter) Date: 2000-12-19 17:30 Message: Section 5.2 of RFC 1808 states that in the context of the base URL <> = URLs that have more .. than the base has directory names, should be resolved in the following way: ../../../g = ../../../../g = i.e. they should be preserved, which urljoin does in the first example gives in the bug report: print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif but not in the second example: print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) http://127.0.0.1/imgs/logo.gif where the result should have been http://127.0.0.1/../../imgs/logo.gif ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-08-24 06:22 Message: RFC 1808 gives examples of this form in section 5.2, "Abnormal Examples," and gives the current behavior as the desired treatment, stating that all parsers (urljoin() counts given the RFC's terminology) should treat the abnormal examples consistently. ---------------------------------------------------------------------- Comment By: Moshe Zadka (moshez) Date: 2000-08-13 10:36 Message: OK, Jeremy -- this one is yours. Either notabug it, or check in the relevant patch (101064 -- assigned to you) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-08-01 23:13 Message: Patch being considered. 
---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-08-01 23:13 Message: From: Guido van Rossum Subject: Re: [Python-bugs-list] urljoin() bug with odd no of '..' (PR#194) Date: Mon, 31 Jan 2000 12:28:55 -0500 Thanks for your bug report and fix. I agree with your diagnosis. Would you please be so kind as to resend your patch with the legal disclaimer from http://www.python.org/1.5/bugrelease.html --Guido van Rossum (home page: http://www.python.org/~guido/) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=210832&group_id=5470 From noreply at sourceforge.net Wed Jun 2 13:48:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 13:57:33 2004 Subject: [ python-Bugs-210832 ] urljoin() bug with odd no of '..' (PR#194) Message-ID: Bugs item #210832, was opened at 2000-08-01 16:13 Message generated for change (Comment added) made by jnelson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=210832&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: urljoin() bug with odd no of '..' (PR#194) Initial Comment: Jitterbug-Id: 194 Submitted-By: DrMalte@ddd.de Date: Sun, 30 Jan 2000 19:40:45 -0500 (EST) Version: 1.5.2 and 1.4 OS: Linux While playing with linbot I noticed some failed requests to 'http://xxx.xxx.xx/../img/xxx.gif' for a document in the root directory containing . The Reason is in urlparse.urljoin() urljoin() fails to remove an odd number of '../' from the path. Demonstration: from urlparse import urljoin print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) # gives 'http://127.0.0.1/../imgs/logo.gif' # should give 'http://127.0.0.1/imgs/logo.gif' print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) # gives 'http://127.0.0.1/imgs/logo.gif' # works # '../../imgs/logo.gif' gives 'http://127.0.0.1/../imgs/logo.gif' and so on The patch for 1.5.2 ( I'm not sure if it works generally, but tests with linbot looked good) *** /usr/local/lib/python1.5/urlparse.py Sat Jun 26 19:11:59 1999 --- urlparse.py Mon Jan 31 01:31:45 2000 *************** *** 170,175 **** --- 170,180 ---- segments[-1] = '' elif len(segments) >= 2 and segments[-1] == '..': segments[-2:] = [''] + + if segments[0] == '': + while segments[1] == '..': # remove all leading '..' + del segments[1] + return urlunparse((scheme, netloc, joinfields(segments, '/'), params, query, fragment)) ==================================================================== Audit trail: Mon Feb 07 12:35:35 2000 guido changed notes Mon Feb 07 12:35:35 2000 guido moved from incoming to request ---------------------------------------------------------------------- Comment By: Jon Nelson (jnelson) Date: 2004-06-02 12:48 Message: Logged In: YES user_id=8446 I stand corrected. Just because browsers deal with it, doesn't mean that it's right. FYI - RFC1808 has been superceded by 2396. ---------------------------------------------------------------------- Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 10:49 Message: Logged In: YES user_id=89016 This is the same behaviour as in Python 2.3.4 and exactly what RFC 1808 specifies (see http://www.ietf.org/rfc/rfc1808.txt and scroll down to section 5.2. "Abnormal Examples"). Why do you think this is a problem? 
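To make the #210832 outcome concrete, an editor's illustration (the first result is the one shown earlier in this thread; the second is ordinary relative resolution):

    from urlparse import urljoin

    print urljoin('http://127.0.0.1/', '../imgs/logo.gif')
    # -> http://127.0.0.1/../imgs/logo.gif  (RFC 1808 section 5.2, "abnormal examples":
    #    a ".." that climbs above the root is preserved, not silently dropped)
    print urljoin('http://a/b/c/d;p?q', 'g')
    # -> http://a/b/c/g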
---------------------------------------------------------------------- Comment By: Jon Nelson (jnelson) Date: 2004-06-02 10:35 Message: Logged In: YES user_id=8446 I'm not 100% sure, but as of Python 2.2.2 (#1, Feb 24 2003, 19:13:11) for RedHat, this is still a problem: >>> import urlparse >>> print urlparse.urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif >>> The patch above obviously no longer applies. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-01-04 23:59 Message: Lib/urlparse.py revision 1.27 conforms to all recommended practices from RFC 1808 which don't conflict with RFC 1630. Test cases have been added to ensure we don't lose this attribute. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-12-19 10:41 Message: Ok, confirmed. Reopening the bug until I get a chance to look at the proposed patch and can update the test suite. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-12-19 10:38 Message: OK, reopened. ---------------------------------------------------------------------- Comment By: Walter D?rwald (doerwalter) Date: 2000-12-19 10:30 Message: Section 5.2 of RFC 1808 states that in the context of the base URL <> = URLs that have more .. than the base has directory names, should be resolved in the following way: ../../../g = ../../../../g = i.e. they should be preserved, which urljoin does in the first example gives in the bug report: print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif but not in the second example: print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) http://127.0.0.1/imgs/logo.gif where the result should have been http://127.0.0.1/../../imgs/logo.gif ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-08-23 23:22 Message: RFC 1808 gives examples of this form in section 5.2, "Abnormal Examples," and gives the current behavior as the desired treatment, stating that all parsers (urljoin() counts given the RFC's terminology) should treat the abnormal examples consistently. ---------------------------------------------------------------------- Comment By: Moshe Zadka (moshez) Date: 2000-08-13 03:36 Message: OK, Jeremy -- this one is yours. Either notabug it, or check in the relevant patch (101064 -- assigned to you) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-08-01 16:13 Message: Patch being considered. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-08-01 16:13 Message: From: Guido van Rossum Subject: Re: [Python-bugs-list] urljoin() bug with odd no of '..' (PR#194) Date: Mon, 31 Jan 2000 12:28:55 -0500 Thanks for your bug report and fix. I agree with your diagnosis. 
Would you please be so kind as to resend your patch with the legal disclaimer from http://www.python.org/1.5/bugrelease.html --Guido van Rossum (home page: http://www.python.org/~guido/) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=210832&group_id=5470 From noreply at sourceforge.net Wed Jun 2 14:29:32 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 14:29:40 2004 Subject: [ python-Bugs-947571 ] urllib.urlopen() fails to raise exception Message-ID: Bugs item #947571, was opened at 2004-05-04 11:57 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947571&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Nobody/Anonymous (nobody) Summary: urllib.urlopen() fails to raise exception Initial Comment: I've come across a strange problem: even though the docs say that urllib.urlopen() should raise an IOError for server errors (e.g. 404s), all versions of Python that I've tested (1.5.2 - 2.3) fail to do so. Example: >>> import urllib >>> f = urllib.urlopen('http://www.example.net/this-url-does-not-exist/') >>> print f.read() 404 Not Found

Not Found
The requested URL /this-url-does-not-exist/ was not found on this server.
Apache/1.3.27 Server at www.example.com Port 80
Either the docs are wrong or the implementation has a really long standing bug or I am missing something. ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 20:29 Message: Logged In: YES user_id=89016 This seems to work with urllib2: >>> import urllib2 >>> f = urllib2.urlopen('http://www.example.net/this-url-does- not-exist/') Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/urllib2.py", line 129, in urlopen return _opener.open(url, data) File "/usr/local/lib/python2.3/urllib2.py", line 326, in open '_open', req) File "/usr/local/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/local/lib/python2.3/urllib2.py", line 901, in http_open return self.do_open(httplib.HTTP, req) File "/usr/local/lib/python2.3/urllib2.py", line 895, in do_open return self.parent.error('http', req, fp, code, msg, hdrs) File "/usr/local/lib/python2.3/urllib2.py", line 352, in error return self._call_chain(*args) File "/usr/local/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/local/lib/python2.3/urllib2.py", line 412, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 404: Not Found ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947571&group_id=5470 From noreply at sourceforge.net Wed Jun 2 14:42:32 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 14:42:36 2004 Subject: [ python-Bugs-965206 ] importing dynamic modules via embedded python Message-ID: Bugs item #965206, was opened at 2004-06-02 18:42 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965206&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anne Wilson (awilson123456) Assigned to: Nobody/Anonymous (nobody) Summary: importing dynamic modules via embedded python Initial Comment: I had existing C and Python code in which the C code invoked the Python interpreter via a system call. For efficiency reasons, the code needed modification to invoke the interpreter directly from the code. This is a Fedora Core Linux system using Python 2.2. Based on the Extending/Embedding documentation it all seemed very easy, but it was not. It cost me 12 hours just to figure out why both PyImport_Import and PyImport_ImportModule would fail. Turns on that when embeddeding Python and importing dynamically linked modules or modules that in turn import dynamically linked modules, the code must be linked with the -Wl,-E option to the compiler (e.g. the -E option to ld). In my up-until-then-blissful ignorance, I didn't know (or care) that some modules were statically linked while others were dynamic. And, of course, I was clueless about the implications wrt embedded Python. It took much painful debugging to track this down. (Special thanks to Bob Hepple and Jim Bublitz for helping me sort this out.) This should be mentioned up front in the documentation. Thanks! 
Anne ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965206&group_id=5470 From noreply at sourceforge.net Wed Jun 2 14:53:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 14:54:12 2004 Subject: [ python-Bugs-947894 ] calendar.weekheader() undocumented Message-ID: Bugs item #947894, was opened at 2004-05-04 20:20 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947894&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open >Resolution: Accepted Priority: 5 Submitted By: Leonardo Rochael Almeida (rochael) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: calendar.weekheader() undocumented Initial Comment: http://www.python.org/doc/current/lib/module-calendar.html makes no mention of calendar.weekheader() ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 20:53 Message: Logged In: YES user_id=89016 How about the following patch? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947894&group_id=5470 From noreply at sourceforge.net Wed Jun 2 15:08:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 15:08:40 2004 Subject: [ python-Bugs-947906 ] calendar.weekheader(n): n should mean chars not bytes Message-ID: Bugs item #947906, was opened at 2004-05-04 20:38 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947906&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open >Resolution: Accepted Priority: 7 Submitted By: Leonardo Rochael Almeida (rochael) >Assigned to: Martin v. L?wis (loewis) Summary: calendar.weekheader(n): n should mean chars not bytes Initial Comment: calendar.weekheader(n) is locale aware, which is good in principle. The parameter n, however, is interpreted as meaning bytes, not chars, which can generate broken strings for, e.g. localized weekday names: >>> calendar.weekheader(2) 'Mo Tu We Th Fr Sa Su' >>> locale.setlocale(locale.LC_ALL, "pt_BR.UTF-8") 'pt_BR.UTF-8' >>> calendar.weekheader(2) 'Se Te Qu Qu Se S\xc3 Do' Notice how "S?bado" (Saturday) above is missing the second utf-8 byte for the encoding of "?": >>> u"S?".encode("utf-8") 'S\xc3\xa1' The implementation of weekheader (and of all of calendar.py, it seems) is based on localized 8 bit strings. I suppose the correct fix for this bug will involve a roundtrip thru unicode. ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 21:08 Message: Logged In: YES user_id=89016 Maybe we should have a second version of calendar (named ucalendar?) that works with unicode strings? Could those two modules be rewritten to use as much common functionality as possible? Or we could use a module global to configure whether str or unicode should be returned? Most of the localization functionality in calendar seems to come from datetime.datetime.strftime(), so it probably would help to have a method datetime.datetime.ustrftime() that returns the formatted string as unicode (using the locale encoding). Assigning to MvL as the locale/unicode expert. 
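A rough sketch of the round trip through unicode that the report suggests; the helper name uweekheader is illustrative rather than part of the module, it ignores the module's first-weekday setting for brevity, and it assumes the localized day names decode with the locale's preferred encoding:

import calendar
import locale

def uweekheader(n):
    encoding = locale.getpreferredencoding()
    headers = []
    for i in range(7):                      # Monday..Sunday
        name = calendar.day_abbr[i]         # localized byte-string abbreviation
        uname = name.decode(encoding)[:n]   # slice by characters, not bytes
        headers.append(uname.center(n))
    return u' '.join(headers)

locale.setlocale(locale.LC_ALL, 'pt_BR.UTF-8')
print uweekheader(2).encode('utf-8')        # complete 2-character abbreviations, no broken UTF-8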
---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-05-08 01:57 Message: Logged In: YES user_id=55188 I think calendar.weekheader should mean not chars nor bytes but width. Because the function is currectly used for fixed width representations of calendars. Yes. They are same for western alphabets. But, for many of CJK characters are in full width. So, they need only 1 character for calendar.weekheader(2); and it's conventional in real life, too. But, we don't have unicode.width() support to implement the feature yet. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947906&group_id=5470 From noreply at sourceforge.net Wed Jun 2 15:17:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 15:17:51 2004 Subject: [ python-Bugs-942706 ] Python crash on __init__/__getattr__/__setattr__ interaction Message-ID: Bugs item #942706, was opened at 2004-04-27 01:39 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=942706&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: has (hhas) Assigned to: Nobody/Anonymous (nobody) Summary: Python crash on __init__/__getattr__/__setattr__ interaction Initial Comment: The following code causes [Mac]Python 2.3 process to crash (Bad!) rather than raise an error (good) when creating a new instance of Foo: class Foo: def __init__(self): self.x = 1 def __getattr__(self, name): if self.x: pass # etc... def __setattr__(self, name, val): if self.x: pass # etc... (See for a working example plus general solution to the referencing-instance-var-before-it's-created paradox that threw up this bug in the first place.) ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 21:17 Message: Logged In: YES user_id=89016 Assigning to self.x in __init__() calls __setattr__(), which checks self.x, which calls __getattr__() which checks self.x, which leads to endless recursion. This usually leads to a "RuntimeError: maximum recursion depth exceeded". In what way does Python 2.3 crash? To avoid the recursion access the instance dictionary directly: class Foo: def __init__(self): self.x = 1 def __getattr__(self, name): if "x" in self.__dict__ and self.__dict__["x"]: pass # etc... def __setattr__(self, name, val): if self.x: pass # etc... 
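To see the recursion described above in isolation, here is a minimal demonstration (not part of the tracker item); on most builds it raises RuntimeError rather than crashing, the hard crash being the Mac-specific symptom reported here:

class Foo:
    def __init__(self):
        self.x = 1        # calls __setattr__ before 'x' is in the instance dict
    def __getattr__(self, name):
        if self.x:        # 'x' is still missing, so this re-enters __getattr__
            pass
    def __setattr__(self, name, val):
        if self.x:        # 'x' has not been stored yet, so this calls __getattr__
            pass

try:
    Foo()
except RuntimeError, e:   # "maximum recursion depth exceeded" on typical builds
    print e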
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=942706&group_id=5470 From noreply at sourceforge.net Wed Jun 2 15:25:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 15:25:14 2004 Subject: [ python-Bugs-927248 ] Python segfaults when freeing "deep" objects Message-ID: Bugs item #927248, was opened at 2004-04-01 06:07 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=927248&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jp Calderone (kuran) Assigned to: Nobody/Anonymous (nobody) Summary: Python segfaults when freeing "deep" objects Initial Comment: An example to produce this behavior: >>> f = lambda: None >>> for i in range(1000000): ... f = f.__call__ ... >>> f = None Segmentation fault ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 21:25 Message: Logged In: YES user_id=89016 Python CVS from 2004-06-02 seems to work: Python 2.4a0 (#5, Jun 2 2004, 20:23:30) [GCC 2.96 20000731 (Red Hat Linux 7.3 2.96-113)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> f = lambda: None >>> for x in xrange(1000000): ... f = f.__call__ ... >>> f >>> f = None >>> f >>> ---------------------------------------------------------------------- Comment By: Jacob Smullyan (smulloni) Date: 2004-04-08 05:20 Message: Logged In: YES user_id=108556 Python CVS as of April 7th consistently segfaults with the above example for me: smulloni@bracknell src $ ~/apps/python-cvs/bin/python Python 2.4a0 (#1, Apr 7 2004, 23:10:34) [GCC 3.3.2 20031218 (Gentoo Linux 3.3.2-r5, propolice-3.3-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> f=lambda: None >>> for x in xrange(1000000): ... f=f.__call__ ... >>> f=None Segmentation fault Of course, maybe that's a good thing :). ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-04-05 02:32 Message: Logged In: YES user_id=764593 CVS for 2.4 has comments for (and a fix for) problems similar to this. Does the bug still exist with that source code? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-04-01 07:21 Message: Logged In: YES user_id=55188 Oh. my patch doesn't fix another scenario that using recursive by two or more types of slots. So I remove. 
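A hedged illustration (not from the tracker) of why the teardown rather than the construction is the dangerous part: each method-wrapper in the chain keeps its predecessor alive through its __self__ attribute, so dropping the last reference deallocates the whole chain recursively at the C level. Assuming __self__ behaves as sketched, an application can unwind such a chain manually:

f = lambda: None
for i in xrange(100000):               # the report uses 1000000; this is enough to show the shape
    f = f.__call__                     # each wrapper references the previous object via __self__

# Rebinding f peels one wrapper off per step, so each deallocation is shallow
# instead of one deeply recursive teardown when f is finally dropped.
while getattr(f, '__self__', None) is not None:
    f = f.__self__
f = None                               # only the original lambda is left to free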
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=927248&group_id=5470 From noreply at sourceforge.net Wed Jun 2 17:28:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 17:28:38 2004 Subject: [ python-Bugs-923576 ] Incorrect __name__ assignment Message-ID: Bugs item #923576, was opened at 2004-03-26 01:43 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=923576&group_id=5470 Category: Macintosh Group: Python 2.3 >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: El cepi (elcepi) >Assigned to: Jack Jansen (jackjansen) Summary: Incorrect __name__ assignment Initial Comment: When you use PythonLauncher or PythonIDE the value of __name__ is incorrectly assigned. For example, in the attached file the output is "The module has been load" when the module is fun in PythonIDE. With this you can not disciminate between the code you should run when you load the module and when you try to run it. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-02 23:28 Message: Logged In: YES user_id=45365 First off: sorry for not replying sooner, I hadn't seen the bug report. Second: I don't think this is a bug:-) In the case of running the script in the PythonIDE: if you want the script to be run as a main program you need to check the "Run as __main__" menu entry (in the run options popup menu, at the top of the vertical scrollbar). In the case of running the script with PythonLauncher I don't see the problem: for me it prints "The module has been run". If there's a scenario whereby running with PythonLauncher does print "The module has been load": please reopen the bug report and provide a scenario. ---------------------------------------------------------------------- Comment By: El cepi (elcepi) Date: 2004-03-31 21:14 Message: Logged In: YES user_id=1006669 No problem tjreedy Here are the version information relevant to the report * PythonIDE Version 1.0.1 Python 2.3.3 (#1 March 12 2004, 13:49:58) GCC 3.1 20020420 (prerelease) * PythonLauncher Version 0.1 * Mac OS X Version 10.2.8 (6R73) Darwin Kernel Version 6.8: Wed Sep 10 15:20:55 PDT 2003; root:xnu/xnu-344.49.obj~2/RELEASE_PPC * Machine PowerBook G4 (version = 11.3) This bug cause that your applications behave different when are executed from PythonLauncher or PythonIDE than from the shell For example, If I double-click the foo.pyc file it is executed by PythonLauncher but it execute the code under __name__==?test? instead of under __name__==?__main__? as append when I run it from the shell. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-03-31 17:07 Message: Logged In: YES user_id=593130 Lesson learned to check that header field. Inappropriate comments withdrawn. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-03-31 12:56 Message: Logged In: YES user_id=6656 Terry, this bug is in the 'Macintosh' category for a reason :-) ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-03-30 19:14 Message: Logged In: YES user_id=593130 When making a bug report, please include at least the Python version. 
The system you are running on is sometimes essential info also. I am not familiar with either PythonLauncher or PythonIDE. Just PyWin and Idle. If they are third-party items not part of the distribution, then this report should go to their authors. 'When you use' is also somewhat vague. Saying what command line option or menu entry or buttom you used to run/load might also get a better answer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=923576&group_id=5470 From noreply at sourceforge.net Wed Jun 2 17:37:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 17:37:54 2004 Subject: [ python-Bugs-887242 ] "-framework Python" for building modules is bad Message-ID: Bugs item #887242, was opened at 2004-01-29 21:40 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=887242&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Bob Ippolito (etrepum) Assigned to: Jack Jansen (jackjansen) Summary: "-framework Python" for building modules is bad Initial Comment: We should use the -bundle_loader method for linking modules for both the framework and non-framework verisons of Python. All of these "version mismatch" errors would pretty much be avoided if this were the case, since a 10.2 and 10.3 MacPython 2.3 should be binary compatible. There are other reasons to use -bundle_loader, such as using the same suite of modules for both Stackless and regular Python. Besides, -bundle_loader is for building -bundles :) It's a simple change to the configure script, and it would be great if this could happen before OS X 10.4. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-02 23:37 Message: Logged In: YES user_id=45365 I'm finally getting around to revisiting this bug, and I ran into another problem (that we've already discussed offline, but I'm adding it here for posterity): -undefined dynamic_lookup only works if you have MACOSX_DEPLOYMENT_TARGET set, and set to >= "10.2". I'm now experimenting with the following: if you have that variable set during configure time it will use dynamic_lookup. Moreover, it will record the value of MACOSX_DEPLOYMENT_TARGET in the Makefile. Distutils will check that the current value of MACOSX_DEPLOYMENT_TARGET is the same as that during configure time, and complain if not. I've resisted the temptation to force MACOSX_DEPLOYMENT_TARGET to the configure-time value in distutils, because I think we may break things that way. Feel free to convince me otherwise:-) I'm only doing this for 2.4 right now, as a straight backport to 2.3 is useless: the Makefile is already supplied by Apple. So, any fix for 2.3.X will need to be a band-aid in distutils (possibly triggered by an environment variable?). ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-04-02 10:34 Message: Logged In: YES user_id=139309 minimal patch for Python 2.4 CVS configure.in (and configure) attached. 
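A rough Python sketch of the consistency check described above; the exact distutils hook is not shown in the tracker, so the variable names and warning text here are illustrative only:

import os
from distutils import sysconfig

# Deployment target recorded in the Makefile at configure time (may be unset
# on non-framework or non-Mac builds).
configured = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET')
current = os.environ.get('MACOSX_DEPLOYMENT_TARGET')

if configured and current and current != configured:
    print ("warning: MACOSX_DEPLOYMENT_TARGET is %s, "
           "but Python was configured with %s" % (current, configured))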
---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-01-30 00:21 Message: Logged In: YES user_id=139309 -undefined dynamic_lookup has a localized effect, it still uses two level namespaces and doesn't force the whole process to go flat. Apple uses this flag for Perl in 10.3 (maybe other things, like Apache), so presumably they designed it with situations like ours in mind. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-01-30 00:11 Message: Logged In: YES user_id=45365 Ok, I only tried -bundle_loader Python.framework, and when that didn't work I stopped trying. But I have some misgivings about the -undefined dynamic_lookup: doesn't this open up the whole flat namespace/undefined suppress can of worms that we had under 10.1? What we *really* want is to a way to tell the linker "at runtime, the host program must already have loaded a Python.framework, any Python.framework, and that is what we want to link against". ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-01-30 00:05 Message: Logged In: YES user_id=139309 That's not true. -bundle_loader does not generate a mach header load command, it is merely so that ld can make sure that all of your symbols are defined at link time (it will work for an embedded Python, try it). You do need -undefined dynamic_lookup though, because -bundle_loader doesn't respect indirect symbols. I'm not sure it's possible to make Python.framework get "imported" into the executable so that it's possible to -bundle_loader without -undefined dynamic_lookup (-prebind maybe, but I doubt it). ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-01-29 23:53 Message: Logged In: YES user_id=45365 There's a reason why I use -framework in stead of -bundle_loader: you can only specify an application as bundle_loader, not Python.framework itself. This means you have to specify the Python executable, which makes it impossible to load extension modules (including all the standard extension modules) into an application embedding Python. I don't think this is acceptable. ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-01-29 23:24 Message: Logged In: YES user_id=139309 err, this is a 10.3+ only request, and requires use of -undefined dynamic_lookup as well ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=887242&group_id=5470 From noreply at sourceforge.net Wed Jun 2 21:24:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 21:24:21 2004 Subject: [ python-Bugs-965425 ] textwrap does not handle single quotes with hyphens properly Message-ID: Bugs item #965425, was opened at 2004-06-02 21:24 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965425&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Greg Ward (gward) Summary: textwrap does not handle single quotes with hyphens properly Initial Comment: This problem was reported as an Optik bug (#813077), but it's really a problem in textwrap. 
(Darn, textwrap is harder to tweak.) In a nutshell, TextWrapper._split() splits this string wrong: "the 'wibble-wobble' widget" It should split into ['the', ' ', "'wibble-", "wobble'", ' ', 'widget'] but it actually splits into ['the', ' ', "'", 'wibble-', "wobble'", ' ', 'widget'] Looks like that damn regex needs a bit more tweaking. SIgh. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965425&group_id=5470 From noreply at sourceforge.net Wed Jun 2 22:00:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 22:00:20 2004 Subject: [ python-Bugs-965425 ] textwrap does not handle single quotes with hyphens properly Message-ID: Bugs item #965425, was opened at 2004-06-02 21:24 Message generated for change (Comment added) made by gward You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965425&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Greg Ward (gward) Summary: textwrap does not handle single quotes with hyphens properly Initial Comment: This problem was reported as an Optik bug (#813077), but it's really a problem in textwrap. (Darn, textwrap is harder to tweak.) In a nutshell, TextWrapper._split() splits this string wrong: "the 'wibble-wobble' widget" It should split into ['the', ' ', "'wibble-", "wobble'", ' ', 'widget'] but it actually splits into ['the', ' ', "'", 'wibble-', "wobble'", ' ', 'widget'] Looks like that damn regex needs a bit more tweaking. SIgh. ---------------------------------------------------------------------- >Comment By: Greg Ward (gward) Date: 2004-06-02 22:00 Message: Logged In: YES user_id=14422 Turns out the fix was fairly easy: there was already a special case in wordsep_re so that Optik and Docutils could wrap long options like "--foo-bar" correctly. Generalizing that special case to any punctuation ([^\s\w]) fixes this bug. Fixed on release23-maint branch: Lib/textwrap.py rev 1.32.8.3 Lib/test/test_textwrap.py rev 1.22.8.3 Merged onto trunk: Lib/textwrap.py rev 1.35 Lib/test/test_textwrap.py rev 1.26 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965425&group_id=5470 From noreply at sourceforge.net Wed Jun 2 23:01:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 2 23:01:36 2004 Subject: [ python-Bugs-754449 ] Exceptions when a thread exits Message-ID: Bugs item #754449, was opened at 2003-06-14 20:32 Message generated for change (Comment added) made by carlosayam You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470 Category: Threads Group: Python 2.3 Status: Open Resolution: None Priority: 3 Submitted By: Matthias Klose (doko) Assigned to: Brett Cannon (bcannon) Summary: Exceptions when a thread exits Initial Comment: [forwarded from http://bugs.debian.org/195812] The application mentioned is offlineimap, available from ftp://ftp.debian.org/dists/unstable/main/source/. This behavior is new to Python 2.3. 
When my application exits, I get a lot of these messages: Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable jgoerzen@christoph:~/tree/offlineimap-3.99.18$ ./offlineimap.py -l log -d maildir -a Personal,Excel Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable ---------------------------------------------------------------------- Comment By: k (carlosayam) Date: 2004-06-03 13:01 Message: Logged In: YES user_id=990060 bcannon said on 2004-02-17 >> Does anyone know how to cause this error in isolation? I'm getting the same error when I start a new thread, daemonise it and the thread goes into a very slow database operation (loading a large object); meanwhile the main thread starts a GUI; then I close the window, exiting the main loop and the python interpreter ends (or tries to end.) In relation to bcannon comment on how to reproduce the error (setting all variables to None in the module), my guess is that while exiting, the python interpreter is somehow freeing all variables in the module (cleaning the module or something), but the module is still running and that raises the error... is this possible? Note: if the thread is not daemonised, the problem desapears but the script (the python interpreter) takes a while to finish. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-17 14:36 Message: Logged In: YES user_id=357491 How to reproduce the bug: * Follow the instructions at http://mnetproject.org/repos/ on how to get a copy of the in-dev version of mnet (I used the instructions for using wget and it worked fine). * Run ``python setup.py``. Do realize that this app will download other Python code (crypto stuff mostly) to finish its install. Takes up about 54 MB on my machine after it is compiled. * Run ``python setup.py test -a``. This executes the testing suite. The bug manifests itself after the testing suite finishes execution. This will consistently cause the bug every time. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-17 04:45 Message: Logged In: YES user_id=357491 Discovered this is not fixed after all (previous fix didn't go far enough; still needed, though). 
Patch 954922 is the second attempt to fix this. This time, though, I had code that could trigger the problem reliably and thus this should be the proper, and final, fix. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-09 09:36 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-09 09:32 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-03 15:29 Message: Logged In: YES user_id=357491 To force this error I inserted code to set all attributes of the threading module to None and then throw and exception. After commenting out the two calls to currentThread in _Condition the thrown exception to trigger the problem propogated to the top correctly. I did have the clean-up function give out since I set *everything* to None, but it doesn't look like a normal issue as shown in the exception traceback in the OP. I don't think it is an issue. So it seems commenting the two calls to currentThread in _Condition solves the problem. But since this is threading code and the comment explicitly states it is for the side-effect I am going to try to get a second opinion. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-17 15:59 Message: Logged In: YES user_id=357491 Does anyone know how to cause this error in isolation? I have tried a bunch of different ways but cannot cause an exception to be raised at the correct point in Threading.__bootstrap() to lead to self.__stop() to be called while the interpreter is tearing itself down. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-10 12:18 Message: Logged In: YES user_id=357491 After staring at the code, I am not even sure if the calls for its side- effect are needed. If you call currentThread(), it either returns a value from _active which is a dict of running Thread instances indexed by thread idents, or a new _DummyThread instance that inserts itself into _active. Now the two calls in the _Condition class are purely for this possible side-effect. And yet at no point is anything from _active, through currentThread or direct access, even touched by a _Condition method. 
The only chance this might be an issue is if a _Condition instance uses an RLock instance for its locking, which is the default behavior. But this still makes the side-effect need useless. RLocks will call currentThread on their own and since this is all in the same thread the result won't change. And in this specific case of this bug, the _Condition instance is created with a Lock instance since that is what the Thread instance uses for constructing the _Condition instance it use, which is just thread.allocate_lock() which is obviously not an RLock. In other words I can't find the point to the side-effect in _Condition. I will try to come up with some testing code that reproduces the error and then see if just removing the calls in _Condition remove the error and still pass the regression tests. ---------------------------------------------------------------------- Comment By: John Goerzen (jgoerzen) Date: 2003-06-16 23:26 Message: Logged In: YES user_id=491567 I can confirm that this behavior is not present in Python 2.2 in the same version that I am using to test against Python 2.3. I will be on vacation for most of this and next week. I'll try to get to the logging script before I leave, but I might not get to it until I return. FYI, you can also obtain OfflineIMAP at http://quux.org/devel/offlineimap. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 16:58 Message: Logged In: YES user_id=357491 OK, following Tim's advice I checked and it seems that Thread calls a method while shutting itself down that calls Condition.notifyAll which calls currentThread which is a global. It would appear that offlineimap is leaving its threads running, the program gets shut down, the threads raise an error while shutting down (probably because things are being torn down), this triggers the stopping method in Thread, and this raises its own exception because of the teardown which is what we are seeing as the TypeError. So the question is whether Condition should store a local reference to currentThread or not. It is not the most pure solution since this shutdown issue is not Condition's, but the only other solution I can think of is to have Thread keep a reference to currentThread, inject it into the current frame's global namespace, call Condition.notifyAll, and then remove the reference from the frame again. I vote for the cleaner, less pure solution. =) Am I insane on this or does it at least sound like this is the problem and proper solution? ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 16:19 Message: Logged In: YES user_id=357491 Nuts. For some reason I thought the OP had said when threads were exiting. I will ask on python-dev for a good explanation of what happens when Python is shutting down. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2003-06-16 11:05 Message: Logged In: YES user_id=31435 Note that the OP said "when my application exits". When Python is tearing down the universe, it systematically sets module-global bindings to None. currentThread() is a module global. I can't make more time for this now, so for more info talk about it on Python-Dev. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 09:44 Message: Logged In: YES user_id=357491 Well, I'm stumped. 
I checked the diff from when 2.2 was initially released until now and the only change that seems to be related to any of this is that what is returned by currentThread is not saved in a variable. But since the error is the calling of currentThread itself and not saving the return value I don't see how that would affect anything. I also went through offlineimap but I didn't see anything there that seemed to be setting currentThread to None. Although since several files do ``import *`` so there still is a possibility of overwriting currentThread locally. So, for my owning learning and to help solve this, I have written a tracing function that writes to stderr using the logging package when it detects that either currentThread or threading.currentThread has been set to None, locally or globally (I assume the code is not injecting into builtins so I didn't bother checking there). The file is named tracer.py and I have attached it to this bug report. If you can execute ``sys.settrace(tracer.trace_currentThread)`` before offlinemap starts executing and immediately within each thread (it has to be called in *every* thread since tracing functions are no inherited from the main thread) it should print out a message when currentThread becomes None. If you *really* want to make this robust you can also have it check sys.modules['threading'] every time as well, but I figure there is not going to be much renaming and masking of currentThread. ---------------------------------------------------------------------- Comment By: Matthias Klose (doko) Date: 2003-06-15 18:44 Message: Logged In: YES user_id=60903 Please see http://packages.debian.org/unstable/mail/offlineimap.html or for the tarball: http://ftp.debian.org/debian/pool/main/o/offlineimap/offlineimap_3.99.18.tar.gz ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 18:19 Message: Logged In: YES user_id=357491 I went to the FTP site and all I found was some huge, compressed files (after changing the path to ftp://ftp.debian.org/ debian/dists/sid/main/source); no specific program called offlinemap. If it is in one of those files can you just add the file to the bug report? As for the reported bug, it looks like currentThread is being redefined, although how that is happening is beyond me. I checked the 'threading' module and no where is currentThread redefined which could lead to None. Has the app being run been changed at all since Python 2.2 was released? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470 From noreply at sourceforge.net Thu Jun 3 00:43:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 00:43:21 2004 Subject: [ python-Bugs-947906 ] calendar.weekheader(n): n should mean chars not bytes Message-ID: Bugs item #947906, was opened at 2004-05-04 20:38 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947906&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: Accepted Priority: 7 Submitted By: Leonardo Rochael Almeida (rochael) >Assigned to: Nobody/Anonymous (nobody) Summary: calendar.weekheader(n): n should mean chars not bytes Initial Comment: calendar.weekheader(n) is locale aware, which is good in principle. 
The parameter n, however, is interpreted as meaning bytes, not chars, which can generate broken strings for, e.g. localized weekday names: >>> calendar.weekheader(2) 'Mo Tu We Th Fr Sa Su' >>> locale.setlocale(locale.LC_ALL, "pt_BR.UTF-8") 'pt_BR.UTF-8' >>> calendar.weekheader(2) 'Se Te Qu Qu Se S\xc3 Do' Notice how "S?bado" (Saturday) above is missing the second utf-8 byte for the encoding of "?": >>> u"S?".encode("utf-8") 'S\xc3\xa1' The implementation of weekheader (and of all of calendar.py, it seems) is based on localized 8 bit strings. I suppose the correct fix for this bug will involve a roundtrip thru unicode. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-03 06:43 Message: Logged In: YES user_id=21627 Adding an ucalendar module would be reasonable, IMO. Introducing ustrftime is not necessary - we could just apply the "unicode in/unicode out" procedure (i.e. if the format is a Unicode string, return a Unicode result). The tricky part of that is to convert the strftime result to Unicode. We could try mbstowcs, but that would fail if the locale doesn't use Unicode for wchar_t. Once ucalendar is written, we could document that the calendar module has known problems if the locale's encoding is not Latin-1. However, I'm not going to implement that any time soon, so unassigning. ---------------------------------------------------------------------- Comment By: Walter D?rwald (doerwalter) Date: 2004-06-02 21:08 Message: Logged In: YES user_id=89016 Maybe we should have a second version of calendar (named ucalendar?) that works with unicode strings? Could those two modules be rewritten to use as much common functionality as possible? Or we could use a module global to configure whether str or unicode should be returned? Most of the localization functionality in calendar seems to come from datetime.datetime.strftime(), so it probably would help to have a method datetime.datetime.ustrftime() that returns the formatted string as unicode (using the locale encoding). Assigning to MvL as the locale/unicode expert. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-05-08 01:57 Message: Logged In: YES user_id=55188 I think calendar.weekheader should mean not chars nor bytes but width. Because the function is currectly used for fixed width representations of calendars. Yes. They are same for western alphabets. But, for many of CJK characters are in full width. So, they need only 1 character for calendar.weekheader(2); and it's conventional in real life, too. But, we don't have unicode.width() support to implement the feature yet. 
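perky's point about display width rather than character count can be approximated with unicodedata.east_asian_width, which appeared around Python 2.4; this is a sketch of the idea, not a proposal for the calendar module itself:

import unicodedata

def display_width(u):
    # Approximate the number of terminal columns a unicode string occupies.
    width = 0
    for c in u:
        if unicodedata.east_asian_width(c) in ('F', 'W'):
            width += 2    # full-width and wide characters take two columns
        else:
            width += 1    # treat everything else as one column
    return width

print display_width(u'Mo')        # 2
print display_width(u'\u6708')    # 2: a single CJK character fills both columns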
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=947906&group_id=5470 From noreply at sourceforge.net Thu Jun 3 01:08:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 01:08:43 2004 Subject: [ python-Bugs-754449 ] Exceptions when a thread exits Message-ID: Bugs item #754449, was opened at 2003-06-14 03:32 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470 Category: Threads Group: Python 2.3 Status: Open Resolution: None Priority: 3 Submitted By: Matthias Klose (doko) Assigned to: Brett Cannon (bcannon) Summary: Exceptions when a thread exits Initial Comment: [forwarded from http://bugs.debian.org/195812] The application mentioned is offlineimap, available from ftp://ftp.debian.org/dists/unstable/main/source/. This behavior is new to Python 2.3. When my application exits, I get a lot of these messages: Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable jgoerzen@christoph:~/tree/offlineimap-3.99.18$ ./offlineimap.py -l log -d maildir -a Personal,Excel Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-02 22:08 Message: Logged In: YES user_id=357491 Yep, that is how Python cleans up a module; sets everything in the module to None and then removes it from sys.modules . As for your case, I don't know enough about daemonized threads. My patch for this only tries to let the exception make it to the top without 'threading' hiding the exception by causing its own exception from interpreter shutdown. In other words I don't know if this is a related issue or not. ---------------------------------------------------------------------- Comment By: k (carlosayam) Date: 2004-06-02 20:01 Message: Logged In: YES user_id=990060 bcannon said on 2004-02-17 >> Does anyone know how to cause this error in isolation? 
I'm getting the same error when I start a new thread, daemonise it and the thread goes into a very slow database operation (loading a large object); meanwhile the main thread starts a GUI; then I close the window, exiting the main loop and the python interpreter ends (or tries to end.) In relation to bcannon comment on how to reproduce the error (setting all variables to None in the module), my guess is that while exiting, the python interpreter is somehow freeing all variables in the module (cleaning the module or something), but the module is still running and that raises the error... is this possible? Note: if the thread is not daemonised, the problem desapears but the script (the python interpreter) takes a while to finish. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-16 21:36 Message: Logged In: YES user_id=357491 How to reproduce the bug: * Follow the instructions at http://mnetproject.org/repos/ on how to get a copy of the in-dev version of mnet (I used the instructions for using wget and it worked fine). * Run ``python setup.py``. Do realize that this app will download other Python code (crypto stuff mostly) to finish its install. Takes up about 54 MB on my machine after it is compiled. * Run ``python setup.py test -a``. This executes the testing suite. The bug manifests itself after the testing suite finishes execution. This will consistently cause the bug every time. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-16 11:45 Message: Logged In: YES user_id=357491 Discovered this is not fixed after all (previous fix didn't go far enough; still needed, though). Patch 954922 is the second attempt to fix this. This time, though, I had code that could trigger the problem reliably and thus this should be the proper, and final, fix. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-08 14:36 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-08 14:32 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. 
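For the daemonised-thread scenario described above in this thread, a minimal sketch (not from the tracker) of why shutdown can overlap with the worker: Python does not wait for daemon threads, so the interpreter may start tearing down module globals while the worker is still running:

import threading
import time

def slow_worker():
    time.sleep(5)                  # stands in for the slow database load
    print "worker finished"        # may run against a partially torn-down interpreter

t = threading.Thread(target=slow_worker)
t.setDaemon(True)                  # Python 2.3 spelling; later versions also allow t.daemon = True
t.start()
# The main thread falls off the end here without join(), so interpreter
# shutdown can begin while slow_worker is still inside time.sleep().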
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-02 20:29 Message: Logged In: YES user_id=357491 To force this error I inserted code to set all attributes of the threading module to None and then throw and exception. After commenting out the two calls to currentThread in _Condition the thrown exception to trigger the problem propogated to the top correctly. I did have the clean-up function give out since I set *everything* to None, but it doesn't look like a normal issue as shown in the exception traceback in the OP. I don't think it is an issue. So it seems commenting the two calls to currentThread in _Condition solves the problem. But since this is threading code and the comment explicitly states it is for the side-effect I am going to try to get a second opinion. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-16 20:59 Message: Logged In: YES user_id=357491 Does anyone know how to cause this error in isolation? I have tried a bunch of different ways but cannot cause an exception to be raised at the correct point in Threading.__bootstrap() to lead to self.__stop() to be called while the interpreter is tearing itself down. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-09 17:18 Message: Logged In: YES user_id=357491 After staring at the code, I am not even sure if the calls for its side- effect are needed. If you call currentThread(), it either returns a value from _active which is a dict of running Thread instances indexed by thread idents, or a new _DummyThread instance that inserts itself into _active. Now the two calls in the _Condition class are purely for this possible side-effect. And yet at no point is anything from _active, through currentThread or direct access, even touched by a _Condition method. The only chance this might be an issue is if a _Condition instance uses an RLock instance for its locking, which is the default behavior. But this still makes the side-effect need useless. RLocks will call currentThread on their own and since this is all in the same thread the result won't change. And in this specific case of this bug, the _Condition instance is created with a Lock instance since that is what the Thread instance uses for constructing the _Condition instance it use, which is just thread.allocate_lock() which is obviously not an RLock. In other words I can't find the point to the side-effect in _Condition. I will try to come up with some testing code that reproduces the error and then see if just removing the calls in _Condition remove the error and still pass the regression tests. ---------------------------------------------------------------------- Comment By: John Goerzen (jgoerzen) Date: 2003-06-16 06:26 Message: Logged In: YES user_id=491567 I can confirm that this behavior is not present in Python 2.2 in the same version that I am using to test against Python 2.3. I will be on vacation for most of this and next week. I'll try to get to the logging script before I leave, but I might not get to it until I return. FYI, you can also obtain OfflineIMAP at http://quux.org/devel/offlineimap. 
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 23:58 Message: Logged In: YES user_id=357491 OK, following Tim's advice I checked and it seems that Thread calls a method while shutting itself down that calls Condition.notifyAll which calls currentThread which is a global. It would appear that offlineimap is leaving its threads running, the program gets shut down, the threads raise an error while shutting down (probably because things are being torn down), this triggers the stopping method in Thread, and this raises its own exception because of the teardown which is what we are seeing as the TypeError. So the question is whether Condition should store a local reference to currentThread or not. It is not the most pure solution since this shutdown issue is not Condition's, but the only other solution I can think of is to have Thread keep a reference to currentThread, inject it into the current frame's global namespace, call Condition.notifyAll, and then remove the reference from the frame again. I vote for the cleaner, less pure solution. =) Am I insane on this or does it at least sound like this is the problem and proper solution? ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 23:19 Message: Logged In: YES user_id=357491 Nuts. For some reason I thought the OP had said when threads were exiting. I will ask on python-dev for a good explanation of what happens when Python is shutting down. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2003-06-15 18:05 Message: Logged In: YES user_id=31435 Note that the OP said "when my application exits". When Python is tearing down the universe, it systematically sets module-global bindings to None. currentThread() is a module global. I can't make more time for this now, so for more info talk about it on Python-Dev. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 16:44 Message: Logged In: YES user_id=357491 Well, I'm stumped. I checked the diff from when 2.2 was initially released until now and the only change that seems to be related to any of this is that what is returned by currentThread is not saved in a variable. But since the error is the calling of currentThread itself and not saving the return value I don't see how that would affect anything. I also went through offlineimap but I didn't see anything there that seemed to be setting currentThread to None. Although since several files do ``import *`` so there still is a possibility of overwriting currentThread locally. So, for my owning learning and to help solve this, I have written a tracing function that writes to stderr using the logging package when it detects that either currentThread or threading.currentThread has been set to None, locally or globally (I assume the code is not injecting into builtins so I didn't bother checking there). The file is named tracer.py and I have attached it to this bug report. If you can execute ``sys.settrace(tracer.trace_currentThread)`` before offlinemap starts executing and immediately within each thread (it has to be called in *every* thread since tracing functions are no inherited from the main thread) it should print out a message when currentThread becomes None. 
If you *really* want to make this robust, you can also have it check sys.modules['threading'] every time as well, but I figure there is not going to be much renaming and masking of currentThread.

----------------------------------------------------------------------
Comment By: Matthias Klose (doko)  Date: 2003-06-15 01:44
Message: Logged In: YES  user_id=60903

Please see http://packages.debian.org/unstable/mail/offlineimap.html or, for the tarball: http://ftp.debian.org/debian/pool/main/o/offlineimap/offlineimap_3.99.18.tar.gz

----------------------------------------------------------------------
Comment By: Brett Cannon (bcannon)  Date: 2003-06-15 01:19
Message: Logged In: YES  user_id=357491

I went to the FTP site and all I found was some huge, compressed files (after changing the path to ftp://ftp.debian.org/debian/dists/sid/main/source); no specific program called offlineimap. If it is in one of those files, can you just add the file to the bug report?

As for the reported bug, it looks like currentThread is being redefined, although how that is happening is beyond me. I checked the 'threading' module and nowhere is currentThread redefined in a way that could lead to None. Has the app being run been changed at all since Python 2.2 was released?

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470

From noreply at sourceforge.net Thu Jun 3 01:25:45 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Thu Jun 3 01:25:53 2004
Subject: [ python-Bugs-754449 ] Exceptions when a thread exits
Message-ID:

Bugs item #754449, was opened at 2003-06-14 06:32
Message generated for change (Comment added) made by tim_one
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470

Category: Threads
Group: Python 2.3
Status: Open
Resolution: None
Priority: 3
Submitted By: Matthias Klose (doko)
Assigned to: Brett Cannon (bcannon)
Summary: Exceptions when a thread exits

Initial Comment:
[forwarded from http://bugs.debian.org/195812]

The application mentioned is offlineimap, available from ftp://ftp.debian.org/dists/unstable/main/source/. This behavior is new to Python 2.3.
When my application exits, I get a lot of these messages: Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable jgoerzen@christoph:~/tree/offlineimap-3.99.18$ ./offlineimap.py -l log -d maildir -a Personal,Excel Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-03 01:25 Message: Logged In: YES user_id=31435 Brett, FYI, a daemon thread differs from a non-daemon thread in only one respect: Python shuts down when only daemon threads remain. It waits for non-daemon threads to finish. So a daemon thread can keep running after the interpreter has torn itself down completely. For that reason, problems in daemon threads doing non-trivial things are almost guaranteed. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-03 01:08 Message: Logged In: YES user_id=357491 Yep, that is how Python cleans up a module; sets everything in the module to None and then removes it from sys.modules . As for your case, I don't know enough about daemonized threads. My patch for this only tries to let the exception make it to the top without 'threading' hiding the exception by causing its own exception from interpreter shutdown. In other words I don't know if this is a related issue or not. ---------------------------------------------------------------------- Comment By: k (carlosayam) Date: 2004-06-02 23:01 Message: Logged In: YES user_id=990060 bcannon said on 2004-02-17 >> Does anyone know how to cause this error in isolation? I'm getting the same error when I start a new thread, daemonise it and the thread goes into a very slow database operation (loading a large object); meanwhile the main thread starts a GUI; then I close the window, exiting the main loop and the python interpreter ends (or tries to end.) In relation to bcannon comment on how to reproduce the error (setting all variables to None in the module), my guess is that while exiting, the python interpreter is somehow freeing all variables in the module (cleaning the module or something), but the module is still running and that raises the error... is this possible? 
Note: if the thread is not daemonised, the problem desapears but the script (the python interpreter) takes a while to finish. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-17 00:36 Message: Logged In: YES user_id=357491 How to reproduce the bug: * Follow the instructions at http://mnetproject.org/repos/ on how to get a copy of the in-dev version of mnet (I used the instructions for using wget and it worked fine). * Run ``python setup.py``. Do realize that this app will download other Python code (crypto stuff mostly) to finish its install. Takes up about 54 MB on my machine after it is compiled. * Run ``python setup.py test -a``. This executes the testing suite. The bug manifests itself after the testing suite finishes execution. This will consistently cause the bug every time. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-16 14:45 Message: Logged In: YES user_id=357491 Discovered this is not fixed after all (previous fix didn't go far enough; still needed, though). Patch 954922 is the second attempt to fix this. This time, though, I had code that could trigger the problem reliably and thus this should be the proper, and final, fix. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-08 17:36 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-08 17:32 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-02 23:29 Message: Logged In: YES user_id=357491 To force this error I inserted code to set all attributes of the threading module to None and then throw and exception. After commenting out the two calls to currentThread in _Condition the thrown exception to trigger the problem propogated to the top correctly. I did have the clean-up function give out since I set *everything* to None, but it doesn't look like a normal issue as shown in the exception traceback in the OP. I don't think it is an issue. So it seems commenting the two calls to currentThread in _Condition solves the problem. But since this is threading code and the comment explicitly states it is for the side-effect I am going to try to get a second opinion. 
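The forcing technique in the comment directly above can be sketched roughly as follows; the helper name is made up for illustration and this is not Brett's actual test code:

    import threading

    def simulate_interpreter_teardown():
        # Mimic what interpreter shutdown does to the module:
        # rebind every (non-dunder) attribute of threading to None.
        for name in dir(threading):
            if not name.startswith("__"):
                setattr(threading, name, None)

Calling something like this from inside a running thread, and then raising an exception so Thread.__stop() gets invoked, approximates the state the threading module is in during real interpreter shutdown.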
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-16 23:59 Message: Logged In: YES user_id=357491 Does anyone know how to cause this error in isolation? I have tried a bunch of different ways but cannot cause an exception to be raised at the correct point in Threading.__bootstrap() to lead to self.__stop() to be called while the interpreter is tearing itself down. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-09 20:18 Message: Logged In: YES user_id=357491 After staring at the code, I am not even sure if the calls for its side- effect are needed. If you call currentThread(), it either returns a value from _active which is a dict of running Thread instances indexed by thread idents, or a new _DummyThread instance that inserts itself into _active. Now the two calls in the _Condition class are purely for this possible side-effect. And yet at no point is anything from _active, through currentThread or direct access, even touched by a _Condition method. The only chance this might be an issue is if a _Condition instance uses an RLock instance for its locking, which is the default behavior. But this still makes the side-effect need useless. RLocks will call currentThread on their own and since this is all in the same thread the result won't change. And in this specific case of this bug, the _Condition instance is created with a Lock instance since that is what the Thread instance uses for constructing the _Condition instance it use, which is just thread.allocate_lock() which is obviously not an RLock. In other words I can't find the point to the side-effect in _Condition. I will try to come up with some testing code that reproduces the error and then see if just removing the calls in _Condition remove the error and still pass the regression tests. ---------------------------------------------------------------------- Comment By: John Goerzen (jgoerzen) Date: 2003-06-16 09:26 Message: Logged In: YES user_id=491567 I can confirm that this behavior is not present in Python 2.2 in the same version that I am using to test against Python 2.3. I will be on vacation for most of this and next week. I'll try to get to the logging script before I leave, but I might not get to it until I return. FYI, you can also obtain OfflineIMAP at http://quux.org/devel/offlineimap. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 02:58 Message: Logged In: YES user_id=357491 OK, following Tim's advice I checked and it seems that Thread calls a method while shutting itself down that calls Condition.notifyAll which calls currentThread which is a global. It would appear that offlineimap is leaving its threads running, the program gets shut down, the threads raise an error while shutting down (probably because things are being torn down), this triggers the stopping method in Thread, and this raises its own exception because of the teardown which is what we are seeing as the TypeError. So the question is whether Condition should store a local reference to currentThread or not. It is not the most pure solution since this shutdown issue is not Condition's, but the only other solution I can think of is to have Thread keep a reference to currentThread, inject it into the current frame's global namespace, call Condition.notifyAll, and then remove the reference from the frame again. 
I vote for the cleaner, less pure solution. =) Am I insane on this or does it at least sound like this is the problem and proper solution? ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 02:19 Message: Logged In: YES user_id=357491 Nuts. For some reason I thought the OP had said when threads were exiting. I will ask on python-dev for a good explanation of what happens when Python is shutting down. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2003-06-15 21:05 Message: Logged In: YES user_id=31435 Note that the OP said "when my application exits". When Python is tearing down the universe, it systematically sets module-global bindings to None. currentThread() is a module global. I can't make more time for this now, so for more info talk about it on Python-Dev. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 19:44 Message: Logged In: YES user_id=357491 Well, I'm stumped. I checked the diff from when 2.2 was initially released until now and the only change that seems to be related to any of this is that what is returned by currentThread is not saved in a variable. But since the error is the calling of currentThread itself and not saving the return value I don't see how that would affect anything. I also went through offlineimap but I didn't see anything there that seemed to be setting currentThread to None. Although since several files do ``import *`` so there still is a possibility of overwriting currentThread locally. So, for my owning learning and to help solve this, I have written a tracing function that writes to stderr using the logging package when it detects that either currentThread or threading.currentThread has been set to None, locally or globally (I assume the code is not injecting into builtins so I didn't bother checking there). The file is named tracer.py and I have attached it to this bug report. If you can execute ``sys.settrace(tracer.trace_currentThread)`` before offlinemap starts executing and immediately within each thread (it has to be called in *every* thread since tracing functions are no inherited from the main thread) it should print out a message when currentThread becomes None. If you *really* want to make this robust you can also have it check sys.modules['threading'] every time as well, but I figure there is not going to be much renaming and masking of currentThread. ---------------------------------------------------------------------- Comment By: Matthias Klose (doko) Date: 2003-06-15 04:44 Message: Logged In: YES user_id=60903 Please see http://packages.debian.org/unstable/mail/offlineimap.html or for the tarball: http://ftp.debian.org/debian/pool/main/o/offlineimap/offlineimap_3.99.18.tar.gz ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 04:19 Message: Logged In: YES user_id=357491 I went to the FTP site and all I found was some huge, compressed files (after changing the path to ftp://ftp.debian.org/ debian/dists/sid/main/source); no specific program called offlinemap. If it is in one of those files can you just add the file to the bug report? As for the reported bug, it looks like currentThread is being redefined, although how that is happening is beyond me. I checked the 'threading' module and no where is currentThread redefined which could lead to None. 
Has the app being run been changed at all since Python 2.2 was released? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470 From noreply at sourceforge.net Thu Jun 3 02:05:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 02:05:51 2004 Subject: [ python-Bugs-754449 ] Exceptions when a thread exits Message-ID: Bugs item #754449, was opened at 2003-06-14 20:32 Message generated for change (Comment added) made by carlosayam You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470 Category: Threads Group: Python 2.3 Status: Open Resolution: None Priority: 3 Submitted By: Matthias Klose (doko) Assigned to: Brett Cannon (bcannon) Summary: Exceptions when a thread exits Initial Comment: [forwarded from http://bugs.debian.org/195812] The application mentioned is offlineimap, available from ftp://ftp.debian.org/dists/unstable/main/source/. This behavior is new to Python 2.3. When my application exits, I get a lot of these messages: Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable jgoerzen@christoph:~/tree/offlineimap-3.99.18$ ./offlineimap.py -l log -d maildir -a Personal,Excel Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable Unhandled exception in thread started by > Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 426, in __bootstrap self.__stop() File "/usr/lib/python2.3/threading.py", line 435, in __stop self.__block.notifyAll() File "/usr/lib/python2.3/threading.py", line 239, in notifyAll self.notify(len(self.__waiters)) File "/usr/lib/python2.3/threading.py", line 221, in notify currentThread() # for side-effect TypeError: 'NoneType' object is not callable ---------------------------------------------------------------------- Comment By: k (carlosayam) Date: 2004-06-03 16:05 Message: Logged In: YES user_id=990060 I don't understand one thing: how can python shut down itself and, at the same time, keep a python code running in a daemonised thread? or is that thread running in a different "threaded" python byte-code interpreter? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-03 15:25 Message: Logged In: YES user_id=31435 Brett, FYI, a daemon thread differs from a non-daemon thread in only one respect: Python shuts down when only daemon threads remain. It waits for non-daemon threads to finish. So a daemon thread can keep running after the interpreter has torn itself down completely. For that reason, problems in daemon threads doing non-trivial things are almost guaranteed. 
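To make the scenario Tim describes concrete, here is a minimal sketch (not taken from the tracker) of the kind of program that tends to run into this on Python 2.3: a daemon thread that is still busy when the main thread finishes keeps running while the interpreter tears modules down.

    import threading, time

    def worker():
        time.sleep(5)                  # still inside this call when Python shuts down
        threading.currentThread()      # the module global may already be None here

    t = threading.Thread(target=worker)
    t.setDaemon(True)                  # the interpreter will not wait for this thread
    t.start()
    # The main thread falls off the end here and shutdown begins while
    # worker() is still sleeping; with a non-daemon thread, Python would
    # instead wait for worker() to finish.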
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-03 15:08 Message: Logged In: YES user_id=357491 Yep, that is how Python cleans up a module; sets everything in the module to None and then removes it from sys.modules . As for your case, I don't know enough about daemonized threads. My patch for this only tries to let the exception make it to the top without 'threading' hiding the exception by causing its own exception from interpreter shutdown. In other words I don't know if this is a related issue or not. ---------------------------------------------------------------------- Comment By: k (carlosayam) Date: 2004-06-03 13:01 Message: Logged In: YES user_id=990060 bcannon said on 2004-02-17 >> Does anyone know how to cause this error in isolation? I'm getting the same error when I start a new thread, daemonise it and the thread goes into a very slow database operation (loading a large object); meanwhile the main thread starts a GUI; then I close the window, exiting the main loop and the python interpreter ends (or tries to end.) In relation to bcannon comment on how to reproduce the error (setting all variables to None in the module), my guess is that while exiting, the python interpreter is somehow freeing all variables in the module (cleaning the module or something), but the module is still running and that raises the error... is this possible? Note: if the thread is not daemonised, the problem desapears but the script (the python interpreter) takes a while to finish. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-17 14:36 Message: Logged In: YES user_id=357491 How to reproduce the bug: * Follow the instructions at http://mnetproject.org/repos/ on how to get a copy of the in-dev version of mnet (I used the instructions for using wget and it worked fine). * Run ``python setup.py``. Do realize that this app will download other Python code (crypto stuff mostly) to finish its install. Takes up about 54 MB on my machine after it is compiled. * Run ``python setup.py test -a``. This executes the testing suite. The bug manifests itself after the testing suite finishes execution. This will consistently cause the bug every time. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-17 04:45 Message: Logged In: YES user_id=357491 Discovered this is not fixed after all (previous fix didn't go far enough; still needed, though). Patch 954922 is the second attempt to fix this. This time, though, I had code that could trigger the problem reliably and thus this should be the proper, and final, fix. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-09 09:36 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. 
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-09 09:32 Message: Logged In: YES user_id=357491 OK, this has been fixed in Python 2.4 (rev. 1.41 of Lib/threading.py; just removed the two calls to currentThread in the _Condition class that were commented to be for their side-effect). Just because I don't feel rock-solid on my fix to the point of it not changing some obscure semantics that I would be willing to put a gun to my head over it I am not going to backport this until I hear from someone who had this bug and reports that they have not had any issues for a long time. Otherwise I will just leave the fix in 2.4 only. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-03-03 15:29 Message: Logged In: YES user_id=357491 To force this error I inserted code to set all attributes of the threading module to None and then throw and exception. After commenting out the two calls to currentThread in _Condition the thrown exception to trigger the problem propogated to the top correctly. I did have the clean-up function give out since I set *everything* to None, but it doesn't look like a normal issue as shown in the exception traceback in the OP. I don't think it is an issue. So it seems commenting the two calls to currentThread in _Condition solves the problem. But since this is threading code and the comment explicitly states it is for the side-effect I am going to try to get a second opinion. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-17 15:59 Message: Logged In: YES user_id=357491 Does anyone know how to cause this error in isolation? I have tried a bunch of different ways but cannot cause an exception to be raised at the correct point in Threading.__bootstrap() to lead to self.__stop() to be called while the interpreter is tearing itself down. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-02-10 12:18 Message: Logged In: YES user_id=357491 After staring at the code, I am not even sure if the calls for its side- effect are needed. If you call currentThread(), it either returns a value from _active which is a dict of running Thread instances indexed by thread idents, or a new _DummyThread instance that inserts itself into _active. Now the two calls in the _Condition class are purely for this possible side-effect. And yet at no point is anything from _active, through currentThread or direct access, even touched by a _Condition method. The only chance this might be an issue is if a _Condition instance uses an RLock instance for its locking, which is the default behavior. But this still makes the side-effect need useless. RLocks will call currentThread on their own and since this is all in the same thread the result won't change. And in this specific case of this bug, the _Condition instance is created with a Lock instance since that is what the Thread instance uses for constructing the _Condition instance it use, which is just thread.allocate_lock() which is obviously not an RLock. In other words I can't find the point to the side-effect in _Condition. I will try to come up with some testing code that reproduces the error and then see if just removing the calls in _Condition remove the error and still pass the regression tests. 
---------------------------------------------------------------------- Comment By: John Goerzen (jgoerzen) Date: 2003-06-16 23:26 Message: Logged In: YES user_id=491567 I can confirm that this behavior is not present in Python 2.2 in the same version that I am using to test against Python 2.3. I will be on vacation for most of this and next week. I'll try to get to the logging script before I leave, but I might not get to it until I return. FYI, you can also obtain OfflineIMAP at http://quux.org/devel/offlineimap. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 16:58 Message: Logged In: YES user_id=357491 OK, following Tim's advice I checked and it seems that Thread calls a method while shutting itself down that calls Condition.notifyAll which calls currentThread which is a global. It would appear that offlineimap is leaving its threads running, the program gets shut down, the threads raise an error while shutting down (probably because things are being torn down), this triggers the stopping method in Thread, and this raises its own exception because of the teardown which is what we are seeing as the TypeError. So the question is whether Condition should store a local reference to currentThread or not. It is not the most pure solution since this shutdown issue is not Condition's, but the only other solution I can think of is to have Thread keep a reference to currentThread, inject it into the current frame's global namespace, call Condition.notifyAll, and then remove the reference from the frame again. I vote for the cleaner, less pure solution. =) Am I insane on this or does it at least sound like this is the problem and proper solution? ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 16:19 Message: Logged In: YES user_id=357491 Nuts. For some reason I thought the OP had said when threads were exiting. I will ask on python-dev for a good explanation of what happens when Python is shutting down. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2003-06-16 11:05 Message: Logged In: YES user_id=31435 Note that the OP said "when my application exits". When Python is tearing down the universe, it systematically sets module-global bindings to None. currentThread() is a module global. I can't make more time for this now, so for more info talk about it on Python-Dev. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-16 09:44 Message: Logged In: YES user_id=357491 Well, I'm stumped. I checked the diff from when 2.2 was initially released until now and the only change that seems to be related to any of this is that what is returned by currentThread is not saved in a variable. But since the error is the calling of currentThread itself and not saving the return value I don't see how that would affect anything. I also went through offlineimap but I didn't see anything there that seemed to be setting currentThread to None. Although since several files do ``import *`` so there still is a possibility of overwriting currentThread locally. 
So, for my owning learning and to help solve this, I have written a tracing function that writes to stderr using the logging package when it detects that either currentThread or threading.currentThread has been set to None, locally or globally (I assume the code is not injecting into builtins so I didn't bother checking there). The file is named tracer.py and I have attached it to this bug report. If you can execute ``sys.settrace(tracer.trace_currentThread)`` before offlinemap starts executing and immediately within each thread (it has to be called in *every* thread since tracing functions are no inherited from the main thread) it should print out a message when currentThread becomes None. If you *really* want to make this robust you can also have it check sys.modules['threading'] every time as well, but I figure there is not going to be much renaming and masking of currentThread. ---------------------------------------------------------------------- Comment By: Matthias Klose (doko) Date: 2003-06-15 18:44 Message: Logged In: YES user_id=60903 Please see http://packages.debian.org/unstable/mail/offlineimap.html or for the tarball: http://ftp.debian.org/debian/pool/main/o/offlineimap/offlineimap_3.99.18.tar.gz ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-06-15 18:19 Message: Logged In: YES user_id=357491 I went to the FTP site and all I found was some huge, compressed files (after changing the path to ftp://ftp.debian.org/ debian/dists/sid/main/source); no specific program called offlinemap. If it is in one of those files can you just add the file to the bug report? As for the reported bug, it looks like currentThread is being redefined, although how that is happening is beyond me. I checked the 'threading' module and no where is currentThread redefined which could lead to None. Has the app being run been changed at all since Python 2.2 was released? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=754449&group_id=5470 From noreply at sourceforge.net Thu Jun 3 03:55:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 03:55:42 2004 Subject: [ python-Bugs-965562 ] isinstance fails to recognize instance Message-ID: Bugs item #965562, was opened at 2004-06-03 09:55 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Ib J?rgensen (ijorgensen) Assigned to: Nobody/Anonymous (nobody) Summary: isinstance fails to recognize instance Initial Comment: I searched the bugbase and found a somewhat similar problem that was termed invalid (id 563730 ) I did however spend some time figuring out what was going wrong so..? 
The following two modules document the problem:

MODULE A:

import b

class A:
    def __init__(self):
        self.name = "a"

if __name__ == "__main__":
    x = A()
    print "main", isinstance(x, A)
    y = b.B()
    y.notify(x)

MODULE B:

import a

class B:
    def __init__(self):
        self.name = "b"
    def notify(self, cl):
        print "notify", isinstance(cl, a.A)

if __name__ == "__main__":
    x = a.A()
    print isinstance(x, a.A)
    y = B()
    y.notify(x)

Running module A gives inconsistent results; the printout is:

main True
notify False

The modules do have circular imports, and it is clear that this is behind the problem - but that may happen in real life.

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470

From noreply at sourceforge.net Thu Jun 3 05:19:52 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Thu Jun 3 05:20:04 2004
Subject: [ python-Feature Requests-491033 ] asyncore - api doesn't provide doneevent
Message-ID:

Feature Requests item #491033, was opened at 2001-12-10 07:00
Message generated for change (Comment added) made by loewis
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=491033&group_id=5470

Category: Python Library
Group: None
>Status: Closed
>Resolution: Accepted
Priority: 5
Submitted By: Joseph Wayne Norton (natsukashi)
Assigned to: Nobody/Anonymous (nobody)
Summary: asyncore - api doesn't provide doneevent

Initial Comment:
The asyncore api doesn't provide a way to invoke the "loop" method so that it performs only one iteration. Since the loop method does some selection of which poll method to call before entering the while loop, I have to duplicate the same behavior in my own "dooneiteration" method. I would like some functionality similar to the Tk interp model, where one can enter the mainloop or simply do one event.

regards,
- joe n.

----------------------------------------------------------------------
>Comment By: Martin v. Löwis (loewis)  Date: 2004-06-03 11:19
Message: Logged In: YES  user_id=21627

Thanks to the patch, the feature will be available in Python 2.4.

----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson)  Date: 2004-05-20 09:19
Message: Logged In: YES  user_id=341410

Sorry about the double post, I was distracted by pretty colors or something.

----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson)  Date: 2004-05-20 08:45
Message: Logged In: YES  user_id=341410

I have posted the patch to implement the desired functionality: SF patch #957240; python.org/sf/957240

----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson)  Date: 2004-05-20 08:28
Message: Logged In: YES  user_id=341410

I've implemented and submitted a patch to implement the desired functionality in asyncore.loop via a 4th keyword argument called 'count'. The patch to current CVS can be found here: python.org/sf/957240

----------------------------------------------------------------------
Comment By: Tim Peters (tim_one)  Date: 2001-12-10 18:41
Message: Logged In: YES  user_id=31435

The Feature Requests tracker didn't have any Categories defined. I defined the same ones as the Bug tracker has, and put this report in category "Python Library".
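For readers wondering what the accepted change looks like from the caller's side: per the comments above, asyncore.loop() in Python 2.4 grew a 'count' keyword argument, so a single iteration can be requested without reimplementing the loop. A small usage sketch, assuming the dispatchers are already set up:

    import asyncore

    # ... create asyncore.dispatcher objects as usual ...

    asyncore.loop(timeout=1.0, count=1)   # perform one poll pass, then return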
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-10 18:15 Message: Logged In: YES user_id=31435 Moved to the Feature Request tracker. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-10 15:47 Message: Logged In: YES user_id=6380 If you can supply a decent patch, we'd happily apply it to Python 2.3. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=491033&group_id=5470 From noreply at sourceforge.net Thu Jun 3 05:38:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 05:38:35 2004 Subject: [ python-Bugs-954981 ] Error in urllib2 Examples Message-ID: Bugs item #954981, was opened at 2004-05-16 23:33 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954981&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Phillip (phillip_) Assigned to: Nobody/Anonymous (nobody) Summary: Error in urllib2 Examples Initial Comment: I'll quote from the urllib2 examples: >>> Here we are sending a data-stream to the stdin of a CGI and reading the data it returns to us: >>> import urllib2 >>> req = urllib2.Request(url='https://localhost/cgi-bin/test.cgi', ... data='This data is passed to stdin of the CGI') >>> f = urllib2.urlopen(req) ... <<< urllib2 returns: urllib2.URLError: (This is the Errormsg in 2.3.3, 2.2.3 is a different syntax but says the same. This is seriously misleading, as it implies that somehow urllib2 can handle https, which it can't in this manner. At least not in the way exampled here. Please correct or delete this example. It would be nice if you could provide a working example for handling https. Examples are a good thing in the whole httplib / urllib / urllib2 area, since it appears somewhat hard to overlook. I'm sorry I can't provide a working version of this example, as I am trying to figure out how to handle https myself. Anyway, it's a documentation bug and needs fixing. Thanks. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-03 11:38 Message: Logged In: YES user_id=21627 What operating system are you using? The example works fine for me, when modified to use an actual web server: >>> urllib2.Request("https://sourceforge.net/tracker/index.php","func=detail&aid=954981&group_id=5470&atid=105470") >>> r=urllib2.Request("https://sourceforge.net/tracker/index.php","func=detail&aid=954981&group_id=5470&atid=105470") >>> f=urllib2.urlopen(r) >>> f.read() Can you provide the complete traceback? For https to work, your Python installation must support SSL. For this, import _ssl must succeed. For that to work, OpenSSL must have been used when Python was built. I can add a text to the documentation indicating that the example fails if the Python installation does not support SSL. 
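A quick way to perform the check Martin describes, i.e. whether the running build has SSL support at all (illustrative only; the URL is just an example):

    try:
        import _ssl
    except ImportError:
        print "this Python was built without SSL support; https URLs will fail"
    else:
        import urllib2
        f = urllib2.urlopen("https://sourceforge.net/")
        print f.read(200)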
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954981&group_id=5470 From noreply at sourceforge.net Thu Jun 3 05:49:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 05:49:20 2004 Subject: [ python-Bugs-935749 ] locale dependency of string methods undocumented Message-ID: Bugs item #935749, was opened at 2004-04-15 18:26 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=935749&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: George Yoshida (quiver) Assigned to: Martin v. L?wis (loewis) Summary: locale dependency of string methods undocumented Initial Comment: Some string methods/constants are locale dependent, but unlike string constants, string methods lack the description about the locale dependency. This wasn't a big problem in older Python, but since Python 2.3 each time when you start up an IDLE, lib/idlelib/IOBinding.py sets the locale to LC_CTYPE: locale.setlocale(locale.LC_CTYPE, "") This affects the behavior of string methods and module (at least on Windows when locale is set to ('Japanese_Japan', 'cp932')). Here is the result of locale-dependent constants/methods. IDLE 1.0.2 # IDLE on Windows 2000 >>> import string, locale >>> string.letters 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuv wxyz\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1 \xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9 \xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6 \xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3 \xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf' >>> _[-1].isalpha() True >>> locale.getlocale() ['Japanese_Japan', '932'] This *feature* can be easily reproduced from the command line. Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import string, locale >>> string.letters 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUV WXYZ' >>> locale.getlocale() (None, None) >>> locale.setlocale(locale.LC_CTYPE, '') 'Japanese_Japan.932' >>> string.letters 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuv wxyz\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1 \xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9 \xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6 \xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3 \xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf' >>> _[-1].isalnum() True Quote from msdn.microsoft.com: * http://msdn.microsoft.com/library/en- us/vclib/html/_crt_isalpha.2c_.iswalpha.asp The result of the test condition for the isalpha function depends on the LC_CTYPE category setting of the current locale; see setlocale for more information. string methods affected are: * isalpha * isalpnum MS also says that some other string methods(islower, isdigit, etc.) are locale dependent, so it might be betterto add a note about the dependency to those methods too. Thanks. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-03 11:49 Message: Logged In: YES user_id=21627 Thanks for the report. This list of locale-dependent methods is actually longer; they are now correctly listed in libstdtypes.tex 1.155 and 1.129.8.10. 
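The report above can be condensed into a short example (Python 2.3 on Windows with a Japanese locale was assumed in the report; the exact output depends on platform and locale):

    import locale, string

    print repr(string.letters)             # plain ASCII letters in the C locale
    locale.setlocale(locale.LC_CTYPE, '')  # e.g. 'Japanese_Japan.932' on that setup
    print repr(string.letters)             # now extended with locale-specific bytes
    print string.letters[-1].isalpha()     # True, even though the byte is > 0x7f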
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=935749&group_id=5470 From noreply at sourceforge.net Thu Jun 3 05:55:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 05:56:09 2004 Subject: [ python-Bugs-881861 ] type of Py_UNICODE depends on ./configure options Message-ID: Bugs item #881861, was opened at 2004-01-22 02:02 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=881861&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Biehunko Michael (aszlig) Assigned to: Nobody/Anonymous (nobody) Summary: type of Py_UNICODE depends on ./configure options Initial Comment: "u" (Unicode string) [Py_UNICODE *] Convert a null-terminated buffer of Unicode (UCS-2) data to a Python Unicode object. If the Unicode buffer pointer is NULL, None is returned. ^^^^^ it's either ucs-2 _or_ ucs-4 (./configure with --enable-unicode=ucs?). url: http://www.python.org/doc/current/api/arg-parsing.html ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-03 11:55 Message: Logged In: YES user_id=21627 Thanks for the report. Fixed in concrete.tex 1.43 utilities.tex 1.13 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=881861&group_id=5470 From noreply at sourceforge.net Thu Jun 3 06:43:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 06:43:50 2004 Subject: [ python-Bugs-965562 ] isinstance fails to recognize instance Message-ID: Bugs item #965562, was opened at 2004-06-03 09:55 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Ib J?rgensen (ijorgensen) Assigned to: Nobody/Anonymous (nobody) Summary: isinstance fails to recognize instance Initial Comment: I searched the bugbase and found a somewhat similar problem that was termed invalid (id 563730 ) I did however spend some time figuring out what was going wrong so..? The following two modules document the problem: MODULE A: import b class A: def __init__(self): self.name = "a" if __name__ == "__main__": x = A() print "main",isinstance(x,A) y = b.B() y.notify(x) MODULE B: import a class B: def __init__(self): self.name = "b" def notify(self,cl): print "notify",isinstance(cl,a.A) if __name__ == "__main__": x = a.A() print isinstance(x,a.A) y = B() y.notify(x) running module A gives inconsistent results the printout is: main True notify False The modules do have circular imports and it is clear that this is behind the problem - but that may happen in real life. ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-03 12:43 Message: Logged In: YES user_id=89016 The problem is not the circular import per se, but the fact that module a gets imported twice, once as the __main__ module (with it's version of the class A) and once as the module A (with it's own version of the class A). 
From Python's point of view these are two completely unrelated classes with no inheritance relationship whatsoever.

Can this bug be closed?

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470

From noreply at sourceforge.net Thu Jun 3 07:23:28 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Thu Jun 3 07:23:34 2004
Subject: [ python-Bugs-965562 ] isinstance fails to recognize instance
Message-ID:

Bugs item #965562, was opened at 2004-06-03 09:55
Message generated for change (Comment added) made by ijorgensen
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470

Category: Python Interpreter Core
Group: Python 2.2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Ib Jørgensen (ijorgensen)
Assigned to: Nobody/Anonymous (nobody)
Summary: isinstance fails to recognize instance

Initial Comment:
I searched the bugbase and found a somewhat similar problem that was termed invalid (id 563730). I did however spend some time figuring out what was going wrong, so here goes.

The following two modules document the problem:

MODULE A:

import b

class A:
    def __init__(self):
        self.name = "a"

if __name__ == "__main__":
    x = A()
    print "main", isinstance(x, A)
    y = b.B()
    y.notify(x)

MODULE B:

import a

class B:
    def __init__(self):
        self.name = "b"
    def notify(self, cl):
        print "notify", isinstance(cl, a.A)

if __name__ == "__main__":
    x = a.A()
    print isinstance(x, a.A)
    y = B()
    y.notify(x)

Running module A gives inconsistent results; the printout is:

main True
notify False

The modules do have circular imports, and it is clear that this is behind the problem - but that may happen in real life.

----------------------------------------------------------------------
>Comment By: Ib Jørgensen (ijorgensen)  Date: 2004-06-03 13:23
Message: Logged In: YES  user_id=1055299

I have no problems if you close this bug. I do however feel that it is quite subtle to distinguish classes by the rather random import order.

----------------------------------------------------------------------
Comment By: Walter Dörwald (doerwalter)  Date: 2004-06-03 12:43
Message: Logged In: YES  user_id=89016

The problem is not the circular import per se, but the fact that module a gets imported twice, once as the __main__ module (with its version of the class A) and once as the module a (with its own version of the class A). From Python's point of view these are two completely unrelated classes with no inheritance relationship whatsoever.

Can this bug be closed?
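Walter's explanation can be demonstrated without any circular import at all; a minimal sketch, where the file name mod_a.py is hypothetical:

    # mod_a.py
    class A:
        pass

    if __name__ == "__main__":
        import mod_a                      # loads this same file again, as 'mod_a'
        x = A()                           # instance of __main__.A
        print isinstance(x, mod_a.A)      # False: mod_a.A is a distinct class object
        print A is mod_a.A                # False

Running ``python mod_a.py`` prints False twice, which is the same effect the OP sees with modules a and b.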
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470 From noreply at sourceforge.net Thu Jun 3 08:42:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 08:42:32 2004 Subject: [ python-Bugs-887242 ] "-framework Python" for building modules is bad Message-ID: Bugs item #887242, was opened at 2004-01-29 21:40 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=887242&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Bob Ippolito (etrepum) Assigned to: Jack Jansen (jackjansen) Summary: "-framework Python" for building modules is bad Initial Comment: We should use the -bundle_loader method for linking modules for both the framework and non-framework verisons of Python. All of these "version mismatch" errors would pretty much be avoided if this were the case, since a 10.2 and 10.3 MacPython 2.3 should be binary compatible. There are other reasons to use -bundle_loader, such as using the same suite of modules for both Stackless and regular Python. Besides, -bundle_loader is for building -bundles :) It's a simple change to the configure script, and it would be great if this could happen before OS X 10.4. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 14:42 Message: Logged In: YES user_id=45365 I checked in the mods I mentioned in my previous followup: configure.in rev. 1.455 configure rev. 1.444 Makefile.pre.in rev. 1.143 Lib/distutils/sysconfig.py rev.1.58 ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-06-02 23:37 Message: Logged In: YES user_id=45365 I'm finally getting around to revisiting this bug, and I ran into another problem (that we've already discussed offline, but I'm adding it here for posterity): -undefined dynamic_lookup only works if you have MACOSX_DEPLOYMENT_TARGET set, and set to >= "10.2". I'm now experimenting with the following: if you have that variable set during configure time it will use dynamic_lookup. Moreover, it will record the value of MACOSX_DEPLOYMENT_TARGET in the Makefile. Distutils will check that the current value of MACOSX_DEPLOYMENT_TARGET is the same as that during configure time, and complain if not. I've resisted the temptation to force MACOSX_DEPLOYMENT_TARGET to the configure-time value in distutils, because I think we may break things that way. Feel free to convince me otherwise:-) I'm only doing this for 2.4 right now, as a straight backport to 2.3 is useless: the Makefile is already supplied by Apple. So, any fix for 2.3.X will need to be a band-aid in distutils (possibly triggered by an environment variable?). ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-04-02 10:34 Message: Logged In: YES user_id=139309 minimal patch for Python 2.4 CVS configure.in (and configure) attached. ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-01-30 00:21 Message: Logged In: YES user_id=139309 -undefined dynamic_lookup has a localized effect, it still uses two level namespaces and doesn't force the whole process to go flat. 
Apple uses this flag for Perl in 10.3 (maybe other things, like Apache), so presumably they designed it with situations like ours in mind. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-01-30 00:11 Message: Logged In: YES user_id=45365 Ok, I only tried -bundle_loader Python.framework, and when that didn't work I stopped trying. But I have some misgivings about the -undefined dynamic_lookup: doesn't this open up the whole flat namespace/undefined suppress can of worms that we had under 10.1? What we *really* want is to a way to tell the linker "at runtime, the host program must already have loaded a Python.framework, any Python.framework, and that is what we want to link against". ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-01-30 00:05 Message: Logged In: YES user_id=139309 That's not true. -bundle_loader does not generate a mach header load command, it is merely so that ld can make sure that all of your symbols are defined at link time (it will work for an embedded Python, try it). You do need -undefined dynamic_lookup though, because -bundle_loader doesn't respect indirect symbols. I'm not sure it's possible to make Python.framework get "imported" into the executable so that it's possible to -bundle_loader without -undefined dynamic_lookup (-prebind maybe, but I doubt it). ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-01-29 23:53 Message: Logged In: YES user_id=45365 There's a reason why I use -framework in stead of -bundle_loader: you can only specify an application as bundle_loader, not Python.framework itself. This means you have to specify the Python executable, which makes it impossible to load extension modules (including all the standard extension modules) into an application embedding Python. I don't think this is acceptable. ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-01-29 23:24 Message: Logged In: YES user_id=139309 err, this is a 10.3+ only request, and requires use of -undefined dynamic_lookup as well ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=887242&group_id=5470 From noreply at sourceforge.net Thu Jun 3 08:50:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 08:50:22 2004 Subject: [ python-Bugs-696535 ] Python 2.4: Warn about omitted mutable_flag. Message-ID: Bugs item #696535, was opened at 2003-03-03 14:25 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=696535&group_id=5470 Category: Extension Modules Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Michael Hudson (mwh) Assigned to: Martin v. L?wis (loewis) Summary: Python 2.4: Warn about omitted mutable_flag. Initial Comment: We need to add a warning to fcntl.ioctl() for 2.4. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-03 14:50 Message: Logged In: YES user_id=21627 I've added something to "Porting to 2.4" as well. 
Also, I'm taking a different route with the reminders: Instead of filing a new reminder that the warning should be removed in 2.5, I've added a #error in the source which is triggered when the minor version changes to 5. Committed the changes as whatsnew24.tex 1.50 NEWS 1.988 fcntlmodule.c 2.42 ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 18:52 Message: Logged In: YES user_id=6656 Is the attached all that is needed? Do we want a test that the warning is indeed emitted? Reading http://python.org/sf/555817 first might be good for the memory. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=696535&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:09:32 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:09:37 2004 Subject: [ python-Bugs-719300 ] pimp needs to do download itself Message-ID: Bugs item #719300, was opened at 2003-04-10 23:35 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=719300&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Fixed Priority: 3 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: pimp needs to do download itself Initial Comment: Pimp currently uses the OSX programs curl and tar to download distributions and unpack them. There is absolutely no reason not to use the urllib and tarfile modules for this. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:09 Message: Logged In: YES user_id=45365 urllib2-based downloading has been implemented in pimp.py rev. 1.31 and 1.27.4.2, thanks to code donated by Kevin Ollivier. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2003-04-22 15:58 Message: Logged In: YES user_id=45365 Unpack has been implemented, downloading isn't that important, leaving that for later. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2003-04-14 16:47 Message: Logged In: YES user_id=45365 There is actually a very good reason to use at least the tarfile module: if we use that in stead of unix tar we can fiddle pathnames while we unpack. Thereby we can do per-user installs of packages even if the tarfile was created for a system-wide installation. (That is, once the details of where per-user packages are going to be stored have been worked out). 
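The pathname fiddling Jack mentions can be sketched with the tarfile module like this; a rough illustration only, where the prefix and target paths are made-up examples, not what pimp actually uses:

    import os
    import tarfile

    def extract_rerooted(archive_path, strip_prefix, target_dir):
        # Extract members while rewriting a system-wide prefix to a
        # per-user location, something "unix tar" cannot easily do.
        tf = tarfile.open(archive_path, "r:gz")
        for member in tf.getmembers():
            if member.name.startswith(strip_prefix):
                member.name = member.name[len(strip_prefix):]
                tf.extract(member, target_dir)
        tf.close()

    # e.g. extract_rerooted("pkg.tar.gz", "Lib/site-packages/",
    #                       os.path.expanduser("~/Library/Python/"))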
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=719300&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:11:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:11:38 2004 Subject: [ python-Bugs-783095 ] MacOS9: installer should test CarbonLib version Message-ID: Bugs item #783095, was opened at 2003-08-04 22:54 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=783095&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: MacOS9: installer should test CarbonLib version Initial Comment: Apparently MacPython 2.3 needs CarbonLib 1.3 or later. The installer should test for this. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:11 Message: Logged In: YES user_id=45365 This isn't going to be fixed. In stead, the download page has an exact specification of which version of MacOS MacPython-OS9 will run on. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=783095&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:14:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:14:19 2004 Subject: [ python-Bugs-963494 ] packman upgrade issue Message-ID: Bugs item #963494, was opened at 2004-05-31 11:39 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963494&group_id=5470 Category: Macintosh >Group: Feature Request Status: Open Resolution: None >Priority: 3 Submitted By: Ronald Oussoren (ronaldoussoren) Assigned to: Jack Jansen (jackjansen) Summary: packman upgrade issue Initial Comment: When you upgrade a package using packman the installer doesn't remove the old version before installing the new version. The end result is that old files might interfere with the correct operation of the upgraded package. I ran into this with an upgrade from PyObjC 1.0 to PyObjC 1.1. Some extension modules have moved between those two releases. When upgrading using PackMan the old extension modules still exists, and cause problems when you try to use the package. Three possible solutions: 1) Remove the existing package directory before installing the upgrade 2) Add pre/post install/upgrade scripts 3) Use a real package database ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:14 Message: Logged In: YES user_id=45365 I'm moving this to the feature requests category: it requires major surgery, and it could be argued that this functionality belongs more in distutils than in Package Manager. 
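For illustration, option (1) from the list above could look like the following sketch; the site-packages layout and the function names are assumptions, not what packman actually does:

import os
import shutil

def install_upgrade(sitepackages, packagename, unpack_new_version):
    # Option (1): remove the old package directory first, so extension
    # modules that moved between releases cannot linger and shadow the
    # upgraded package.
    olddir = os.path.join(sitepackages, packagename)
    if os.path.isdir(olddir):
        shutil.rmtree(olddir)
    unpack_new_version(sitepackages)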
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963494&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:35:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:35:18 2004 Subject: [ python-Bugs-959291 ] PythonIDE crashes on very large scripts folder Message-ID: Bugs item #959291, was opened at 2004-05-24 10:01 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959291&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: PythonIDE crashes on very large scripts folder Initial Comment: The MacPython IDE crashes if your scripts folder is very large (or, alternatively, if you select a very large folder as the scripts folder), and here "large" counts files in subdirs, etc. The problem is that we have a fixed number of menu entry IDs, and we run out of them. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:35 Message: Logged In: YES user_id=45365 Fixed by stopping traversal of the scripts folder as soon as we allocate a menu ID > 200. This leaves 55 menu ids as headroom. Fixed in Wapplication.py rev. 1.24 and 1.22.8.1. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959291&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:40:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:40:47 2004 Subject: [ python-Bugs-965562 ] isinstance fails to recognize instance Message-ID: Bugs item #965562, was opened at 2004-06-03 09:55 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Ib J?rgensen (ijorgensen) Assigned to: Nobody/Anonymous (nobody) Summary: isinstance fails to recognize instance Initial Comment: I searched the bugbase and found a somewhat similar problem that was termed invalid (id 563730 ) I did however spend some time figuring out what was going wrong so..? The following two modules document the problem: MODULE A: import b class A: def __init__(self): self.name = "a" if __name__ == "__main__": x = A() print "main",isinstance(x,A) y = b.B() y.notify(x) MODULE B: import a class B: def __init__(self): self.name = "b" def notify(self,cl): print "notify",isinstance(cl,a.A) if __name__ == "__main__": x = a.A() print isinstance(x,a.A) y = B() y.notify(x) running module A gives inconsistent results the printout is: main True notify False The modules do have circular imports and it is clear that this is behind the problem - but that may happen in real life. ---------------------------------------------------------------------- >Comment By: Walter D?rwald (doerwalter) Date: 2004-06-03 15:40 Message: Logged In: YES user_id=89016 It has nothing to do with the *order* of the imports. You can get this problem even if you have only one module. 
Write a module foo.py with the following content: -- class Foo: pass import foo print isinstance(foo.Foo(), Foo) -- When you run this script with "python foo.py", it will print True False The first output is from the "import foo" statement, which imports the module under the name foo. The second output will come from the __main__ script. The consequence of all this is: Don't import the __main__ script. Closing the bug report. ---------------------------------------------------------------------- Comment By: Ib J?rgensen (ijorgensen) Date: 2004-06-03 13:23 Message: Logged In: YES user_id=1055299 I have no problems if you close this bug. I do however feel that it is quite subtle to distinguish classes by the rather random import order. ---------------------------------------------------------------------- Comment By: Walter D?rwald (doerwalter) Date: 2004-06-03 12:43 Message: Logged In: YES user_id=89016 The problem is not the circular import per se, but the fact that module a gets imported twice, once as the __main__ module (with it's version of the class A) and once as the module A (with it's own version of the class A). >From Python's point of view these are two completely unrelated classes with on inheritance relationship whatsoever. Can this bug be closed? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965562&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:40:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:43:25 2004 Subject: [ python-Bugs-912758 ] AskFolder (EasyDialogs) does not work? Message-ID: Bugs item #912758, was opened at 2004-03-09 15:33 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=912758&group_id=5470 Category: Macintosh Group: Python 2.3 >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Benjamin Schollnick (bscholln) Assigned to: Jack Jansen (jackjansen) Summary: AskFolder (EasyDialogs) does not work? Initial Comment: I previously was using this code: #fss , ok = macfs.GetDirectory('Folder to search:') Which worked fine on Python v2.3 on MOSX v10.3. But, I am updating the code to use: test = EasyDialogs.AskFolder ( message="Please choose a folder to process", defaultLocation=".", wanted=string) Which results in: Traceback (most recent call last): File "Panther:Users:benjamin:Desktop:Python:finders:re-dup- file.py", line 59, in ? test = EasyDialogs.AskFolder ( message="Please choose a folder to process", defaultLocation=".", wanted=string) AttributeError: 'module' object has no attribute 'AskFolder' If I run the EasyDialogs.py self test, it dies (without a traceback), right after the following code: rv = AskFileForSave(wanted=Carbon.File.FSRef, savedFileName="%s.txt"%s) Message("rv.as_pathname: %s"%rv.as_pathname()) Which the AskFolder test is what appears to be crashing the EasyDialogs test... (Since the pathname test message is appearing during the self test). ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:40 Message: Logged In: YES user_id=45365 Sorry for being so late replying to this: I missed it. But: I cannot repeat this. Could you please test again, to see that there wasn't some transient problem? 
Please run with "pythonw -v" and verify that you're getting the correct EasyDialogs, not some modified (incomplete) copy. If you still see the problem: please reopen the bug report, and supply exact version of Python and MacOS, plus the output of "pythonw -v" of your code and "dir(EasyDialogs)". ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=912758&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:50:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:51:00 2004 Subject: [ python-Bugs-950482 ] -fconstant-cfstrings required for Xcode 1.2 Message-ID: Bugs item #950482, was opened at 2004-05-08 18:14 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=950482&group_id=5470 Category: Macintosh Group: Platform-specific >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Bob Ippolito (etrepum) Assigned to: Jack Jansen (jackjansen) Summary: -fconstant-cfstrings required for Xcode 1.2 Initial Comment: Xcode 1.2's compiler requires -fconstant-cfstrings in order to use CFSTR(...), so it should be added to Darwin's BASECFLAGS. It might need to be conditional, I'm not sure if previous versions of GCC supported the flag. cc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1640) ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:50 Message: Logged In: YES user_id=45365 I don't see this problem. The only references to CFSTR() in the Python sources are in mactoolboxglue.c, and it compiles just fine without the -fconstant-cfstrings with the xcode-1.2-supplied gcc. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=950482&group_id=5470 From noreply at sourceforge.net Thu Jun 3 09:54:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 09:54:49 2004 Subject: [ python-Bugs-912747 ] Unable to overwrite file with findertools.move Message-ID: Bugs item #912747, was opened at 2004-03-09 15:19 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=912747&group_id=5470 Category: Macintosh Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Benjamin Schollnick (bscholln) Assigned to: Jack Jansen (jackjansen) Summary: Unable to overwrite file with findertools.move Initial Comment: I am revising a application that was working on MOSX 10.2, but Python v2.3 appears to have subtle broke a few items... if mac_mode: findertools.move (old_path + filename, new_path) Is dying with the following error... Main, filename : 01_100_QTSKINS_L_12.JPG Moving ------------------------------------- Traceback (most recent call last): File "Panther:Users:benjamin:Desktop:Python:finders:re-dup- file.py", line 102, in ? 
move_file ( pathname, new_dir, filename ) File "Panther:Users:benjamin:Desktop:Python:finders:re-dup- file.py", line 53, in move_file findertools.move (old_path + filename, new_path) File "mac:Applications:Python 2.2.2:Mac:Lib:findertools.py", line 74, in move return finder.move(src_fss, to=dst_fss) File "mac:Applications:Python 2.2.2:Mac:Lib:lib-scriptpackages: Finder:Standard_Suite.py", line 294, in move raise aetools.Error, aetools.decodeerror(_arguments) aetools.Error: (-10000, 'errAEEventFailed', None) As far as I can tell, the move command is not allowing a overwrite situation... If this is expected behavior, is there any way we could add a flag for allowing overwrite? ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 15:54 Message: Logged In: YES user_id=45365 If the behaviour changed between 10.2 and 10.3 then this is because of a change in the Finder: findertools simply uses AppleEvents to let the Finder do the actual work. If the finder "move" command has an option to overwrite the target we could easily add this, but I'll leave it to you to investigate this and supply the patch:-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=912747&group_id=5470 From noreply at sourceforge.net Thu Jun 3 10:17:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 10:17:43 2004 Subject: [ python-Bugs-913581 ] PythonLauncher-run scripts have funny $CWD Message-ID: Bugs item #913581, was opened at 2004-03-10 17:02 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=913581&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: PythonLauncher-run scripts have funny $CWD Initial Comment: PythonLauncher suffers from the general .app problem that it's working directory is "/". This affects scripts run with PythonLauncher without a terminal window: they inherit that home directory. Scripts run with a terminal window get $HOME as working directory. PythonLauncher should do something unified, best is probably to either always use $HOME, or to use the directory where the script is located. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 16:17 Message: Logged In: YES user_id=45365 Fixed in PythonLauncher/main.m rev. 1.2 and 1.1.14.1, by doing a chdir() to $HOME on startup. 
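The fix itself is a chdir() call in PythonLauncher's Objective-C main.m; rendered in Python purely for illustration, the startup logic amounts to:

import os

def normalize_cwd():
    # .app bundles are launched with "/" as the working directory; switch
    # to $HOME so scripts run without a terminal window see the same
    # working directory as scripts run with one.
    if os.getcwd() == "/":
        os.chdir(os.environ.get("HOME", "/"))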
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=913581&group_id=5470 From noreply at sourceforge.net Thu Jun 3 10:39:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 10:39:33 2004 Subject: [ python-Bugs-932977 ] #!/usr/bin/python can find wrong python Message-ID: Bugs item #932977, was opened at 2004-04-10 23:34 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932977&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Fixed Priority: 7 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: #!/usr/bin/python can find wrong python Initial Comment: It seems that in the event of a script run with #!/usr/bin/python MacOSX will not pass a full pathname as argv[0], but only "python". When getpath.c notices this situation it looks over $PATH to try and find python. But if it doesn't find the python from #!, or finds another one first, it will use that python as the basis for its sys.path setting. The darwinports folks have a fix for this: . ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 16:39 Message: Logged In: YES user_id=45365 Fixed in 1.47 and 1.46.14.1 by applying the darwinports fix. Thanks! ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-04-16 00:10 Message: Logged In: YES user_id=45365 I think I agree that it is technically a darwin bug. But as a fix is available (use another means to get the pathname of the current executable if argv[0] seems bogus) I think we should put it in. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-04-15 22:54 Message: Logged In: YES user_id=21627 Isn't that a Darwin bug? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932977&group_id=5470 From noreply at sourceforge.net Thu Jun 3 10:41:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 10:41:42 2004 Subject: [ python-Bugs-864985 ] Python 2.3.3 won't build on MacOSX 10.2 Message-ID: Bugs item #864985, was opened at 2003-12-23 15:11 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=864985&group_id=5470 Category: Macintosh Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Kent Johnson (kjohnson) Assigned to: Jack Jansen (jackjansen) Summary: Python 2.3.3 won't build on MacOSX 10.2 Initial Comment: Python 2.3.3 won't build out-of-the-box on MacOSX 10.2 because Mac/OSX/Makefile is expecting xcodebuild instead of pbxbuild. The fix is already there, just change the commenting on the lines # For 10.2: #PBXBUILD=pbxbuild # For 10.3: PBXBUILD=xcodebuild This fix should either be handled automatically or documented in ./Mac/OSX/README Thanks! 
Kent ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 16:41 Message: Logged In: YES user_id=45365 This has been fixed for the next 2.3.X release: the Makefile now tests whether it should use pbxbuild or xcodebuild in stead of relying on the user uncommenting lines. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=864985&group_id=5470 From noreply at sourceforge.net Thu Jun 3 14:48:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 14:48:37 2004 Subject: [ python-Bugs-965991 ] Build Python 2.3.3 or 4 on RedHat Enterprise fails Message-ID: Bugs item #965991, was opened at 2004-06-03 11:48 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965991&group_id=5470 Category: Build Group: None Status: Open Resolution: None Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Nobody/Anonymous (nobody) Summary: Build Python 2.3.3 or 4 on RedHat Enterprise fails Initial Comment: We recently upgraded from RH 9 to RH Enterprise 3. In detail: cat /etc/issue Red Hat Enterprise Linux WS release 3 (Taroon Update 2) cat /proc/version Linux version 2.4.21-15.EL (bhcompile@daffy.perf.redhat.com) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-34)) #1 Thu Apr 22 00:26:34 EDT 2004 I then tried to build a Python 2.3.3 from source for installation in /net/python. ./configure --prefix=/net/python --enable-unicode=ucs4 make the result is a python with no Tkinter. Looking at the output of ./configure I see: checking for UCS-4 tcl... no The following all result in the same problem: Omitting --enable-unicode=ucs4 Using --enable-unicode=ucs2 All the same but with Python 2.3.4. So...I tried a 2nd class of tests (both with Python 2.3.3 and 2.3.4, same bad result each time): I built my own Tcl/Tk using --prefix=/net/python (which went fine). I then edited Modules/Setup so that Python uses that. This time the make fails with: Modules/posixmodule.c:5788: the use of `tempnam' is dangerous, better use `mkstemp' case $MAKEFLAGS in *-s*) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py -q build;; *) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py build;; esac ./python: error while loading shared libraries: libtk8.4.so: cannot open shared object file: No such file or directory make: *** [sharedmods] Error 127 [rowen@apollo Python-2.3.3]$ ls -l /net/python I have attached the Setup I used for that 2nd class of tests. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965991&group_id=5470 From noreply at sourceforge.net Thu Jun 3 17:04:49 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 17:05:03 2004 Subject: [ python-Bugs-765888 ] InfoPlist.strings files are UTF-16. 
Message-ID: Bugs item #765888, was opened at 2003-07-04 13:43 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=765888&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Fixed Priority: 3 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: InfoPlist.strings files are UTF-16. Initial Comment: Comment by Edward Moy: 6) Python.framework/Versions/2.3/Resources/English.lproj/ InfoPlist.strings and Python.app/Contents/Resources/ English.lproj/InfoPlist.strings needs to be UTF-16. This needs to be fixed. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 23:04 Message: Logged In: YES user_id=45365 This was fixed a long time ago (just after the 2.3 release, I think). ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2003-07-21 23:15 Message: Logged In: YES user_id=45365 Fixing this would require setting the filetype to binary in the CVS repository, and I think it's unwise to do this just before 2.3 goes out, so I'm punting on it. Note that it will be difficult to fix later too, as the "-kb" flag is per- file, not per-revision, so setting the binary flag would change history. A better solution is probably to create new InfoPlist.strings files for 2.4, in a different location. Or do away with them altogether, and have them be generated (which will also solve the problem that the version numbers here have to be updated). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=765888&group_id=5470 From noreply at sourceforge.net Thu Jun 3 17:11:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 17:11:47 2004 Subject: [ python-Bugs-889200 ] bundlebuilder standalone app doesn't fully quit Message-ID: Bugs item #889200, was opened at 2004-02-02 19:08 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889200&group_id=5470 Category: Macintosh Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Jack Jansen (jackjansen) Summary: bundlebuilder standalone app doesn't fully quit Initial Comment: I used bundlebuilder to create a fully standalone app (well, it fails on Jaguar, alas, but the python framework is inside the app). It works fine, but when I quit the app and try to delete it I get the following message from Finder and the item "python" in the .app is not deleted: The operation cannot be completed because the item "Python" is in use. The only way I've found to delete the app is to REBOOT. Simply logging out and in again does not do the job (which I find quite startling). 
I worked around the problem by making the app semi_standalone, but would rather have it fully self-contained It's a huge app; I've not tried to break it down, but some things to consider: - it uses networking (though I need not make a connection to cause this problem) - it uses Tkinter (and yes the Tcl and Tk frameworks were installed in the app along with the Python framework) - it uses a few threads for networking (again, though, the problem occurs even if no connection is ever made, suggesting that no threads have to be started to cause the problem) -- Russell I have attached the two code files I use to build the application (combined in one zip archive). ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 23:11 Message: Logged In: YES user_id=45365 I'm going to close this bug: (a) no progress was made for 4 months, (b) it's pretty esoteric and (c) doing standalone buildapplication on Panther is useless anyway. Feel free to reopen if you don't agree, though... ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-02-26 00:31 Message: Logged In: YES user_id=139309 The version of Python that comes with OS X 10.3 is **NOT** future or backwards compatible (even with --semi-standalone!). There are a lot of reasons for this, but I do not wish to explain them. 10.4's /System Python will probably be **future** compatible (assuming 10.5 includes the same major python version), but with --semi- standalone and not --standalone. If you want future compatibility right now you **must** use a non-Apple compiled Python with --standalone. If you want backwards compatibility, you can't have it (by default). Technically, you can (in theory) set up an elaborate environment such that you can compile Python and any extensions you need from 10.3 that would work on 10.2, but this is *not the default* and to my knowledge *nobody has tried it*. Currently, the best solution is to have access to a 10.2 machine for compiling and bundling Python and its extensions. I don't consider this backwards compatibility though, because in either case you area compiling/building against OS X 10.2, whether or not you happen to be running it at the time. Surely this is not what you want to hear, but it is what it is. Sometime in the future, hopefully I or someone else will get this elaborate cross- compiling environment going and write up a tutorial. As for me, I'm FAR more interested in fixing current and future problems than spinning my wheels catering to people who can't/won't upgrade. ---------------------------------------------------------------------- Comment By: Russell Owen (reowen) Date: 2004-02-26 00:02 Message: Logged In: YES user_id=431773 % pwd /Users/rowen/PythonRO/testbuild/TUI 0.84.app/Contents/Frameworks/Python.framework/Versions/2.3 % ls -l Python -rwxr-xr-x 1 rowen staff 1088276 19 Feb 09:21 Python % otool -L Python Python: /System/Library/Frameworks/Python.framework/Versions/2.3/Python (compatibility version 2.3.0, current version 2.3.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 71.0.0) /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices (compatibility version 1.0.0, current version 16.0.0) /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 496.0.0) This data was taken on a fresh build that has not yet been run. 
If that makes any difference I can run the app and trash it, so only this file is left. But I'd rather not generate undeletable files unnecessarily. Anyway, I hope it helps. This is NOT a Python built by myself. I'm using the standard Python framework that comes with Panther (in /System). Yes it's not stunningly useful to build a fully standalone app with Panther. I originally did it because I naively thought it would run under Jaguar. I stumbled across the bug at that time and reported it. I can still imagine uses for doing a full standalone build (if they'll be compatible with later versions of MacOS X). Mainly Python has not been 100% backwards compatible (it's been good but not perfect). I had a few changes to make before my app would run under Python 2.3. A fully standalone build would eliminate this issue. Meanwhile, I use semi_standlone for my real releases. Partly because of this bug and partly because I'm not sure it's worth the extra app size to guard against future problems. At least for now the app is fully maintained. At some future date it could well become an issue. ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-02-25 23:40 Message: Logged In: YES user_id=139309 I would like to see the output of "ls -l" and "otool -L" on that file.. This is a Python framework build built by yourself, correct? If you are running --standalone w/ a "/System" Python then you are not really doing anything useful (or supported, etc.) in the first place. ---------------------------------------------------------------------- Comment By: Russell Owen (reowen) Date: 2004-02-25 23:33 Message: Logged In: YES user_id=431773 First of all, after trying to empty the trash (and saying "Continue" when it complains that Python is in use), all that is left of the application bundle is: Contents/Frameworks/Python.framework/Versions/2.3/Python I opened the system.log and console.log and watched while I started and stopped the application and then deleted as much of it as I could. Absolutely nothing showed up in system.log. The console showed: "running sitecustomize.py" as usual on startup and then 0 (the digit zero) on quit. No messages during the partial emptying of trash. Next I tried rm -R. It did empty the trash. I hope it doesn't cause any problems (nothing so far, but I just did it a minute ago). Any other ideas? (Trying rm on a specific file seems sort of pointless because I think I already know which single file is tied up.) ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-02-25 21:26 Message: Logged In: YES user_id=45365 Hmm. Let's first try and find out what else can be meant by "in use". It is probably either a filesystem-related thing or a process/framework- related thing. First: could you check both the console log file and the system log file (both with Utilities->Console) for any clues? Probably best to open both, then run the app, then try to delete it, and see whether anything shows up. Next thing to try is good old unix "rm -r" on the app. Does that work or not? If rm -r works then I think I would proceed by "rm -r"ing specific parts of the app, and see whether there's any part that, when removed, will make empty trash continue. 
---------------------------------------------------------------------- Comment By: Russell Owen (reowen) Date: 2004-02-25 20:53 Message: Logged In: YES user_id=431773 I build the applications on Panther (10.3.2 with all but the 2004-02-23 security patch installed). I checked ps -x before running the app and then again after I did all this: ran, quit and trashed the app and (unsuccessfully, as per the bug report) to empty the trash. Unfortunately there was NO change (aside from CPU time) in the ps -x listing from before to after -- same PIDs and names (and of course the same # of processes). I saved the output in case anyone wants a copy. I only checked by eye so I might have missed something very subtle, but I doubt it. Any other tests you can think of? I'm really puzzled. ---------------------------------------------------------------------- Comment By: Bob Ippolito (etrepum) Date: 2004-02-13 05:55 Message: Logged In: YES user_id=139309 Was the standalone bundle created on Jaguar or Panther? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-02-07 00:17 Message: Logged In: YES user_id=45365 Russell, could you try using "ps" (probably "ps -x", both before running the .app and after running it) to check that the application has indeed fully exited? The fact that you mention there are multiple threads somehow suggests that there is something still using that Python... ---------------------------------------------------------------------- Comment By: Russell Owen (reowen) Date: 2004-02-02 19:10 Message: Logged In: YES user_id=431773 Note: the included BuildMacTUI.py uses semi_standalone = True, but the problem only occurs if I use standalone = True instead. Sorry for the potential confusion. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889200&group_id=5470 From noreply at sourceforge.net Thu Jun 3 17:23:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 17:23:32 2004 Subject: [ python-Bugs-878581 ] I would like an easy way to get to file creation date Message-ID: Bugs item #878581, was opened at 2004-01-16 23:52 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=878581&group_id=5470 Category: Macintosh Group: Feature Request >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Jack Jansen (jackjansen) Summary: I would like an easy way to get to file creation date Initial Comment: With Carbon MacPython it is trivial to get to the file creation date; it is one of the fields returned by os.stat. With Framework MacPython this is no longer the case. I am sure that the interface to the MacOS APIs allow one to do this. I played around some and could see how I could probably get the info, but in what format I had no idea and it was fairly complicated. If you are willing, it'd be nice to have easier access to this info (and perhaps it exists and I've missed it). I can imagine two possible solutions: - expand os.stat so the return object includes the extra attribute; this would be convenient, but would make the mac version nonstandard. Of course the windows version could also include this info, leaving unix the "odd man out". 
- add a convenience method or function to some existing mac library Of course it's an interesting question how many users actually need this info. I ended up using it for processing some measurement results (the language used to write the data files was too crude to include a date in the files). Still...it's clear folks to have use for file info, and it's nice if it's readily available. Thanks for your consideration. -- Russell ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 23:23 Message: Logged In: YES user_id=45365 macfs.FSSpec(filename).GetDates() returns 3 time value: creation, modification and backup. macfs is listed as deprecated, but that won't happen until there are easy interfaces to some of the things that are still missing from Carbon.File (such as getting at the creation date:-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=878581&group_id=5470 From noreply at sourceforge.net Thu Jun 3 17:37:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 17:37:49 2004 Subject: [ python-Bugs-862941 ] PythonIDE on osx can't run script with commandline python Message-ID: Bugs item #862941, was opened at 2003-12-19 14:58 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=862941&group_id=5470 Category: Macintosh Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Flemming Funch (ffunch) Assigned to: Jack Jansen (jackjansen) Summary: PythonIDE on osx can't run script with commandline python Initial Comment: In Python IDE, if I have a script open, any script, and I check "Run with Command Line Python" and run it, I get: TypeError: do_script() takes at least 2 non-keyword arguments (1 given) This is MacPython2.3-Panther, PythonIDE1.01, OSX10.3.2. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 23:37 Message: Logged In: YES user_id=45365 Fixed in PyEdit.py 1.46 and 1.43.8.2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=862941&group_id=5470 From noreply at sourceforge.net Thu Jun 3 17:58:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 17:58:22 2004 Subject: [ python-Bugs-860242 ] PythonIDE does not save for PythonLauncher Message-ID: Bugs item #860242, was opened at 2003-12-15 10:51 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=860242&group_id=5470 Category: Macintosh Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: PythonIDE does not save for PythonLauncher Initial Comment: PythonIDE Save options has a selection for the (now defunct) PythonW creator code, but not for PythonLauncher. The naming in the dialog could also be a little clearer. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-03 23:58 Message: Logged In: YES user_id=45365 Fixed in PyEdit.py 1.47 and 1.43.8.3. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=860242&group_id=5470 From noreply at sourceforge.net Thu Jun 3 22:49:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 22:49:34 2004 Subject: [ python-Bugs-966256 ] realpath description misleading Message-ID: Bugs item #966256, was opened at 2004-06-04 12:49 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966256&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: GaryD (gazzadee) Assigned to: Nobody/Anonymous (nobody) Summary: realpath description misleading Initial Comment: The current description for os.path.realpath is: Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path. Availability: Unix. New in version 2.2. Firstly, realpath _is_ available under windows also (at least, it does on my Win XP box). Secondly, it is not immediately obvious that realpath will also return an absolute path. An alternative understanding is that, when supplied with a relative path, realpath will figure out the absolute path to determine what components are symbolic links, but return the relative path with the links removed. This is quite obvious once you use the function, but if we're going to have documentation, it may as well be complete and straightforward. My suggestion is to change the documentation for realpath to read: "Return the absolute canonical path of the specified filename, eliminating any symbolic links encountered in the path." ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966256&group_id=5470 From noreply at sourceforge.net Thu Jun 3 22:53:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 3 22:53:54 2004 Subject: [ python-Bugs-701936 ] getsockopt/setsockopt with SO_RCVTIMEO are inconsistent Message-ID: Bugs item #701936, was opened at 2003-03-12 13:54 Message generated for change (Comment added) made by gazzadee You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=701936&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: GaryD (gazzadee) Assigned to: Nobody/Anonymous (nobody) Summary: getsockopt/setsockopt with SO_RCVTIMEO are inconsistent Initial Comment: The SO_RCVTIMEO option to getsockopt/setsockopt seems to vary it's parameter format when used under Linux. With setsockopt, the parameter seems to be a struct of {long seconds, long microseconds}, as you would expect since it's modelling a C "struct timeval". However, with getsockopt, the parameter format seems to be {long seconds, long milliseconds} --- ie. it uses milliseconds rather than microseconds. The attached python script demonstrates this problem. Am I doing something crucially wrong, or is this meant to happen, or ... ? What I'm using: Python 2.2.2 (#1, Feb 24 2003, 17:36:09) [GCC 3.0.4 (Mandrake Linux 8.2 3.0.4-2mdk)] on linux2 ---------------------------------------------------------------------- >Comment By: GaryD (gazzadee) Date: 2004-06-04 12:53 Message: Logged In: YES user_id=693152 What do we do about this? 
Since it does not seem to be an explciitly python problem, do we just resolve it as "Rejected" or "Wont Fix"? ---------------------------------------------------------------------- Comment By: GaryD (gazzadee) Date: 2003-08-14 14:33 Message: Logged In: YES user_id=693152 Yes, you're right - the same thing happens with C. Here's the output from sockopt.c on my system: base (len 8) - 12, 345670 default (len 8) - 0, 0 after setsockopt (len 8) - 12, 20 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-05-25 12:42 Message: Logged In: YES user_id=33168 I just tested this on my box (Redhat 9/Linux 2.4). I get similar results with a C program as Python. (Not sure why I didn't get exactly the same results, but I'm tired.) So I'm not sure Python has a problem, since it is just exposing what is happening in C. Take a look at the C example and try it on your box. What results do you get? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=701936&group_id=5470 From noreply at sourceforge.net Fri Jun 4 01:19:05 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 01:19:12 2004 Subject: [ python-Feature Requests-964437 ] idle help is modal Message-ID: Feature Requests item #964437, was opened at 2004-06-01 13:05 Message generated for change (Comment added) made by kbk You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=964437&group_id=5470 >Category: None >Group: None Status: Open Resolution: None >Priority: 4 Submitted By: Matthias Klose (doko) >Assigned to: Nobody/Anonymous (nobody) Summary: idle help is modal Initial Comment: [forwarded from http://bugs.debian.org/252130] the idle online help is unfortunately modal so that one cannot have the help window open and read it, and at the same time work in idle. One must close the help window before continuing in idle which is a nuisance. ---------------------------------------------------------------------- >Comment By: Kurt B. Kaiser (kbk) Date: 2004-06-04 00:19 Message: Logged In: YES user_id=149084 Making this an RFE. If you have time to work up a patch, that would be a big help. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=964437&group_id=5470 From noreply at sourceforge.net Fri Jun 4 05:15:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 05:15:57 2004 Subject: [ python-Bugs-966375 ] Typo in whatsnew24/genexprs Message-ID: Bugs item #966375, was opened at 2004-06-04 09:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966375&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Kristian Ovaska (kovaska) Assigned to: Nobody/Anonymous (nobody) Summary: Typo in whatsnew24/genexprs Initial Comment: Section "Generator Expressions" of whatsnew24 says: "Generator expressions are more compact but less versatile than full generator definitions and __the__ tend to be more memory friendly than equivalent list comprehensions." Should probably be "they". 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966375&group_id=5470 From noreply at sourceforge.net Fri Jun 4 05:33:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 05:34:00 2004 Subject: [ python-Bugs-966387 ] Typo in whatsnew24/genexprs Message-ID: Bugs item #966387, was opened at 2004-06-04 09:33 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966387&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Kristian Ovaska (kovaska) Assigned to: Nobody/Anonymous (nobody) Summary: Typo in whatsnew24/genexprs Initial Comment: Section "Generator Expressions" of whatsnew24 says: "Generator expressions are more compact but less versatile than full generator definitions and __the__ tend to be more memory friendly than equivalent list comprehensions." Should probably be "they". ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966387&group_id=5470 From noreply at sourceforge.net Fri Jun 4 05:34:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 05:34:17 2004 Subject: [ python-Bugs-966375 ] Typo in whatsnew24/genexprs Message-ID: Bugs item #966375, was opened at 2004-06-04 18:15 Message generated for change (Settings changed) made by perky You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966375&group_id=5470 Category: Documentation Group: Python 2.4 >Status: Closed >Resolution: Accepted Priority: 5 Submitted By: Kristian Ovaska (kovaska) Assigned to: Nobody/Anonymous (nobody) Summary: Typo in whatsnew24/genexprs Initial Comment: Section "Generator Expressions" of whatsnew24 says: "Generator expressions are more compact but less versatile than full generator definitions and __the__ tend to be more memory friendly than equivalent list comprehensions." Should probably be "they". ---------------------------------------------------------------------- >Comment By: Hye-Shik Chang (perky) Date: 2004-06-04 18:34 Message: Logged In: YES user_id=55188 Fixed in CVS. Thank you! Doc/whatsnew/whatsnew24.tex 1.54 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966375&group_id=5470 From noreply at sourceforge.net Fri Jun 4 05:35:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 05:35:39 2004 Subject: [ python-Bugs-966387 ] Typo in whatsnew24/genexprs Message-ID: Bugs item #966387, was opened at 2004-06-04 18:33 Message generated for change (Comment added) made by perky You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966387&group_id=5470 Category: Documentation Group: Python 2.4 >Status: Deleted >Resolution: Duplicate Priority: 5 Submitted By: Kristian Ovaska (kovaska) Assigned to: Nobody/Anonymous (nobody) Summary: Typo in whatsnew24/genexprs Initial Comment: Section "Generator Expressions" of whatsnew24 says: "Generator expressions are more compact but less versatile than full generator definitions and __the__ tend to be more memory friendly than equivalent list comprehensions." Should probably be "they". 
---------------------------------------------------------------------- >Comment By: Hye-Shik Chang (perky) Date: 2004-06-04 18:35 Message: Logged In: YES user_id=55188 Duplicated with #966375 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966387&group_id=5470 From noreply at sourceforge.net Fri Jun 4 05:36:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 05:36:41 2004 Subject: [ python-Bugs-966387 ] Typo in whatsnew24/genexprs Message-ID: Bugs item #966387, was opened at 2004-06-04 09:33 Message generated for change (Comment added) made by kovaska You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966387&group_id=5470 Category: Documentation Group: Python 2.4 Status: Deleted >Resolution: None Priority: 5 Submitted By: Kristian Ovaska (kovaska) Assigned to: Nobody/Anonymous (nobody) Summary: Typo in whatsnew24/genexprs Initial Comment: Section "Generator Expressions" of whatsnew24 says: "Generator expressions are more compact but less versatile than full generator definitions and __the__ tend to be more memory friendly than equivalent list comprehensions." Should probably be "they". ---------------------------------------------------------------------- >Comment By: Kristian Ovaska (kovaska) Date: 2004-06-04 09:36 Message: Logged In: YES user_id=27092 Blah. My browser resubmitted this. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-04 09:35 Message: Logged In: YES user_id=55188 Duplicated with #966375 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966387&group_id=5470 From noreply at sourceforge.net Fri Jun 4 06:58:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 06:59:04 2004 Subject: [ python-Bugs-966431 ] import x.y inside of module x.y Message-ID: Bugs item #966431, was opened at 2004-06-04 10:58 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966431&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: import x.y inside of module x.y Initial Comment: To get to the module object from the body of the module itself, the usual trick is to import it from itself, as in: x.py: import x do_stuff_with(x) This fails strangely if x is in a package: package/x.py: import package.x do_stuff_with(package.x) The last line triggers an AttributeError: 'module' object has no attribute 'x'. In other words, the import succeeds but the expression 'package.x' still isn't valid after it. 
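Until this is sorted out, a workaround sketch: fetch the module object of the currently executing submodule from sys.modules instead of going through the package.x attribute, which is only bound on the package once the import of x.py has completed:

# package/x.py
import sys

this_module = sys.modules[__name__]
# this_module is usable here, even though "import package.x" followed by
# "package.x" would still raise AttributeError at this point.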
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966431&group_id=5470 From noreply at sourceforge.net Fri Jun 4 11:39:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 11:39:20 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Fri Jun 4 11:46:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 11:46:22 2004 Subject: [ python-Bugs-966623 ] execfile, type, __module__, who knows ;) Message-ID: Bugs item #966623, was opened at 2004-06-05 01:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: execfile, type, __module__, who knows ;) Initial Comment: Apologies for the imprecise summary - I have no idea where the problem is here. Thanks to JP Calderone for this little horror. (distilled down from his example) bonanza% cat foo.py print type('F', (object,), {})().__class__.__module__ bonanza% python2.3 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set bonanza% python2.4 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? 
print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 From noreply at sourceforge.net Fri Jun 4 11:50:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 11:50:06 2004 Subject: [ python-Bugs-966625 ] Documentation for Descriptors in the main docs Message-ID: Bugs item #966625, was opened at 2004-06-05 01:50 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966625&group_id=5470 Category: Documentation Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Raide (raide) Assigned to: Nobody/Anonymous (nobody) Summary: Documentation for Descriptors in the main docs Initial Comment: It would be good to see something like: http://users.rcn.com/python/download/Descriptor.htm included with the proper documentation. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966625&group_id=5470 From noreply at sourceforge.net Fri Jun 4 12:02:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 12:03:01 2004 Subject: [ python-Bugs-957198 ] C/C++ extensions w/ Python + Mingw (Windows) Message-ID: Bugs item #957198, was opened at 2004-05-20 05:36 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957198&group_id=5470 Category: None >Group: Not a Bug >Status: Closed >Resolution: Rejected Priority: 5 Submitted By: Connelly (connelly) >Assigned to: Thomas Heller (theller) Summary: C/C++ extensions w/ Python + Mingw (Windows) Initial Comment: I am asking you to distribute libpython23.a with Python. This is a library built by the Mingw compiler (www.mingw.org). When this file is present, it greatly simplifies the process of installing a C/C++ extension. The below document explains how make this library, and install a C/C++ extension with the Mingw compiler. Currently, a library is provided for the proprietary MSVC++ compiler (python23.lib), but not the open source Mingw compiler. Normally, one uses the following procedure to build and install a C/C++ extension: python setup.py build --compiler=your_compiler python setup.py install For Python 2.3.3 on Windows, with the Mingw (Minimalist GNU) compiler, the following steps must be taken: 1. Find your Mingw bin directory. Copy gcc.exe to cc.exe. 2. Get PExports from either of: http://sebsauvage.net/python/pexports-0.42h.zip http://starship.python.net/crew/kernr/mingw32/pexports-0.42h.zip Extract pexports.exe to your Mingw bin directory. 3. Find pythonxx.dll. It should be in your main Python directory. Do the following: pexports python23.dll > python23.def dlltool --dllname python23.dll --def python23.def --output-lib libpython23.a 4. Copy libpythonxx.a to \python\libs. 5. Patch distutils. Locate \python\lib\distutils\msvccompiler.py, open it, and find the following lines (around line 211): if len (self.__paths) == 0: raise DistutilsPlatformError, ("Python was built with version %s of Visual Studio, " "and extensions need to be built with the same " "version of the compiler, but it isn't installed." % self.__version) Delete these. 6. 
Move back to the directory of your extension. Do the following: python setup.py build --compiler=mingw32 python setup.py install Ideally, only step 6 should be required to install an extension. I submitted the patch for step 5 to python.sourceforge.net. Steps 2-4 can be avoided if the libpythonxx.a file is distributed with Python. Step 1 can probably be avoided with another patch. This document is based on http://sebsauvage.net/python/mingw.html, which was written for Mingw + Python 2.2. Thanks, Connelly ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-04 18:02 Message: Logged In: YES user_id=11105 python23.lib (from MSVC) is provided with the Windows installer because Python is built with that compiler. I refuse to distribute libpython23.a for MingW with the official installer mainly because I cannot test it, and I'm not clear if there are issues with different mingw versions. Wouldn't it be a better idea to provide a distutils patch which will build libpython23.a with mingw itself, if the library is not found? ---------------------------------------------------------------------- Comment By: Connelly (connelly) Date: 2004-05-25 20:23 Message: Logged In: YES user_id=1039782 I am using Mingw 3.1.0-1, released on Sep 15, 2003. It is the 'current' release of Mingw. I'm using Python 2.3.3. Issuing python setup.py install --compiler=mingw32 causes an error. I'm not sure which error -- I'll post it here tomorrow when I'm at the right machine. So I left off the --compiler option for the install step. This produced the error "Python was built with version %s of Visual Studio, and extensions need to be built with the same version of the compiler, but it isn't installed." This error is not produced if VC++ is installed. Thus you need to find a machine WITHOUT VC++ to test out the build process with Mingw32. I don't know how to tell MinGW to use python23.lib. Perhaps this is all the result of not passing the right flag to 'python setup.py install'? After all, it's using msvccompiler.py, which seems suspicious. I don't think I'm the only one having this problem. See: http://randomthoughts.vandorp.ca/WK/blog/758?t=item ---------------------------------------------------------------------- Comment By: Andrew I MacIntyre (aimacintyre) Date: 2004-05-23 10:11 Message: Logged In: YES user_id=250749 While older releases of MinGW certainly required this library fiddling, such as the MingW 1.1 pkg with gcc 2.95, I had been under the impression that recent versions could quite happily use the MSVC python23.lib. If you have tried this and had it fail, which version of MinGW? Using the --compiler=mingw32 build option should not require patching Distutils, though you probably have to use it for both build and install invocations. You can just issue the install command (with the --compiler switch) which will both build and install the extension.
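For reference, the build and install commands in step 6 assume a conventional distutils setup.py next to the extension source; a minimal sketch (the module name "demo" and source file demo.c are placeholders, not taken from this report) might look like:

# Minimal distutils script for a single C extension module; "demo" is a placeholder name.
from distutils.core import setup, Extension

setup(
    name="demo",
    version="0.1",
    ext_modules=[Extension("demo", sources=["demo.c"])],
)

With such a file in place, step 6 reduces to "python setup.py build --compiler=mingw32" followed by "python setup.py install".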
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957198&group_id=5470 From noreply at sourceforge.net Fri Jun 4 12:16:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 12:16:54 2004 Subject: [ python-Bugs-965991 ] Build Python 2.3.3 or 4 on RedHat Enterprise fails Message-ID: Bugs item #965991, was opened at 2004-06-03 11:48 Message generated for change (Comment added) made by reowen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965991&group_id=5470 Category: Build Group: None Status: Open Resolution: None Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Nobody/Anonymous (nobody) Summary: Build Python 2.3.3 or 4 on RedHat Enterprise fails Initial Comment: We recently upgraded from RH 9 to RH Enterprise 3. In detail: cat /etc/issue Red Hat Enterprise Linux WS release 3 (Taroon Update 2) cat /proc/version Linux version 2.4.21-15.EL (bhcompile@daffy.perf.redhat.com) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-34)) #1 Thu Apr 22 00:26:34 EDT 2004 I then tried to build a Python 2.3.3 from source for installation in /net/python. ./configure --prefix=/net/python --enable-unicode=ucs4 make the result is a python with no Tkinter. Looking at the output of ./configure I see: checking for UCS-4 tcl... no The following all result in the same problem: Omitting --enable-unicode=ucs4 Using --enable-unicode=ucs2 All the same but with Python 2.3.4. So...I tried a 2nd class of tests (both with Python 2.3.3 and 2.3.4, same bad result each time): I built my own Tcl/Tk using --prefix=/net/python (which went fine). I then edited Modules/Setup so that Python uses that. This time the make fails with: Modules/posixmodule.c:5788: the use of `tempnam' is dangerous, better use `mkstemp' case $MAKEFLAGS in *-s*) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py -q build;; *) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py build;; esac ./python: error while loading shared libraries: libtk8.4.so: cannot open shared object file: No such file or directory make: *** [sharedmods] Error 127 [rowen@apollo Python-2.3.3]$ ls -l /net/python I have attached the Setup I used for that 2nd class of tests. ---------------------------------------------------------------------- >Comment By: Russell Owen (reowen) Date: 2004-06-04 09:16 Message: Logged In: YES user_id=431773 False alarm, more or less. Problem #1 was caused by an incomplete installation of RH Enterprise (missing tcl.h and tk.h, at least). Totally my problem, no argument. Problem #2 is a bit more subtle. When I setenv LD_LIBRARY_PATH /net/ python/lib then everything built correctly. However, I still am puzzled about two things: * why is this necessary, when I already told Modules/Setup exactly where to look * since it is necessary, surely it would be wise to say so in the comments in Modules/Setup? Maybe any unix maven would already know that, but I'm certainly not a unix maven and I can't be the only person struggling with things like this. 
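As a quick post-build sanity check for the Tkinter problem described above, something along these lines (a sketch, not part of the report) shows whether _tkinter was built and which Tcl/Tk it linked against:

# Import the freshly built extension and report the Tcl/Tk version it was compiled for.
import _tkinter
print "Tcl", _tkinter.TCL_VERSION, "Tk", _tkinter.TK_VERSION

If this raises ImportError, or the dynamic loader cannot find libtk8.4.so as in the output above, the build did not produce a usable Tkinter.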
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965991&group_id=5470 From noreply at sourceforge.net Fri Jun 4 12:27:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 12:27:50 2004 Subject: [ python-Bugs-966623 ] execfile -> type() created objects w/ no __module__ error Message-ID: Bugs item #966623, was opened at 2004-06-05 01:46 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) >Summary: execfile -> type() created objects w/ no __module__ error Initial Comment: Apologies for the imprecise summary - I have no idea where the problem is here. Thanks to JP Calderone for this little horror. (distilled down from his example) bonanza% cat foo.py print type('F', (object,), {})().__class__.__module__ bonanza% python2.3 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set bonanza% python2.4 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-05 02:27 Message: Logged In: YES user_id=29957 The attached patch fixes this to raise an AttributeError if the object has no __module__. The other approach to fixing it would be to make sure that the object created always gets a __module__, but I have no idea in that case what a reasonable fix would be. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 From noreply at sourceforge.net Fri Jun 4 12:30:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 12:30:46 2004 Subject: [ python-Bugs-965991 ] Build Python 2.3.3 or 4 on RedHat Enterprise fails Message-ID: Bugs item #965991, was opened at 2004-06-04 04:48 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965991&group_id=5470 Category: Build >Group: Not a Bug >Status: Closed >Resolution: Rejected Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Nobody/Anonymous (nobody) Summary: Build Python 2.3.3 or 4 on RedHat Enterprise fails Initial Comment: We recently upgraded from RH 9 to RH Enterprise 3. In detail: cat /etc/issue Red Hat Enterprise Linux WS release 3 (Taroon Update 2) cat /proc/version Linux version 2.4.21-15.EL (bhcompile@daffy.perf.redhat.com) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-34)) #1 Thu Apr 22 00:26:34 EDT 2004 I then tried to build a Python 2.3.3 from source for installation in /net/python. ./configure --prefix=/net/python --enable-unicode=ucs4 make the result is a python with no Tkinter. Looking at the output of ./configure I see: checking for UCS-4 tcl... 
no The following all result in the same problem: Omitting --enable-unicode=ucs4 Using --enable-unicode=ucs2 All the same but with Python 2.3.4. So...I tried a 2nd class of tests (both with Python 2.3.3 and 2.3.4, same bad result each time): I built my own Tcl/Tk using --prefix=/net/python (which went fine). I then edited Modules/Setup so that Python uses that. This time the make fails with: Modules/posixmodule.c:5788: the use of `tempnam' is dangerous, better use `mkstemp' case $MAKEFLAGS in *-s*) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py -q build;; *) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py build;; esac ./python: error while loading shared libraries: libtk8.4.so: cannot open shared object file: No such file or directory make: *** [sharedmods] Error 127 [rowen@apollo Python-2.3.3]$ ls -l /net/python I have attached the Setup I used for that 2nd class of tests. ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-05 02:30 Message: Logged In: YES user_id=29957 The other approaches you could try: Add the appropriate directory to your /etc/ld.so.conf Add a -R/net/python/lib to the options to LD. Closing as not-a-bug, since you seemed to have fixed the problem. The problem sounds like you didn't have the tcl/tk -devel packages installed. ---------------------------------------------------------------------- Comment By: Russell Owen (reowen) Date: 2004-06-05 02:16 Message: Logged In: YES user_id=431773 False alarm, more or less. Problem #1 was caused by an incomplete installation of RH Enterprise (missing tcl.h and tk.h, at least). Totally my problem, no argument. Problem #2 is a bit more subtle. When I setenv LD_LIBRARY_PATH /net/ python/lib then everything built correctly. However, I still am puzzled about two things: * why is this necessary, when I already told Modules/Setup exactly where to look * since it is necessary, surely it would be wise to say so in the comments in Modules/Setup? Maybe any unix maven would already know that, but I'm certainly not a unix maven and I can't be the only person struggling with things like this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=965991&group_id=5470 From noreply at sourceforge.net Fri Jun 4 12:36:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 12:36:43 2004 Subject: [ python-Bugs-966623 ] execfile -> type() created objects w/ no __module__ error Message-ID: Bugs item #966623, was opened at 2004-06-04 16:46 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: execfile -> type() created objects w/ no __module__ error Initial Comment: Apologies for the imprecise summary - I have no idea where the problem is here. Thanks to JP Calderone for this little horror. (distilled down from his example) bonanza% cat foo.py print type('F', (object,), {})().__class__.__module__ bonanza% python2.3 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? 
File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set bonanza% python2.4 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-04 17:36 Message: Logged In: YES user_id=6656 Ah, I was about to attach the same test :-) Do add a test and use PEP 7 code if you check it in... ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-04 17:27 Message: Logged In: YES user_id=29957 The attached patch fixes this to raise an AttributeError if the object has no __module__. The other approach to fixing it would be to make sure that the object created always gets a __module__, but I have no idea in that case what a reasonable fix would be. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 From noreply at sourceforge.net Fri Jun 4 12:49:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 12:50:24 2004 Subject: [ python-Bugs-966623 ] execfile -> type() created objects w/ no __module__ error Message-ID: Bugs item #966623, was opened at 2004-06-05 01:46 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: execfile -> type() created objects w/ no __module__ error Initial Comment: Apologies for the imprecise summary - I have no idea where the problem is here. Thanks to JP Calderone for this little horror. (distilled down from his example) bonanza% cat foo.py print type('F', (object,), {})().__class__.__module__ bonanza% python2.3 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set bonanza% python2.4 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-05 02:49 Message: Logged In: YES user_id=29957 Is it better to fix this here, or in the type() call to make sure there's always a __module__ ? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-05 02:36 Message: Logged In: YES user_id=6656 Ah, I was about to attach the same test :-) Do add a test and use PEP 7 code if you check it in... ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-05 02:27 Message: Logged In: YES user_id=29957 The attached patch fixes this to raise an AttributeError if the object has no __module__. 
The other approach to fixing it would be to make sure that the object created always gets a __module__, but I have no idea in that case what a reasonable fix would be. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 From noreply at sourceforge.net Fri Jun 4 13:48:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 13:48:44 2004 Subject: [ python-Bugs-948517 ] LaTeX not required Message-ID: Bugs item #948517, was opened at 2004-05-05 07:36 Message generated for change (Comment added) made by aahz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=948517&group_id=5470 Category: Documentation Group: None >Status: Open >Resolution: None Priority: 5 Submitted By: Aahz (aahz) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: LaTeX not required Initial Comment: /doc.html and about.html ("Comments and Questions") should make clear that LaTeX is not required for submitting additions and changes to the docs. Suggested wording: Although we prefer additions and changes to be in the format prescribed by /doc.html, we welcome plain-text, too -- we're happy to handle the formatting for you. ---------------------------------------------------------------------- >Comment By: Aahz (aahz) Date: 2004-06-04 10:48 Message: Logged In: YES user_id=175591 I've seen many posts to c.l.py over the years where people have said that they don't submit doc patches because they don't know LaTeX. I'm not clear on why they think LaTeX is necessary; my suspicion is that it's because the "Documenting Python" link in the doc root is all about LaTeX. It seems to me that the simplest way of approaching this issue is to explicitly encourage people to send in plain-text docs. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-05-11 07:19 Message: Logged In: YES user_id=3066 I don't see anything in the locations you cite (if I understood correctly; they weren't completely clear) that even suggests LaTeX, much less implies a requirement for LaTeX. Closing as Invalid; if I'm not seeing something, please re-open with detailed information on what current content you find misleading. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=948517&group_id=5470 From noreply at sourceforge.net Fri Jun 4 14:26:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 14:26:29 2004 Subject: [ python-Feature Requests-779160 ] Enhance PackageManager functionality Message-ID: Feature Requests item #779160, was opened at 2003-07-28 20:52 Message generated for change (Comment added) made by hhas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=779160&group_id=5470 Category: Macintosh Group: None Status: Open Resolution: None Priority: 5 Submitted By: Bob Ippolito (etrepum) Assigned to: Jack Jansen (jackjansen) Summary: Enhance PackageManager functionality Initial Comment: PackageManager should probably let you choose a tarball/ zip/folder on disk (or drag + drop) and look for a setup.py that it would do the Usual Thing with. 
Perhaps also let you choose an arbitrary .py file and scan for 'distutils' -- but I've only seen a called-something-other-than-setup.py distutils installer in one package (it had multiple setup-prefixed installers that installed different related modules, and an annoying no-op setup.py.. so I would say this is a bug on his part). PackageManager should be refactored to be responsive and non-blocking (i.e. threaded or forking). ---------------------------------------------------------------------- Comment By: has (hhas) Date: 2004-06-04 18:26 Message: Logged In: YES user_id=996627 "PackageManager should probably let you choose a tarball/ zip/folder on disk (or drag + drop) and look for a setup.py that it would do the Usual Thing with." See my PyMod application for a simple standalone solution that fulfills this requirement: http://freespace.virgin.net/hamish.sanderson/PyMod.app.sit Supports building and installing distutils packages, and PyPI registration through a simple GUI. Could use a better name and icon, and needs a bit more user testing (only tested on 10.2.6 so far), but is otherwise ready to roll, and you're welcome to include it in the MacPython 2.4 distribution. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=779160&group_id=5470 From noreply at sourceforge.net Fri Jun 4 16:40:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 16:40:50 2004 Subject: [ python-Bugs-948517 ] LaTeX not required Message-ID: Bugs item #948517, was opened at 2004-05-05 10:36 Message generated for change (Comment added) made by fdrake You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=948517&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Aahz (aahz) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: LaTeX not required Initial Comment: /doc.html and about.html ("Comments and Questions") should make clear that LaTeX is not required for submitting additions and changes to the docs. Suggested wording: Although we prefer additions and changes to be in the format prescribed by /doc.html, we welcome plain-text, too -- we're happy to handle the formatting for you. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 16:40 Message: Logged In: YES user_id=3066 Ok, I think I understand where you're coming from now. We can approach this in one (or more) of two ways: - We can stop you from reading c.l.py so you don't see these reports of confusion. - I can write something about it. Let's assume we can't do the former without your consent. I'll plan on the latter regardless. Thanks for explaining the genesis of your initial report! ---------------------------------------------------------------------- Comment By: Aahz (aahz) Date: 2004-06-04 13:48 Message: Logged In: YES user_id=175591 I've seen many posts to c.l.py over the years where people have said that they don't submit doc patches because they don't know LaTeX. I'm not clear on why they think LaTeX is necessary; my suspicion is that it's because the "Documenting Python" link in the doc root is all about LaTeX. It seems to me that the simplest way of approaching this issue is to explicitly encourage people to send in plain-text docs. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr.
(fdrake) Date: 2004-05-11 10:19 Message: Logged In: YES user_id=3066 I don't see anything in the locations you cite (if I understood correctly; they weren't completely clear) that even suggests LaTeX, much less implies a requirement for LaTeX. Closing as Invalid; if I'm not seeing something, please re-open with detailed information on what current content you find misleading. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=948517&group_id=5470 From noreply at sourceforge.net Fri Jun 4 17:30:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 17:30:20 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 17:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So too are sufficiently muddled docs. While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago.) On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Fri Jun 4 17:35:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 17:35:16 2004 Subject: [ python-Bugs-960325 ] "require " configure option Message-ID: Bugs item #960325, was opened at 2004-05-25 15:07 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: "require " configure option Initial Comment: I'd like to be able to configure Python so that Configure or Make will fail if a particular feature is unavailable. Currently I'm concerned with SSL, which just gets a warning from Make: building '_ssl' extension *** WARNING: renaming "_ssl" since importing it failed: ld.so.1: ./python: fatal: libssl.so.0.9.8: open failed: No such file or directory Since that's buried in a lot of Make output, it's easy to miss. Besides, for semi-automatic builds it's in any case good to get a non-success exit status from the build process. Looking at the Make output, I see the bz2 extension is another example where this might be useful. Maybe the option would simply be '--enable-ssl', unless you want that to merely try to build with ssl. Or '--require=ssl,bz2,...'. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 17:35 Message: Logged In: YES user_id=593130 See item 964703 for further information and then decide. ---------------------------------------------------------------------- Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-02 07:56 Message: Logged In: YES user_id=726647 Ah, so that's what RFE means. You could rename that to 'Enhancement Requests'. Anyway, QoI issues tend to resemble bug issues more than enhancement issues, so '"bug" of type feature request' looks good to me. Though I'll resubmit as RFE if you ask. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 22:07 Message: Logged In: YES user_id=593130 Yes, this is not a PEP item. I didn't notice Feature Reqest since it is redundant vis a vis the separate RFE list. ---------------------------------------------------------------------- Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-01 14:13 Message: Logged In: YES user_id=726647 I marked it with Group: Feature Request. Not a bug, but a quality of implementation issue. It seemed more proper here than as a PEP. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 13:58 Message: Logged In: YES user_id=593130 Are you claiming that there is an actual bug, or is this merely an RFE (Request For Enhancement) item? 
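For illustration, the kind of check this request is after can be approximated today with a small post-build script run by the freshly built interpreter (a sketch; the module list and message are placeholders, not an existing configure option):

# Exit with a non-zero status if required extension modules failed to build.
import sys

REQUIRED = ["_ssl", "bz2"]

missing = []
for name in REQUIRED:
    try:
        __import__(name)
    except ImportError:
        missing.append(name)

if missing:
    sys.exit("missing required extension modules: " + ", ".join(missing))

This turns the silent renaming of _ssl into a hard failure that semi-automatic builds can detect.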
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960325&group_id=5470 From noreply at sourceforge.net Fri Jun 4 17:58:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 17:59:03 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by fdrake You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 17:58 Message: Logged In: YES user_id=3066 I will note that not everyone agrees on this. Having to look in multiple trackers is quite painful as well. The separation of the RFE tracker would be less of a problem if there were no "patches" tracker; a patch should be attached to a bug report or to a feature request, not separate. Grr. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 17:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So to are sufficiently muddled docs. While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago.) On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 01:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. That's why the RFE tracker was added. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 00:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. 
That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Fri Jun 4 20:10:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 4 20:11:13 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-04 20:10 Message: Logged In: YES user_id=6380 We should switch to RoundUp on python.org. A single unified tracker that we can modify. Grr. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 17:58 Message: Logged In: YES user_id=3066 I will note that not everyone agrees on this. Having to look in multiple trackers is quite painful as well. The separation of the RFE tracker would be less of a problem if there were no "patches" tracker; a patch should be attached to a bug report or to a feature request, not separate. Grr. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 17:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So to are sufficiently muddled docs. While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago.) On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 01:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. That's why the RFE tracker was added. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 00:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Sat Jun 5 01:29:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 01:29:34 2004 Subject: [ python-Bugs-966625 ] Documentation for Descriptors in the main docs Message-ID: Bugs item #966625, was opened at 2004-06-04 10:50 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966625&group_id=5470 Category: Documentation Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Raide (raide) Assigned to: Nobody/Anonymous (nobody) Summary: Documentation for Descriptors in the main docs Initial Comment: It would be good to see something like: http://users.rcn.com/python/download/Descriptor.htm included with the proper documentation. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 00:29 Message: Logged In: YES user_id=80475 FWIW, I put much of this information in the Library Reference Manual section 3.3.2, Customizing attribute access. Being part of the technical reference, it is dry reading and does not take a tutorial approach with overviews, examples, and pure python equivalents. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966625&group_id=5470 From noreply at sourceforge.net Sat Jun 5 01:39:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 01:39:09 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 07:39 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. 
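For illustration, the behaviour requested here amounts to something like the following wrapper (a sketch; checked_choice is a hypothetical helper, not the actual random.py code):

import random

def checked_choice(seq):
    # Reject empty sequences up front with a clearer exception than the bare IndexError.
    if not seq:
        raise ValueError("sequence must not be empty")
    return random.choice(seq)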
---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 00:39 Message: Logged In: YES user_id=80475 -0 While a ValueError would be appropriate, the status quo doesn't bug me much and changing it could break code if someone is currently trapping the IndexError. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Sat Jun 5 02:17:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 02:17:52 2004 Subject: [ python-Bugs-963956 ] Bad error mesage when subclassing a module Message-ID: Bugs item #963956, was opened at 2004-05-31 21:18 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963956&group_id=5470 Category: Type/class unification Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Edward C. Jones (edcjones) Assigned to: Nobody/Anonymous (nobody) Summary: Bad error mesage when subclassing a module Initial Comment: I made a common mistake and ended up trying to subclass a module. Here are two small python files: ---- a.py: class a(object): pass ---- b.py: import a class B(a): # Should be "a.a". pass ---- If I run "python b.py" I get the uninformative error message: Traceback (most recent call last): File "./b.py", line 3, in ? class B(a): TypeError: function takes at most 2 arguments (3 given) I think the message should say that "a" is not a class. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 01:17 Message: Logged In: YES user_id=80475 Fixed. See Python/ceval.c 2.398. Thanks for the report. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963956&group_id=5470 From noreply at sourceforge.net Sat Jun 5 02:19:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 02:19:09 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 12:39 Message generated for change (Comment added) made by hoffmanm You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. ---------------------------------------------------------------------- >Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 06:19 Message: Logged In: YES user_id=987664 I thought of that after I submitted. 
:-) Might it be better to raise an IndexError with a message similar to "sequence must not be empty" instead? It would just make debugging that much easier. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 05:39 Message: Logged In: YES user_id=80475 -0 While a ValueError would be appropriate, the status quo doesn't bug me much and changing it could break code if someone is currently trapping the IndexError. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Sat Jun 5 02:19:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 02:20:04 2004 Subject: [ python-Bugs-944890 ] csv writer bug on windows Message-ID: Bugs item #944890, was opened at 2004-04-29 16:06 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Brian Kelley (wc2so1) >Assigned to: Skip Montanaro (montanaro) Summary: csv writer bug on windows Initial Comment: The excel dialect is set up to be class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False lineterminator = '\r\n' quoting = QUOTE_MINIMAL register_dialect("excel", excel) However, on the windows platform, the lineterminator should be simply "\n" My suggested fix is: class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False if sys.platform == "win32": lineterminator = '\n' else: lineterminator = '\r\n' quoting = QUOTE_MINIMAL Which seems to work. It could be that I'm missing something, but the universal readlines doesn't appear to work for writing files. If this is a usage issue, it probably should be a documentation fix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 From noreply at sourceforge.net Sat Jun 5 02:22:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 02:22:12 2004 Subject: [ python-Bugs-924703 ] test_unicode_file fails on Win98SE Message-ID: Bugs item #924703, was opened at 2004-03-27 20:48 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 Category: Unicode Group: Python 2.4 Status: Open Resolution: None >Priority: 6 Submitted By: Tim Peters (tim_one) Assigned to: Martin v. L?wis (loewis) Summary: test_unicode_file fails on Win98SE Initial Comment: In current CVS, test_unicode_file fails on Win98SE. This has been going on for some time, actually. 
ERROR: test_single_files (__main__.TestUnicodeFiles) Traceback (most recent call last): File ".../lib/test/test_unicode_file.py", line 162, in test_single_files self._test_single(TESTFN_UNICODE) File ".../lib/test/test_unicode_file.py", line 136, in _test_single self._do_single(filename) File ".../lib/test/test_unicode_file.py", line 49, in _do_single new_base = unicodedata.normalize("NFD", new_base) TypeError: normalized() argument 2 must be unicode, not str At this point, filename is TESTFN_UNICODE is u'@test-\xe0\xf2' os.path.abspath(filename) is 'C:\Code\python\PC\VC6\@test-\xe0\xf2' new_base is '@test-\xe0\xf2 So abspath() removed the "Unicodeness" of filename, and new_base is indeed not a Unicode string at this point. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-30 00:44 Message: Logged In: YES user_id=31435 Just a guess: the os.path functions are generally just string manipulation, and on Windows I sometimes import posixpath.py directly to do Unixish path manipulations. So it's conceivable that someone (not me) on a non-Windows box imports ntpath directly to manipulate Windows paths. In fact, I see that Fredrik's "Python Standard Library" book explicitly mentions this use case for importing ntpath directly. So maybe he actually did it -- once . ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-30 00:25 Message: Logged In: YES user_id=21627 I see. I'll look into changing _getfullpathname to return Unicode output for Unicode input even if unicode_file_names() is false. However, I do wonder what the purpose of _abspath then is: On what system would it be used??? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-29 18:11 Message: Logged In: YES user_id=31435 Nope, that can't help -- ntpath.py's _abspath doesn't exist on Win98SE (the "from nt import _getfullpathname" succeeds, so _abspath is never defined). It's _getfullpathname() that's taking a Unicode input and returning a str output here. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-29 17:17 Message: Logged In: YES user_id=21627 abspath(unicode) should return a Unicode path. 
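The invariant under discussion can be written as a two-line check (a sketch of the expectation; it is effectively this assertion that fails on Win98SE):

import os

# abspath() of a unicode argument is expected to stay unicode; here it comes back as str.
p = os.path.abspath(u'@test-\xe0\xf2')
assert isinstance(p, unicode), "abspath dropped the Unicode type: %r" % (p,)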
Does it help if _abspath (in ntpath.py) is changed to contain if not isabs(path): if isinstance(path, unicode): cwd = os.getcwdu() else: cwd = os.getcwd() path = join(cwd, path) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 From noreply at sourceforge.net Sat Jun 5 02:41:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 02:41:53 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 07:39 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None >Priority: 2 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 01:41 Message: Logged In: YES user_id=80475 That's better, but I'm still -0. This function is apt to be called inside a loop, so it would be a bummer to slow down everyone's code just to rewrite the error message. For better or worse, it is the nature of Python tracebacks to raise RoadWeavingErrors when a DontDrinkAndDriveWarning would be more to the point. I recommend closing this, but if you really think it's an essential life saver, then assign to Tim and see if you can persuade him. ---------------------------------------------------------------------- Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 01:19 Message: Logged In: YES user_id=987664 I thought of that after I submitted. :-) Might it be better to raise an IndexError with a message similar to "sequence must not be empty" instead? It would just make debugging that much easier. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 00:39 Message: Logged In: YES user_id=80475 -0 While a ValueError would be appropriate, the status quo doesn't bug me much and changing it could break code if someone is currently trapping the IndexError. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Sat Jun 5 02:45:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 02:45:29 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-02 04:14 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-05 08:45 Message: Logged In: YES user_id=21627 fdrake: I deal with the RFE tracker by never (or very infrequently) looking at it. If Python has 700 bugs, and 250 unreviewed patches, I'm not going to implement features that people have requested just because they requested them. So I would be happy if the RFE tracker grew to 700 items, if the bugs tracker shrank to 150 entries simultaneously. Only at that point I will wonder what to do next, and look at RFEs. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-05 02:10 Message: Logged In: YES user_id=6380 We should switch to RoundUp on python.org. A single unified tracker that we can modify. Grr. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 23:58 Message: Logged In: YES user_id=3066 I will note that not everyone agrees on this. Having to look in multiple trackers is quite painful as well. The separation of the RFE tracker would be less of a problem if there were no "patches" tracker; a patch should be attached to a bug report or to a feature request, not separate. Grr. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 23:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So to are sufficiently muddled docs. While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago.) On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 07:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. 
That's why the RFE tracker was added. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 06:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 04:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Sat Jun 5 04:59:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 04:59:40 2004 Subject: [ python-Bugs-966992 ] cgitb.scanvars fails Message-ID: Bugs item #966992, was opened at 2004-06-05 08:59 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966992&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Robin Becker (rgbecker) Assigned to: Nobody/Anonymous (nobody) Summary: cgitb.scanvars fails Initial Comment: Under certain circumstances cgitb.scanvars fails because of an uninitialized value variable. This bug is present in 2.3.3 and 2.4a0. The following script demonstrates #####start import cgitb;cgitb.enable() def err(L): if 'never' in L: return if 1: print '\n'.join(L) v=2 err(['',None]) #####finish when run, this results in mangled output because scanvars attempts to evaluate '\n'.join(L) where L=['',None]. A fix is to set value=__UNDEF__ at the start of scanvars.
Index: cgitb.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/cgitb.py,v retrieving revision 1.11 diff -r1.11 cgitb.py 63c63 < vars, lasttoken, parent, prefix = [], None, None, '' --- > vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966992&group_id=5470 From noreply at sourceforge.net Sat Jun 5 05:24:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 05:24:18 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 12:39 Message generated for change (Comment added) made by hoffmanm You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 2 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. ---------------------------------------------------------------------- >Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 09:24 Message: Logged In: YES user_id=987664 You have a point. What about rewriting the line to read: return seq[int(self.random() * len(seq))] # raises IndexError if seq is empty The comment would be a good hint. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 06:41 Message: Logged In: YES user_id=80475 That's better, but I'm still -0. This function is apt to be called inside a loop, so it would be a bummer to slow down everyone's code just to rewrite the error message. For better or worse, it is the nature of Python tracebacks to raise RoadWeavingErrors when a DontDrinkAndDriveWarning would be more to the point. I recommend closing this, but if you really think it's an essential life saver, then assign to Tim and see if you can persuade him. ---------------------------------------------------------------------- Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 06:19 Message: Logged In: YES user_id=987664 I thought of that after I submitted. :-) Might it be better to raise an IndexError with a message similar to "sequence must not be empty" instead? It would just make debugging that much easier. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 05:39 Message: Logged In: YES user_id=80475 -0 While a ValueError would be appropriate, the status quo doesn't bug me much and changing it could break code if someone is currently trapping the IndexError. 
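A minimal sketch of the caller-side option discussed in this thread: rather than changing random.choice itself, a program that wants a clearer error can guard the call in its own helper. The name safe_choice is made up for illustration; random.choice keeps its current behaviour.

import random

def safe_choice(seq):
    # hypothetical wrapper: fail with an explicit message on an empty
    # sequence instead of the bare IndexError from random.choice
    if not seq:
        raise ValueError('sequence must not be empty')
    return random.choice(seq)

print safe_choice(['a', 'b', 'c'])
try:
    safe_choice([])
except ValueError, e:
    print 'caught:', e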
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Sat Jun 5 06:30:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 06:30:33 2004 Subject: [ python-Bugs-954364 ] 2.3 inspect.getframeinfo wrong tb line numbers Message-ID: Bugs item #954364, was opened at 2004-05-15 08:41 Message generated for change (Comment added) made by rgbecker You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robin Becker (rgbecker) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3 inspect.getframeinfo wrong tb line numbers Initial Comment: inspect.getframeinfo always uses f_lineno even when passed a tb. In practice it is not always true that tb.tb_frame.f_lineno==tb.tb_lineno so user functions like inspect.getinnerframes (and hence cgitb) will get wrong information back sometimes. I fixed this using the attached patch. This script illustrates via cgitb. ############ def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() ############ ---------------------------------------------------------------------- >Comment By: Robin Becker (rgbecker) Date: 2004-06-05 10:30 Message: Logged In: YES user_id=6946 Whoops tabs and all that the above script should look like #####start def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() #####finish The same problem occurs with inspect.trace. this code #####start import inspect try: raise ValueError except: print inspect.trace() #####finish produces [(, 'C:\tmp\ttt.py', 5, '?', [' print inspect.trace()\n'], 0)] ie the returned trace doesn't reflect the current traceback, but rather the current frame location. This is also fixed by the same patch. Still present in 2.4a0. This is a semantic issue. The current test_inspect.py insists that the above behaviour is correct so it will also need modifying if the fix goes in. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 From noreply at sourceforge.net Sat Jun 5 08:44:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 08:44:29 2004 Subject: [ python-Bugs-954364 ] 2.3 inspect.getframeinfo wrong tb line numbers Message-ID: Bugs item #954364, was opened at 2004-05-15 08:41 Message generated for change (Comment added) made by rgbecker You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robin Becker (rgbecker) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3 inspect.getframeinfo wrong tb line numbers Initial Comment: inspect.getframeinfo always uses f_lineno even when passed a tb. 
In practice it is not always true that tb.tb_frame.f_lineno==tb.tb_lineno so user functions like inspect.getinnerframes (and hence cgitb) will get wrong information back sometimes. I fixed this using the attached patch. This script illustrates via cgitb. ############ def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() ############ ---------------------------------------------------------------------- >Comment By: Robin Becker (rgbecker) Date: 2004-06-05 12:44 Message: Logged In: YES user_id=6946 python-2.4a0-954364.patch affects inspect.py, test_inspect.py fixes this for me in 2.4a0 and 2.3.3, but the semantics of inspect.trace changed so that test_inspect.py must alter. If inspect.trace() should always return something starting at the call of inspect.trace then this patch is *WRONG* ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 10:30 Message: Logged In: YES user_id=6946 Whoops tabs and all that the above script should look like #####start def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() #####finish The same problem occurs with inspect.trace. this code #####start import inspect try: raise ValueError except: print inspect.trace() #####finish produces [(, 'C:\tmp\ttt.py', 5, '?', [' print inspect.trace()\n'], 0)] ie the returned trace doesn't reflect the current traceback, but rather the current frame location. This is also fixed by the same patch. Still present in 2.4a0. This is a semantic issue. The current test_inspect.py insists that the above behaviour is correct so it will also need modifying if the fix goes in. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 From noreply at sourceforge.net Sat Jun 5 08:56:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 08:56:42 2004 Subject: [ python-Bugs-841757 ] xmlrpclib chokes on Unicode keys Message-ID: Bugs item #841757, was opened at 2003-11-13 16:43 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=841757&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Accepted Priority: 5 Submitted By: Fredrik Lundh (effbot) >Assigned to: A.M. Kuchling (akuchling) Summary: xmlrpclib chokes on Unicode keys Initial Comment: the type check in dump_struct is too strict; I suggest changing the loop to: for k, v in value.items(): write("\n") if type(k) is UnicodeType: k = k.encode(self.encoding) elif type(k) is not StringType: raise TypeError, "dictionary key must be string" write("%s\n" % escape(k)) dump(v, write) write("\n") ths applies to all Python versions since 2.2. backport as necessary. regards /F ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 08:56 Message: Logged In: YES user_id=11375 Applied to CVS HEAD and to the 2.3 maintenance branch. Thanks, Fredrik! 
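The key handling suggested in the initial comment can be exercised on its own; the following is only a sketch of that check outside xmlrpclib, where encode_key is a made-up name and 'utf-8' merely stands in for the marshaller's self.encoding.

from types import StringType, UnicodeType

def encode_key(k, encoding='utf-8'):
    # sketch of the dump_struct key check described above
    if type(k) is UnicodeType:
        return k.encode(encoding)       # Unicode keys are encoded
    elif type(k) is not StringType:
        raise TypeError, "dictionary key must be string"
    return k                            # plain strings pass through

print repr(encode_key(u'cl\xe9'))
print repr(encode_key('plain'))
try:
    encode_key(42)
except TypeError, e:
    print 'rejected:', e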
---------------------------------------------------------------------- Comment By: Fredrik Lundh (effbot) Date: 2003-11-13 17:29 Message: Logged In: YES user_id=38376 The XML-RPC specification wasn't exactly clear on this issue, and was changed earlier this year. See http://www.effbot.org/zone/xmlrpc-errata.htm http://www.effbot.org/zone/xmlrpc-ascii.htm And even if the spec hadn't been changed, xmlrpclib has always "done the right thing" with Unicode strings in all other cases. Trust me, I wrote the code, so I should know ;-) ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2003-11-13 17:10 Message: Logged In: YES user_id=44345 I long ago got tired of Dave Wiener's stance on the XML-RPC protocol and got off any mailing lists he participated in, but I seem to recall that he was adamant about not allowing anything but ASCII characters. Accordingly, either Unicode encodings would be verboten or some other transformation should be applied to them to make them truly ASCII (like % encoding them). Has Dave's position on whether or not to accept non-ASCII on the wire changed? ---------------------------------------------------------------------- Comment By: Fredrik Lundh (effbot) Date: 2003-11-13 17:00 Message: Logged In: YES user_id=38376 to avoid a performance hit for existing code, the type test is better written as: if type(k) is not StringType: if unicode and type(k) is UnicodeType: k = k.encode(self.encoding) else: raise TypeError, "dictionary key must be string" (this also works under 1.5.2) (see attachment for a more readable "patch"). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=841757&group_id=5470 From noreply at sourceforge.net Sat Jun 5 09:01:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 09:02:03 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Sat Jun 5 09:11:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 09:11:33 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: float_subtype_new() bug Initial Comment: A rather obscure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. 
In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Sat Jun 5 09:20:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 09:20:21 2004 Subject: [ python-Bugs-954364 ] 2.3 inspect.getframeinfo wrong tb line numbers Message-ID: Bugs item #954364, was opened at 2004-05-15 04:41 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robin Becker (rgbecker) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3 inspect.getframeinfo wrong tb line numbers Initial Comment: inspect.getframeinfo always uses f_lineno even when passed a tb. In practice it is not always true that tb.tb_frame.f_lineno==tb.tb_lineno so user functions like inspect.getinnerframes (and hence cgitb) will get wrong information back sometimes. I fixed this using the attached patch. This script illustrates via cgitb. 
############ def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() ############ ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 09:20 Message: Logged In: YES user_id=11375 In the test program, the exception is raised at line 3, but the .trace() call says it was at line 5. ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 08:44 Message: Logged In: YES user_id=6946 python-2.4a0-954364.patch affects inspect.py, test_inspect.py fixes this for me in 2.4a0 and 2.3.3, but the semantics of inspect.trace changed so that test_inspect.py must alter. If inspect.trace() should always return something starting at the call of inspect.trace then this patch is *WRONG* ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 06:30 Message: Logged In: YES user_id=6946 Whoops tabs and all that the above script should look like #####start def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() #####finish The same problem occurs with inspect.trace. this code #####start import inspect try: raise ValueError except: print inspect.trace() #####finish produces [(, 'C:\tmp\ttt.py', 5, '?', [' print inspect.trace()\n'], 0)] ie the returned trace doesn't reflect the current traceback, but rather the current frame location. This is also fixed by the same patch. Still present in 2.4a0. This is a semantic issue. The current test_inspect.py insists that the above behaviour is correct so it will also need modifying if the fix goes in. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 From noreply at sourceforge.net Sat Jun 5 09:35:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 09:35:19 2004 Subject: [ python-Bugs-966992 ] cgitb.scanvars fails Message-ID: Bugs item #966992, was opened at 2004-06-05 08:59 Message generated for change (Comment added) made by rgbecker You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966992&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Robin Becker (rgbecker) Assigned to: Nobody/Anonymous (nobody) Summary: cgitb.scanvars fails Initial Comment: Under certain circumstances cgitb.scanvars fails with because of an unititialized value variable. This bug is present in 2.3.3 and 2.4a0. The following script demonstrates #####start import cgitb;cgitb.enable() def err(L): if 'never' in L: return if 1: print '\n'.join(L) v=2 err(['',None]) #####finish when run this results in mangled output because scanvars attempts to evaluate '\n'.join(L) where L=['',None]. A fix is to set value=__UNDEF__ at the start of scanvars. 
Index: cgitb.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/cgitb.py,v retrieving revision 1.11 diff -r1.11 cgitb.py 63c63 < vars, lasttoken, parent, prefix = [], None, None, '' --- > vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__ ---------------------------------------------------------------------- >Comment By: Robin Becker (rgbecker) Date: 2004-06-05 13:35 Message: Logged In: YES user_id=6946 The script cgitb_scanvars_bug.py produces mangled output under 2.3.3 and 2.4a0. The error is caused by an error during evaluation in cgitb.scanvars. I believe a fix is to intialize value to __UNDEF__. Patch python-2.4a0-966992.patch appears to fix in 2.3.3 and 2.4a0. There may be other ways in which scanvars can error, but I haven't found them yet :( ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966992&group_id=5470 From noreply at sourceforge.net Sat Jun 5 09:37:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 09:38:02 2004 Subject: [ python-Bugs-954364 ] 2.3 inspect.getframeinfo wrong tb line numbers Message-ID: Bugs item #954364, was opened at 2004-05-15 04:41 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robin Becker (rgbecker) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3 inspect.getframeinfo wrong tb line numbers Initial Comment: inspect.getframeinfo always uses f_lineno even when passed a tb. In practice it is not always true that tb.tb_frame.f_lineno==tb.tb_lineno so user functions like inspect.getinnerframes (and hence cgitb) will get wrong information back sometimes. I fixed this using the attached patch. This script illustrates via cgitb. ############ def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() ############ ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 09:37 Message: Logged In: YES user_id=11375 Alternative version of the patch. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 09:20 Message: Logged In: YES user_id=11375 In the test program, the exception is raised at line 3, but the .trace() call says it was at line 5. ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 08:44 Message: Logged In: YES user_id=6946 python-2.4a0-954364.patch affects inspect.py, test_inspect.py fixes this for me in 2.4a0 and 2.3.3, but the semantics of inspect.trace changed so that test_inspect.py must alter. 
If inspect.trace() should always return something starting at the call of inspect.trace then this patch is *WRONG* ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 06:30 Message: Logged In: YES user_id=6946 Whoops tabs and all that the above script should look like #####start def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() #####finish The same problem occurs with inspect.trace. this code #####start import inspect try: raise ValueError except: print inspect.trace() #####finish produces [(, 'C:\tmp\ttt.py', 5, '?', [' print inspect.trace()\n'], 0)] ie the returned trace doesn't reflect the current traceback, but rather the current frame location. This is also fixed by the same patch. Still present in 2.4a0. This is a semantic issue. The current test_inspect.py insists that the above behaviour is correct so it will also need modifying if the fix goes in. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 From noreply at sourceforge.net Sat Jun 5 09:43:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 09:43:14 2004 Subject: [ python-Bugs-964861 ] Cookie module does not parse date Message-ID: Bugs item #964861, was opened at 2004-06-02 09:02 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: Cookie module does not parse date Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. The standard Cookie module does not parse date string. Here is and example: >>> import Cookie >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; In the attached file I have reported the correct (I think) regex pattern. Thanks and Regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:43 Message: Logged In: YES user_id=1057404 This bug is in error; RFC2109 specifies the BNF grammar as: av-pairs = av-pair *(";" av-pair) av-pair = attr ["=" value] ; optional value attr = token value = word word = token | quoted-string If you surround the date in double quotes, as per the RFC, then the above works correctly. 
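For comparison with the session in the initial report, the quoted form that the preceding comment refers to would look like this (same header, with the expires value wrapped in double quotes). Per that comment, the full date should then survive parsing instead of being truncated at the comma.

import Cookie

s = 'Set-Cookie: key=value; path=/; expires="Fri, 21-May-2004 10:40:51 GMT"'
c = Cookie.BaseCookie(s)
print c   # the expires attribute should now come through intact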
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 From noreply at sourceforge.net Sat Jun 5 10:07:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 10:07:51 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-05-01 01:19 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 16:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Sat Jun 5 10:16:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 10:16:13 2004 Subject: [ python-Bugs-954364 ] 2.3 inspect.getframeinfo wrong tb line numbers Message-ID: Bugs item #954364, was opened at 2004-05-15 04:41 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Robin Becker (rgbecker) >Assigned to: A.M. Kuchling (akuchling) Summary: 2.3 inspect.getframeinfo wrong tb line numbers Initial Comment: inspect.getframeinfo always uses f_lineno even when passed a tb. In practice it is not always true that tb.tb_frame.f_lineno==tb.tb_lineno so user functions like inspect.getinnerframes (and hence cgitb) will get wrong information back sometimes. I fixed this using the attached patch. This script illustrates via cgitb. 
############ def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() ############ ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 10:16 Message: Logged In: YES user_id=11375 Applied to both CVS HEAD and the 2.3 branch. Thanks! ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 09:37 Message: Logged In: YES user_id=11375 Alternative version of the patch. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 09:20 Message: Logged In: YES user_id=11375 In the test program, the exception is raised at line 3, but the .trace() call says it was at line 5. ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 08:44 Message: Logged In: YES user_id=6946 python-2.4a0-954364.patch affects inspect.py, test_inspect.py fixes this for me in 2.4a0 and 2.3.3, but the semantics of inspect.trace changed so that test_inspect.py must alter. If inspect.trace() should always return something starting at the call of inspect.trace then this patch is *WRONG* ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 06:30 Message: Logged In: YES user_id=6946 Whoops tabs and all that the above script should look like #####start def raise_an_error(): a = 3 b = 4 c = 0 try: a = a/c except: import sys, cgitb, traceback, inspect tbt,tbv,tb = sys.exc_info() print 'traceback\n',''.join(traceback.format_exception(tbt,tbv,tb)) print '\n\ncgitb\n',cgitb.text((tbt,tbv,tb),1) raise_an_error() #####finish The same problem occurs with inspect.trace. this code #####start import inspect try: raise ValueError except: print inspect.trace() #####finish produces [(, 'C:\tmp\ttt.py', 5, '?', [' print inspect.trace()\n'], 0)] ie the returned trace doesn't reflect the current traceback, but rather the current frame location. This is also fixed by the same patch. Still present in 2.4a0. This is a semantic issue. The current test_inspect.py insists that the above behaviour is correct so it will also need modifying if the fix goes in. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=954364&group_id=5470 From noreply at sourceforge.net Sat Jun 5 10:37:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 10:37:35 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. 
Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-05 10:37 Message: Logged In: YES user_id=593130 loewis: I currently interprete your advice to me as "Yes, encourage movement of RFEs to the RFE list so I can focus on real bugs" rather than "No, leave them so I can reward queue jumpers with attention I would not otherwise give". ;-) There are people who *have* read and responded to RFEs (about half have been closed). Even those rejected usually get more explanation than simply "Not a bug" all: I agree that a tracker that we can modify with experience to make review and response easier would be great. I agree with fdrake about patch confusion. I think there should either be no separate patch list (but a way to pull out items with an open patch), or all patches should be on a separate patch list, but linked to a discussion-only item. I'd like a way to select boilerplate responses and check off the presence of required patch features. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-05 02:45 Message: Logged In: YES user_id=21627 fdrake: I deal with the RFE tracker by never (or very infrequently) looking at it. If Python has 700 bugs, and 250 unreviewed patches, I'm not going to implement features that people have requested just because they requested them. So I would be happy if the RFE tracker grew to 700 items, if the bugs tracker shrank to 150 entries simultaneously. Only at that point I will wonder what to do next, and look at RFEs. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-04 20:10 Message: Logged In: YES user_id=6380 We should switch to RoundUp on python.org. A single unified tracker that we can modify. Grr. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 17:58 Message: Logged In: YES user_id=3066 I will note that not everyone agrees on this. Having to look in multiple trackers is quite painful as well. The separation of the RFE tracker would be less of a problem if there were no "patches" tracker; a patch should be attached to a bug report or to a feature request, not separate. Grr. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 17:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So to are sufficiently muddled docs. While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago.) On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 01:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. That's why the RFE tracker was added. 
---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 00:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Sat Jun 5 10:38:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 10:38:31 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-05-01 01:19 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 16:38 Message: Logged In: YES user_id=469548 New patch, the docs say we should use 'Windows' instead of 'Microsoft Windows', so we do: Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:39:10 -0000 @@ -957,6 +957,8 @@ # platforms if use_syscmd_ver: system,release,version = _syscmd_ver(system) + if string.find(system, 'Microsoft Windows') != -1: + system = 'Windows' # In case we still don't know anything useful, we'll try to # help ourselves ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 16:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. 
Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Sat Jun 5 10:54:05 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 10:54:12 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 07:39 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library >Group: Python 2.4 >Status: Closed Resolution: None Priority: 2 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 09:54 Message: Logged In: YES user_id=80475 Okay. Fixed. See Lib/random.py 1.61. ---------------------------------------------------------------------- Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 04:24 Message: Logged In: YES user_id=987664 You have a point. What about rewriting the line to read: return seq[int(self.random() * len(seq))] # raises IndexError if seq is empty The comment would be a good hint. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 01:41 Message: Logged In: YES user_id=80475 That's better, but I'm still -0. This function is apt to be called inside a loop, so it would be a bummer to slow down everyone's code just to rewrite the error message. For better or worse, it is the nature of Python tracebacks to raise RoadWeavingErrors when a DontDrinkAndDriveWarning would be more to the point. I recommend closing this, but if you really think it's an essential life saver, then assign to Tim and see if you can persuade him. ---------------------------------------------------------------------- Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 01:19 Message: Logged In: YES user_id=987664 I thought of that after I submitted. :-) Might it be better to raise an IndexError with a message similar to "sequence must not be empty" instead? 
It would just make debugging that much easier. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 00:39 Message: Logged In: YES user_id=80475 -0 While a ValueError would be appropriate, the status quo doesn't bug me much and changing it could break code if someone is currently trapping the IndexError. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Sat Jun 5 10:57:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 10:57:40 2004 Subject: [ python-Bugs-952807 ] segfault in subclassing datetime.date & pickling Message-ID: Bugs item #952807, was opened at 2004-05-13 04:30 Message generated for change (Comment added) made by jiwon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Thomas Wouters (twouters) Assigned to: Nobody/Anonymous (nobody) Summary: segfault in subclassing datetime.date & pickling Initial Comment: datetime.date does not take subclassing into account properly. datetime.date's tp_new has special code for unpickling (the single-string argument) which calls PyObject_New() directly, which doesn't account for the fact that subclasses may participate in cycle-gc (even if datetime.date objects do not.) The result is a segfault in code that unpickles instances of subclasses of datetime.date: import pickle, datetime class mydate(datetime.date): pass s = pickle.dumps(mydate.today()) broken = pickle.loads(s) del broken The 'del broken' is what causes the segfault: the 'mydate' class/type is supposed to participate in GC, but because of datetime.date's shortcut, that part of the object is never initialized (nor allocated, I presume.) The 'broken' instance reaches 0 refcounts, the GC gets triggered and it reads garbage memory. To 'prove' that the problem isn't caused by pickle itself: class mydate(datetime.date): pass broken = mydate('\x07\xd4\x05\x0c') del broken causes the same crash, in the GC code. ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 23:57 Message: Logged In: YES user_id=595483 Here is the patch of datetimemodule and test code for it. I just read the summary, and made the datetimemodule patch as is said, and added a testcode for it. *** Modules/datetimemodule.c.orig Sat Jun 5 23:49:26 2004 --- Modules/datetimemodule.c Sat Jun 5 23:47:05 2004 *************** *** 2206,2212 **** { PyDateTime_Date *me; ! me = PyObject_New(PyDateTime_Date, type); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); --- 2206,2212 ---- { PyDateTime_Date *me; ! me = (PyDateTime_Date *) (type->tp_alloc(type, 0)); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); test code patch *** Lib/test/test_datetime.py.orig Sat Jun 5 23:49:44 2004 --- Lib/test/test_datetime.py Sat Jun 5 23:52:52 2004 *************** *** 510,515 **** --- 510,517 ---- dt2 = dt - delta self.assertEqual(dt2, dt - days) + class SubclassDate(date): pass + class TestDate(HarmlessMixedComparison): # Tests here should pass for both dates and datetimes, except for a # few tests that TestDateTime overrides. 
*************** *** 1028,1033 **** --- 1030,1044 ---- self.assertEqual(dt2.extra, 7) self.assertEqual(dt1.toordinal(), dt2.toordinal()) self.assertEqual(dt2.newmeth(-7), dt1.year + dt1.month - 7) + + def test_pickling_subclass_date(self): + + args = 6, 7, 23 + orig = SubclassDate(*args) + for pickler, unpickler, proto in pickle_choices: + green = pickler.dumps(orig, proto) + derived = unpickler.loads(green) + self.assertEqual(orig, derived) def test_backdoor_resistance(self): # For fast unpickling, the constructor accepts a pickle string. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:08:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:08:19 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 20:29 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 17:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:32:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:32:23 2004 Subject: [ python-Bugs-921657 ] HTMLParser ParseError in start tag Message-ID: Bugs item #921657, was opened at 2004-03-23 05:17 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=921657&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Bernd Zimmermann (bernd_zedv) >Assigned to: A.M. Kuchling (akuchling) Summary: HTMLParser ParseError in start tag Initial Comment: when this - obviously correct html - is parsed: xyz this exception is raised: HTMLParseError: junk characters in start tag: '@domain.com>', at line 1, column 1 I work around this by adding '@' to the allowed character's class: import HTMLParser HTMLParser.attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\) _#=~@]*))?') myparser = HTMLParser.HTMLParser() myparser.feed('Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 11:32 Message: Logged In: YES user_id=11375 Committed to the CVS HEAD; thanks! ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2004-04-19 09:01 Message: Logged In: YES user_id=11375 I don't believe this HTML is obviously correct. The section on attributes in the HTML 4.01 Recommendation (http://www.w3.org/TR/html4/intro/sgmltut.html#h-3.2.2) says: In certain cases, authors may specify the value of an attribute without any quotation marks. The attribute value may only contain letters (a-z and A-Z), digits (0-9), hyphens (ASCII decimal 45), periods (ASCII decimal 46), underscores (ASCII decimal 95), and colons (ASCII decimal 58). We recommend using quotation marks even when it is possible to eliminate them. The regex is already more liberal than this, allowing slashes and various other symbols, so we might as well add '@', but you should also consider adding quotation marks to the original attribute. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=921657&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:36:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:36:10 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 15:32 Message generated for change (Comment added) made by rgbecker You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 15:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 19:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. 
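The two patterns quoted in the initial report can be folded into a single expression that accepts either case and an optional '0x' prefix. This is only a sketch of the idea, not the eventual pydoc fix, and the repr strings below are invented examples rather than real output from either platform.

import re

# accepts both the lower-case-with-0x form and the upper-case form
# described in the report; the sample addresses are made up
addr_re = re.compile(r' at (0x)?[0-9A-Fa-f]{6,}(>+)$')

for text in ('<__main__.bongo instance at 0x0812a5ac>',
             '<__main__.bongo instance at 0x0112FFD0>'):
    print addr_re.search(text) is not None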
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:43:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:43:59 2004 Subject: [ python-Bugs-925107 ] _Subfile.readline( ) in mailbox.py ignoring self.stop Message-ID: Bugs item #925107, was opened at 2004-03-29 01:31 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=925107&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sye van der Veen (syeberman) Assigned to: Nobody/Anonymous (nobody) Summary: _Subfile.readline( ) in mailbox.py ignoring self.stop Initial Comment: I'm using Python 2.3.3 (on Windows, but that shouldn't matter) and I've noticed a problem with the internal _Subfile class in mailbox.py. The readline method doesn't consider self.stop if the length argument is not None. It has: if length is None: length = self.stop - self.pos when really it should have something like: if length is None or length < 0: length = remaining elif length > remaining: length = remaining which is what the read method does. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 17:43 Message: Logged In: YES user_id=469548 This is correct. After fixing this bug the read() and readline() methods are identical, except for one function call, so my patch refactors their bodies into an internal _read() method. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=925107&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:44:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:44:16 2004 Subject: [ python-Bugs-964868 ] pickle protocol 2 is incompatible(?) with Cookie module Message-ID: Bugs item #964868, was opened at 2004-06-02 09:12 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: pickle protocol 2 is incompatible(?) with Cookie module Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I don't know if this is a bug of Cookie module or of pickle. When I dump a Cookie instance with protocol = 2, the data is 'corrupted'. With protocol = 1 there are no problems. 
Here is an example: >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; >>> buf = pickle.dumps(c, protocol = 2) >>> print pickle.loads(buf) Set-Cookie: key=Set-Cookie: key=value; expires=Fri,; Path=/;; >>> buf = pickle.dumps(c, protocol = 1) >>> print pickle.loads(buf) Set-Cookie: key=value; expires=Fri,; Path=/; Thanks and regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 15:44 Message: Logged In: YES user_id=1057404 Okay, I've looked at the output from protocols 0, 1 and 2 from pickletools.py, and after nearly two hours of looking into this, I think the problem lies with the fact that both Morsel and BaseCookie derive from dict and override __setitem__. I think that this stems from BUILD using __dict__ directly, but lack the internal knowledge of pickle to investigate further. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:51:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:51:52 2004 Subject: [ python-Bugs-964230 ] random.choice([]) should return more intelligible exception Message-ID: Bugs item #964230, was opened at 2004-06-01 08:39 Message generated for change (Comment added) made by tjreedy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 Category: Python Library Group: Python 2.4 Status: Closed Resolution: None Priority: 2 Submitted By: Michael Hoffman (hoffmanm) Assigned to: Nobody/Anonymous (nobody) Summary: random.choice([]) should return more intelligible exception Initial Comment: Python 2.3.3 (#1, Mar 31 2004, 11:17:07) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> random.choice([]) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 231, in choice return seq[int(self.random() * len(seq))] IndexError: list index out of range This is simple enough here, but it's harder when it's at the bottom of a traceback. I suggest something like ValueError: sequence must not be empty. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-05 11:51 Message: Logged In: YES user_id=593130 More succinctly: # empty seq raises IndexError Commenting source code for traceback clarity has been suggested before as a way to enhance without breaking. Works for me For making repeated choices, however, a generator would be faster for n much larger than 0. Something like def chooser(count, seq): if type(count) not in (int, long) or count < 0: raise ValueError('%s is not a count' % count) try: seqlen = len(seq) if not seqlen: raise ValueError('Cannot choose from nothing') except TypeError: raise ValueError('Seq arg must have length') while count: yield seq[int(self.random() * seqlen)] count -= 1 ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 10:54 Message: Logged In: YES user_id=80475 Okay. Fixed. See Lib/random.py 1.61. 
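For reference, a short sketch of the behaviour being discussed (not part of the tracker item): choice() simply indexes into the sequence, so an empty sequence surfaces as an IndexError that the caller can trap directly:

    import random

    seq = []                        # any empty sequence
    try:
        item = random.choice(seq)   # indexes seq, so this raises IndexError
    except IndexError:
        item = None                 # the caller decides what "no choice" means
    print item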
---------------------------------------------------------------------- Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 05:24 Message: Logged In: YES user_id=987664 You have a point. What about rewriting the line to read: return seq[int(self.random() * len(seq))] # raises IndexError if seq is empty The comment would be a good hint. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 02:41 Message: Logged In: YES user_id=80475 That's better, but I'm still -0. This function is apt to be called inside a loop, so it would be a bummer to slow down everyone's code just to rewrite the error message. For better or worse, it is the nature of Python tracebacks to raise RoadWeavingErrors when a DontDrinkAndDriveWarning would be more to the point. I recommend closing this, but if you really think it's an essential life saver, then assign to Tim and see if you can persuade him. ---------------------------------------------------------------------- Comment By: Michael Hoffman (hoffmanm) Date: 2004-06-05 02:19 Message: Logged In: YES user_id=987664 I thought of that after I submitted. :-) Might it be better to raise an IndexError with a message similar to "sequence must not be empty" instead? It would just make debugging that much easier. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-05 01:39 Message: Logged In: YES user_id=80475 -0 While a ValueError would be appropriate, the status quo doesn't bug me much and changing it could break code if someone is currently trapping the IndexError. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964230&group_id=5470 From noreply at sourceforge.net Sat Jun 5 11:57:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 11:57:32 2004 Subject: [ python-Bugs-964868 ] pickle protocol 2 is incompatible(?) with Cookie module Message-ID: Bugs item #964868, was opened at 2004-06-02 09:12 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: pickle protocol 2 is incompatible(?) with Cookie module Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I don't know if this is a bug of Cookie module or of pickle. When I dump a Cookie instance with protocol = 2, the data is 'corrupted'. With protocol = 1 there are no problems. 
Here is an example: >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; >>> buf = pickle.dumps(c, protocol = 2) >>> print pickle.loads(buf) Set-Cookie: key=Set-Cookie: key=value; expires=Fri,; Path=/;; >>> buf = pickle.dumps(c, protocol = 1) >>> print pickle.loads(buf) Set-Cookie: key=value; expires=Fri,; Path=/; Thanks and regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 15:57 Message: Logged In: YES user_id=1057404 #826897 appears to be a dupe of this. __setitem__ is called for the items in the dict *before* the instance variables are set. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 15:44 Message: Logged In: YES user_id=1057404 Okay, I've looked at the output from protocols 0, 1 and 2 from pickletools.py, and after nearly two hours of looking into this, I think the problem lies with the fact that both Morsel and BaseCookie derive from dict and override __setitem__. I think that this stems from BUILD using __dict__ directly, but lack the internal knowledge of pickle to investigate further. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:00:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:00:36 2004 Subject: [ python-Bugs-953177 ] cgi module documentation could mention getlist Message-ID: Bugs item #953177, was opened at 2004-05-13 12:15 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953177&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Richard Jones (richard) Assigned to: Nobody/Anonymous (nobody) Summary: cgi module documentation could mention getlist Initial Comment: Section "11.2.2 Using the cgi module" at http://www.python.org/dev/doc/devel/lib/node411.html has a discussion about how the module handles multiple values with the same name. It even presents a section of code describing how to handle the situation. It could alternatively just make mention of its own getlist() method. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 17:00 Message: Logged In: YES user_id=113328 The following patch seems to be what is required (inline because I can't upload files :-() Index: lib/libcgi.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libcgi.tex,v retrieving revision 1.43 diff -u -r1.43 libcgi.tex --- lib/libcgi.tex 23 Jan 2004 04:05:27 -0000 1.43 +++ lib/libcgi.tex 5 Jun 2004 15:59:45 -0000 @@ -135,19 +135,14 @@ \samp{form.getvalue(\var{key})} would return a list of strings. If you expect this possibility (when your HTML form contains multiple fields with the same name), use -the \function{isinstance()} built-in function to determine whether you -have a single instance or a list of instances. For example, this +the \function{getlist()}, which always returns a list of values (so that you +do not need to special-case the single item case). 
For example, this code concatenates any number of username fields, separated by commas: \begin{verbatim} -value = form.getvalue("username", "") -if isinstance(value, list): - # Multiple username fields specified - usernames = ",".join(value) -else: - # Single or no username field specified - usernames = value +values = form.getlist("username") +usernames = ",".join(values) \end{verbatim} If a field represents an uploaded file, accessing the value via the ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953177&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:04:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:04:06 2004 Subject: [ python-Bugs-944890 ] csv writer bug on windows Message-ID: Bugs item #944890, was opened at 2004-04-29 16:06 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Brian Kelley (wc2so1) Assigned to: Skip Montanaro (montanaro) Summary: csv writer bug on windows Initial Comment: The excel dialect is set up to be class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False lineterminator = '\r\n' quoting = QUOTE_MINIMAL register_dialect("excel", excel) However, on the windows platform, the lineterminator should be simply "\n" My suggested fix is: class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False if sys.platform == "win32": lineterminator = '\n' else: lineterminator = '\r\n' quoting = QUOTE_MINIMAL Which seems to work. It could be that I'm missing something, but the universal readlines doesn't appear to work for writing files. If this is a usage issue, it probably should be a documentation fix. ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 11:04 Message: Logged In: YES user_id=44345 Can you attach an example that fails? I don't have access to Windows. Note that you must open the file with binary mode ("wb" or "rb"). 
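A minimal usage sketch of the binary-mode advice above (the file name is made up): opening with "wb" stops Windows text-mode newline translation from turning the dialect's '\r\n' terminator into '\r\r\n':

    import csv

    f = open("example.csv", "wb")        # "wb", not "w": no newline translation
    try:
        writer = csv.writer(f)           # default "excel" dialect, '\r\n' row ends
        writer.writerow(["a", "b", "c"])
    finally:
        f.close()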
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:14:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:16:10 2004 Subject: [ python-Bugs-944890 ] csv writer bug on windows Message-ID: Bugs item #944890, was opened at 2004-04-29 17:06 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Brian Kelley (wc2so1) Assigned to: Skip Montanaro (montanaro) Summary: csv writer bug on windows Initial Comment: The excel dialect is set up to be class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False lineterminator = '\r\n' quoting = QUOTE_MINIMAL register_dialect("excel", excel) However, on the windows platform, the lineterminator should be simply "\n" My suggested fix is: class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False if sys.platform == "win32": lineterminator = '\n' else: lineterminator = '\r\n' quoting = QUOTE_MINIMAL Which seems to work. It could be that I'm missing something, but the universal readlines doesn't appear to work for writing files. If this is a usage issue, it probably should be a documentation fix. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 12:14 Message: Logged In: YES user_id=31435 Excel on Windows puts \r\n line ends in .csv files it creates (I just tried it). Since the OP mentioned "universal readlines", I bet he's opening the file with "U" (but it needs to be "rb"). ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 12:04 Message: Logged In: YES user_id=44345 Can you attach an example that fails? I don't have access to Windows. Note that you must open the file with binary mode ("wb" or "rb"). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:18:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:18:24 2004 Subject: [ python-Bugs-929316 ] many os.path functions bahave inconsistently Message-ID: Bugs item #929316, was opened at 2004-04-04 20:45 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=929316&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Simon Percivall (percivall) Assigned to: Nobody/Anonymous (nobody) Summary: many os.path functions bahave inconsistently Initial Comment: Many os.path functions return different paths before and after applying os.path.normpath/os.path.realpath/os.path.abspath, etc. Functions such as os.path.basename and os.path.dirname will not handle a trailing slash "correctly". 
>>> dirs = '/usr/local/' >>> os.path.dirname(dirs) '/usr/local' >>> os.path.basename(dirs) '' >>> >>> dirs = os.path.normpath(dirs) >>> os.path.dirname(dirs) '/usr' >>> os.path.basename(dirs) 'local' >>> This should be wrong since normpath/realpath/abspath shouldn't have such an effect on other os.path functions. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 18:18 Message: Logged In: YES user_id=469548 Though this may arguably be wrong, it is the explicitly documented behavior and has been for a long time (see bug #219485). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=929316&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:23:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:23:18 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 15:32 Message generated for change (Comment added) made by rgbecker You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 16:23 Message: Logged In: YES user_id=6946 This patch seems to fix variable case problems =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:26:31 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$'] def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! 
return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 15:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 19:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:29:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:29:37 2004 Subject: [ python-Feature Requests-967161 ] pty.spawn() enhancements Message-ID: Feature Requests item #967161, was opened at 2004-06-05 12:29 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=967161&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) Assigned to: Nobody/Anonymous (nobody) Summary: pty.spawn() enhancements Initial Comment: (Originally suggested by James Henstridge in bug #897935) There are also a few changes that would be nice to see in pty.spawn: 1) get the exit status of the child. Could be fixed by adding the following to the end of the function: pid, status = os.waitpid(pid, 0) return status 2) set master_fd to non-blocking mode, so that the output is printed to the screen at the speed it is produced by the child. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=967161&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:29:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:30:03 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-02 04:14 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-05 18:29 Message: Logged In: YES user_id=21627 Yes, the RFE tracker is a way for me to move things at the end of the queue (or the bottom of the stack). 
I agree that all patches should be in the patches tracker, i.e. patches should not be attached to bug reports. Instead, the bug should have a comment "I have created a patch for this bug at python.org/sf/something", and the patch should start with "this fixes python.org/sf/somethingelse". I then process the entire thing by opening two tabs in Mozilla, and a shell window: - I download the patch from the patch tab - I test it in the shell, and commit it - I run my "sumcvs.py" (attached) to extract the essential bits from the CVS commit message (files and version numbers) - I possibly repeat the procedure for release23-maint - I put a comment in the bug saying that it is fixed with mentioned patch, and close the bug - I put the commit info into the patch comment, and close the patch as accepted This is a routine procedure, which takes a total of five minutes administrative action (plus time to actually read the patch, compile it, etc.) ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-05 16:37 Message: Logged In: YES user_id=593130 loewis: I currently interprete your advice to me as "Yes, encourage movement of RFEs to the RFE list so I can focus on real bugs" rather than "No, leave them so I can reward queue jumpers with attention I would not otherwise give". ;-) There are people who *have* read and responded to RFEs (about half have been closed). Even those rejected usually get more explanation than simply "Not a bug" all: I agree that a tracker that we can modify with experience to make review and response easier would be great. I agree with fdrake about patch confusion. I think there should either be no separate patch list (but a way to pull out items with an open patch), or all patches should be on a separate patch list, but linked to a discussion-only item. I'd like a way to select boilerplate responses and check off the presence of required patch features. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-05 08:45 Message: Logged In: YES user_id=21627 fdrake: I deal with the RFE tracker by never (or very infrequently) looking at it. If Python has 700 bugs, and 250 unreviewed patches, I'm not going to implement features that people have requested just because they requested them. So I would be happy if the RFE tracker grew to 700 items, if the bugs tracker shrank to 150 entries simultaneously. Only at that point I will wonder what to do next, and look at RFEs. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-05 02:10 Message: Logged In: YES user_id=6380 We should switch to RoundUp on python.org. A single unified tracker that we can modify. Grr. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 23:58 Message: Logged In: YES user_id=3066 I will note that not everyone agrees on this. Having to look in multiple trackers is quite painful as well. The separation of the RFE tracker would be less of a problem if there were no "patches" tracker; a patch should be attached to a bug report or to a feature request, not separate. Grr. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 23:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. 
A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So to are sufficiently muddled docs. While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago.) On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 07:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. That's why the RFE tracker was added. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 06:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 04:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- or unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:30:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:30:52 2004 Subject: [ python-Bugs-897935 ] pty.spawn() leaks file descriptors Message-ID: Bugs item #897935, was opened at 2004-02-16 06:28 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897935&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: James Henstridge (jhenstridge) Assigned to: Nobody/Anonymous (nobody) Summary: pty.spawn() leaks file descriptors Initial Comment: By running the following short program on Linux, you can see the number of open file descriptors increase: import os, pty for i in range(10): pty.spawn(['true']) print len(os.listdir('/proc/%d/fd' % os.getpid())) This can be fixed by os.close()'ing master_fd. This problem seems to exist in CVS head as well as 2.3. There are also a few changes that would be nice to see in pty.spawn: 1) get the exit status of the child. Could be fixed by adding the following to the end of the function: pid, status = os.waitpid(pid, 0) return status 2) set master_fd to non-blocking mode, so that the output is printed to the screen at the speed it is produced by the child. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 12:30 Message: Logged In: YES user_id=11375 I've applied a fix to CVS HEAD; thanks for reporting this! 
The two feature suggestions have been added to the RFE tracker as #967161 so that I can close this bug. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897935&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:30:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:30:59 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-05-01 00:19 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 17:30 Message: Logged In: YES user_id=113328 Looks OK to me. Not sure where you found docs which specify the behaviour, but I'm OK with "Windows". There's a very small risk of compatibility issues, but as the module was new in 2.3, and the behaviour was inconsistent before, I see no reason why this should be an issue. I'd recommend committing this. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 15:38 Message: Logged In: YES user_id=469548 New patch, the docs say we should use 'Windows' instead of 'Microsoft Windows', so we do: Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:39:10 -0000 @@ -957,6 +957,8 @@ # platforms if use_syscmd_ver: system,release,version = _syscmd_ver(system) + if string.find(system, 'Microsoft Windows') != -1: + system = 'Windows' # In case we still don't know anything useful, we'll try to # help ourselves ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 15:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. 
Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:31:25 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:31:30 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 15:32 Message generated for change (Comment added) made by rgbecker You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 16:31 Message: Logged In: YES user_id=6946 This is the PROPER pasted in patch =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:33:52 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$') def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! 
return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 16:23 Message: Logged In: YES user_id=6946 This patch seems to fix variable case problems =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:26:31 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$'] def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 15:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 19:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:31:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:31:59 2004 Subject: [ python-Bugs-897935 ] pty.spawn() leaks file descriptors Message-ID: Bugs item #897935, was opened at 2004-02-16 06:28 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897935&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: James Henstridge (jhenstridge) >Assigned to: A.M. Kuchling (akuchling) Summary: pty.spawn() leaks file descriptors Initial Comment: By running the following short program on Linux, you can see the number of open file descriptors increase: import os, pty for i in range(10): pty.spawn(['true']) print len(os.listdir('/proc/%d/fd' % os.getpid())) This can be fixed by os.close()'ing master_fd. This problem seems to exist in CVS head as well as 2.3. There are also a few changes that would be nice to see in pty.spawn: 1) get the exit status of the child. 
Could be fixed by adding the following to the end of the function: pid, status = os.waitpid(pid, 0) return status 2) set master_fd to non-blocking mode, so that the output is printed to the screen at the speed it is produced by the child. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 12:31 Message: Logged In: YES user_id=11375 Closing. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 12:30 Message: Logged In: YES user_id=11375 I've applied a fix to CVS HEAD; thanks for reporting this! The two feature suggestions have been added to the RFE tracker as #967161 so that I can close this bug. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897935&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:33:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:33:27 2004 Subject: [ python-Bugs-929316 ] many os.path functions bahave inconsistently Message-ID: Bugs item #929316, was opened at 2004-04-04 14:45 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=929316&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Simon Percivall (percivall) Assigned to: Nobody/Anonymous (nobody) Summary: many os.path functions bahave inconsistently Initial Comment: Many os.path functions return different paths before and after applying os.path.normpath/os.path.realpath/os.path.abspath, etc. Functions such as os.path.basename and os.path.dirname will not handle a trailing slash "correctly". >>> dirs = '/usr/local/' >>> os.path.dirname(dirs) '/usr/local' >>> os.path.basename(dirs) '' >>> >>> dirs = os.path.normpath(dirs) >>> os.path.dirname(dirs) '/usr' >>> os.path.basename(dirs) 'local' >>> This should be wrong since normpath/realpath/abspath shouldn't have such an effect on other os.path functions. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 12:33 Message: Logged In: YES user_id=11375 Closing. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 12:18 Message: Logged In: YES user_id=469548 Though this may arguably be wrong, it is the explicitly documented behavior and has been for a long time (see bug #219485). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=929316&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:38:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:38:12 2004 Subject: [ python-Bugs-936837 ] PyNumber_InPlaceDivide()'s description Message-ID: Bugs item #936837, was opened at 2004-04-17 10:24 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=936837&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: [N/A] (ymasuda) Assigned to: Nobody/Anonymous (nobody) Summary: PyNumber_InPlaceDivide()'s description Initial Comment: In Doc/api/abstract.tex on HEAD, line 576 around, It describes:: Returns the mathematical of dividing \var{o1} by \var {o2}, or but, it appears to be something erratta. Instead,:: Returns the floor of dividing \var{o1} by \var{o2}, or would be appropreate. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 17:38 Message: Logged In: YES user_id=113328 The following patch seems OK: Index: api/abstract.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/api/abstract.tex,v retrieving revision 1.32 diff -u -r1.32 abstract.tex --- api/abstract.tex 17 Apr 2004 11:57:40 -0000 1.32 +++ api/abstract.tex 5 Jun 2004 16:34:56 -0000 @@ -573,7 +573,8 @@ \begin{cfuncdesc}{PyObject*}{PyNumber_InPlaceFloorDivide}{PyObject *o1, PyObject *o2} - Returns the mathematical of dividing \var{o1} by \var{o2}, or + Returns the mathematical floor of the result of + dividing \var{o1} by \var{o2}, or \NULL{} on failure. The operation is done \emph{in-place} when \var{o1} supports it. This is the equivalent of the Python statement \samp{\var{o1} //= \var{o2}}. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=936837&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:39:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:39:32 2004 Subject: [ python-Bugs-936915 ] Py_FilesystemDefaultEncoding leaks Message-ID: Bugs item #936915, was opened at 2004-04-17 08:53 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=936915&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Laszlo Toth (avenger_teambg) Assigned to: Nobody/Anonymous (nobody) Summary: Py_FilesystemDefaultEncoding leaks Initial Comment: I've compiled Python from the 2.3.3 sources, running on RH Linux 8. In pythonrun.c there is a line: codeset=strdup(codeset) Later it is assigned to Py_FilesystemDefaultEncoding, which is never freed. In other cases a constant is assigned to it which need not to be freed. This is only a small leak, but definitely one. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 12:39 Message: Logged In: YES user_id=11375 Closing. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2004-04-20 15:56 Message: Logged In: YES user_id=21627 Why is this a leak? Py_FileSystemDefaultEncoding refers to the string until the process terminates, so the string is not garbage (and thus not a leak) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=936915&group_id=5470 From noreply at sourceforge.net Sat Jun 5 12:44:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 12:45:31 2004 Subject: [ python-Bugs-918710 ] popen2 returns (out, in) not (in, out) Message-ID: Bugs item #918710, was opened at 2004-03-18 12:44 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=918710&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Yotam Medini (yotam) Assigned to: Nobody/Anonymous (nobody) Summary: popen2 returns (out, in) not (in, out) Initial Comment: http://python.org/doc/current/lib/os-newstreams.html#l2h-1379 says: popen2( cmd[, mode[, bufsize]]) Executes cmd as a sub-process. Returns the file objects (child_stdin, child_stdout). But for me it actually returns (child-output, child-input). Or... is it a semantci issue? that is child_stdin - is "the input _from_ child?" Anyway - it is confusing. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 16:44 Message: Logged In: YES user_id=1057404 The above is not-a-bug, but this documentation patch below might serve to remove any confusion. ### --- libos.tex- Sat Jun 5 17:28:52 2004 +++ libos.tex Sat Jun 5 17:32:40 2004 @@ -384,6 +384,10 @@ \versionadded{2.0} \end{funcdesc} +It should be noted that \code{\var{child_stdin}, \var{child_stdout}, and +\var{child_stderr}} are named from the child process' point of view, i.e. the +stdin of the child. + This functionality is also available in the \refmodule{popen2} module using functions of the same names, but the return values of those functions have a different order. ### ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=918710&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:05:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:05:28 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 11:32 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. 
stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 13:05 Message: Logged In: YES user_id=31435 This can be simplifed. The code in PyString_FromFormatV() massages the native %p result to guarantee it begins with "0x". It didn't always do this, and inspect.py was written when Python didn't massage the native %p result at all. Now there's no need to cater to "0X", or to cater to that "0x" might be missing. The case of a-f may still differ across platforms, and that's deliberate (addresses are of most interest to C coders, and they're "used to" whichever case their platform delivers for %p in C code). ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 12:31 Message: Logged In: YES user_id=6946 This is the PROPER pasted in patch =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:33:52 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$') def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 12:23 Message: Logged In: YES user_id=6946 This patch seems to fix variable case problems =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:26:31 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$'] def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! 
return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 11:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 15:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:12:58 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:13:09 2004 Subject: [ python-Bugs-964703 ] RFE versus Bug group Feature Request Message-ID: Bugs item #964703, was opened at 2004-06-01 22:14 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 Category: None Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Terry J. Reedy (tjreedy) Assigned to: Tim Peters (tim_one) Summary: RFE versus Bug group Feature Request Initial Comment: The Category is 'Source Forge Item Tracker'. The possible bug is the redundancy of having both an RFE (Request For Enhancement) list separate from the Bugs list and a Feature Request Group within Bugs. Is this intentional or an historical artifact that should be removed in order to direct feature requests one place or another. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 13:12 Message: Logged In: YES user_id=31435 Note that since SF doesn't let anyone other than the OP or a tracker admin attach a file to a report, someone who wants to contribute a patch generally must create a new tracker report to do so. That's the reality we live with here. It's less confusing to put those in the patch tracker than to grow an additional "bug report". ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-05 12:29 Message: Logged In: YES user_id=21627 Yes, the RFE tracker is a way for me to move things at the end of the queue (or the bottom of the stack). I agree that all patches should be in the patches tracker, i.e. patches should not be attached to bug reports. Instead, the bug should have a comment "I have created a patch for this bug at python.org/sf/something", and the patch should start with "this fixes python.org/sf/somethingelse". 
I then process the entire thing by opening two tabs in Mozilla, and a shell window: - I download the patch from the patch tab - I test it in the shell, and commit it - I run my "sumcvs.py" (attached) to extract the essential bits from the CVS commit message (files and version numbers) - I possibly repeat the procedure for release23-maint - I put a comment in the bug saying that it is fixed with mentioned patch, and close the bug - I put the commit info into the patch comment, and close the patch as accepted This is a routine procedure, which takes a total of five minutes administrative action (plus time to actually read the patch, compile it, etc.) ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-05 10:37 Message: Logged In: YES user_id=593130 loewis: I currently interprete your advice to me as "Yes, encourage movement of RFEs to the RFE list so I can focus on real bugs" rather than "No, leave them so I can reward queue jumpers with attention I would not otherwise give". ;-) There are people who *have* read and responded to RFEs (about half have been closed). Even those rejected usually get more explanation than simply "Not a bug" all: I agree that a tracker that we can modify with experience to make review and response easier would be great. I agree with fdrake about patch confusion. I think there should either be no separate patch list (but a way to pull out items with an open patch), or all patches should be on a separate patch list, but linked to a discussion-only item. I'd like a way to select boilerplate responses and check off the presence of required patch features. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-05 02:45 Message: Logged In: YES user_id=21627 fdrake: I deal with the RFE tracker by never (or very infrequently) looking at it. If Python has 700 bugs, and 250 unreviewed patches, I'm not going to implement features that people have requested just because they requested them. So I would be happy if the RFE tracker grew to 700 items, if the bugs tracker shrank to 150 entries simultaneously. Only at that point I will wonder what to do next, and look at RFEs. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-04 20:10 Message: Logged In: YES user_id=6380 We should switch to RoundUp on python.org. A single unified tracker that we can modify. Grr. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-04 17:58 Message: Logged In: YES user_id=3066 I will note that not everyone agrees on this. Having to look in multiple trackers is quite painful as well. The separation of the RFE tracker would be less of a problem if there were no "patches" tracker; a patch should be attached to a bug report or to a feature request, not separate. Grr. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 17:30 Message: Logged In: YES user_id=593130 Thanks for the clear directive. A thought for feature requesters: Conflicts between promise (the References) and performance (the CPython implementation) are clearly bugs. So to are sufficiently muddled docs. 
While a 'missing' feature might look like a bug to one who wants it, it might not to a developer looking to prune a bloated bug list (>700 today, about double what it was not too long ago). On the other hand, a feature request in the smaller RFE list is only competing with other feature requests instead of sometimes serious bugs. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-02 01:00 Message: Logged In: YES user_id=31435 Oh yes! Overall, we'd rather reduce the mushrooming backlog of patch and bug reports than slam in new features, so we want to keep feature requests out of the bug tracker. That's why the RFE tracker was added. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 00:26 Message: Logged In: YES user_id=593130 In the meanwhile... is it appropriate to recommend that requests go in RFE (as I somewhat ignorantly and indirectly did today, see #960325) or is this a "don't care" issue for the developers (that I should ignore)? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-01 22:22 Message: Logged In: YES user_id=31435 Sorry, we can't do anything about this. Group names cannot be deleted in SourceForge's system, and can't even be renamed. So we'll have a "Feature Request" Group in the Bugs tracker forever -- unless SF changes their system. When we first moved to SF, RFE trackers didn't exist. That's why Bugs grew a Feature Request group to begin with. Closing as 3rdParty, WontFix. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964703&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:15:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:15:46 2004 Subject: [ python-Bugs-967182 ] file("foo", "wU") is silently accepted Message-ID: Bugs item #967182, was opened at 2004-06-05 12:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Nobody/Anonymous (nobody) Summary: file("foo", "wU") is silently accepted Initial Comment: PEP 278 says that opening a file with "wU" is illegal, yet file("foo", "wU") passes without complaint. There may be other flags which the PEP disallows with "U" that need to be checked.
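For illustration, a minimal Python-level sketch of the mode check being asked for; check_mode() is a hypothetical helper (the real fix belongs in the C-level file object code), and rejecting "+" together with "U" is an assumption here, not something the report states.
###
def check_mode(mode):
    # Universal-newline mode only makes sense for reading, so refuse it
    # when combined with write/append (and, by assumption, update) modes.
    if 'U' in mode and ('w' in mode or 'a' in mode or '+' in mode):
        raise ValueError("mode U cannot be combined with w, a or +")

check_mode('rU')    # accepted
check_mode('wU')    # raises ValueError instead of being silently accepted
###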
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:17:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:17:26 2004 Subject: [ python-Bugs-913619 ] httplib: HTTPS does not close() connection properly Message-ID: Bugs item #913619, was opened at 2004-03-10 17:00 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=913619&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: rick (sf_rick) Assigned to: Nobody/Anonymous (nobody) Summary: httplib: HTTPS does not close() connection properly Initial Comment: When looping through requests to an HTTP server close() closes out the active connection. An the second loop through a new HTTP connection is created. When looping through requests to an HTTPS server close() does not close the connection. It keeps it active. On the second pass through the loop httplib.HTTPS uses the previously initiated connection. Shouldn't close() close out the connection as it does for the HTTP connection? sample code to illustrate: def getdata(): params = urllib.urlencode({'username': 'test', 'password': 'test'}) h = httplib.HTTPS(host = "test.site.com", port = 443, key_file = "fake.pem", cert_file = "fake.pem") h.putrequest('POST', '/scripts/cgi.exe?') h.putheader('Content-length', '%d'%len(params)) h.putheader('Accept', 'text/plain') h.putheader('Host', 'test.site.com') h.endheaders() h.send(params) reply, msg, hdrs = h.getreply() data = h.getfile().read() file('test.file', 'w').write(data) h.close() ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 17:17 Message: Logged In: YES user_id=1057404 I've attempted to duplicate this with HEAD (05/Jun/04), and I can't. HTTPSConnection uses a FakeSocket class which does reference counting, meaning that the amount of sock.makefile() must match sock.close() - this should be documented. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=913619&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:25:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:25:29 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 14:29 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 13:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. 
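For reference, a rough sketch of the kind of guard such a patch needs (this is not the attached patch itself, and the helper name is made up): refuse the move when the destination lies inside the source directory, before anything is copied or deleted.
###
import os.path

def _destination_inside_source(src, dst):
    # True when dst is src itself or any path below it.
    src = os.path.abspath(src)
    dst = os.path.abspath(dst)
    return dst == src or dst.startswith(src + os.sep)

assert _destination_inside_source("a", "a/c")       # the failing case above
assert not _destination_inside_source("a", "b/c")   # an ordinary move
###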
---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 11:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:39:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:39:33 2004 Subject: [ python-Bugs-913619 ] httplib: HTTPS does not close() connection properly Message-ID: Bugs item #913619, was opened at 2004-03-10 17:00 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=913619&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: rick (sf_rick) Assigned to: Nobody/Anonymous (nobody) Summary: httplib: HTTPS does not close() connection properly Initial Comment: When looping through requests to an HTTP server close() closes out the active connection. An the second loop through a new HTTP connection is created. When looping through requests to an HTTPS server close() does not close the connection. It keeps it active. On the second pass through the loop httplib.HTTPS uses the previously initiated connection. Shouldn't close() close out the connection as it does for the HTTP connection? sample code to illustrate: def getdata(): params = urllib.urlencode({'username': 'test', 'password': 'test'}) h = httplib.HTTPS(host = "test.site.com", port = 443, key_file = "fake.pem", cert_file = "fake.pem") h.putrequest('POST', '/scripts/cgi.exe?') h.putheader('Content-length', '%d'%len(params)) h.putheader('Accept', 'text/plain') h.putheader('Host', 'test.site.com') h.endheaders() h.send(params) reply, msg, hdrs = h.getreply() data = h.getfile().read() file('test.file', 'w').write(data) h.close() ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 18:39 Message: Logged In: YES user_id=113328 I'm not sure there's a need to document the makefile/close thing in the Python manual, as the sock attribute isn't publicly documented. Internally, the constraint must be adhered to, but I can't see evidence of where it isn't. Unless the OP can provide a reproducible test case, I think this can be closed with no change. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 18:17 Message: Logged In: YES user_id=1057404 I've attempted to duplicate this with HEAD (05/Jun/04), and I can't. HTTPSConnection uses a FakeSocket class which does reference counting, meaning that the amount of sock.makefile() must match sock.close() - this should be documented. 
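An illustration of the makefile()/close() pairing described above, written against the newer HTTPSConnection API rather than the old HTTPS class from the report; the idea is simply that each file object obtained from the response gets closed as well as the connection itself.
###
import httplib

def fetch(host, path="/"):
    conn = httplib.HTTPSConnection(host)
    conn.request("GET", path)
    resp = conn.getresponse()   # wraps the socket via makefile()
    data = resp.read()
    resp.close()                # release the file-like wrapper...
    conn.close()                # ...and then the connection's socket
    return data
###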
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=913619&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:42:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:42:54 2004 Subject: [ python-Bugs-891930 ] configure argument --libdir is ignored Message-ID: Bugs item #891930, was opened at 2004-02-06 16:54 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=891930&group_id=5470 Category: Build Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: G?ran Uddeborg (goeran) Assigned to: Nobody/Anonymous (nobody) Summary: configure argument --libdir is ignored Initial Comment: I wanted to place the LIBDIR/python2.2 hierarchy on an alternate place, so I tried giving the --libdir command line option to configure. The argument is accepted, but apparently it is ignored during the build. Apparently, this directory is hard coded to be PREFIX/lib/python2.2 You could argue if this is a bug report or an enhancement request. Since "configure --help" does mention this option, I put it here. In either case I would consider it a good improvement to honour the --libdir configure option. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 17:42 Message: Logged In: YES user_id=1057404 There are many, many assumptions in the code (and probably elsewhere by now) that things go in ''' PREFIX "lib/python" VERSION '''. The correct variable to change /would be/ DATADIR, as it is for non-platform-dependent files (which .pys are, surely?). This is also not honoured. It doesn't appear possible to remove the options from configure's output without making changes to autoconf. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=891930&group_id=5470 From noreply at sourceforge.net Sat Jun 5 13:46:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 13:46:53 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 13:29 Message generated for change (Comment added) made by jepler You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- >Comment By: Jeff Epler (jepler) Date: 2004-06-05 12:46 Message: Logged In: YES user_id=2772 I applied the attached patch, and got this exception: >>> shutil.move("a", "a/c") Traceback (most recent call last): File "", line 1, in ? 
File "/usr/src/cvs-src/python/dist/src/Lib/shutil.py", line 168, in move if is_destination_in_source(src, dst): TypeError: is_destination_in_source() takes exactly 3 arguments (2 given) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 12:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 10:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:01:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:01:45 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 10:28 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 19:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. 
Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:02:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:03:02 2004 Subject: [ python-Bugs-967207 ] PythonWin fails reporting "Can not locate pywintypes23.dll Message-ID: Bugs item #967207, was opened at 2004-06-05 18:02 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967207&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: puffingbilly (puffingbilly) Assigned to: Nobody/Anonymous (nobody) Summary: PythonWin fails reporting "Can not locate pywintypes23.dll Initial Comment: see attached file for details. Workaround :- copy pywintypes23.dll and pythoncom23.dll to top python directory, possibly Python23 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967207&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:08:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:08:12 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 12:57 Message generated for change (Comment added) made by nobody You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 11:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. ### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 13:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. 
Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 13:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:11:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:11:56 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 20:57 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 18:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 18:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. ### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 21:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. 
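A Python-level restatement of the range check in the C patch quoted above; safe_ctime() is a made-up wrapper for illustration, and like that patch it simply rejects anything outside 0..INT_MAX rather than trying to compute pre-1970 dates.
###
import time

_INT_MAX = 2**31 - 1

def safe_ctime(t):
    if not (0 <= t <= _INT_MAX):
        raise ValueError("unconvertible time")
    return time.ctime(t)

safe_ctime(-2**31)   # raises ValueError instead of returning a wrong answer
safe_ctime(-1e30)    # likewise
###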
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 21:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:19:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:19:05 2004 Subject: [ python-Bugs-877165 ] distutils - compiling C++ ext in cyg or mingw w/ std Python Message-ID: Bugs item #877165, was opened at 2004-01-14 21:55 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=877165&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Michael Droettboom (mdboom) Assigned to: Nobody/Anonymous (nobody) Summary: distutils - compiling C++ ext in cyg or mingw w/ std Python Initial Comment: When compiling C++ extensions with Cygwin (Cygwin.dll or MingW), the linker is erroneously set to cc, rather than gcc. Even though cc is a symlink to gcc, the standard Python distribution is a standard Windows executable which can not follow Cygwin symlinks, so distutils displays the following error: command 'cc' failed: Invalid argument confusingly indicating cc.exe can not be found. The error occurs because there is no compiler_cxx in CygwinCCompiler to override the value in the UnixCCompiler base class, as there is for compiler, compiler_so etc... The included patch (against revision 1.24 in CVS of distutils/cygwinccompiler.py) adds this override and fixes the error. Thanks to Paul Moore for helping me track this down. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 19:19 Message: Logged In: YES user_id=113328 Reviewed the patch - it looks OK to me. The current behaviour is not useful (there is no cc command in mingw for Windows). No documentation changes should be needed, and I can't see an obvious test that could be added (there are no distutils tests that I can see at the moment). I recommend applying this patch. 
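A sketch of the shape of the fix being recommended: give the Cygwin/MinGW compiler class its own compiler_cxx so the 'cc' default inherited from UnixCCompiler never gets used for C++ sources. The subclass name and the exact g++ flags below are illustrative, not the committed values.
###
from distutils.cygwinccompiler import CygwinCCompiler

class CygwinCCompilerWithCxx(CygwinCCompiler):
    def __init__(self, verbose=0, dry_run=0, force=0):
        CygwinCCompiler.__init__(self, verbose, dry_run, force)
        # Route C++ compilation and linking through g++ instead of 'cc'.
        self.set_executables(compiler_cxx='g++ -mcygwin -O -Wall')
###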
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=877165&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:39:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:39:27 2004 Subject: [ python-Bugs-877165 ] distutils - compiling C++ ext in cyg or mingw w/ std Python Message-ID: Bugs item #877165, was opened at 2004-01-15 06:55 Message generated for change (Comment added) made by perky You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=877165&group_id=5470 Category: Distutils Group: Python 2.3 >Status: Closed >Resolution: Accepted Priority: 5 Submitted By: Michael Droettboom (mdboom) Assigned to: Nobody/Anonymous (nobody) Summary: distutils - compiling C++ ext in cyg or mingw w/ std Python Initial Comment: When compiling C++ extensions with Cygwin (Cygwin.dll or MingW), the linker is erroneously set to cc, rather than gcc. Even though cc is a symlink to gcc, the standard Python distribution is a standard Windows executable which can not follow Cygwin symlinks, so distutils displays the following error: command 'cc' failed: Invalid argument confusingly indicating cc.exe can not be found. The error occurs because there is no compiler_cxx in CygwinCCompiler to override the value in the UnixCCompiler base class, as there is for compiler, compiler_so etc... The included patch (against revision 1.24 in CVS of distutils/cygwinccompiler.py) adds this override and fixes the error. Thanks to Paul Moore for helping me track this down. ---------------------------------------------------------------------- >Comment By: Hye-Shik Chang (perky) Date: 2004-06-06 03:39 Message: Logged In: YES user_id=55188 Fixed in CVS. Thank you for the report! Misc/NEWS 1.992 Lib/distutils/cygwinccompiler.py 1.25 ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-06 03:19 Message: Logged In: YES user_id=113328 Reviewed the patch - it looks OK to me. The current behaviour is not useful (there is no cc command in mingw for Windows). No documentation changes should be needed, and I can't see an obvious test that could be added (there are no distutils tests that I can see at the moment). I recommend applying this patch. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=877165&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:55:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:55:57 2004 Subject: [ python-Bugs-774798 ] SSL bug in socketmodule.c using threads Message-ID: Bugs item #774798, was opened at 2003-07-20 23:46 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=774798&group_id=5470 Category: Extension Modules Group: Python 2.2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Liraz Siri (zaril) >Assigned to: A.M. Kuchling (akuchling) Summary: SSL bug in socketmodule.c using threads Initial Comment: I recently came across a bug in the SSL code in Modules/socketmodule.c. Most of the SSL functions support python threads, but the constructor function for the SSL session does not. This can hang a multi-threaded application if the SSL_connect stalls / hangs / takes a really long time etc.
In my application, for example, this prevented me from cancelling an SSL connection to a badly routed destination or a very slow destination, since the GUI hanged. Once I enabled threading support in that function in socketmodule.c, the problem was easily fixed. Is there any reason for the SSL constructor in socketmodule.c to be thread unsafe? ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 14:55 Message: Logged In: YES user_id=11375 Fixed in the 2.4 HEAD. Closing. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-07-21 09:41 Message: Logged In: YES user_id=33168 Please attach the patch. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=774798&group_id=5470 From noreply at sourceforge.net Sat Jun 5 14:58:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 14:58:15 2004 Subject: [ python-Bugs-930024 ] os.path.realpath can't handle symlink loops Message-ID: Bugs item #930024, was opened at 2004-04-05 16:59 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=930024&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) Assigned to: Nobody/Anonymous (nobody) Summary: os.path.realpath can't handle symlink loops Initial Comment: Create a symlink pointing to itself: ln -s infinite infinite Run os.path.realpath() on it, and it recurses infinitely (until the stack limit is hit): >>> import os >>> os.path.realpath('/home/amk/infinite') Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/posixpath.py", line 416, in realpath return realpath(newpath) os.path.realpath() should be fixed; /home/amk/infinite is a perfectly good path, though it can't be followed. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 14:58 Message: Logged In: YES user_id=11375 Patch attached; review requested. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=930024&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:01:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:01:47 2004 Subject: [ python-Bugs-936837 ] PyNumber_InPlaceDivide()'s description Message-ID: Bugs item #936837, was opened at 2004-04-17 05:24 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=936837&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: [N/A] (ymasuda) >Assigned to: A.M. Kuchling (akuchling) Summary: PyNumber_InPlaceDivide()'s description Initial Comment: In Doc/api/abstract.tex on HEAD, line 576 around, It describes:: Returns the mathematical of dividing \var{o1} by \var {o2}, or but, it appears to be something erratta. Instead,:: Returns the floor of dividing \var{o1} by \var{o2}, or would be appropreate. ---------------------------------------------------------------------- >Comment By: A.M. 
Kuchling (akuchling) Date: 2004-06-05 15:01 Message: Logged In: YES user_id=11375 Fixed; thanks! ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 12:38 Message: Logged In: YES user_id=113328 The following patch seems OK: Index: api/abstract.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/api/abstract.tex,v retrieving revision 1.32 diff -u -r1.32 abstract.tex --- api/abstract.tex 17 Apr 2004 11:57:40 -0000 1.32 +++ api/abstract.tex 5 Jun 2004 16:34:56 -0000 @@ -573,7 +573,8 @@ \begin{cfuncdesc}{PyObject*}{PyNumber_InPlaceFloorDivide}{PyObject *o1, PyObject *o2} - Returns the mathematical of dividing \var{o1} by \var{o2}, or + Returns the mathematical floor of the result of + dividing \var{o1} by \var{o2}, or \NULL{} on failure. The operation is done \emph{in-place} when \var{o1} supports it. This is the equivalent of the Python statement \samp{\var{o1} //= \var{o2}}. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=936837&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:06:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:06:22 2004 Subject: [ python-Bugs-764437 ] AF_UNIX sockets do not handle Linux-specific addressing Message-ID: Bugs item #764437, was opened at 2003-07-02 07:13 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=764437&group_id=5470 Category: Python Library Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Paul Pergamenshchik (ppergame) Assigned to: Nobody/Anonymous (nobody) Summary: AF_UNIX sockets do not handle Linux-specific addressing Initial Comment: As described in unix(7) manpage, Linux allows for special "kernel namespace" AF_UNIX sockets defined. With such sockets, the first byte of the path is \x00, and the rest is the address. These sockets do not show up in the filesystem. socketmodule.c:makesockaddr (as called by recvfrom) uses code like PyString_FromString(a->sun_path) to retrieve the address. This is incorrect -- on Linux, if the first byte of a->sun_path is null, the function should use PyString_FromStringAndSize to retrieve the full 108- byte buffer. I am not entirely sure that this is the only thing that needs to be fixed, but bind() and sendto() work with these sort of socket paths just fine. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 19:06 Message: Logged In: YES user_id=1057404 The below patch adds the functionality if UNIX_PATH_MAX is defined (in Linux it's in sys/un.h). ### --- socketmodule.c 3 Jun 2004 09:24:42 -0000 1.291 +++ socketmodule.c 5 Jun 2004 18:08:47 -0000 @@ -942,6 +942,11 @@ case AF_UNIX: { struct sockaddr_un *a = (struct sockaddr_un *) addr; +#if defined(UNIX_PATH_MAX) + if (*a->sun_path == 0) { + return PyString_FromStringAndSize(a->sun_path, UNIX_PATH_MAX); + } +#endif /* UNIX_PATH_MAX */ return PyString_FromString(a->sun_path); } #endif /* AF_UNIX */ ### ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2003-11-25 08:13 Message: Logged In: YES user_id=29957 Eek. What a totally mental design decision on the part of the Linux kernel developers. 
Is there a magic C preprocessor symbol we can use to detect that this insanity is available? (FWIW, Perl also has problems with this: http://www.alexhudson.com/code/abstract-sockets-and-perl ) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=764437&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:11:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:11:07 2004 Subject: [ python-Bugs-758665 ] cgi module should handle large post attack Message-ID: Bugs item #758665, was opened at 2003-06-22 05:20 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=758665&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Yue Luo (yueluo) Assigned to: Nobody/Anonymous (nobody) Summary: cgi module should handle large post attack Initial Comment: Currently, the FieldStorage class will try to read in all the client's input to the cgi script. This may result in deny of service attack if the client tries to post huge amount of data. I wonder if FieldStorage could take a parameter limiting the max post size just like the $CGI::POST_MAX in Perl CGI.pm module. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 19:11 Message: Logged In: YES user_id=1057404 cgi.py does support a cgi.maxlen variable which can be used for this purpose. It defaults to 0, however. ---------------------------------------------------------------------- Comment By: Yue Luo (yueluo) Date: 2003-06-22 15:37 Message: Logged In: YES user_id=806666 Also, a parameter like Perl's $CGI::DISABLE_UPLOADS is also a good idea. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=758665&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:12:58 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:13:03 2004 Subject: [ python-Bugs-758665 ] cgi module should handle large post attack Message-ID: Bugs item #758665, was opened at 2003-06-22 01:20 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=758665&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Yue Luo (yueluo) >Assigned to: A.M. Kuchling (akuchling) Summary: cgi module should handle large post attack Initial Comment: Currently, the FieldStorage class will try to read in all the client's input to the cgi script. This may result in deny of service attack if the client tries to post huge amount of data. I wonder if FieldStorage could take a parameter limiting the max post size just like the $CGI::POST_MAX in Perl CGI.pm module. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 15:12 Message: Logged In: YES user_id=11375 Closing. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 15:11 Message: Logged In: YES user_id=1057404 cgi.py does support a cgi.maxlen variable which can be used for this purpose. It defaults to 0, however. 
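How the cgi.maxlen knob mentioned above is used: 0 (the default) means no limit, and a positive value makes FieldStorage raise ValueError when the declared Content-Length exceeds it. The 1 MB figure and the 413 response below are just examples.
###
import cgi

cgi.maxlen = 1024 * 1024          # reject request bodies over 1 MB

try:
    form = cgi.FieldStorage()
except ValueError:
    print "Status: 413 Request Entity Too Large"
    print "Content-Type: text/plain"
    print
    print "POST body too large"
###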
---------------------------------------------------------------------- Comment By: Yue Luo (yueluo) Date: 2003-06-22 11:37 Message: Logged In: YES user_id=806666 Also, a parameter like Perl's $CGI::DISABLE_UPLOADS is also a good idea. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=758665&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:13:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:13:47 2004 Subject: [ python-Bugs-905389 ] str.join() intercepts TypeError raised by iterator Message-ID: Bugs item #905389, was opened at 2004-02-26 21:19 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=905389&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Lenard Lindstrom (kermode) Assigned to: Nobody/Anonymous (nobody) Summary: str.join() intercepts TypeError raised by iterator Initial Comment: For str.join(), if it is passed an iterator and that iterator raises a TypeError, that exception is caught by the join method and replaced by its own TypeError exception. SyntaxError and IndexError exceptions are uneffected. Example: Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32 ... IDLE 1.0.2 >>> def gen(n): if not isinstance(n, int): raise TypeError, "gen() TypeError" if n<0: raise IndexError, "gen() IndexError" for i in range(n): yield str(i) >>> ''.join(gen(5)) '01234' >>> ''.join(gen(-1)) Traceback (most recent call last): File "", line 1, in -toplevel- ''.join(gen(-1)) File "", line 5, in gen raise IndexError, "gen() IndexError" IndexError: gen() IndexError >>> ''.join(gen(None)) Traceback (most recent call last): File "", line 1, in -toplevel- ''.join(gen(None)) TypeError: sequence expected, generator found >>> ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 20:13 Message: Logged In: YES user_id=113328 Unicode objects do not have this behaviour. For example: >>> u''.join(gen(None)) Traceback (most recent call last): File "", line 1, in ? File "", line 3, in gen TypeError: gen() TypeError The offending code is at line 1610 or so of stringobject.c. The equivalent Unicode code starts at line 3955 of unicodeobject.c. The string code does a 2-pass approach to calculate the size of the result, allocate space, and then build the value. The Unicode version resizes as it goes along. This *may* be a significant speed optimisation (on the assumption that strings are more commonly used than Unicode objects), but I can't test (no MSVC7 to build with). If the speed issue is not significant, I'd recommend rewriting the string code to use the same approach the Unicode code uses. Otherwise, the documentation for str.join should clarify these points: 1. The sequence being joined is materialised as a tuple (PySequence_Fast) - this may have an impact on generators which use a lot of memory. 2. TypeErrors produced by materialising the sequence being joined will be caught and re-raised with a different message. 
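Illustrating point 2 with the gen() from the original report: forcing the iterator yourself (e.g. via list()) lets the generator's own TypeError propagate instead of join()'s generic "sequence expected" message.
>>> ''.join(list(gen(None)))
Traceback (most recent call last):
  ...
TypeError: gen() TypeError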
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=905389&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:16:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:16:42 2004 Subject: [ python-Bugs-764437 ] AF_UNIX sockets do not handle Linux-specific addressing Message-ID: Bugs item #764437, was opened at 2003-07-02 07:13 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=764437&group_id=5470 Category: Python Library Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Paul Pergamenshchik (ppergame) Assigned to: Nobody/Anonymous (nobody) Summary: AF_UNIX sockets do not handle Linux-specific addressing Initial Comment: As described in unix(7) manpage, Linux allows for special "kernel namespace" AF_UNIX sockets defined. With such sockets, the first byte of the path is \x00, and the rest is the address. These sockets do not show up in the filesystem. socketmodule.c:makesockaddr (as called by recvfrom) uses code like PyString_FromString(a->sun_path) to retrieve the address. This is incorrect -- on Linux, if the first byte of a->sun_path is null, the function should use PyString_FromStringAndSize to retrieve the full 108- byte buffer. I am not entirely sure that this is the only thing that needs to be fixed, but bind() and sendto() work with these sort of socket paths just fine. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 19:16 Message: Logged In: YES user_id=1057404 Also checks for "linux" to be defined, on Mondragon's recommendation. ### --- socketmodule.c 3 Jun 2004 09:24:42 -0000 1.291 +++ socketmodule.c 5 Jun 2004 18:17:51 -0000 @@ -942,6 +942,11 @@ case AF_UNIX: { struct sockaddr_un *a = (struct sockaddr_un *) addr; +#if defined(UNIX_PATH_MAX) && defined(linux) + if (*a->sun_path == 0) { + return PyString_FromStringAndSize(a->sun_path, UNIX_PATH_MAX); + } +#endif /* UNIX_PATH_MAX && linux */ return PyString_FromString(a->sun_path); } #endif /* AF_UNIX */ ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 19:06 Message: Logged In: YES user_id=1057404 The below patch adds the functionality if UNIX_PATH_MAX is defined (in Linux it's in sys/un.h). ### --- socketmodule.c 3 Jun 2004 09:24:42 -0000 1.291 +++ socketmodule.c 5 Jun 2004 18:08:47 -0000 @@ -942,6 +942,11 @@ case AF_UNIX: { struct sockaddr_un *a = (struct sockaddr_un *) addr; +#if defined(UNIX_PATH_MAX) + if (*a->sun_path == 0) { + return PyString_FromStringAndSize(a->sun_path, UNIX_PATH_MAX); + } +#endif /* UNIX_PATH_MAX */ return PyString_FromString(a->sun_path); } #endif /* AF_UNIX */ ### ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2003-11-25 08:13 Message: Logged In: YES user_id=29957 Eek. What a totally mental design decision on the part of the Linux kernel developers. Is there a magic C preprocessor symbol we can use to detect that this insanity is available? 
(FWIW, Perl also has problems with this: http://www.alexhudson.com/code/abstract-sockets-and-perl ) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=764437&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:16:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:16:50 2004 Subject: [ python-Bugs-966992 ] cgitb.scanvars fails Message-ID: Bugs item #966992, was opened at 2004-06-05 04:59 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966992&group_id=5470 Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Robin Becker (rgbecker) >Assigned to: A.M. Kuchling (akuchling) Summary: cgitb.scanvars fails Initial Comment: Under certain circumstances cgitb.scanvars fails because of an uninitialized value variable. This bug is present in 2.3.3 and 2.4a0. The following script demonstrates #####start import cgitb;cgitb.enable() def err(L): if 'never' in L: return if 1: print '\n'.join(L) v=2 err(['',None]) #####finish when run this results in mangled output because scanvars attempts to evaluate '\n'.join(L) where L=['',None]. A fix is to set value=__UNDEF__ at the start of scanvars. Index: cgitb.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/cgitb.py,v retrieving revision 1.11 diff -r1.11 cgitb.py 63c63 < vars, lasttoken, parent, prefix = [], None, None, '' --- > vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__ ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 15:16 Message: Logged In: YES user_id=11375 Applied; thanks! ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 09:35 Message: Logged In: YES user_id=6946 The script cgitb_scanvars_bug.py produces mangled output under 2.3.3 and 2.4a0. The failure is caused by an error during evaluation in cgitb.scanvars. I believe a fix is to initialize value to __UNDEF__. Patch python-2.4a0-966992.patch appears to fix it in 2.3.3 and 2.4a0. There may be other ways in which scanvars can error, but I haven't found them yet :( ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966992&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:23:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:23:05 2004 Subject: [ python-Bugs-696846 ] CGIHTTPServer doesn't quote arguments correctly on Windows. Message-ID: Bugs item #696846, was opened at 2003-03-03 21:06 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=696846&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: Allan B. Wilson (allanbwilson) Assigned to: Nobody/Anonymous (nobody) Summary: CGIHTTPServer doesn't quote arguments correctly on Windows.
Initial Comment: In module CGIHTTPServer.py, in the section containing the following: ----- elif self.have_popen2 or self.have_popen3: # Windows -- use popen2 or popen3 to create a subprocess import shutil if self.have_popen3: popenx = os.popen3 else: popenx = os.popen2 cmdline = scriptfile if self.is_python(scriptfile): interp = sys.executable if interp.lower().endswith("w.exe"): # On Windows, use python.exe, not pythonw.exe interp = interp[:-5] + interp[-4:] cmdline = "%s -u %s" % (interp, cmdline) ----- The final line, number 231 in my copy (version 0.4 in Python 2.2.2), doesn't handle filespecs with embedded spaces correctly. A script named, for example, "Powers of two.py" won't be found. This can be fixed by changing the quoting, namely to: cmdline = '%s -u "%s"' % (interp, cmdline) so that the script name in cmdline is quoted properly. Note that embedded spaces in interp could also cause problems (if Python were installed in C:\Program Files\ for example), but though adding "s around the first %s works for commands executed directly within Windows XP's cmd.exe, I couldn't get os.popen3 to handle them. Thanks for your help. Allan Wilson ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 19:22 Message: Logged In: YES user_id=1057404 The above isn't safe, and if the command is devoid of '=' or '"', it's run with quotes (in CVS HEAD as of 05/Jun/2004). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=696846&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:23:20 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:23:24 2004 Subject: [ python-Bugs-696846 ] CGIHTTPServer doesn't quote arguments correctly on Windows. Message-ID: Bugs item #696846, was opened at 2003-03-03 16:06 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=696846&group_id=5470 Category: Python Library Group: Python 2.2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Allan B. Wilson (allanbwilson) >Assigned to: A.M. Kuchling (akuchling) Summary: CGIHTTPServer doesn't quote arguments correctly on Windows. Initial Comment: In module CGIHTTPServer.py, in the section containing the following: ----- elif self.have_popen2 or self.have_popen3: # Windows -- use popen2 or popen3 to create a subprocess import shutil if self.have_popen3: popenx = os.popen3 else: popenx = os.popen2 cmdline = scriptfile if self.is_python(scriptfile): interp = sys.executable if interp.lower().endswith("w.exe"): # On Windows, use python.exe, not pythonw.exe interp = interp[:-5] + interp[-4:] cmdline = "%s -u %s" % (interp, cmdline) ----- The final line, number 231 in my copy (version 0.4 in Python 2.2.2), doesn't handle filespecs with embedded spaces correctly. A script named, for example, "Powers of two.py" won't be found. This can be fixed by changing the quoting, namely to: cmdline = '%s -u "%s"' % (interp, cmdline) so that the script name in cmdline is quoted properly. Note that embedded spaces in interp could also cause problems (if Python were installed in C:\Program Files\ for example), but though adding "s around the first %s works for commands executed directly within Windows XP's cmd.exe, I couldn't get os.popen3 to handle them. Thanks for your help. 
Allan Wilson ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 15:23 Message: Logged In: YES user_id=11375 Fixed in HEAD; closing. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 15:22 Message: Logged In: YES user_id=1057404 The above isn't safe, and if the command is devoid of '=' or '"', it's run with quotes (in CVS HEAD as of 05/Jun/2004). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=696846&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:25:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:26:35 2004 Subject: [ python-Bugs-918710 ] popen2 returns (out, in) not (in, out) Message-ID: Bugs item #918710, was opened at 2004-03-18 07:44 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=918710&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Yotam Medini (yotam) >Assigned to: A.M. Kuchling (akuchling) Summary: popen2 returns (out, in) not (in, out) Initial Comment: http://python.org/doc/current/lib/os-newstreams.html#l2h-1379 says: popen2( cmd[, mode[, bufsize]]) Executes cmd as a sub-process. Returns the file objects (child_stdin, child_stdout). But for me it actually returns (child-output, child-input). Or... is it a semantci issue? that is child_stdin - is "the input _from_ child?" Anyway - it is confusing. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 15:25 Message: Logged In: YES user_id=11375 Paragraph added to docs; closing. Thanks! ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 12:44 Message: Logged In: YES user_id=1057404 The above is not-a-bug, but this documentation patch below might serve to remove any confusion. ### --- libos.tex- Sat Jun 5 17:28:52 2004 +++ libos.tex Sat Jun 5 17:32:40 2004 @@ -384,6 +384,10 @@ \versionadded{2.0} \end{funcdesc} +It should be noted that \code{\var{child_stdin}, \var{child_stdout}, and +\var{child_stderr}} are named from the child process' point of view, i.e. the +stdin of the child. + This functionality is also available in the \refmodule{popen2} module using functions of the same names, but the return values of those functions have a different order. 
### ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=918710&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:35:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:36:04 2004 Subject: [ python-Bugs-960995 ] test_zlib is too slow Message-ID: Bugs item #960995, was opened at 2004-05-26 17:35 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960995&group_id=5470 Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 3 Submitted By: Michael Hudson (mwh) >Assigned to: Neil Schemenauer (nascheme) Summary: test_zlib is too slow Initial Comment: I don't know what it's doing, but I've never seen it fail and waiting for it has certainly wasted quite a lot of my life :-) ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:35 Message: Logged In: YES user_id=35752 Fixed in test_zlib.py 1.26. I removed a bunch of magic numbers while I was at it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-27 06:24 Message: Logged In: YES user_id=31435 Persevere: taking tests you don't understand and just moving them to artificially bloat the time taken by an unrelated test is so lazy on so many counts I won't make you feel bad by belaboring the point . Moving them to yet another -u option doomed to be unused is possibly worse. IOW, fix the problem, don't shuffle it around. Or, IOOW, pare the expensive ones down. Since they never fail for anyone, it's not like they're testing something delicate. Does it *need* to try so many distinct cases? That will take some thought, but it's a real help that you already know the answer . ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-05-26 19:52 Message: Logged In: YES user_id=357491 A quick look at the tests Tim lists shows that each of those run the basic incremental decompression test 8 times, from the normal size to 2**8 time the base size; creates a list from [1<. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-26 18:17 Message: Logged In: YES user_id=80475 I hate this slow test. If you want to label this as explictly called resource, regrtest -u zlib , then be my guest. 
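A minimal sketch of the "explicitly called resource" idea mentioned above, assuming the Python 2.3-era test.test_support helpers; the resource name 'zlib' and the test body are hypothetical, and this is not the fix that was checked in (test_zlib.py 1.26 pared the tests down instead):
-----
# Hypothetical: gate an expensive round-trip behind a regrtest resource so it
# only runs when "regrtest -u zlib" is requested explicitly.
from test import test_support
import zlib

def test_big_round_trip():
    test_support.requires('zlib')       # skipped under regrtest unless -u zlib
    data = 'spam' * (1 << 18)           # ~1 MB of compressible data
    assert zlib.decompress(zlib.compress(data)) == data

if __name__ == '__main__':
    test_big_round_trip()
-----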
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960995&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:36:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:37:49 2004 Subject: [ python-Bugs-967182 ] file("foo", "wU") is silently accepted Message-ID: Bugs item #967182, was opened at 2004-06-05 12:15 Message generated for change (Settings changed) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) >Assigned to: Skip Montanaro (montanaro) Summary: file("foo", "wU") is silently accepted Initial Comment: PEP 278 says at opening a file with "wU" is illegal, yet file("foo", "wU") passes without complaint. There may be other flags which the PEP disallows with "U" that need to be checked. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:45:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:45:19 2004 Subject: [ python-Bugs-826897 ] Proto 2 pickle vs dict subclass Message-ID: Bugs item #826897, was opened at 2003-10-20 10:28 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=826897&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Nobody/Anonymous (nobody) Summary: Proto 2 pickle vs dict subclass Initial Comment: >From c.l.py: """ From: Jimmy Retzlaff Sent: Thursday, October 16, 2003 1:56 AM To: python-list@python.org Subject: Pickle dict subclass instances using new protocol in PEP 307 I have a subclass of dict that acts kind of like Windows' file systems - keys are case insensitive but case preserving (keys are assumed to be strings, or at least they have to support .lower()). It's worked well for quite a while - it used to inherit from UserDict and it has inherited from dict since that became possible. I just tried to pickle an instance of this class for the first time using Python 2.3.2 on Windows. If I use protocols 0 (text) or 1 (binary) everything works great. If I use protocol 2 (PEP 307) then I have a problem when loading my pickle. 
Here is a small sample to illustrate:

######
import pickle

class myDict(dict):

    def __init__(self, *args, **kwargs):
        self.x = 1
        dict.__init__(self, *args, **kwargs)

    def __getstate__(self):
        print '__getstate__ returning', (self.copy(), self.x)
        return (self.copy(), self.x)

    def __setstate__(self, (d, x)):
        print '__setstate__'
        print ' object already in state:', self
        print ' x already in self:', 'x' in dir(self)
        self.x = x
        self.update(d)

    def __setitem__(self, key, value):
        print '__setitem__', (key, value)
        dict.__setitem__(self, key, value)

d = myDict()
d['key'] = 'value'

protocols = [(0, 'Text'), (1, 'Binary'), (2, 'PEP 307')]
for protocol, description in protocols:
    print '--------------------------------------'
    print 'Pickling with Protocol %s (%s)' % (protocol, description)
    pickle.dump(d, file('test.pickle', 'wb'), protocol)
    del d
    print 'Unpickling'
    d = pickle.load(file('test.pickle', 'rb'))
######

When run it prints:

__setitem__ ('key', 'value') - self.x exists: True
--------------------------------------
Pickling with Protocol 0 (Text)
__getstate__ returning ({'key': 'value'}, 1)
Unpickling
__setstate__
 object already in state: {'key': 'value'}
 x already in self: False
--------------------------------------
Pickling with Protocol 1 (Binary)
__getstate__ returning ({'key': 'value'}, 1)
Unpickling
__setstate__
 object already in state: {'key': 'value'}
 x already in self: False
--------------------------------------
Pickling with Protocol 2 (PEP 307)
__getstate__ returning ({'key': 'value'}, 1)
Unpickling
__setitem__ ('key', 'value') - self.x exists: False
__setstate__
 object already in state: {'key': 'value'}
 x already in self: False

The problem I'm having stems from the fact that the subclass' __setitem__ is called before __setstate__ when loading a protocol 2 pickle (the subclass' __setitem__ is not called at all with protocols 0 or 1). If I don't define __get/setstate__ then I have the same problem in that the subclass' __setitem__ is called before the subclass' instance variables are created by the pickle mechanism. I need to access one of those instance variables in my __setitem__. I suppose my question is one of practicality. I'd like my class instances to work with all pickle protocols. Am I getting too fancy trying to inherit from dict? Should I go back to UserDict or maybe to DictMixin? Should I submit a bug report on this, or am I getting too close to internals to expect a certain behavior across pickle protocols? """ ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 15:45 Message: Logged In: YES user_id=11375 Bug #964868 is a duplicate of this one. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=826897&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:46:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:46:07 2004 Subject: [ python-Bugs-964868 ] pickle protocol 2 is incompatible(?) with Cookie module Message-ID: Bugs item #964868, was opened at 2004-06-02 05:12 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Duplicate Priority: 5 Submitted By: Manlio Perillo (manlioperillo) >Assigned to: A.M. Kuchling (akuchling) Summary: pickle protocol 2 is incompatible(?)
with Cookie module Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I don't know if this is a bug of Cookie module or of pickle. When I dump a Cookie instance with protocol = 2, the data is 'corrupted'. With protocol = 1 there are no problems. Here is an example: >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; >>> buf = pickle.dumps(c, protocol = 2) >>> print pickle.loads(buf) Set-Cookie: key=Set-Cookie: key=value; expires=Fri,; Path=/;; >>> buf = pickle.dumps(c, protocol = 1) >>> print pickle.loads(buf) Set-Cookie: key=value; expires=Fri,; Path=/; Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 15:46 Message: Logged In: YES user_id=11375 Closing. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 11:57 Message: Logged In: YES user_id=1057404 #826897 appears to be a dupe of this. __setitem__ is called for the items in the dict *before* the instance variables are set. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 11:44 Message: Logged In: YES user_id=1057404 Okay, I've looked at the output from protocols 0, 1 and 2 from pickletools.py, and after nearly two hours of looking into this, I think the problem lies with the fact that both Morsel and BaseCookie derive from dict and override __setitem__. I think that this stems from BUILD using __dict__ directly, but lack the internal knowledge of pickle to investigate further. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964868&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:46:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:46:52 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. 
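To make the proposed check concrete, here is a rough Python-level analogue (a sketch only; the real change belongs in the C slots, and checked_float() is a hypothetical helper, not an API):
-----
# Call __float__, then verify the result really is a float before trusting it,
# mirroring the check suggested for PyNumber_Float and friends.
class A:
    def __float__(self):
        return 'hello'                  # the buggy __float__ from the report

def checked_float(obj):
    result = obj.__float__()
    if not isinstance(result, float):
        raise TypeError('__float__ returned non-float (type %s)'
                        % type(result).__name__)
    return result

print(isinstance(A().__float__(), float))   # False: 2.3's float() let this through
# checked_float(A()) raises TypeError instead of producing a garbage value
-----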
---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Sat Jun 5 15:51:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 15:52:46 2004 Subject: [ python-Bugs-967182 ] file("foo", "wU") is silently accepted Message-ID: Bugs item #967182, was opened at 2004-06-05 12:15 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Skip Montanaro (montanaro) Summary: file("foo", "wU") is silently accepted Initial Comment: PEP 278 says at opening a file with "wU" is illegal, yet file("foo", "wU") passes without complaint. There may be other flags which the PEP disallows with "U" that need to be checked. ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 14:51 Message: Logged In: YES user_id=44345 Here's a first cut patch - test suite fails though - must be something obvious... 
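Skip's patch itself is not attached here; as a rough illustration of the check PEP 278 calls for, a mode string that combines "U" with writing could be rejected along these lines (check_mode() is a hypothetical helper, not the actual implementation):
-----
# Universal-newline 'U' only makes sense for reading, so refuse it when it is
# combined with write, append or update modes.
def check_mode(mode):
    if 'U' in mode and ('w' in mode or 'a' in mode or '+' in mode):
        raise ValueError("universal newline mode can only be used with "
                         "modes starting with 'r'")
    return mode

for m in ('rU', 'rb', 'wU', 'aU', 'r+U'):
    try:
        check_mode(m)
        print('%-4s accepted' % m)
    except ValueError:
        print('%-4s rejected' % m)
-----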
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:00:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:00:14 2004 Subject: [ python-Bugs-966256 ] realpath description misleading Message-ID: Bugs item #966256, was opened at 2004-06-03 22:49 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966256&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: GaryD (gazzadee) >Assigned to: A.M. Kuchling (akuchling) Summary: realpath description misleading Initial Comment: The current description for os.path.realpath is: Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path. Availability: Unix. New in version 2.2. Firstly, realpath _is_ available under windows also (at least, it does on my Win XP box). Secondly, it is not immediately obvious that realpath will also return an absolute path. An alternative understanding is that, when supplied with a relative path, realpath will figure out the absolute path to determine what components are symbolic links, but return the relative path with the links removed. This is quite obvious once you use the function, but if we're going to have documentation, it may as well be complete and straightforward. My suggestion is to change the documentation for realpath to read: "Return the absolute canonical path of the specified filename, eliminating any symbolic links encountered in the path." ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 16:00 Message: Logged In: YES user_id=11375 Edited pretty much as you suggest; thanks! ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966256&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:00:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:00:35 2004 Subject: [ python-Bugs-693416 ] 2.3a2 import after os.chdir difference Message-ID: Bugs item #693416, was opened at 2003-02-26 04:42 Message generated for change (Comment added) made by insomnike You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=693416&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: James P Rutledge (jrut) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3a2 import after os.chdir difference Initial Comment: In Python 2.3a2 in interactive mode an import after an os.chdir imports the module in the new current directory after the os.chdir. This is the same as Python 2.2.2 does both in interactive and non-interactive mode. In Python 2.3a2 in non-interactive mode an import after an os.chdir does not import the module in the new current directory after the os.chdir. Instead it attempts to import a module (if present) by that name in the previous current directory before the os.chdir. If there is not a module by that name in the previous current directory, there is an ImportError exception. The above results are on a Debian Linux system using an Intel 32 bit processor. No PYTHONSTARTUP environment variable was set. 
No PYTHONPATH environment variable was set. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 20:00 Message: Logged In: YES user_id=1057404 Right, I'm nearly sure that this is the behaviour that we want. When run interactively, the first item in sys.path is '', but when run from a script, it is the path to the directory that the script lives in. If we reverted to the 2.2.2 behaviour, multi-module scripts couldn't use os.chdir() for fear of randomly breaking imports. ---------------------------------------------------------------------- Comment By: James P Rutledge (jrut) Date: 2003-02-26 14:31 Message: Logged In: YES user_id=720847 Additional Info -- I have now tried more than one os.chdir before the import in non-interactive mode and found that the words "previous current directory" in the original description should be more accurately expressed as "ORIGINAL current directory." ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=693416&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:19:49 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:19:55 2004 Subject: [ python-Bugs-963246 ] Multiple Metaclass inheritance limitation Message-ID: Bugs item #963246, was opened at 2004-05-30 19:52 Message generated for change (Comment added) made by pje You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963246&group_id=5470 >Category: Type/class unification Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Greg Chapman (glchapman) Assigned to: Nobody/Anonymous (nobody) Summary: Multiple Metaclass inheritance limitation Initial Comment: I'm not sure if this constitutes a bug or a limitation (which should be documented?), but the following doesn't work with Python 2.3.4. Assume CMeta is a type defined in C, with tp_base = PyType_Type tp_new = some C function which uses the mro to call the inherited tp_new tp_flags includes Py_TPFLAGS_BASETYPE class PyMeta(type): def __new__(meta, name, bases, attrs): return super(PyMeta, meta).__new__(meta, name, bases, attrs) class MetaTest(CMeta, PyMeta): pass class Test: __metaclass__ = MetaTest The attempt to define Test generates a TypeError: "type.__new__(MetaTest) is not safe, use CMeta.__new__()". The above error is generated (in tp_new_wrapper) by the super call in PyMeta.__new__, but this is only reached as a result of an initial call to CMeta.tp_new (which, using the mro to find the next "__new__" method, finds and calls PyMeta.__new__). It may be there is no good way to allow the above scenario, but I just thought I'd point it out in case someone can think of a workaround. ---------------------------------------------------------------------- >Comment By: Phillip J. Eby (pje) Date: 2004-06-05 20:19 Message: Logged In: YES user_id=56214 There are two things that can cause the error message you got. One is calling a Python __new__ from a C type, such as your code is doing. The only workaround is "don't do that". A C type must always call only C __new__ methods. You can avoid this in your example by moving CMeta *after* PyMeta in the __bases__, but it will fail if CMeta is ever placed anywhere but the end of the list. The alternative is to change CMeta to call its tp_base->tp_new instead of using the mro to find the next base. 
This will silently ignore any Python __new__ methods in the mro, instead of causing a TypeError. Another multiple inheritance situation that can cause the same error is if you define a C type which subtypes another C type and does not increase its tp_basicsize, *and* the C type is placed anywhere but first in the __bases__ of a Python subclass. You can work around that by ensuring that its tp_basicsize is larger than that of its base C type, so that Python will always pick it as the __base__ (aka tp_base) even if it is not listed first in __bases__. I have written the following additions to the Extending and Embedding manual for future reference: \note{If you want your type to be subclassable from Python, and your type has the same \member{tp_basicsize} as its base type, you may have problems with multiple inheritance. A Python subclass of your type will have to list your type first in its \member{__bases__}, or else it will not be able to call your type's \method{__new__} method without getting an error. You can avoid this problem by ensuring that your type has a larger value for \member{tp_basicsize} than its base type does. Most of the time, this will be true anyway, because either your base type will be \class{object}, or else you will be adding data members to your base type, and therefore increasing its size.} and... \note{If you are creating a co-operative \member{tp_new} (one that calls a base type's \member{tp_new} or \method{__new__}), you must \emph{not} try to determine what method to call using method resolution order at runtime. Always statically determine what type you are going to call, and call its \member{tp_new} directly, or via \code{type->tp_base->tp_new}. If you do not do this, Python subclasses of your type that also inherit from other Python-defined classes may not work correctly. (Specifically, you may not be able to create instances of such subclasses without getting a \exception{TypeError}.)} For more discussion on this, you can also see: A thread on Python-Dev that touches on these issues: http://mail.python.org/pipermail/python-dev/2003-April/034633.html Some notes on ZODB4 running afoul of the same issues: http://collector.zope.org/Zope3-dev/86 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963246&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:22:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:22:34 2004 Subject: [ python-Bugs-953177 ] cgi module documentation could mention getlist Message-ID: Bugs item #953177, was opened at 2004-05-13 07:15 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953177&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Richard Jones (richard) >Assigned to: A.M. Kuchling (akuchling) Summary: cgi module documentation could mention getlist Initial Comment: Section "11.2.2 Using the cgi module" at http://www.python.org/dev/doc/devel/lib/node411.html has a discussion about how the module handles multiple values with the same name. It even presents a section of code describing how to handle the situation. It could alternatively just make mention of its own getlist() method. ---------------------------------------------------------------------- >Comment By: A.M. 
Kuchling (akuchling) Date: 2004-06-05 16:22 Message: Logged In: YES user_id=11375 Ready to check in the fix, but SF CVS is down. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 12:00 Message: Logged In: YES user_id=113328 The following patch seems to be what is required (inline because I can't upload files :-() Index: lib/libcgi.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libcgi.tex,v retrieving revision 1.43 diff -u -r1.43 libcgi.tex --- lib/libcgi.tex 23 Jan 2004 04:05:27 -0000 1.43 +++ lib/libcgi.tex 5 Jun 2004 15:59:45 -0000 @@ -135,19 +135,14 @@ \samp{form.getvalue(\var{key})} would return a list of strings. If you expect this possibility (when your HTML form contains multiple fields with the same name), use -the \function{isinstance()} built-in function to determine whether you -have a single instance or a list of instances. For example, this +the \function{getlist()}, which always returns a list of values (so that you +do not need to special-case the single item case). For example, this code concatenates any number of username fields, separated by commas: \begin{verbatim} -value = form.getvalue("username", "") -if isinstance(value, list): - # Multiple username fields specified - usernames = ",".join(value) -else: - # Single or no username field specified - usernames = value +values = form.getlist("username") +usernames = ",".join(values) \end{verbatim} If a field represents an uploaded file, accessing the value via the ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953177&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:22:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:23:03 2004 Subject: [ python-Bugs-944890 ] csv writer bug on windows Message-ID: Bugs item #944890, was opened at 2004-04-29 17:06 Message generated for change (Comment added) made by wc2so1 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Brian Kelley (wc2so1) Assigned to: Skip Montanaro (montanaro) Summary: csv writer bug on windows Initial Comment: The excel dialect is set up to be class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False lineterminator = '\r\n' quoting = QUOTE_MINIMAL register_dialect("excel", excel) However, on the windows platform, the lineterminator should be simply "\n" My suggested fix is: class excel(Dialect): delimiter = ',' quotechar = '"' doublequote = True skipinitialspace = False if sys.platform == "win32": lineterminator = '\n' else: lineterminator = '\r\n' quoting = QUOTE_MINIMAL Which seems to work. It could be that I'm missing something, but the universal readlines doesn't appear to work for writing files. If this is a usage issue, it probably should be a documentation fix. ---------------------------------------------------------------------- >Comment By: Brian Kelley (wc2so1) Date: 2004-06-05 16:22 Message: Logged In: YES user_id=424987 The example in the documentation fails... import csv writer = csv.writer(file("some.csv", "w")) for row in someiterable: writer.writerow(row) As I suspected, the fix is a documentation issue. I will make a documentation patch next week. 
It will be my first one :) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 12:14 Message: Logged In: YES user_id=31435 Excel on Windows puts \r\n line ends in .csv files it creates (I just tried it). Since the OP mentioned "universal readlines", I bet he's opening the file with "U" (but it needs to be "rb"). ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 12:04 Message: Logged In: YES user_id=44345 Can you attach an example that fails? I don't have access to Windows. Note that you must open the file with binary mode ("wb" or "rb"). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944890&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:50:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:50:23 2004 Subject: [ python-Bugs-964861 ] Cookie module does not parse date Message-ID: Bugs item #964861, was opened at 2004-06-02 05:02 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Manlio Perillo (manlioperillo) >Assigned to: A.M. Kuchling (akuchling) Summary: Cookie module does not parse date Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. The standard Cookie module does not parse date string. Here is and example: >>> import Cookie >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; In the attached file I have reported the correct (I think) regex pattern. Thanks and Regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 16:50 Message: Logged In: YES user_id=11375 Closing as 'not a bug'. This decision could be reversed if there's some common application or software that returns cookies without quoting the date properly. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:43 Message: Logged In: YES user_id=1057404 This bug is in error; RFC2109 specifies the BNF grammar as: av-pairs = av-pair *(";" av-pair) av-pair = attr ["=" value] ; optional value attr = token value = word word = token | quoted-string If you surround the date in double quotes, as per the RFC, then the above works correctly. 
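The RFC 2109 point is easy to see in code: with the date written as a quoted-string the whole value survives parsing, while the unquoted form is cut short (a sketch against the Python 2.3-era Cookie module; output details may vary by version):
-----
# Python 2.3-era sketch: unquoted expires dates are truncated at the comma,
# quoted ones (per RFC 2109's quoted-string rule) are kept intact.
import Cookie

unquoted = 'key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT'
quoted = 'key=value; path=/; expires="Fri, 21-May-2004 10:40:51 GMT"'

for header in (unquoted, quoted):
    c = Cookie.BaseCookie(header)
    print(c.output())
-----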
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:50:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:51:00 2004 Subject: [ python-Bugs-949832 ] Problem w/6.27.2.2 GNUTranslations ungettext() example code Message-ID: Bugs item #949832, was opened at 2004-05-07 08:44 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=949832&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Martin Miller (mrmiller) >Assigned to: A.M. Kuchling (akuchling) Summary: Problem w/6.27.2.2 GNUTranslations ungettext() example code Initial Comment: > Here is an example: > > n = len(os.listdir('.')) > cat = GNUTranslations(somefile) > message = cat.ungettext( > 'There is %(num)d file in this directory', > 'There are %(num)d files in this directory', > n) % {'n': n} The last line of code in the example should be: > n) % {'num': n} Also, I don't think the example illustrates a realistic usage of the ungettext() method, as it is unlikely that the message catalog is going to have different message id's for all the possible variations of the string each with a different number of files in them -- exactly the problem that ungettext() is suppose to address. A better example would just use ungettext() to pick either the word "file" or "files" based on the value of n. It would be more realistic and the example code would probably be simpler. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=949832&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:57:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:57:15 2004 Subject: [ python-Feature Requests-967275 ] Better SSL support in socket module Message-ID: Feature Requests item #967275, was opened at 2004-06-05 16:57 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=967275&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) Assigned to: Nobody/Anonymous (nobody) Summary: Better SSL support in socket module Initial Comment: The socket module only provides very basic SSL features, and users don't have much control. See bug #508944 for one complaint. 
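Returning to the ungettext() report above (bug #949832), the more realistic usage it suggests would look roughly like this; a sketch only, using a NullTranslations fallback so it runs without a compiled .mo catalog (Python 2.3-era gettext API):
-----
# Let the catalog choose the plural form and substitute the number separately;
# NullTranslations stands in for the GNUTranslations(somefile) of the original
# documentation example.
import gettext
import os

cat = gettext.NullTranslations()
n = len(os.listdir('.'))
message = cat.ungettext('There is %(num)d file in this directory',
                        'There are %(num)d files in this directory',
                        n) % {'num': n}
print(message)
-----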
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=967275&group_id=5470 From noreply at sourceforge.net Sat Jun 5 16:58:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 16:58:16 2004 Subject: [ python-Bugs-508944 ] socket-module SSL is broken Message-ID: Bugs item #508944, was opened at 2002-01-26 14:05 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=508944&group_id=5470 Category: Extension Modules Group: Python 2.2 >Status: Closed >Resolution: Later Priority: 5 Submitted By: Jon Ribbens (jribbens) Assigned to: Nobody/Anonymous (nobody) Summary: socket-module SSL is broken Initial Comment: If you set a socket to non-blocking and then try to call socket.ssl on it, it fails because you are doing all the setup and calling SSL_connect as an indivisible operation in the object constructor. So you can't catch SSL_ERROR_WANT_READ/WRITE and restart SSL_connect because there is no way from python to call SSL_connect. (Don't tell me not to set the socket non-blocking - I need to implement timeouts. And don't tell me to use alarm(), my program is multi-threaded.) For the same reason, there is no way in Python to write an SSL server. The only way to create an SSL object is socket.ssl and it is hardcoded to call SSL_connect, you can't call SSL_accept. Please can you make it so that a new function in the socket module creates a proper SSL object (that preferably has actual useful methods to set the options, etc) that is not connected in its constructor so that you can then call SSL_connect or SSL_accept. It could then also have a makefile method like socket objects which would implement read and write properly (i.e. catching and handling WANT_READ/WANT_WRITE/ZERO_RETURN). You could even then make it so that it has methods to set the various options that OpenSSL provides rather than hard-coding them in the SSLObject constructor. Umm, sorry if I sound tetchy but due to the complete lack of documentation of the socket SSL facilities I've just spent ages trying to work out why my program wasn't working, only to discover that it's not possible to get it working. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 16:58 Message: Logged In: YES user_id=11375 Filed as RFE #967275; closing this bug. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-07-01 21:34 Message: Logged In: NO This is a vote for better ssl support in Python in general. Please. ---------------------------------------------------------------------- Comment By: Gerhard H?ring (ghaering) Date: 2002-05-03 18:27 Message: Logged In: YES user_id=163326 If you need to write SSL servers *now*, you can use one of the various third-party SSL libraries for Python: m2crypto, pyOpenSSL, POW. Fixing Python's SSL will most probably require a full rewrite, and there's no consensus yet about if and how to do this. 
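For context, this sketch shows roughly all that the 2.x socket-level SSL API offered: a blocking client whose handshake runs inside the constructor, which is exactly why non-blocking use and server-side SSL were impossible (the host name is only an example, and the interpreter must have been built with SSL support):
-----
# The SSL_connect() happens inside socket.ssl(), so a non-blocking socket
# cannot restart the handshake, and there is no SSL_accept() counterpart.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('www.python.org', 443))      # example host
ssl_sock = socket.ssl(s)                # handshake performed right here
ssl_sock.write('GET / HTTP/1.0\r\nHost: www.python.org\r\n\r\n')
print(ssl_sock.read(200))
s.close()
-----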
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=508944&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:00:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:00:19 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 21:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. 
This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:05:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:06:02 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 05:28 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) >Assigned to: A.M. Kuchling (akuchling) Summary: mapping a 0 length file Initial Comment:

>>> sys.version
'2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]'
>>> sys.platform
'win32'
>>> sys.getwindowsversion()
(5, 1, 2600, 2, '')

Hi. If I mmap a 0 length file on winnt, I obtain an exception:

>>> import mmap, os
>>> file = os.open(file_name, os.O_RDWR | os.O_BINARY)
>>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE)

Traceback (most recent call last):
  File "", line 1, in -toplevel-
    buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE)
WindowsError: [Errno 1006] Il volume corrispondente al file è stato alterato dall'esterno. Il file aperto non è più valido

This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 14:01 Message: Logged In: YES user_id=113328 Suggested documentation patch:

Index: lib/libmmap.tex
===================================================================
RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v
retrieving revision 1.8
diff -u -r1.8 libmmap.tex
--- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8
+++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000
@@ -44,7 +44,9 @@
   specified by the file handle \var{fileno}, and returns a mmap object.
   If \var{length} is \code{0}, the maximum length of the map will be
   the current size of the file when \function{mmap()} is
-  called.
+  called. If \var{length} is \code{0} and the file is 0 bytes long,
+  Windows will return an error. It is not possible to map a 0-byte
+  file under Windows.
 
   \var{tagname}, if specified and not \code{None}, is a string giving
   a tag name for the mapping. Windows allows you to have many
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:17:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:17:29 2004 Subject: [ python-Bugs-528990 ] bug?
in PyImport_ImportModule under AIX Message-ID: Bugs item #528990, was opened at 2002-03-12 11:19 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=528990&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 >Status: Closed >Resolution: Out of Date Priority: 5 Submitted By: Fanny Wattez (wattez) >Assigned to: A.M. Kuchling (akuchling) Summary: bug? in PyImport_ImportModule under AIX Initial Comment: Here is a description of the problem we encounter. platform : AIX 4.3.3 Python 2.2 environment variables : ---------------------------------------- PATH=/home/soft/ccm_gdc7/bin:/bin:/usr/sbin:/usr/etc:/u sr/lpp/X11/bin:/usr/wf/bin:/usr/local/bin:/usr/local/ad min/etc:/lbin:/usr/bin/etc:/usr/ucb:/home/users/wattez/ lbin:/sbin:.:/usr/bin/X11:/sbin:/home/users/wattez/lbin /fvwm- exec:/tools/versant/6.0.0/rs6000/bin:/tools/views402/st udio/rs6000:/tools/sp1.3.4/bin:/tools/imagemagick5.4.2/ bin:/tools/python2.2.debug/bin PYTHON_ROOT=/tools/python2.2.debug PYTHONPATH=/tools/python2.2.debug/lib/python2.2:/tools/ python2.2.debug/lib/python2.2/lib- tk:/tools/xmlproc0.70:/tools/doctemplate2.2.1:/tools/py thon2.2.debug/lib/python2.2/plat- aix4:/tools/python2.2.debug/lib/python2.2/lib- dynload:/tools/python2.2.debug/lib/python2.2/site- packages test program : ------------------- #include #include int main(int i__argc,char* i__argv[]) { Py_Initialize(); cout << endl << "CALL of PyImport_ImportModule with argument " << getenv("NAME_MODULE") << endl; PyObject * l__obj = PyImport_ImportModule (getenv("NAME_MODULE")); if (l__obj == NULL) cout << "importation of module " << getenv ("NAME_MODULE") << " does not work well!" << endl; return 1; } We ran this test program for different values of $NAME_MODULE. $NAME_MODULE | does the test program work well? ---------------------------------------------------- base64 | KO (Segmentation fault) at the call of PyImport_ImportModule os | OK strop | KO (Segmentation fault) at the call of PyImport_ImportModule string | KO (Segmentation fault) at the call of PyImport_ImportModule What do we see with dbx (Python 2.2 is compiled in debug mode)? ------------------------------------- The last instructions in the stack are : stropmodule.split_whitespace(s = "strop", len = 806723600, maxsplit = 806727672), line 80 in "stropmodule.c" initstrop(), line 1221 in "stropmodule.c" _PyImport_LoadDynamicModule(0x2ff1d2b0, 0x2ff1cdc0, 0xf0004d40), line 53 in "importdl.c" with 0x2ff1d2b0 = "strop" 0x2ff1cdc0 = "/tools/python2.2.debug/lib/python2.2/lib- dynload/strop.so" Could you help us understand the problem? Note : under the interactive Python interpreter, we have no problems importing each of these modules. The test program only crashes under AIX 4.3.3 not under Windows 2000. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 17:17 Message: Logged In: YES user_id=11375 No response in over a year, so we may as well close this one. If the original reporter appears, the bug can be re-opened. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-02-07 17:31 Message: Logged In: YES user_id=33168 This problem may be fixed by adding a period (.) to the first line of the Modules/python.exp file created. The first line should contain the following 3 characters: #!. This is documented in Misc/AIX-NOTES. Fanny, if you are there, let me know. 
I'd like to try to close this bug report. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-01-13 17:40 Message: Logged In: YES user_id=33168 Fanny, did this problem happen to occur when running in 64-bit mode? There was another bug that was similar which changed an (unsigned int) to a (char *). The change was in Python/dynload_aix.c Could you test with the CVS version of 2.2.2+ or 2.3a1? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=528990&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:23:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:23:21 2004 Subject: [ python-Bugs-835145 ] [2.3.2] zipfile test failure on AIX 5.1 Message-ID: Bugs item #835145, was opened at 2003-11-03 12:06 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=835145&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: The Written Word (Albert Chin) (tww-china) >Assigned to: A.M. Kuchling (akuchling) Summary: [2.3.2] zipfile test failure on AIX 5.1 Initial Comment: $ cd /opt/build/Python-2.3.2 $ ./python -E -tt Lib/test/test_zipfile.py Traceback (most recent call last): File "Lib/test/test_zipfile.py", line 35, in ? zipTest(file, zipfile.ZIP_STORED, writtenData) File "Lib/test/test_zipfile.py", line 13, in zipTest zip.close() File "/opt/build/Python-2.3.2/Lib/zipfile.py", line 503, in close zinfo.header_offset) OverflowError: long int too large to convert Exception exceptions.OverflowError: 'long int too large to convert' in > ignored The test passes just fine on AIX 4.3.2. This is against a Python built with the IBM v6 C compiler and GCC 3.3.2. I added some debugging print statements to Lib/zipfile.py and it seems that zinfo.external_attr is out of whack. On AIX 4.3.2, the value for this variable is "2176057344" while on AIX 5.1 it is "10765991936". I tracked this back to the following line in the write method of Lib/zipfile.py: zinfo.external_attr = st[0] << 16L # Unix attributes On AIX 4.3.2, st[0] is 33204 while on AIX 5.1 it is 164276. In python 2.2.x, it was '<< 16' which resulted in a signed value on AIX 5.1. I really don't think you can use the 16L as mode_t on AIX is unsigned int. Ditto for other platforms. Why not just store st[0] unchanged? ---------------------------------------------------------------------- Comment By: The Written Word (Albert Chin) (tww-china) Date: 2003-11-30 19:47 Message: Logged In: YES user_id=119770 Ok, your (st[0] & 0xffff) change should be ok. AIX has several file systems available, among them JFS and JFS2. zipfile.py works fine on JFS and NFS file systems but not JFS2. The 0xffff change throws away the extra bits but it shouldn't matter. I checked the zip source and they don't & with 0xffff but they have the same problem. However, because the shift by 16 is constrained by the maxint(unsigned short), we don't run into the same problem with the zip source as with Python which promotes the int to a long. ---------------------------------------------------------------------- Comment By: The Written Word (Albert Chin) (tww-china) Date: 2003-11-13 14:51 Message: Logged In: YES user_id=119770 The suggestion works. I want to look through the zip-2.3 source though. I'll do so this weekend. 
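The numbers quoted in the report make the overflow easy to verify (a small arithmetic sketch; the variable names are illustrative):
-----
# external_attr stores the Unix mode in the high 16 bits of a 32-bit field.
# AIX 4.3.2 reports a plain 16-bit mode, but AIX 5.1/JFS2 sets a bit above
# bit 15, so shifting without masking overflows the field and triggers the
# OverflowError seen in test_zipfile.
mode_aix432 = 33204        # values taken from the comments above
mode_aix51 = 164276

for mode in (mode_aix432, mode_aix51):
    unmasked = mode << 16
    masked = (mode & 0xffff) << 16
    print('mode=%6d  unmasked=%12d  fits_in_32_bits=%-5s  masked=%d'
          % (mode, unmasked, unmasked < 2 ** 32, masked))
-----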
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2003-11-13 14:11 Message: Logged In: YES user_id=6380 So Albert, any luck with my suggestion? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2003-11-10 11:43 Message: Logged In: YES user_id=6380 It looks like what is happening is that the mode returned by stat() has a bit beyond the 16th set. I'm guessing that those extra bits should be ignored -- there is no room for them in the header it seems. Could you try replacing st[0] with (st[0] & 0xffff) in that expression and then try again? (Hm, I wonder why the Unix mode is left-shifted 16 bits. Maybe the real definition of "external attributes" is a bit different? What is supposed to be stored in those lower 16 bits that always appear to be zero? I don't have time to research this.) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=835145&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:27:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:27:12 2004 Subject: [ python-Bugs-548109 ] Build fails in _curses module Message-ID: Bugs item #548109, was opened at 2002-04-24 10:04 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=548109&group_id=5470 Category: Build Group: Python 2.2.2 >Status: Closed >Resolution: Out of Date Priority: 5 Submitted By: Ralf Hildebrandt (hildeb) Assigned to: Nobody/Anonymous (nobody) Summary: Build fails in _curses module Initial Comment: WHile building Python-2.2.1 on HP-UX 10.20 I get: ... building '_curses' extension gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -I. -I/mnt/disk4/gnu/Python-2.2.1/./Include -I/usr/local/include -IInclude/ -c /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c -o build/temp.hp-ux-B.10.20-9000/715-2.2/_cursesmodule.o -DNDEBUG -g -O3 -Wall -Wstrict-prototypes /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c: In function PyCursesWindow_EchoChar': /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:710: structure has no member named _flags' /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:710: _ISPAD' undeclared (first use in this function) /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:710: (Each undeclared identifier is reported only once /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:710: for each function it appears in.) 
/mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c: In function PyCursesWindow_NoOutRefresh': /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:1118: structure has no member named _flags' /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:1118: _ISPAD' undeclared (first use in this function) /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c: In function PyCursesWindow_Refresh': /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:1260: structure has no member named _flags' /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:1260: _ISPAD' undeclared (first use in this function) /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c: In function PyCursesWindow_SubWin': /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:1326: structure has no member named _flags' /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:1326: _ISPAD' undeclared (first use in this function) /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c: In function PyCurses_tigetflag': /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:2264: warning: implicit declaration of function tigetflag' /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c: In function PyCurses_tigetnum': /mnt/disk4/gnu/Python-2.2.1/Modules/_cursesmodule.c:2277: warning: implicit declaration of function tigetnum' ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 17:27 Message: Logged In: YES user_id=11375 No comment from original poster for a year; closing this bug. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2003-03-12 09:15 Message: Logged In: YES user_id=6656 1) please don't dick with the priority... curses not building is never going to be priority 9, sorry 2) what version of ncurses do you have installed? 3) could you attach config.log/pyconfig.h? ---------------------------------------------------------------------- Comment By: Ralf Hildebrandt (hildeb) Date: 2003-03-12 09:00 Message: Logged In: YES user_id=77128 Today I rebuild 2.2.2 and had exactly the same problem. My fix -- I commented out the explicit "extern int setupterm" declaration: /* These prototypes are in , but including this header #defines many common symbols (such as "lines") which breaks the curses module in other ways. So the code will just specify explicit prototypes here. */ //extern int setupterm(char *,int,int *); Comments: I have ncurses installed. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-11-06 09:28 Message: Logged In: YES user_id=11375 I agree with Michael. The _ISPAD code is inside #ifdef WINDOW_HAS_FLAGS, which is detected by the configure script. It must be getting set to 1, but the compilation is picking up some different header files, or the same headers with different options. The 'implicit declaration of tigetflag' presumably means either your platform doesn't support tigetflag(), in which case the curses module will need more changes to support its absence, or the prototype is in some header file that isn't being included. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-04-24 10:18 Message: Logged In: YES user_id=6656 Do you have ncurses installed? It looks somewhat like configure is detecting ncurses but the module is being compiled against some lesser curses. Not sure, though. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=548109&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:26:51 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:28:00 2004 Subject: [ python-Bugs-962633 ] macfs and macostools tests fail on UFS Message-ID: Bugs item #962633, was opened at 2004-05-29 03:41 Message generated for change (Comment added) made by mondragon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962633&group_id=5470 >Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Bryan Blackburn (blb) >Assigned to: Nick Bastin (mondragon) Summary: macfs and macostools tests fail on UFS Initial Comment: Two Mac-specific tests (macfs and macostools) will fail if 'make test' is run on a UFS volume. macfs fails since SetDates() doesn't affect all timestamps for UFS like it does on HFS+. This causes GetDates() to return unexpected values, hence the failure. macostools fails for similar reasons, but related to forks, since UFS doesn't have any. Not sure if this is something to be fixed (few use UFS) or should simply be pointed out in documentation. ---------------------------------------------------------------------- >Comment By: Nick Bastin (mondragon) Date: 2004-06-05 17:26 Message: Logged In: YES user_id=430343 I'm moving this to doc and mentioning it there. MacOS X generally doesn't work on UFS, and it's unclear as to whether apple will continue to support it anyhow, so I'm not sure we should spend the time fixing this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962633&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:46:49 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:46:56 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 05:28 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: A.M. Kuchling (akuchling) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 17:46 Message: Logged In: YES user_id=31435 The patch looks good. 
On American Windows, the cryptic error msg is: "WindowsError: [Errno 1006] The volume for a file has been externally altered so that the opened file is no longer valid" Can't find any MS docs on this condition. Then again, mapping an empty file *as* a size-0 file isn't a sane thing to do anyway. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 14:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:48:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:48:30 2004 Subject: [ python-Bugs-539942 ] os.mkdir() handles SETGID inconsistently Message-ID: Bugs item #539942, was opened at 2002-04-05 14:31 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539942&group_id=5470 Category: Python Library Group: Python 2.1.2 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Les Niles (lniles) Assigned to: Nobody/Anonymous (nobody) Summary: os.mkdir() handles SETGID inconsistently Initial Comment: Under FreeBSD 4.4, with the 2.1.2, 2.1.1 or 1.5.2 library, os.mkdir() does NOT set the SETGID or SETUID bits, regardless of whether they're specified in the mode argument to os.mkdir(). The bits can be set via a call to os.chmod(). This behavior appears to be inherited from FreeBSD's mkdir() os call. On Linux, the SETGID/SETUID bits are set via os.mkdir()'s mode argument. (As near as I can tell, POSIX.1 specifies yet a different behavior.) This is a bug from the standpoint of Python's os module providing a uniform interface. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 17:48 Message: Logged In: YES user_id=11375 No one seems interested in providing a patch, and there's been no discussion of this bug for over two years, so I'm going to close it. ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2002-04-19 10:13 Message: Logged In: YES user_id=12800 Martin, I agree with all your requirements (this shouldn't be construed as an offer to produce such a patch!) ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-19 04:21 Message: Logged In: YES user_id=21627 POSIX specifies The file permission bits of the new directory shall be initialized from mode. 
These file permission bits of the mode shall be modified by the process' file creation mask. When bits in mode other than the file permission bits are set, the meaning of these additional bits is implementation-defined. (see http://www.opengroup.org/onlinepubs/007904975/functions/mkdir.html) S_ISGID is such an additional bit, so its meaning is implementation defined. Portability with respect to S_ISGID cannot be achieved by implicitly invoking chmod afterwards: S_ISGID might not be supported for directories at all, or its meaning might vary from system to system. So I'd rather honor system policies than trying to cheat them. *If* somebody tries to produce a patch to provide that feature, I'd require that a) there is an autoconf test for it, instead of merely checking whether the system is FreeBSD; b) no additional system call is made on systems where mkdir already has the desired effect; and c) that this deviation from the system's mkdir(2) is properly documented. ---------------------------------------------------------------------- Comment By: Dan Grassi (dgrassi) Date: 2002-04-18 16:23 Message: Logged In: YES user_id=366473 Indeed MAC OS X mkdir() is correct, it abides by umask. ---------------------------------------------------------------------- Comment By: Just van Rossum (jvr) Date: 2002-04-18 14:04 Message: Logged In: YES user_id=92689 For the record: I cannot reproduce what dgrassi reports; the mod argument to os.mkdir() works for me on MacOSX. ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2002-04-18 13:40 Message: Logged In: YES user_id=12800 It's also quite inconvenient for cross platform portability because now you have to either always call os.chmod() everytime you call os.mkdir(), or replace os.mkdir() with a function that does that (so all call sites, even in library modules actually DTRT). IWBNI Python's default os.mkdir() provided that cross platform compatibility. ---------------------------------------------------------------------- Comment By: Dan Grassi (dgrassi) Date: 2002-04-18 12:08 Message: Logged In: YES user_id=366473 On Mac OS X which is also a BSD derivative the mode argument to mkdir()is completely ignored. This becomes more of an issue when makedirs() is used because a simple chmod (which does work) is not sufficient if multiple directories were created. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-05 18:50 Message: Logged In: YES user_id=21627 This is not a bug. The posix module exposes functions from the OS as-is, not trying to unify them. The os module re-exposes those functions where available. Minor details of the behaviour of those functions across platforms are acceptable. For example, on Windows, os.mkdir does not set any bits. Instead, ACLs are inherited according to the OS semantics (i.e. it does on NTFS, but doesn't on FAT32). If you need a function that makes certain additional guarantees, write a new function. 
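For code that needs the same result on every platform, the usual spelling of such a function is a mkdir-then-chmod wrapper. A sketch (the helper name and path are made up, and it deliberately overrides the system policy described above, which is exactly what loewis cautions against):

import os

def mkdir_with_mode(path, mode):
    # Create the directory, then apply the full mode explicitly: some
    # platforms' mkdir(2) silently drop bits such as S_ISGID/S_ISUID from
    # the mode argument, but chmod() sets them unconditionally.
    os.mkdir(path, mode)
    os.chmod(path, mode)

mkdir_with_mode("/tmp/shared", 02775)   # rwxrwsr-x regardless of platform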
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539942&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:50:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:51:04 2004 Subject: [ python-Bugs-494203 ] Interpreter won't always exit with daemonic thread Message-ID: Bugs item #494203, was opened at 2001-12-17 09:37 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494203&group_id=5470 Category: Threads Group: Python 2.1.1 >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Morten W. Petersen (morphex) Assigned to: Nobody/Anonymous (nobody) Summary: Interpreter won't always exit with daemonic thread Initial Comment: See file. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 17:50 Message: Logged In: YES user_id=11375 No discussion in over 18 months; closing this bug. ---------------------------------------------------------------------- Comment By: Gustavo Niemeyer (niemeyer) Date: 2002-11-12 10:45 Message: Logged In: YES user_id=7887 I couldn't reproduce it on Conectiva Linux 8.0 as well. Looking trough the trace, I found out that there's some realtime management going on. Then, I tried to preload librt.so, and that made my trace look pretty similar to the attached one. Even then, I wasn't able to reproduce the problem. Morten, can you still reproduce this bug in Python 2.2.2? If so, I'll look for an environment similar to yours. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-17 12:06 Message: Logged In: YES user_id=6380 gdb? ---------------------------------------------------------------------- Comment By: Morten W. Petersen (morphex) Date: 2001-12-17 11:57 Message: Logged In: YES user_id=68005 I can't interpret it either. :-) Is there another way to to provide debugging information ? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-17 11:48 Message: Logged In: YES user_id=6380 Sorry, I don't know how to interpret the trace. Someone else, please? ---------------------------------------------------------------------- Comment By: Morten W. Petersen (morphex) Date: 2001-12-17 11:42 Message: Logged In: YES user_id=68005 I'm attaching a trace.. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-17 11:33 Message: Logged In: YES user_id=6380 I dunno. They're likely to shoot back to Python. Can you do some more digging (e.g. what is the program doing when it's hanging)? ---------------------------------------------------------------------- Comment By: Morten W. Petersen (morphex) Date: 2001-12-17 11:21 Message: Logged In: YES user_id=68005 Ok, the system I'm using is: morten@debian$ python Python 2.1.1 (#1, Nov 11 2001, 18:19:24) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2 Type "copyright", "credits" or "license" for more information. >>> morten@debian$ uname -a Linux debian 2.4.14 #1 SMP Mon Dec 10 20:40:22 CET 2001 i686 unknown With a customized kernel. Should I contact the Debian maintainers ? 
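The attached file is not reproduced in this archive; the scenario being described is presumably a script along these lines (a guess at the shape of the repro, not the submitter's actual code), which should exit as soon as the main thread finishes but reportedly hangs on some systems:

import threading, time

def worker():
    while True:          # daemonic busy-loop; the interpreter should not wait for it
        time.sleep(1)

t = threading.Thread(target=worker)
t.setDaemon(True)
t.start()
print "main thread done, interpreter should exit now"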
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-17 10:58 Message: Logged In: YES user_id=6380 I can't reproduce this on Red Hat 6.2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494203&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:56:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:56:12 2004 Subject: [ python-Bugs-562585 ] build problems on DEC Unix 4.0f Message-ID: Bugs item #562585, was opened at 2002-05-30 16:27 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=562585&group_id=5470 Category: Build Group: Python 2.2.1 >Status: Closed >Resolution: Out of Date Priority: 5 Submitted By: Garey Mills (gareytm) Assigned to: Nobody/Anonymous (nobody) Summary: build problems on DEC Unix 4.0f Initial Comment: Building with cc instead of gcc (as recommended) and with configure switch '--with-dec-threads' (also recommended. 'make test yields the following errors and messages: test test_bsddb crashed -- bsddb.error: (22, 'Invalid argument') test test_format produced unexpected output: ********************************************************************** *** lines 2-3 of actual output doesn't appear in expected output after line 1: + '%#o' % 0 == '00' != '0' + u'%#o' % 0 == u'00' != '0' ********************************************************************** 2 tests failed: test_bsddb test_format 27 tests skipped: test_al test_audioop test_cd test_cl test_curses test_dl test_gdbm test_gl test_gzip test_imageop test_imgfile test_linuxaudiodev test_locale test_minidom test_nis test_ntpath test_pyexpat test_rgbimg test_sax test_socket_ssl test_socketserver test_sunaudiodev test_unicode_file test_winreg test_winsound test_zipfile test_zlib Ask someone to teach regrtest.py about which tests are expected to get skipped on osf1V4. Are test failures important? Who do I ask "to teach regrtest.py about which tests are expected to get skipped on osf1V4"? ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 17:56 Message: Logged In: YES user_id=11375 No discussion in two years; closing this bug. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-06-01 17:05 Message: Logged In: YES user_id=21627 Ok, I don't feel qualified to analyse this further from remote. It seems that something on your system does not like this file name; not sure whether this is the operating system, bsddb, or something else. One would need to use debugging techniques to get more information about the nature of this problem. If you don't plan to use bsddb, you can probably safely ignore this problem. ---------------------------------------------------------------------- Comment By: Garey Mills (gareytm) Date: 2002-05-31 18:47 Message: Logged In: YES user_id=555793 Here is the output of the test with the file name: # ./python Lib/test/regrtest.py -v test_bsddb test_bsddb Testing: BTree fname: /tmp/@18218.0 test test_bsddb crashed -- bsddb.error: (22, 'Invalid argument') Traceback (most recent call last): File "Lib/test/regrtest.py", line 305, in runtest the_module = __import__(test, globals(), locals(), []) File "Lib/test/test_bsddb.py", line 79, in ? 
test(type[0], type[1]) File "Lib/test/test_bsddb.py", line 21, in test f = openmethod(fname, 'c') error: (22, 'Invalid argument') 1 test failed: test_bsddb Garey ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-05-31 18:12 Message: Logged In: YES user_id=21627 Can you please edit the fragment in test_bsddb.py that reads f = openmethod(fname, 'c') to print fname before that line? Apparently, mktemp returns a bad file name. ---------------------------------------------------------------------- Comment By: Garey Mills (gareytm) Date: 2002-05-30 18:40 Message: Logged In: YES user_id=555793 Here are the command and output # ./python Lib/test/regrtest.py -v test_bsddb test_bsddb Testing: BTree test test_bsddb crashed -- bsddb.error: (22, 'Invalid argument') Traceback (most recent call last): File "Lib/test/regrtest.py", line 305, in runtest the_module = __import__(test, globals(), locals(), []) File "./Lib/test/test_bsddb.py", line 76, in ? test(type[0], type[1]) File "./Lib/test/test_bsddb.py", line 18, in test f = openmethod(fname, 'c') error: (22, 'Invalid argument') 1 test failed: test_bsddb ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-05-30 18:18 Message: Logged In: YES user_id=21627 the test_format bug is not important; it indicates a bug in the system's C library. For recording expected skipped tests, see Lib/regrtest.py. Search for win32, and submit a patch that records the expected skips. Alternatively, just don't worry about this. For test_:bsddb, please run "python Lib/regrtest.py -v test_bsddb" separately, and report the output. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=562585&group_id=5470 From noreply at sourceforge.net Sat Jun 5 17:58:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 17:58:36 2004 Subject: [ python-Bugs-595105 ] AESend on Jaguar Message-ID: Bugs item #595105, was opened at 2002-08-14 12:02 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=595105&group_id=5470 Category: Macintosh Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Alexandre Parenteau (aubonbeurre) Assigned to: Jack Jansen (jackjansen) Summary: AESend on Jaguar Initial Comment: I wonder how many of you guys played with Jaguar (OSX 10.2). It is also my first glance at gcc 3.1 that comes with 10.2. Everything was fine for building Python, except for waste which has an obsolete libWaste.a (even in August 1st Waste 2.1b1) which won't compile with gcc3.1. After I recompiled waste with CodeWarrior 8.2 (MPTP: early access), it came OK. I then run into some problems of checked out files because I'm using MacCvs (see earlier message). I used the 'make frameworkinstall' scheme. Now I'm experiencing the nice new architecture. I mostly use python from the command line to invoke CodeWarrior thru AppleScripts, so I almost immeditly run into a hanging problems of python : _err = AESend(&_self->ob_itself, &reply, sendMode, sendPriority, timeOutInTicks, 0L/*upp_AEIdleProc*/, (AEFilterUPP)0); I had to comment out upp_AEIdleProc (I tried several things, but that is the only thing which helped). Jack, you might want to remember this one if the problem is still in Jaguar. It hangs and finally times out. 
I've looked inside this function and I can see the signal handling, but I'm not sure what it is for. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 17:58 Message: Logged In: YES user_id=11375 Jack, any movement on this bug? Should it simply be closed? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-11-27 16:23 Message: Logged In: YES user_id=45365 Apple are aware of the bug and looking at it. Their bug ID is 3097709. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-08-30 07:47 Message: Logged In: YES user_id=45365 The problem turns out to be that under Jaguar you must *not* pass an idle routine if you haven't done all the correct toolbox initializations yet. To verify this, do something like "import EasyDialogs; EasyDialogs.Message("There we go")" at the start of your script. Everything now works fine. Somehow, if you haven't initialized, the idle routine will be called with random garbage, and it will be called continuously. The question is: what's the best way to fix this? I noticed that even a simple Carbon.Evt.WaitNextEvent(0,0) in your test script is enough to make it work. Should we call this in the init_AE() routine? in AESend()? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-08-19 10:45 Message: Logged In: YES user_id=45365 Alexandre, I tried a few simple things with the AE module (talking to CodeWarrior) on Jaguar, but I can't get it to hang. Could you give me an example script that shows the problem? Also: I've been using MachoPython from the CVS HEAD, I assume you're using the same, right? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=595105&group_id=5470 From noreply at sourceforge.net Sat Jun 5 18:00:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 18:00:46 2004 Subject: [ python-Bugs-629097 ] Race condition in asyncore poll Message-ID: Bugs item #629097, was opened at 2002-10-26 10:48 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=629097&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Steve Alexander (stevea_zope) >Assigned to: A.M. Kuchling (akuchling) Summary: Race condition in asyncore poll Initial Comment: In the following post to the Zope 3 developers' list, I describe a race condition in the poll method of asyncore. http://lists.zope.org/pipermail/zope3-dev/2002-October/003091.html The problem is this: There is a global dict socket_map. In the poll method, socket_map is processed into appropriate arguments for a select.select call. However, if a socket that is represented socket_map is closed during the time between the processing of socket_map and the select.select call, that call will fail with a Bad File Descriptor (EBADF) error. One solution is to patch asyncore to catch EBADF errors raised by select.select, and at that point see if the file descriptors in the current version of socket_map are the same as in the processed data used for the select.select call. If they are the same, re-raise the error, otherwise, ignore the error. 
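A sketch of that workaround, outside of asyncore itself (the function name is made up; error handling is reduced to the EBADF case being discussed):

import asyncore, errno, select

def tolerant_poll(timeout=0.0, map=None):
    if map is None:
        map = asyncore.socket_map
    try:
        asyncore.poll(timeout, map)
    except select.error, err:
        if err.args[0] != errno.EBADF:
            raise
        # A descriptor was closed by another thread between building the fd
        # lists and calling select(); probe each channel and drop the dead ones.
        for fd, obj in map.items():
            try:
                select.select([fd], [], [], 0)
            except select.error:
                obj.close()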
In another email to the Zope 3 developers' list, Jeremy Hylton queries whether there are any other similar problems in asyncore. http://lists.zope.org/pipermail/zope3-dev/2002-October/003093.html ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-02-24 07:03 Message: Logged In: YES user_id=410460 Yup, closing file in separate thread is program's (not asyncore) error. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-11-05 17:49 Message: Logged In: YES user_id=6380 According to Jeremy, this is more a matter of "don't do that". The right solution is to make sure that sockets are only closed by the main thread (the thread running asyncore.loop()). I wonder if we should just close this bug report? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=629097&group_id=5470 From noreply at sourceforge.net Sat Jun 5 18:05:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 18:05:22 2004 Subject: [ python-Bugs-626936 ] canvas.create_box() crashes Tk thread Message-ID: Bugs item #626936, was opened at 2002-10-22 11:54 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=626936&group_id=5470 Category: Tkinter Group: Platform-specific >Status: Closed >Resolution: Out of Date Priority: 5 Submitted By: T. Koehler (rasfahan) Assigned to: Nobody/Anonymous (nobody) Summary: canvas.create_box() crashes Tk thread Initial Comment: Frequently, but apparently not depending on the paramters passed, the following exception will be thrown, and Tk will stop responding while the interpreter continues to run. This is on Windows (95, 98, 2000), under linux, the problem does not occur. All parameters passed have int-values, we checked that first. The exception can be caught via a try-statement, but Tk will stop responding anyway. ---snip self.rectangle =self.canvas.create_rectangle(self.x,self.y+1,self.x2,self.y 2-1) File "D:\PYTHON21\lib\lib-tk\Tkinter.py", line 1961, in create_rectangle return self._create('rectangle', args, kw) File "D:\PYTHON21\lib\lib-tk\Tkinter.py", line 1939, in _create (self._w, 'create', itemType) ValueError: invalid literal for int(): None ---snap ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 18:05 Message: Logged In: YES user_id=11375 No discussion in 18 months; closing this bug. ---------------------------------------------------------------------- Comment By: T. Koehler (rasfahan) Date: 2002-10-23 14:29 Message: Logged In: YES user_id=634021 So far, I have failed to produce the bug in a small code-snippet. We use no library that isn't included in the standard python downloads, and the bug is reproducible with all 2.x versions. The create_rectangle call is located in another thread (created with thread.start_new_thread), though, than the Tk.mainloop of the root window of the canvas - might that be a problem? Unfortunatly, I do not have access to a debugger/c-compiler for windows, except for cygwin, where the problem does not exist. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2002-10-23 09:40 Message: Logged In: YES user_id=21627 As I said, this is difficult to understand, since _tkinter has no return Py_None in the relevant code: it might be memory corruption in an completely unrelated module. Are you using any funny extension modules? Can you post a self-contained example that might allow me to reproduce the problem? Could you try to debug the associated C code in a debugger? I.e. set a breakpoint onto the single return in Tkapp_Call (the USING_OBJECTS version), and conditionalize it on res==Py_None. Then inspect the state of the Tcl interp. Sorry I can't be of more help. ---------------------------------------------------------------------- Comment By: T. Koehler (rasfahan) Date: 2002-10-23 07:09 Message: Logged In: YES user_id=634021 Ok, here's the output of the last two print-statements before the exception from getint(). To me it looks as if the tk.app.call actually did return None. ('.8476780.13726076.8520076', 'create', 'rectangle', 50, 3, 250, 13, '-fill', '#00FF00') None ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-10-22 13:03 Message: Logged In: YES user_id=21627 Can you please investigate this a little bit further? Split the return statement in _create into several parts command = (self._w, 'create', itemType) + args + self._options(cnf, kw) print command result = apply(self.tk.call, command) print result return getint(result) It seems that result will become None. This, in turn, is quite impossible: tk.app.call will never return None. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=626936&group_id=5470 From noreply at sourceforge.net Sat Jun 5 18:13:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 18:14:14 2004 Subject: [ python-Bugs-952807 ] segfault in subclassing datetime.date & pickling Message-ID: Bugs item #952807, was opened at 2004-05-12 15:30 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Thomas Wouters (twouters) Assigned to: Nobody/Anonymous (nobody) Summary: segfault in subclassing datetime.date & pickling Initial Comment: datetime.date does not take subclassing into account properly. datetime.date's tp_new has special code for unpickling (the single-string argument) which calls PyObject_New() directly, which doesn't account for the fact that subclasses may participate in cycle-gc (even if datetime.date objects do not.) The result is a segfault in code that unpickles instances of subclasses of datetime.date: import pickle, datetime class mydate(datetime.date): pass s = pickle.dumps(mydate.today()) broken = pickle.loads(s) del broken The 'del broken' is what causes the segfault: the 'mydate' class/type is supposed to participate in GC, but because of datetime.date's shortcut, that part of the object is never initialized (nor allocated, I presume.) The 'broken' instance reaches 0 refcounts, the GC gets triggered and it reads garbage memory. To 'prove' that the problem isn't caused by pickle itself: class mydate(datetime.date): pass broken = mydate('\x07\xd4\x05\x0c') del broken causes the same crash, in the GC code. 
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 18:13 Message: Logged In: YES user_id=31435 I expect that datetime.datetime and datetime.time objects must have the same kind of vulnerability. Jiwon, can you address those too while you're at it? ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 10:57 Message: Logged In: YES user_id=595483 Here is the patch of datetimemodule and test code for it. I just read the summary, and made the datetimemodule patch as is said, and added a testcode for it. *** Modules/datetimemodule.c.orig Sat Jun 5 23:49:26 2004 --- Modules/datetimemodule.c Sat Jun 5 23:47:05 2004 *************** *** 2206,2212 **** { PyDateTime_Date *me; ! me = PyObject_New(PyDateTime_Date, type); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); --- 2206,2212 ---- { PyDateTime_Date *me; ! me = (PyDateTime_Date *) (type->tp_alloc(type, 0)); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); test code patch *** Lib/test/test_datetime.py.orig Sat Jun 5 23:49:44 2004 --- Lib/test/test_datetime.py Sat Jun 5 23:52:52 2004 *************** *** 510,515 **** --- 510,517 ---- dt2 = dt - delta self.assertEqual(dt2, dt - days) + class SubclassDate(date): pass + class TestDate(HarmlessMixedComparison): # Tests here should pass for both dates and datetimes, except for a # few tests that TestDateTime overrides. *************** *** 1028,1033 **** --- 1030,1044 ---- self.assertEqual(dt2.extra, 7) self.assertEqual(dt1.toordinal(), dt2.toordinal()) self.assertEqual(dt2.newmeth(-7), dt1.year + dt1.month - 7) + + def test_pickling_subclass_date(self): + + args = 6, 7, 23 + orig = SubclassDate(*args) + for pickler, unpickler, proto in pickle_choices: + green = pickler.dumps(orig, proto) + derived = unpickler.loads(green) + self.assertEqual(orig, derived) def test_backdoor_resistance(self): # For fast unpickling, the constructor accepts a pickle string. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 From noreply at sourceforge.net Sat Jun 5 18:17:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 18:17:45 2004 Subject: [ python-Bugs-690341 ] tarfile fails on MacOS9 Message-ID: Bugs item #690341, was opened at 2003-02-20 17:21 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=690341&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Out of Date Priority: 3 Submitted By: Jack Jansen (jackjansen) Assigned to: Nobody/Anonymous (nobody) Summary: tarfile fails on MacOS9 Initial Comment: test_tarfile fails in MacPython-OS9. The problem seems to be pathname-related, and I'm not sure whether the problem is with the test or with the tarfile module itself. Various of the tests (test_seek, test_readlines) expect to find a file called, for example, "0-REGTYPE-TEXT", while the archive holds a file "/0-REGTYPE-TEXT". Apparently the tarfile module has some support for os.sep being different than "/", but not enough. An added problem (or maybe the only problem?) may be that on the mac there are no files at the "root", only directories (disks, actually). 
So while "/tmp" is representable as "tmp:" this doesn't really work for files as they would be interpreted as directories. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 18:17 Message: Logged In: YES user_id=11375 Given that MacOS 9 support is gone, I'll close this bug. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2003-03-07 08:51 Message: Logged In: YES user_id=45365 The problem turns out to be deeper: the way tarfile handles pathnames needs work (and probably a lot of it). There is an assumption all over the code that you can convert unix pathnames to native pathnames by simply substitution / with os.path.sep. This will not fly for the Mac: /unix/path/name -> unix:path:name and relative/unix/path -> :relative:unix:path. I've rigger tarfile.py to raise ImportError on the mac, changed the summary of this bug and left it open/unassigned with a lower priority. Maybe someone finds the time to fix it, some time. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-02-23 17:05 Message: Logged In: YES user_id=33168 Jack, I don't really know how to fix these. My guess is that doing something like os.path.normpath or abspath on the names would help. The code in test_seek and test_readlines does this: file(os.path.join(dirname(), filename), "r") Could that be changed to: file(os.path.normpath(os.path.join(dirname(), filename)), "r") Perhaps, there should be a utility function to get the normalized name? I can try to make the changes, but can't build or test on MacOS 9. If you can suggest some changes, I can make a patch for you to try. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=690341&group_id=5470 From noreply at sourceforge.net Sat Jun 5 19:11:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 19:11:59 2004 Subject: [ python-Bugs-967334 ] Cmd in thread segfaults after Ctrl-C Message-ID: Bugs item #967334, was opened at 2004-06-05 16:11 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967334&group_id=5470 Category: Threads Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Kevin M. Turner (acapnotic) Assigned to: Nobody/Anonymous (nobody) Summary: Cmd in thread segfaults after Ctrl-C Initial Comment: With Cmd.cmdloop running in a thread, saying Ctrl-C will make Python segfault. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967334&group_id=5470 From noreply at sourceforge.net Sat Jun 5 19:18:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 19:18:12 2004 Subject: [ python-Bugs-966431 ] import x.y inside of module x.y Message-ID: Bugs item #966431, was opened at 2004-06-04 19:58 Message generated for change (Comment added) made by jiwon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966431&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: import x.y inside of module x.y Initial Comment: To get to the module object from the body of the module itself, the usual trick is to import it from itself, as in: x.py: import x do_stuff_with(x) This fails strangely if x is in a package: package/x.py: import package.x do_stuff_with(package.x) The last line triggers an AttributeError: 'module' object has no attribute 'x'. In other words, the import succeeds but the expression 'package.x' still isn't valid after it. ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-06 08:18 Message: Logged In: YES user_id=595483 The error seems to be due to the calling sequence of add_submodule and loadmodule in import.c:import_submodule. If load_module(..) is called after add_submodule(...) gets called, the above does not trigger Attribute Error. I made a patch that does it, but there is a problem... Currently, when import produces errors, sys.modules have the damaged module, but the patch does not. (That's why it cannot pass the test_pkgimport.py unittest, I think.) Someone who knows more about import.c could fix the patch to behave like that. The patch is in http://seojiwon.dnip.net:8000/~jiwon/tmp/import.diff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966431&group_id=5470 From noreply at sourceforge.net Sat Jun 5 20:07:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 20:07:53 2004 Subject: [ python-Bugs-952807 ] segfault in subclassing datetime.date & pickling Message-ID: Bugs item #952807, was opened at 2004-05-13 04:30 Message generated for change (Comment added) made by jiwon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Thomas Wouters (twouters) Assigned to: Nobody/Anonymous (nobody) Summary: segfault in subclassing datetime.date & pickling Initial Comment: datetime.date does not take subclassing into account properly. datetime.date's tp_new has special code for unpickling (the single-string argument) which calls PyObject_New() directly, which doesn't account for the fact that subclasses may participate in cycle-gc (even if datetime.date objects do not.) 
The result is a segfault in code that unpickles instances of subclasses of datetime.date: import pickle, datetime class mydate(datetime.date): pass s = pickle.dumps(mydate.today()) broken = pickle.loads(s) del broken The 'del broken' is what causes the segfault: the 'mydate' class/type is supposed to participate in GC, but because of datetime.date's shortcut, that part of the object is never initialized (nor allocated, I presume.) The 'broken' instance reaches 0 refcounts, the GC gets triggered and it reads garbage memory. To 'prove' that the problem isn't caused by pickle itself: class mydate(datetime.date): pass broken = mydate('\x07\xd4\x05\x0c') del broken causes the same crash, in the GC code. ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-06 09:07 Message: Logged In: YES user_id=595483 It was as you expected. =^) I fixed it in the same way. Here is the patch. http://seojiwon.dnip.net:8000/~jiwon/tmp/datetime.diff (too long to copy&paste here) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 07:13 Message: Logged In: YES user_id=31435 I expect that datetime.datetime and datetime.time objects must have the same kind of vulnerability. Jiwon, can you address those too while you're at it? ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 23:57 Message: Logged In: YES user_id=595483 Here is the patch of datetimemodule and test code for it. I just read the summary, and made the datetimemodule patch as is said, and added a testcode for it. *** Modules/datetimemodule.c.orig Sat Jun 5 23:49:26 2004 --- Modules/datetimemodule.c Sat Jun 5 23:47:05 2004 *************** *** 2206,2212 **** { PyDateTime_Date *me; ! me = PyObject_New(PyDateTime_Date, type); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); --- 2206,2212 ---- { PyDateTime_Date *me; ! me = (PyDateTime_Date *) (type->tp_alloc(type, 0)); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); test code patch *** Lib/test/test_datetime.py.orig Sat Jun 5 23:49:44 2004 --- Lib/test/test_datetime.py Sat Jun 5 23:52:52 2004 *************** *** 510,515 **** --- 510,517 ---- dt2 = dt - delta self.assertEqual(dt2, dt - days) + class SubclassDate(date): pass + class TestDate(HarmlessMixedComparison): # Tests here should pass for both dates and datetimes, except for a # few tests that TestDateTime overrides. *************** *** 1028,1033 **** --- 1030,1044 ---- self.assertEqual(dt2.extra, 7) self.assertEqual(dt1.toordinal(), dt2.toordinal()) self.assertEqual(dt2.newmeth(-7), dt1.year + dt1.month - 7) + + def test_pickling_subclass_date(self): + + args = 6, 7, 23 + orig = SubclassDate(*args) + for pickler, unpickler, proto in pickle_choices: + green = pickler.dumps(orig, proto) + derived = unpickler.loads(green) + self.assertEqual(orig, derived) def test_backdoor_resistance(self): # For fast unpickling, the constructor accepts a pickle string. 
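A quick end-to-end check of the scenario in this report, separate from the test-suite patch above (the class name is arbitrary): on an unpatched interpreter the forced collection after the pickle round trip is where the uninitialized GC header crashes; with the tp_alloc fix it prints "ok".

import datetime, gc, pickle

class MyDate(datetime.date):
    pass

orig = MyDate.today()
copy = pickle.loads(pickle.dumps(orig))
assert type(copy) is MyDate and copy == orig
del copy
gc.collect()          # the crash, when present, happens during collection
print "ok"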
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 From noreply at sourceforge.net Sat Jun 5 21:18:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 21:20:40 2004 Subject: [ python-Bugs-962633 ] macfs and macostools tests fail on UFS Message-ID: Bugs item #962633, was opened at 2004-05-29 03:41 Message generated for change (Comment added) made by mondragon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962633&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Bryan Blackburn (blb) Assigned to: Nick Bastin (mondragon) Summary: macfs and macostools tests fail on UFS Initial Comment: Two Mac-specific tests (macfs and macostools) will fail if 'make test' is run on a UFS volume. macfs fails since SetDates() doesn't affect all timestamps for UFS like it does on HFS+. This causes GetDates() to return unexpected values, hence the failure. macostools fails for similar reasons, but related to forks, since UFS doesn't have any. Not sure if this is something to be fixed (few use UFS) or should simply be pointed out in documentation. ---------------------------------------------------------------------- >Comment By: Nick Bastin (mondragon) Date: 2004-06-05 21:18 Message: Logged In: YES user_id=430343 Modified the macfs and macostools doc to reflect various amounts of workingness on UFS partitions... :-) ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-05 17:26 Message: Logged In: YES user_id=430343 I'm moving this to doc and mentioning it there. MacOS X generally doesn't work on UFS, and it's unclear as to whether apple will continue to support it anyhow, so I'm not sure we should spend the time fixing this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962633&group_id=5470 From noreply at sourceforge.net Sat Jun 5 23:30:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 5 23:31:09 2004 Subject: [ python-Bugs-952807 ] segfault in subclassing datetime.date & pickling Message-ID: Bugs item #952807, was opened at 2004-05-12 15:30 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Thomas Wouters (twouters) Assigned to: Nobody/Anonymous (nobody) Summary: segfault in subclassing datetime.date & pickling Initial Comment: datetime.date does not take subclassing into account properly. datetime.date's tp_new has special code for unpickling (the single-string argument) which calls PyObject_New() directly, which doesn't account for the fact that subclasses may participate in cycle-gc (even if datetime.date objects do not.) 
The result is a segfault in code that unpickles instances of subclasses of datetime.date: import pickle, datetime class mydate(datetime.date): pass s = pickle.dumps(mydate.today()) broken = pickle.loads(s) del broken The 'del broken' is what causes the segfault: the 'mydate' class/type is supposed to participate in GC, but because of datetime.date's shortcut, that part of the object is never initialized (nor allocated, I presume.) The 'broken' instance reaches 0 refcounts, the GC gets triggered and it reads garbage memory. To 'prove' that the problem isn't caused by pickle itself: class mydate(datetime.date): pass broken = mydate('\x07\xd4\x05\x0c') del broken causes the same crash, in the GC code. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-05 23:30 Message: Logged In: YES user_id=31435 Thank you! I'm attaching your patch to this report. ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 20:07 Message: Logged In: YES user_id=595483 It was as you expected. =^) I fixed it in the same way. Here is the patch. http://seojiwon.dnip.net:8000/~jiwon/tmp/datetime.diff (too long to copy&paste here) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 18:13 Message: Logged In: YES user_id=31435 I expect that datetime.datetime and datetime.time objects must have the same kind of vulnerability. Jiwon, can you address those too while you're at it? ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 10:57 Message: Logged In: YES user_id=595483 Here is the patch of datetimemodule and test code for it. I just read the summary, and made the datetimemodule patch as is said, and added a testcode for it. *** Modules/datetimemodule.c.orig Sat Jun 5 23:49:26 2004 --- Modules/datetimemodule.c Sat Jun 5 23:47:05 2004 *************** *** 2206,2212 **** { PyDateTime_Date *me; ! me = PyObject_New(PyDateTime_Date, type); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); --- 2206,2212 ---- { PyDateTime_Date *me; ! me = (PyDateTime_Date *) (type->tp_alloc(type, 0)); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); test code patch *** Lib/test/test_datetime.py.orig Sat Jun 5 23:49:44 2004 --- Lib/test/test_datetime.py Sat Jun 5 23:52:52 2004 *************** *** 510,515 **** --- 510,517 ---- dt2 = dt - delta self.assertEqual(dt2, dt - days) + class SubclassDate(date): pass + class TestDate(HarmlessMixedComparison): # Tests here should pass for both dates and datetimes, except for a # few tests that TestDateTime overrides. *************** *** 1028,1033 **** --- 1030,1044 ---- self.assertEqual(dt2.extra, 7) self.assertEqual(dt1.toordinal(), dt2.toordinal()) self.assertEqual(dt2.newmeth(-7), dt1.year + dt1.month - 7) + + def test_pickling_subclass_date(self): + + args = 6, 7, 23 + orig = SubclassDate(*args) + for pickler, unpickler, proto in pickle_choices: + green = pickler.dumps(orig, proto) + derived = unpickler.loads(green) + self.assertEqual(orig, derived) def test_backdoor_resistance(self): # For fast unpickling, the constructor accepts a pickle string. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 From noreply at sourceforge.net Sun Jun 6 07:09:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 07:10:00 2004 Subject: [ python-Bugs-964861 ] Cookie module does not parse date Message-ID: Bugs item #964861, was opened at 2004-06-02 09:02 Message generated for change (Comment added) made by manlioperillo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: A.M. Kuchling (akuchling) Summary: Cookie module does not parse date Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. The standard Cookie module does not parse the date string. Here is an example: >>> import Cookie >>> s = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT' >>> c = Cookie.BaseCookie(s) >>> print c Set-Cookie: key=value; expires=Fri,; Path=/; In the attached file I have reported the correct (I think) regex pattern. Thanks and Regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-06 11:09 Message: Logged In: YES user_id=1054957 insomnike wrote that RFC2109 requires double quotes. This is right, but many servers follow the Netscape spec. Moreover (as I can see) the same Cookie module follows the Netscape date spec! >>> import Cookie >>> c = Cookie.SimpleCookie() >>> c['key'] = 'value' >>> c['key']['expires'] = 10 >>> c.output() 'Set-Cookie: key=value; expires=Sun, 06-Jun-2004 10:36:24 GMT;' >>> s = c.output() >>> nc = Cookie.SimpleCookie(s) >>> nc.output() 'Set-Cookie: key=value; expires=Sun,;' Thanks and regards Manlio Perillo ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 20:50 Message: Logged In: YES user_id=11375 Closing as 'not a bug'. This decision could be reversed if there's some common application or software that returns cookies without quoting the date properly. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:43 Message: Logged In: YES user_id=1057404 This bug is in error; RFC2109 specifies the BNF grammar as: av-pairs = av-pair *(";" av-pair) av-pair = attr ["=" value] ; optional value attr = token value = word word = token | quoted-string If you surround the date in double quotes, as per the RFC, then the above works correctly.
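The two behaviours side by side, as a short check against the 2.3-era module (the cookie contents are the ones from the report): the unquoted Netscape-style date is cut off at the comma, while the double-quoted RFC 2109 form survives parsing.

import Cookie

raw = 'Set-Cookie: key=value; path=/; expires=Fri, 21-May-2004 10:40:51 GMT'
quoted = 'Set-Cookie: key=value; path=/; expires="Fri, 21-May-2004 10:40:51 GMT"'

print Cookie.SimpleCookie(raw).output()      # expires=Fri,  -- date truncated
print Cookie.SimpleCookie(quoted).output()   # full date preserved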
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964861&group_id=5470 From noreply at sourceforge.net Sun Jun 6 07:15:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 07:16:00 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 09:28 Message generated for change (Comment added) made by manlioperillo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: A.M. Kuchling (akuchling) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-06 11:15 Message: Logged In: YES user_id=1054957 tim_one wrote that mapping an empty file as a size-0 file isn't a sane thing to do anyway. I think this is not really true. mmap has a resize method, so, in theory, one can map a size-0 file and let it grow. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 21:46 Message: Logged In: YES user_id=31435 The patch looks good. On American Windows, the cryptic error msg is: "WindowsError: [Errno 1006] The volume for a file has been externally altered so that the opened file is no longer valid" Can't find any MS docs on this condition. Then again, mapping an empty file *as* a size-0 file isn't a sane thing to do anyway. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 18:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. 
Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Sun Jun 6 12:09:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 12:09:19 2004 Subject: [ python-Bugs-967657 ] PyInt_FromString failed with certain hex/oct Message-ID: Bugs item #967657, was opened at 2004-06-06 16:09 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Qian Wenjie (qwj) Assigned to: Nobody/Anonymous (nobody) Summary: PyInt_FromString failed with certain hex/oct Initial Comment: When numbers are 0x80000000 through 0xffffffff and 020000000000 through 037777777777, it will translate into negative. Example: >>> 030000000000 -1073741824 >>> int('030000000000',0) -1073741824 patches to Python 2.3.4: Python/compile.c 1259c1259 < x = (long) PyOS_strtoul(s, &end, 0); --- > x = (long) PyOS_strtol(s, &end, 0); Objects/intobject.c 293c293 < x = (long) PyOS_strtoul(s, &end, base); --- > x = (long) PyOS_strtol(s, &end, base); ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 From noreply at sourceforge.net Sun Jun 6 12:14:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 12:14:17 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 05:28 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: A.M. Kuchling (akuchling) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-06 12:14 Message: Logged In: YES user_id=31435 No, on Windows it's not sane. While the Python mmap module has a .resize() method, Microsoft's file mapping API does not -- .resize() on Windows is accomplished by throwing away the current mapping and creating a brand new one. On non-Windows systems, it's not guaranteed that Python can do a .resize() at all (it depends on whether the platform C supports mremap() -- if it doesn't, you get a SystemError mmap: resizing not available--no mremap() exception). So "in theory" ignores too much of reality to take seriously. 
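A workaround sketch for that failure (the file name is made up): since Windows cannot map a zero-length region, grow the file to at least one byte first and map that explicitly instead of passing length 0.

import mmap, os

fd = os.open("scratch.dat", os.O_RDWR | os.O_CREAT | os.O_BINARY)
try:
    if os.fstat(fd).st_size == 0:
        os.write(fd, "\0")                 # give the mapping something to cover
    buf = mmap.mmap(fd, 1, access=mmap.ACCESS_WRITE)
    buf[0] = "x"
    buf.close()
finally:
    os.close(fd)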
---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-06 07:15 Message: Logged In: YES user_id=1054957 tim_one wrote that mapping an empty file as a size-0 file isn't a sane thing to do anyway. I think this is not really true. mmap has a resize method, so, in theory, one can map a size-0 file and let it grow. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 17:46 Message: Logged In: YES user_id=31435 The patch looks good. On American Windows, the cryptic error msg is: "WindowsError: [Errno 1006] The volume for a file has been externally altered so that the opened file is no longer valid" Can't find any MS docs on this condition. Then again, mapping an empty file *as* a size-0 file isn't a sane thing to do anyway. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 14:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Sun Jun 6 12:56:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 12:57:00 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 05:28 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 >Category: Documentation >Group: Platform-specific >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Manlio Perillo (manlioperillo) >Assigned to: Tim Peters (tim_one) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. 
Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-06 12:56 Message: Logged In: YES user_id=31435 Clarified the docs, on HEAD and 2.3 maint: Doc/lib/libmmap.tex new revisions 1.8.24.1 and 1.9. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 12:14 Message: Logged In: YES user_id=31435 No, on Windows it's not sane. While the Python mmap module has a .resize() method, Microsoft's file mapping API does not -- .resize() on Windows is accomplished by throwing away the current mapping and creating a brand new one. On non-Windows systems, it's not guaranteed that Python can do a .resize() at all (it depends on whether the platform C supports mremap() -- if it doesn't, you get a SystemError mmap: resizing not available--no mremap() exception). So "in theory" ignores too much of reality to take seriously. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-06 07:15 Message: Logged In: YES user_id=1054957 tim_one wrote that mapping an empty file as a size-0 file isn't a sane thing to do anyway. I think this is not really true. mmap has a resize method, so, in theory, one can map a size-0 file and let it grow. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 17:46 Message: Logged In: YES user_id=31435 The patch looks good. On American Windows, the cryptic error msg is: "WindowsError: [Errno 1006] The volume for a file has been externally altered so that the opened file is no longer valid" Can't find any MS docs on this condition. Then again, mapping an empty file *as* a size-0 file isn't a sane thing to do anyway. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 14:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Sun Jun 6 15:59:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 15:59:33 2004 Subject: [ python-Bugs-881641 ] Error in obmalloc. 
Message-ID: Bugs item #881641, was opened at 2004-01-21 14:39 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=881641&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed Resolution: Rejected Priority: 5 Submitted By: Aki Tossavainen (cmouse) Assigned to: Neal Norwitz (nnorwitz) Summary: Error in obmalloc. Initial Comment: There appears to be some sort of problem with obmalloc.c's PyObject_Free() function. >From valgrind's output: # more python.pid6399 ==6399== Memcheck, a.k.a. Valgrind, a memory error detector for x86-linux. ==6399== Copyright (C) 2002-2003, and GNU GPL'd, by Julian Seward. ==6399== Using valgrind-2.0.0, a program supervision framework for x86-linux. ==6399== Copyright (C) 2000-2003, and GNU GPL'd, by Julian Seward. ==6399== ==6399== My PID = 6399, parent PID = 19903. Prog and args are: ==6399== python ==6399== Estimated CPU clock rate is 1100 MHz ==6399== For more details, rerun with: -v ==6399== ==6399== Conditional jump or move depends on uninitialised value(s) ==6399== at 0x4A9BDE74: PyObject_Free (obmalloc.c:711) ==6399== by 0x4A9B8198: dictresize (dictobject.c:477) ==6399== by 0x4A9C1B90: PyString_InternInPlace (stringobject.c:4139) ==6399== by 0x4A9C1C42: PyString_InternFromString (stringobject.c:4169) ==6399== by 0x4A9CBE27: add_operators (typeobject.c:5200) ==6399== by 0x4A9C999C: PyType_Ready (typeobject.c:3147) ==6399== by 0x4A9C9DAF: PyType_Ready (typeobject.c:3115) ==6399== by 0x4A9BCFCF: _Py_ReadyTypes (object.c:1961) ==6399== by 0x4AA1C78B: Py_Initialize (pythonrun.c:172) ==6399== by 0x4AA252AB: Py_Main (main.c:370) ==6399== by 0x8048708: main (ccpython.cc:10) ==6399== by 0x4ABA391D: __libc_start_main (libc- start.c:152) ==6399== by 0x8048630: (within /usr/bin/python) ==6399== Use of uninitialised value of size 4 ==6399== at 0x4A9BDE7E: PyObject_Free (obmalloc.c:711) ==6399== by 0x4A9B8198: dictresize (dictobject.c:477) ==6399== by 0x4A9C1B90: PyString_InternInPlace (stringobject.c:4139) ==6399== by 0x4A9C1C42: PyString_InternFromString (stringobject.c:4169) ==6399== by 0x4A9CBE27: add_operators (typeobject.c:5200) ==6399== by 0x4A9C999C: PyType_Ready (typeobject.c:3147) ==6399== by 0x4A9C9DAF: PyType_Ready (typeobject.c:3115) ==6399== by 0x4A9BCFCF: _Py_ReadyTypes (object.c:1961) ==6399== by 0x4AA1C78B: Py_Initialize (pythonrun.c:172) ==6399== by 0x4AA252AB: Py_Main (main.c:370) ==6399== by 0x8048708: main (ccpython.cc:10) ==6399== by 0x4ABA391D: __libc_start_main (libc- start.c:152) ==6399== by 0x8048630: (within /usr/bin/python) ==6399== Invalid read of size 4 ==6399== at 0x4A9BDE6F: PyObject_Free (obmalloc.c:711) ==6399== by 0x4A9D3564: pmerge (typeobject.c:1177) ==6399== by 0x4A9CCEFC: mro_implementation (typeobject.c:1248) ==6399== by 0x4A9C85A7: mro_internal (typeobject.c:1272) ==6399== by 0x4A9C9A3A: PyType_Ready (typeobject.c:3163) ==6399== by 0x4A9BD031: _Py_ReadyTypes (object.c:1976) ==6399== by 0x4AA1C78B: Py_Initialize (pythonrun.c:172) ==6399== by 0x4AA252AB: Py_Main (main.c:370) ==6399== by 0x8048708: main (ccpython.cc:10) ==6399== by 0x4ABA391D: __libc_start_main (libc- start.c:152) ==6399== by 0x8048630: (within /usr/bin/python) ==6399== Address 0x4BEAC010 is not stack'd, malloc'd or free'd and so on and so on. You can find valgrind from http://valgrind.kde.org/ These lines show up in the log when I do valgrind --num-callers=20 --logfile=python python My version of Python is 2.3.2. 
You can find the original logfile attached. All I did was start python and hit 'ctrl+d' ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-06 15:59 Message: Logged In: YES user_id=33168 Checked in some fixes to improve use of valgrind. Most importantly, you should read Misc/README.valgrind * Objects/obmalloc.c 2.52 and 2.53 * Misc/README.valgrind 1.1 * Misc/valgrind-python.supp 1.1 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-01-22 11:11 Message: Logged In: YES user_id=33168 Heh, I recently found I didn't check in my valgrind patch. I don't know what happened. I will do this when CVS comes back. Unfortunately, it seems SF has been having CVS problems (again) for a while now. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-01-22 11:06 Message: Logged In: YES user_id=6656 Hmm, the patch in the mail i referred you too never seems to have got checked in, despite Tim telling Neal to :-) Neal? (can't do anything now, SF developer CVS is down) ---------------------------------------------------------------------- Comment By: Aki Tossavainen (cmouse) Date: 2004-01-22 10:53 Message: Logged In: YES user_id=198027 Could you put into somewhere some suppression file you can use with valgrind... so that no patching is required. If it's not a real problem then it could be just suppressed. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-01-22 06:01 Message: Logged In: YES user_id=6656 Well, yeah, pymalloc engages in behaviour that is not in strict conformance with the ANSI C standard. We knew this already. As yet, no platform has been reported as having problems with this, though. If you want to use valgrind to look for real problems, reading http://mail.python.org/pipermail/python-dev/2003-July/ 036740.html and the containing thread may be of interest (or you could just turn pymalloc off at configure time). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=881641&group_id=5470 From noreply at sourceforge.net Sun Jun 6 16:07:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 16:07:20 2004 Subject: [ python-Bugs-656392 ] binascii.a2b_base64 with non base64 data Message-ID: Bugs item #656392, was opened at 2002-12-19 12:20 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=656392&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Grzegorz Makarewicz (makaron) >Assigned to: Neal Norwitz (nnorwitz) Summary: binascii.a2b_base64 with non base64 data Initial Comment: python 2.2.2 or cvs, platform any binascii.a2b_base64 allocates buffer for data at startup, at end data it truncated to decoded size if it is bigger than 0, but what about invalid data where every character is non base64 - space, \r,\n ? 
Buffer remains allocated to bin_len but resulting data length is 0 and it isnt truncated as random data will be returned demo.py import base64 data = '\n' result = base64.decodestring(data) print map(ord,result) ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-06 16:07 Message: Logged In: YES user_id=33168 This problem appears to be fixed in Python 2.3+. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=656392&group_id=5470 From noreply at sourceforge.net Sun Jun 6 17:36:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 17:36:14 2004 Subject: [ python-Feature Requests-935915 ] os.nullfilename Message-ID: Feature Requests item #935915, was opened at 2004-04-16 05:44 Message generated for change (Comment added) made by jbelmonte You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=935915&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: John Belmonte (jbelmonte) Assigned to: Nobody/Anonymous (nobody) Summary: os.nullfilename Initial Comment: Just as the os library provides os.sep, etc., for the current OS, it should provide the name of the null file (e.g., "/dev/null" or "nul:"), so that there is a portable way to open a null file. Use of an object such as class nullFile: def write(self, data): pass is not sufficient because it does not provide a full file object interface (no access to file descriptor, etc.). See discussion at . ---------------------------------------------------------------------- >Comment By: John Belmonte (jbelmonte) Date: 2004-06-07 06:36 Message: Logged In: YES user_id=282299 Please mark this as a patch and consider for commit. ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-05-30 08:36 Message: Logged In: YES user_id=282299 Attaching patch against Python HEAD, including docs and test. ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-05-21 23:46 Message: Logged In: YES user_id=282299 I do intend to make a patch, but it may be some time before I get to it. Please give me a few weeks. If someone else would like to step in, that is fine, just let me know before you start the work. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-21 15:00 Message: Logged In: YES user_id=80475 Consider mentioning this on comp.lang.python. Perhaps someone will volunteer to write a patch. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-05-10 01:08 Message: Logged In: YES user_id=21627 Would you like to work on a patch? ---------------------------------------------------------------------- Comment By: David Albert Torpey (dtorp) Date: 2004-05-09 10:54 Message: Logged In: YES user_id=681258 I like this idea. It is more portable. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2004-04-16 05:52 Message: Logged In: YES user_id=21627 Move to feature requests tracker ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=935915&group_id=5470 From noreply at sourceforge.net Sun Jun 6 18:00:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 18:00:48 2004 Subject: [ python-Feature Requests-935915 ] os.nullfilename Message-ID: Feature Requests item #935915, was opened at 2004-04-15 22:44 Message generated for change (Settings changed) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=935915&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: John Belmonte (jbelmonte) >Assigned to: Martin v. L?wis (loewis) Summary: os.nullfilename Initial Comment: Just as the os library provides os.sep, etc., for the current OS, it should provide the name of the null file (e.g., "/dev/null" or "nul:"), so that there is a portable way to open a null file. Use of an object such as class nullFile: def write(self, data): pass is not sufficient because it does not provide a full file object interface (no access to file descriptor, etc.). See discussion at . ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-06-06 23:36 Message: Logged In: YES user_id=282299 Please mark this as a patch and consider for commit. ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-05-30 01:36 Message: Logged In: YES user_id=282299 Attaching patch against Python HEAD, including docs and test. ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-05-21 16:46 Message: Logged In: YES user_id=282299 I do intend to make a patch, but it may be some time before I get to it. Please give me a few weeks. If someone else would like to step in, that is fine, just let me know before you start the work. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-21 08:00 Message: Logged In: YES user_id=80475 Consider mentioning this on comp.lang.python. Perhaps someone will volunteer to write a patch. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-05-09 18:08 Message: Logged In: YES user_id=21627 Would you like to work on a patch? ---------------------------------------------------------------------- Comment By: David Albert Torpey (dtorp) Date: 2004-05-09 03:54 Message: Logged In: YES user_id=681258 I like this idea. It is more portable. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2004-04-15 22:52 Message: Logged In: YES user_id=21627 Move to feature requests tracker ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=935915&group_id=5470 From noreply at sourceforge.net Sun Jun 6 19:29:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 19:29:35 2004 Subject: [ python-Bugs-953177 ] cgi module documentation could mention getlist Message-ID: Bugs item #953177, was opened at 2004-05-13 07:15 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953177&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Richard Jones (richard) Assigned to: A.M. Kuchling (akuchling) Summary: cgi module documentation could mention getlist Initial Comment: Section "11.2.2 Using the cgi module" at http://www.python.org/dev/doc/devel/lib/node411.html has a discussion about how the module handles multiple values with the same name. It even presents a section of code describing how to handle the situation. It could alternatively just make mention of its own getlist() method. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-06 19:29 Message: Logged In: YES user_id=11375 Committed to CVS HEAD and to 2.3 branch. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-06-05 16:22 Message: Logged In: YES user_id=11375 Ready to check in the fix, but SF CVS is down. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 12:00 Message: Logged In: YES user_id=113328 The following patch seems to be what is required (inline because I can't upload files :-() Index: lib/libcgi.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libcgi.tex,v retrieving revision 1.43 diff -u -r1.43 libcgi.tex --- lib/libcgi.tex 23 Jan 2004 04:05:27 -0000 1.43 +++ lib/libcgi.tex 5 Jun 2004 15:59:45 -0000 @@ -135,19 +135,14 @@ \samp{form.getvalue(\var{key})} would return a list of strings. If you expect this possibility (when your HTML form contains multiple fields with the same name), use -the \function{isinstance()} built-in function to determine whether you -have a single instance or a list of instances. For example, this +the \function{getlist()}, which always returns a list of values (so that you +do not need to special-case the single item case). 
For example, this code concatenates any number of username fields, separated by commas: \begin{verbatim} -value = form.getvalue("username", "") -if isinstance(value, list): - # Multiple username fields specified - usernames = ",".join(value) -else: - # Single or no username field specified - usernames = value +values = form.getlist("username") +usernames = ",".join(values) \end{verbatim} If a field represents an uploaded file, accessing the value via the ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953177&group_id=5470 From noreply at sourceforge.net Sun Jun 6 23:07:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 23:08:05 2004 Subject: [ python-Bugs-967657 ] PyInt_FromString failed with certain hex/oct Message-ID: Bugs item #967657, was opened at 2004-06-06 12:09 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Qian Wenjie (qwj) Assigned to: Nobody/Anonymous (nobody) Summary: PyInt_FromString failed with certain hex/oct Initial Comment: When numbers are 0x80000000 through 0xffffffff and 020000000000 through 037777777777, it will translate into negative. Example: >>> 030000000000 -1073741824 >>> int('030000000000',0) -1073741824 patches to Python 2.3.4: Python/compile.c 1259c1259 < x = (long) PyOS_strtoul(s, &end, 0); --- > x = (long) PyOS_strtol(s, &end, 0); Objects/intobject.c 293c293 < x = (long) PyOS_strtoul(s, &end, base); --- > x = (long) PyOS_strtol(s, &end, base); ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-06 23:07 Message: Logged In: YES user_id=31435 Python is supposed to act this way in 2.3. It's supposed to act the way you want in 2.4. You didn't say which version of Python you're using. If you used 2.3.4, I'm surprised your output didn't contain messages warning that this behavior is going to change: Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 030000000000 :1: FutureWarning: hex/oct constants > sys.maxint will return positive values in Python 2.4 and up -1073741824 >>> int('030000000000',0) __main__:1: FutureWarning: int('0...', 0): sign will change in Python 2.4 -1073741824 >>> Which version of Python were you using, and under which OS? 
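As an aside for readers hitting the same warnings: under 2.3 the forward-compatible (positive) result can already be obtained by spelling the constant as a long, or by converting the string with long() instead of int(), and the FutureWarning can be filtered once code has been audited for the coming sign change. The snippet below only illustrates the points made in the comment above; it is not part of the tracker item.

    import warnings

    # A trailing "L" gives the positive long value under 2.3, which is
    # what the plain literal will mean in 2.4.
    assert 030000000000L == 3221225472L

    # long() follows the same base-0 prefix rules as int() but never
    # wraps to a negative value, so it can stand in for int(s, 0).
    assert long('030000000000', 0) == 3221225472L

    # Once code has been audited for the sign change, the FutureWarning
    # can be silenced (illustrative filter only).
    warnings.filterwarnings('ignore', category=FutureWarning)
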
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 From noreply at sourceforge.net Sun Jun 6 23:17:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 23:17:09 2004 Subject: [ python-Bugs-967657 ] PyInt_FromString failed with certain hex/oct Message-ID: Bugs item #967657, was opened at 2004-06-06 16:09 Message generated for change (Comment added) made by qwj You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Qian Wenjie (qwj) Assigned to: Nobody/Anonymous (nobody) Summary: PyInt_FromString failed with certain hex/oct Initial Comment: When numbers are 0x80000000 through 0xffffffff and 020000000000 through 037777777777, it will translate into negative. Example: >>> 030000000000 -1073741824 >>> int('030000000000',0) -1073741824 patches to Python 2.3.4: Python/compile.c 1259c1259 < x = (long) PyOS_strtoul(s, &end, 0); --- > x = (long) PyOS_strtol(s, &end, 0); Objects/intobject.c 293c293 < x = (long) PyOS_strtoul(s, &end, base); --- > x = (long) PyOS_strtol(s, &end, base); ---------------------------------------------------------------------- >Comment By: Qian Wenjie (qwj) Date: 2004-06-07 03:17 Message: Logged In: YES user_id=1057975 I am wondering why should we wait for python 2.4 to fix this bug. It just costs two lines changes. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 03:07 Message: Logged In: YES user_id=31435 Python is supposed to act this way in 2.3. It's supposed to act the way you want in 2.4. You didn't say which version of Python you're using. If you used 2.3.4, I'm surprised your output didn't contain messages warning that this behavior is going to change: Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 030000000000 :1: FutureWarning: hex/oct constants > sys.maxint will return positive values in Python 2.4 and up -1073741824 >>> int('030000000000',0) __main__:1: FutureWarning: int('0...', 0): sign will change in Python 2.4 -1073741824 >>> Which version of Python were you using, and under which OS? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 From noreply at sourceforge.net Sun Jun 6 23:27:51 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 23:27:59 2004 Subject: [ python-Bugs-967657 ] PyInt_FromString failed with certain hex/oct Message-ID: Bugs item #967657, was opened at 2004-06-06 12:09 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Qian Wenjie (qwj) Assigned to: Nobody/Anonymous (nobody) Summary: PyInt_FromString failed with certain hex/oct Initial Comment: When numbers are 0x80000000 through 0xffffffff and 020000000000 through 037777777777, it will translate into negative. 
Example: >>> 030000000000 -1073741824 >>> int('030000000000',0) -1073741824 patches to Python 2.3.4: Python/compile.c 1259c1259 < x = (long) PyOS_strtoul(s, &end, 0); --- > x = (long) PyOS_strtol(s, &end, 0); Objects/intobject.c 293c293 < x = (long) PyOS_strtoul(s, &end, base); --- > x = (long) PyOS_strtol(s, &end, base); ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-06 23:27 Message: Logged In: YES user_id=31435 It's not a bug -- Python has worked this way for more than a decade, and changing documented behavior is a slow process. This change is part of those discussed in PEP 237, which is in its 3rd year(!) of implementation: http://www.python.org/peps/pep-0237.html Do read the PEP. Costs here aren't implementation effort, they're end-user costs (changes in what Python does require users to change their programs, and that's necessarily a drawn-out process). ---------------------------------------------------------------------- Comment By: Qian Wenjie (qwj) Date: 2004-06-06 23:17 Message: Logged In: YES user_id=1057975 I am wondering why should we wait for python 2.4 to fix this bug. It just costs two lines changes. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 23:07 Message: Logged In: YES user_id=31435 Python is supposed to act this way in 2.3. It's supposed to act the way you want in 2.4. You didn't say which version of Python you're using. If you used 2.3.4, I'm surprised your output didn't contain messages warning that this behavior is going to change: Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 030000000000 :1: FutureWarning: hex/oct constants > sys.maxint will return positive values in Python 2.4 and up -1073741824 >>> int('030000000000',0) __main__:1: FutureWarning: int('0...', 0): sign will change in Python 2.4 -1073741824 >>> Which version of Python were you using, and under which OS? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 From noreply at sourceforge.net Sun Jun 6 23:56:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 6 23:56:29 2004 Subject: [ python-Bugs-701936 ] getsockopt/setsockopt with SO_RCVTIMEO are inconsistent Message-ID: Bugs item #701936, was opened at 2003-03-11 21:54 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=701936&group_id=5470 Category: Python Library Group: Python 2.2.2 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: GaryD (gazzadee) >Assigned to: Neal Norwitz (nnorwitz) Summary: getsockopt/setsockopt with SO_RCVTIMEO are inconsistent Initial Comment: The SO_RCVTIMEO option to getsockopt/setsockopt seems to vary it's parameter format when used under Linux. With setsockopt, the parameter seems to be a struct of {long seconds, long microseconds}, as you would expect since it's modelling a C "struct timeval". However, with getsockopt, the parameter format seems to be {long seconds, long milliseconds} --- ie. it uses milliseconds rather than microseconds. The attached python script demonstrates this problem. Am I doing something crucially wrong, or is this meant to happen, or ... ? 
What I'm using: Python 2.2.2 (#1, Feb 24 2003, 17:36:09) [GCC 3.0.4 (Mandrake Linux 8.2 3.0.4-2mdk)] on linux2 ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-06 23:56 Message: Logged In: YES user_id=33168 Yes, I think this is not appropriate for Python to try to resolve. It would be very difficult to get right for all platforms. Thanks for the reminder. ---------------------------------------------------------------------- Comment By: GaryD (gazzadee) Date: 2004-06-03 22:53 Message: Logged In: YES user_id=693152 What do we do about this? Since it does not seem to be an explciitly python problem, do we just resolve it as "Rejected" or "Wont Fix"? ---------------------------------------------------------------------- Comment By: GaryD (gazzadee) Date: 2003-08-14 00:33 Message: Logged In: YES user_id=693152 Yes, you're right - the same thing happens with C. Here's the output from sockopt.c on my system: base (len 8) - 12, 345670 default (len 8) - 0, 0 after setsockopt (len 8) - 12, 20 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-05-24 22:42 Message: Logged In: YES user_id=33168 I just tested this on my box (Redhat 9/Linux 2.4). I get similar results with a C program as Python. (Not sure why I didn't get exactly the same results, but I'm tired.) So I'm not sure Python has a problem, since it is just exposing what is happening in C. Take a look at the C example and try it on your box. What results do you get? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=701936&group_id=5470 From noreply at sourceforge.net Mon Jun 7 00:46:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 00:47:03 2004 Subject: [ python-Bugs-967934 ] csv module cannot handle embedded \r Message-ID: Bugs item #967934, was opened at 2004-06-07 14:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gregory Bond (gnbond) Assigned to: Nobody/Anonymous (nobody) Summary: csv module cannot handle embedded \r Initial Comment: CSV module cannot handle the case of embedded \r (i.e. carriage return) in a field. As far as I can see, this is hard-coded into the _csv.c file and cannot be fixed with Dialect changes. 
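For readers who want to reproduce the csv report above, a small self-contained round trip follows. The exact failure mode depends on the csv version in use: the write side quotes the field, but the reader is reported above to be unable to recover a bare carriage return, either raising csv.Error or mis-splitting the row. The script is illustrative only and is not attached to the tracker item.

    import csv
    import StringIO

    # Attempt to round-trip a field containing a bare carriage return.
    buf = StringIO.StringIO()
    csv.writer(buf).writerow(['line one\rline two', 'second field'])
    buf.seek(0)
    try:
        print csv.reader(buf).next()
    except csv.Error, err:
        print 'reader failed:', err
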
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 From noreply at sourceforge.net Mon Jun 7 01:01:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 01:02:24 2004 Subject: [ python-Bugs-956408 ] Simplifiy coding in cmd.py Message-ID: Bugs item #956408, was opened at 2004-05-18 20:54 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956408&group_id=5470 Category: None Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Raymond Hettinger (rhettinger) >Assigned to: Neal Norwitz (nnorwitz) Summary: Simplifiy coding in cmd.py Initial Comment: In the cmd.py 1.35 checkin on 2/6/2003, there are many lines like: self.stdout.write("%s\n"%str(header)) I believe the str() call in unnecessary. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:01 Message: Logged In: YES user_id=80475 Neal, is this a simplification you would like to make? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956408&group_id=5470 From noreply at sourceforge.net Mon Jun 7 01:02:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 01:03:30 2004 Subject: [ python-Bugs-967934 ] csv module cannot handle embedded \r Message-ID: Bugs item #967934, was opened at 2004-06-06 23:46 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gregory Bond (gnbond) >Assigned to: Skip Montanaro (montanaro) Summary: csv module cannot handle embedded \r Initial Comment: CSV module cannot handle the case of embedded \r (i.e. carriage return) in a field. As far as I can see, this is hard-coded into the _csv.c file and cannot be fixed with Dialect changes. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:02 Message: Logged In: YES user_id=80475 Skip, does this coincide with your planned switchover to universal newlines? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 From noreply at sourceforge.net Mon Jun 7 01:15:58 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 01:16:07 2004 Subject: [ python-Bugs-926910 ] Overenthusiastic check in Swap? Message-ID: Bugs item #926910, was opened at 2004-03-31 14:34 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=926910&group_id=5470 Category: Threads Group: None Status: Open Resolution: None Priority: 5 Submitted By: benson margulies (benson_basis) >Assigned to: Mark Hammond (mhammond) Summary: Overenthusiastic check in Swap? Initial Comment: When Py_DEBUG is turned on, PyThreadState_Swap calls in a fatal error if the two different thread states are ever used for a thread. I submit that this is not a good check. 
The documentation encourages us to write code that creates and destroys thread states as C threads come and go. Why can't a single C thread make a thread state, release it, and then make another one later? One particular usage pattern: We have an API that initializes embedded python. Then we have another API where the callers are allowed to be in any C thread. The second API has no easy way to tell if a thread used for it happens to be the same thread that made the initialization call. As the code is written now, any code running on the 'main thread' is required to fish out the build-in main-thread thread state. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:15 Message: Logged In: YES user_id=80475 Mark, I believe this is your code. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=926910&group_id=5470 From noreply at sourceforge.net Mon Jun 7 01:29:25 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 01:29:41 2004 Subject: [ python-Bugs-917055 ] add a stronger PRNG Message-ID: Bugs item #917055, was opened at 2004-03-15 21:46 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: add a stronger PRNG Initial Comment: The default Mersenne Twister algorithm in the Random module is very fast but makes no serious attempt to generate output that stands up to adversarial analysis. Besides cryptography applications, this can be a serious problem in areas like computer games. Sites like www.partypoker.com routinely run online tournaments with prize funds of 100K USD or more. There's big financial incentives to find ways of guessing your opponent's cards with better than random chance probability. See bug #901285 for some discussion of possible correlations in Mersenne Twister's output. Teukolsky et al discuss PRNG issues at some length in their book "Numerical Recipes". The original edition of Numerical Recipes had a full blown version of the FIPS Data Encryption Standard implemented horrendously in Fortran, as a way of making a PRNG with no easily discoverable output correlations. Later editions replaced the DES routine with a more efficient one based on similar principles. Python already has an SHA module that makes a dandy PRNG. I coded a sample implementation: http://www.nightsong.com/phr/python/sharandom.py I'd like to ask that the Python lib include something like this as an alternative to MT. It would be similar to the existing whrandom module in that it's an alternative subclass to the regular Random class. The existing Random module wouldn't have to be changed. I don't propose directly including the module above, since I think the Random API should also be extended to allow directly requesting pseudo-random integers from the generator subclass, rather than making them from floating-point output. That would allow making the above subclass work more cleanly. I'll make a separate post about this, but first will have to examine the Random module source code. 
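To make the construction under discussion concrete, here is a minimal sketch of an SHA-1 based generator plugged into the random.Random framework: hash an internal state together with a counter and turn 53 bits of each digest into a float. This is not the sharandom.py code linked above and makes no security claim of its own; it omits getstate(), setstate() and jumpahead(), and the class name SHARandomSketch is purely illustrative.

    import sha
    import random

    class SHARandomSketch(random.Random):
        # Illustrative only: hash-state-plus-counter generator in the
        # spirit of the proposal above, without the full Random API.

        def seed(self, a=None):
            if a is None:
                import time
                a = time.time()
            self.state = sha.new(repr(a)).digest()
            self.count = 0

        def random(self):
            # One SHA-1 evaluation per output; keep 53 bits of the
            # digest so the result fits an IEEE double exactly.
            self.count += 1
            digest = sha.new(self.state + str(self.count)).digest()
            bits53 = long(digest[:7].encode('hex'), 16) >> 3
            return bits53 / (2.0 ** 53)

    rng = SHARandomSketch('example seed')
    print rng.random(), rng.randrange(52)
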
---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:29 Message: Logged In: YES user_id=80475 Can we close this one (while leaving open the patch for an entropy module)? Essentially, it provides nothing that couldn't be contributed as a short recipe for those interested in such things. While an alternate RNG would be nice, turning this into a crypto project is probably not a great idea. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-04-14 00:33 Message: Logged In: YES user_id=72053 trevp, the jumpahead operation lets you stir in new entropy. Jeremy, I'll see if I can write some docs for it, and will attempt a concrete security proof. I don't think we should need to say no references were found for using sha1 as a prng. The randomness assumption is based on the Krawczyk-Bellare-Rogaway result that's cited somewhere down the page or in the clpy thread. I'll include a cite in the doc/rationale. I hope that the entropy module is accepted, assuming it works. The entropy module is quite a bit more important than the deterministic PRNG module, since the application can easily supply the DPRNG but can't always easily supply the entropy module. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-13 23:10 Message: Logged In: YES user_id=973611 I submitted a patch for an entropy module, as was discussed below. See patch #934711. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-11 22:35 Message: Logged In: YES user_id=31392 The current patch doesn't address any of my concerns about documentation or rationale. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-09 23:22 Message: Logged In: YES user_id=973611 We should probably clarify the requirements. If we just want to use SHA1 to produce an RNG suitable for Monte Carlo etc., then we could do something simpler and faster than what we're doing. In particular, there's no need for state update, we could just generate outputs by SHA1(seed + counter). This is mentioned in "Applied Cryptography", 17.14. If we want it to "stand up to adversarial analysis" and be usable for cryptography, then I think we need to put a little more into it - in particular, the ability to mix new randomness into the generator state becomes important, and it becomes preferable to use a block cipher construction, not because the SHA1 construction is insecure, but so we can point to something like Yarrow. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-09 23:01 Message: Logged In: YES user_id=31392 Much earlier in the discussion Raymond wrote: "The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious)." I don't see any documentation of this sort in the current patch. I also think it would be helpful if the documentation made some mention of why this generator would be useful. 
In particular, I suspect some users may by confused by the mention of SHA and be lead to believe that this is CSPRNG, when it is not; perhaps a mention of Yarrow and other approaches for cryptographic applications would be enough to clarify. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-09 22:24 Message: Logged In: YES user_id=80475 Thanks for the detailed comments. 1) Will add the references we have but need to make it clear that this exact implementation is not studied and does not guarantee cryptographic security. 2) The API is clear, seed() overwrites and jumpahead() updates. Besides, the goal is to provide a good alternative random number generator. If someone needs real crypto, they should use that. Tossing in ad hoc pieces to "make it harder" is a sure sign of venturing too far from theoretical underpinnings. 3) Good observation. I don't think a change is necessary. The docs do not promise that asking for 160 gives the same as 96 and 64 back to back. The Mersenne Twister has similar behavior. 4) Let's not gum up the API because we have encryption and text API's on the brain. The module is about random numbers not byte strings. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-09 19:52 Message: Logged In: YES user_id=973611 Hi Raymond, here's some lengthy though not critically important comments (i.e, I'm okay with the latest diff). 1) With regards to this documentation: "No references were found which document the properties of SHA-1 as a random number generator". I can't find anything that documents exactly what we're doing. This type of generator is similar to Bruce Schneier's Yarrow and its offspring (Fortuna and Tiny). However, those use block-ciphers in counter mode, instead of SHA-1. According to the recent Tiny paper: "Cryptographic hash functions can also be a good foundation for a PRNG. Many constructs have used MD5 or SHA1 in this capacity, but the constructions are often ad-hoc. When using a hash function, we would recommend HMAC in CTR mode (i.e., one MACs counter for each successive output block). Ultimately, we prefer the use of block ciphers, as they are generally better-studied constructs." http://www.acsac.org/2003/papers/79.pdf Using HMAC seems like overkill to me, and would slow things down. However, if there's any chance Python will add block ciphers in the future, it might be worth waiting for, so we could implement one of the well-documented block cipher PRNGs. 2) Most cryptographic PRNGs allow for mixing new entropy into the generator state. The idea is you harvest entropy in the background, and once you've got a good chunk (say 128+ bits) you add it in. This makes cryptanalysis of the output harder, and allows you to recover even if the generator state is compromised. We could change the seed() method so it updates the state instead of overwriting it: def __init__(self): self.cnt = 0 self.s0 = '\0' * 20 self.gauss_next = None def seed(self, a=None): if a is None: # Initialize from current time import time a = time.time() b = sha.new(repr(a)).digest() self.s0 = sha.new(self.s0 + b).digest() 'b' may not be necessary, I'm not sure, though it's similar to how some other PRNGs handle seed inputs. If we were using a block cipher PRNG, it would be more obvious how to do this. jumpahead() could also be used instead of seed(). 
3) The current generators (not just SHA1Random) won't always return the same sequence of bits from the same state. For example, if I call SHA1Random.getrandbits() asking for 160 bits they'll come from the same block, but if I ask for 96 and 64 bits, they'll come from different blocks. I suggest buffering the output, so getting 160 bits or 96+64 gets the same bits. Changing getrandbits() to getrandbytes () would avoid the need for bit-level buffering. 4) I still think a better interface would only require new generators to return a byte string. That would be easier for SHA1Random, and easier for other generators based on cross- platform entropy sources. I.e., in place of random() and getrandbits(), SHA1Random would only have to implement: def getrandbytes(self, n): while len(buffer) < n: self.cnt += 1 self.s0 = sha.new(self.s0 + hex (self.cnt)).digest() self.buffer += sha.new(self.s0).digest() retVal = self.buffer[:n] self.buffer = self.buffer[n:] return retVal The superclass would call this to get the required number of bytes, and convert them as needed (for converting them to numbers it could use the 'long(s, 256)' patch I submitted. Besides making it easier to add new generators, this would provide a useful function to users of these generators. getrandbits() is less useful, and it's harder to go from a long- integer to a byte-string than vice versa, because you may have to zero-pad if the long-integer is small. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-08 22:41 Message: Logged In: YES user_id=80475 Attaching a revised patch. If there are no objections, I will add it to the library (after factoring the unittests and adding docs). ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-07 01:56 Message: Logged In: YES user_id=973611 Comments on shardh.py - SHA1Random2 seems more complex than it needs to be. Comparing with figure 19 of [1], I believe that s1 does not need to be kept as state, so you could replace this: self.s1 = s1 = sha.new(s0 + self.s1).hexdigest() with this: s1 = sha.new(s0).hexdigest() If there's concern about the low hamming-distance between counter values, you could simply hash the counter before feeding it in (or use M-T instead of the counter). Instead of updating s0 every block, you could update it every 10th block or so. This would slightly increase the number of old values an attacker could recover, upon compromising the generator state, but it could be a substantial speedup. SHA1Random1 depends on M-T for some of its security properties. In particular, if I discover the generator state, can I run it backwards and determine previous values? I don't know, it depends on M-T. Unless we know more about the properties of M-T, I think it would be preferable to use M- T only in place of the counter in the SHA1Random2 construction (if at all), *NOT* as the sole repository of PRNG state. [1] http://www.cypherpunks.to/~peter/06_random.pdf ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-31 21:48 Message: Logged In: YES user_id=72053 FYI, this was just announced on the python-crypto list. It's a Python wrapper for EGADS, a cross platform entropy-gathering RNG. I haven't looked at the code for it and have no opinion about it. 
http://wiki.osafoundation.org/twiki/bin/view/Chandler/PyEgads ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-25 19:36 Message: Logged In: YES user_id=72053 1. OK, though the docs should mention this. 2. One-way just means collision resistant, and we're trying to make something without a distinguisher, which is a stronger criterion. I'm not saying there's any problem with a single hash; I just don't see an obvious proof that there's not a problem. Also it makes me cringe slightly to keep the seed around as plaintext even temporarily. The PC that I'm using (midrange Athlon) can do about a million sha1 hashes per second, so an extra hash per reseed "multiple" times per second shouldn't be noticible, for reasonable values of "multiple". 3. Since most of the real computation of this function is in the sha1 compression, the timing indicates that evaluation is dominated by interpreter overhead rather than by hashing. I presume you're using CPython. The results may be different in Jython or with Psyco and different again under PyPy once that becomes real. I think we should take the view that we're designing a mathematical function that exists independently of any specific implementation, then figure out what characteristics it should have and implement those, rather than tailoring it to the peculiarities of CPython. If people are really going to be using this function in 2010, CPython will hopefully be dead and gone (replaced by PyPy) by then, so that's all the more reason to not worry about small CPython-specific effects since the function will outlast CPython. Maybe also sometime between now and then, these libraries can be compiled with psyco. 4. OK 5. OK. Would be good to also change %s for cnt in setstate to %x. 6. Synchronization can be avoided by hashing different fixed strings into s0 and s1 at each rehash (I did that in my version). I think it's worth doing that just to kick the hash function away from standard sha. I actually don't see much need for the counter in either hash, but you were concerned about hitting possible short cycles in sha. 7. OK. WHrandom is already non-threadsafe, so there's precedent. I do have to wonder if the 160 bit arithmetic is slowing things down. If we don't care about non-IEEE doubles, we're ok with 53 bits. Hmm, I also wonder whether the 160 bit int to float conversion is precisely specified even for IEEE and isn't an artifact of Python's long int implementation. But I guess it doesn't matter, since we're never hashing those floats. Re bugs til 2010: oh let's have more confidence than that :). I think if we're careful and get the details correct before deployment, we shouldn't have any problems. This is just one screenful of code or so, not complex by most reasonable standards. However, we might want post the algorithm on sci.crypt for comments, since there's some knowledgeable people there. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 04:08 Message: Logged In: YES user_id=80475 Took another look at #5 and will change str() to hex(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 03:25 Message: Logged In: YES user_id=80475 1. We don't care about non IEEE. 2. One-way is one-way, so double hashing is unnecessary. Also, I've fielded bug reports from existing user apps that re-seed very frequently (multiple times per second). 3. 
The implementations reflect the results of timing experiments which showed that the fastest intermediate representation was hex. 4. ['0x0'] is necessary to hanlde the case where n==0. int('', 16) raises a ValueError while int('0x0', 16) does not. 5. random() and getrandbits() do not have to go through the same intermediate steps (it's okay for one to use hex and the other to use str) -- speed and space issues dominate. 0x comes up enough in Python, there is little use in tossing it away for an obscure, hypothetical micro-controller implementation. 6. Leaving cnt out of the s1 computation guarantees that it will never track the updates of s0 -- any syncronization would be a disaster. Adding a count or some variant smacks of desperation rather than reliance on proven hash properties. 7. The function is already 100 times slower than MT. Adding locks will make the situation worse. It is better to simply document it as being non-threadsafe. Look at back at the mt/sha version. Its code is much cleaner, faster, and threadsafe. It goes a long way towards meeting your request and serving as an alternate generator to validate simulation results. If we use the sha/sha version, I'm certain that we will be fielding bug reports on this through 2010. It is also sufficiently complex that it will spawn lengthy, wasteful discussions and it will create a mine-field for future maintainers. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-25 00:15 Message: Logged In: YES user_id=72053 I'm not sure why the larger state space of mt/sha1 is supposed to be an advantage, but the other points are reasonable. I like the new sha1/sha1 implementation except for a few minor points (below). I made the old version hopefully threadsafe by adding a threading.Lock() object to the instance and locking it on update. I generally like your version better so maybe the lock can be added. Of course that slows it down even further, but in the context of an interpreted Python program I feel that the generator is still fast enough compared to the other stuff the program is likely to be doing. Re the new attachment, some minor issues: 1) The number 2.0**-160 is < 10**-50. This is a valid IEEE double but on some non-IEEE machines it may be a floating underflow or even equal to zero. I don't know if this matters. 2) Paranoia led me to hash the seed twice in the seed operation in my version, to defeat unlikely message-extension attacks against the hash function. I figured reseeding is infrequent enough that an extra hash operation doesn't matter. 3) Storing s1 as a string of 40 hex digits in SHARandom2 means that s1+s2 is 60 characters, which means hashing it will need two sha1 compression operations, slowing it down some. 4) the intiialization of ciphertxt to ["0x0"] instead of [] doesn't seem to do anything useful. int('123abc', 16) is valid without the 0x prefix. 5) random() uses hex(cnt) while getrandbits uses str(cnt) (converting to decimal instead of hex). I think it's better to use hex and remove the 0x prefix from the output, which is cleanest, and simpler to implement on some targets (embedded microcontroller). The same goes for the %s conversion in jumpahead (my version also uses %s there). 6) It may be worthwhile to include cnt in both the s0 and s1 hash updates. That guarantees the s1 hash never gets the same input twice. 7) The variable "s1" in getrandbits (instead of self.s1) is set but never used. 
Note in my version of sharandom.py, I didn't put a thread lock around the tuple assignment in setstate(). I'm not sure if that's safe or not. But it looks to me like random.py in the CVS does the same thing, so maybe both are unsafe. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-24 06:31 Message: Logged In: YES user_id=80475 I attached a new version of the mt/sha1 combination. Here are the relative merits as compared sha1/sha1 approach: * simpiler to implement/maintain since state tracking is builtin * larger state space (2**19937 vs 2**160) * faster * threadsafe Favoring sha1/sha1: * uses only one primitive * easier to replace in situations where MT is not available ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-21 19:56 Message: Logged In: YES user_id=72053 Two small corrections to below: 1) "in favor of an entropy" is an editing error--the intended meaning should be obvious. 2) I meant Bryan Mongeau, not Eric Mongeau. Bryan's lib is at . Sorry for any confusion. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-21 19:49 Message: Logged In: YES user_id=72053 I'm very much in favor of an entropy and Bram's suggested interface of entropy(x) supplying x bytes of randomness is fine. Perhaps it should really live in a cryptography API rather than in the Random API, but either way is ok. Mark Moraes wrote a Windows module that gets random numbers from the Windows CAPI; I put up a copy at http://www.nightsong.com/phr/python/winrand.module For Linux, Cygwin, and *BSD (including MacOS 10, I think), just read /dev/urandom for random bytes. However, various other systems (I think including Solaris) don't include anything like this. OpenSSL has an entropy gathering daemon that might be of some use in that situation. There's also the Yarrow generator (http://www.schneier.com/yarrow.html) and Eric Mongeau(?) wrote a pure-Python generator a while back that tried to gather entropy from thread racing, similar to Java's default SecureRandom class (I consider that method to be a bit bogus in both Python and Java). I think, though, simply supporting /dev/*random and the Windows CAPI is a pretty good start, even if other OS's aren't supported. Providing that function in the Python lib will make quite a few people happy. A single module integrating both methods would be great. I don't have any Windows dev tools so can't test any wrappers for Mark Moraes's function but maybe one of you guys can do it. I'm not too keen on the md5random.py patch for reasons discussed in the c.l.py thread. It depends too deeply on the precise characteristics of both md5 and Mersenne Twister. I think for a cryptography-based generator, it's better to stick to one cryptography-based primitive, and to use sha instead of md5. That also helps portability since it means other environments (maybe including Jython) can reproduce the PRNG stream without having to re-implement MT, as long as they have SHA (which is a US federal standard). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 14:50 Message: Logged In: YES user_id=80475 Bram, if you have a patch, I would be happy to look at it. Please make it as platform independent as possible (its okay to have it conditionally compile differently so long as the API stays the same). 
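A pure-Python starting point for the entropy(x) interface described below might look like this. This is only a sketch: the function name follows the proposed API, only the /dev/urandom branch is shown, and a Windows CryptGenRandom wrapper would be the conditionally selected branch on that platform:

    import os

    def entropy(nbytes):
        # Return nbytes of randomness straight from the operating system.
        # Only the /dev/urandom path is sketched here; a CryptGenRandom
        # wrapper would be another branch chosen at import time.
        f = open('/dev/urandom', 'rb')
        try:
            data = f.read(nbytes)
        finally:
            f.close()
        if len(data) != nbytes:
            raise RuntimeError('short read from /dev/urandom')
        return data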
Submit it as a separate patch and assign to me -- it doesn't have to be orginal, you can google around to determine prior art. ---------------------------------------------------------------------- Comment By: Bram Cohen (bram_cohen) Date: 2004-03-21 14:34 Message: Logged In: YES user_id=52561 The lack of a 'real' entropy source is the gap which can't be fixed with an application-level bit of code. I think there are simple hooks for this on all systems, such as /dev/random on linux, but they aren't cross platform. A unified API which always calls the native entropy hook would be a very good thing. An example of a reasonable API would be to have a module named entropy, with a single function entropy(x) which returns a random string of length x. This is a problem which almost anyone writing a security-related application runs into, and lots of time is wasted writing dubious hacks to harvest entropy when a single simple library could magically solve it the right way. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 12:33 Message: Logged In: YES user_id=80475 Attaching my alternative. If it fulfills your use case, let me know and I'll apply it. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 03:51 Message: Logged In: YES user_id=72053 Updated version of sharandom.py is at same url. It folds a counter into the hash and also includes a getrandbits method. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 01:27 Message: Logged In: YES user_id=72053 I don't mind dropping the time() auto-seed but I think that means eliminating any auto-seed, and requiring a user-supplied seed. There is no demonstrable minimum period for the SHA-OFB and it would be bad if there was, since it would then no longer act like a PRF. Note that the generator in the sample code actually comes from two different SHA instances and thus its expected period is about 2**160. Anyway, adding a simple counter (incrementing by 1 on every SHA call) to the SHA input removes any lingering chance of a repeating sequence. I'll update the code to do that. It's much less ugly than stirring in Mersenne Twister output. I don't have Python 2.4 yet but will look at it when I get a chance. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-17 06:27 Message: Logged In: YES user_id=80475 It has some potential for being added. That being said, try to avoid a religious fervor. I would like to see considerably more discussion and evolution first. The auto-seeding with time *must* be dropped -- it is not harmonious with the goal of creating a secure random sequence. It is okay for the a subclass to deviate in this way. Also, I was soliciting references stronger than (I don't know of any results ... It is generally considered ... ). If we put this in, people are going to rely on it. The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious). Not having a demonstrable minimum period is also bad. 
Nothing in the discussion so far precludes the existence of a bad seed that has a period of only 1 or 2. See my suggestion on comp.lang.python for a means of mitigating this issue. With respect to the randint question, be sure to look at the current Py2.4 source for random.py. The API is expanded to include and an option method, getrandbits(). That in turn feeds the other integer methods without the int to float to int dance. Upon further consideration, I think the export control question is moot since we're using an existing library function and not really expressing new art. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-17 04:05 Message: Logged In: YES user_id=72053 There are no research results that I know of that can distinguish the output of sha1-ofb from random output in any practical way. To find such a distinguisher would be a significant result. It's a safe bet that cryptographers have searched for distinguishers, though I don't promise to have heard of every result that's been published. I'll ask on sci.crypt if anyone else has heard of such a thing, if you want. SHA1-HMAC is generally considered to be indistinguishable from a PRF (pseudorandom function, i.e. a function selected at random from the space of all functions from arbitrary strings to 160-bit outputs; this term is from concrete security theory). MD5 is deprecated these days and there's no point to using it instead of sha1 for this. I'm not sure what happens if randint is added to the API. If you subclass Random and don't provide a randint method, you inherit from the base class, which can call self.random() to get floats to make the ints from. US export restrictions basically don't exist any more. In principle, if you want to export something, you're supposed to send an email to an address at the commerce department, saying the name of the program and the url where it can be obtained and a few similar things. In practice, email to that address is ignored, they never check anything. I heard the address even stopped working for a while, though they may have fixed it since then. See http://www.bxa.doc.gov/Encryption/ for info. I've emailed notices to the address a few times and never heard back anything. Anyway, I don't think this should count as cryptography; it's simply using a cryptographic hash function as an PRNG to avoid the correlations in other PRNG's; scientific rationale for doing that is given in the Numerical Recipes book mentioned above. The code that I linked uses the standard API but I wonder if the floating point output is optimally uniform, i.e. the N/2**56 calculation may not be exactly the right thing for an ieee float64. Using the time of day is what the Random docs say to do by default. You're correct that any security application needs to supply a higher entropy seed. I would like it very much if the std lib included a module that read some random bytes from the OS for OS's that support it. That means reading /dev/urandom on recent Un*x-ish systems or Cygwin, or calling CryptGenRandom on Windows. Reading /dev/urandom is trivial, and there's a guy on the pycrypt list who wrote a Windows function to call CryptGenRandom and return the output through the Python API. I forwarded the function to Guido with the author's permission but nothing seemed to happen with it. However, this gets away from this sharandom subclass. I'd like to make a few more improvements to the module but after that I think it can be dropped into the lib. 
Let me know what you think. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 06:52 Message: Logged In: YES user_id=80475 One other thought: if cryptographic strength is a goal, then seeding absolutely should require a long seed (key) as an input and the time should *never* be used. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 06:49 Message: Logged In: YES user_id=80475 It would have been ideal if the Random API had been designed with an integer generator at the core and floating point as a computed value, but that it the way it has been for a long time and efforts to switch it over would likely result in either incompatibility with existing subclasses or introducing new complexity (making it even harder to subclass). I think the API should be left alone until Py3.0. The attached module would make a good recipe on ASPN where improvements and critiques can be posted. Do you have links to research showing that running SHA-1 in a cipher block feedback mode results in a cryptographically strong random number generator -- the result seems likely, but a research link would be great. Is there a link to research showing the RNG properties of the resulting generator (period, equidistribution, passing tests for randomness, etc)? Also, is there research showing the relative merits of this approach vs MD5, AES, or DES? If something like this gets added to the library, I prefer it to be added to random.py using the existing API. Adding yet another random module would likely do more harm than good. One other question (I don't know the answer to this): would including a cryptographically strong RNG trigger US export restrictions on the python distribution? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 From noreply at sourceforge.net Mon Jun 7 01:32:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 01:32:42 2004 Subject: [ python-Bugs-967934 ] csv module cannot handle embedded \r Message-ID: Bugs item #967934, was opened at 2004-06-07 14:46 Message generated for change (Comment added) made by andrewmcnamara You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gregory Bond (gnbond) >Assigned to: Andrew McNamara (andrewmcnamara) Summary: csv module cannot handle embedded \r Initial Comment: CSV module cannot handle the case of embedded \r (i.e. carriage return) in a field. As far as I can see, this is hard-coded into the _csv.c file and cannot be fixed with Dialect changes. ---------------------------------------------------------------------- >Comment By: Andrew McNamara (andrewmcnamara) Date: 2004-06-07 15:32 Message: Logged In: YES user_id=698599 I suspect this restriction (CR appearing within a quoted field) is a historical accident and can be safely removed. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 15:02 Message: Logged In: YES user_id=80475 Skip, does this coincide with your planned switchover to universal newlines? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 From noreply at sourceforge.net Mon Jun 7 02:13:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 02:13:46 2004 Subject: [ python-Bugs-881641 ] Error in obmalloc. Message-ID: Bugs item #881641, was opened at 2004-01-21 21:39 Message generated for change (Comment added) made by cmouse You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=881641&group_id=5470 Category: Python Library Group: Python 2.3 Status: Closed Resolution: Rejected Priority: 5 Submitted By: Aki Tossavainen (cmouse) Assigned to: Neal Norwitz (nnorwitz) Summary: Error in obmalloc. Initial Comment: There appears to be some sort of problem with obmalloc.c's PyObject_Free() function. >From valgrind's output: # more python.pid6399 ==6399== Memcheck, a.k.a. Valgrind, a memory error detector for x86-linux. ==6399== Copyright (C) 2002-2003, and GNU GPL'd, by Julian Seward. ==6399== Using valgrind-2.0.0, a program supervision framework for x86-linux. ==6399== Copyright (C) 2000-2003, and GNU GPL'd, by Julian Seward. ==6399== ==6399== My PID = 6399, parent PID = 19903. Prog and args are: ==6399== python ==6399== Estimated CPU clock rate is 1100 MHz ==6399== For more details, rerun with: -v ==6399== ==6399== Conditional jump or move depends on uninitialised value(s) ==6399== at 0x4A9BDE74: PyObject_Free (obmalloc.c:711) ==6399== by 0x4A9B8198: dictresize (dictobject.c:477) ==6399== by 0x4A9C1B90: PyString_InternInPlace (stringobject.c:4139) ==6399== by 0x4A9C1C42: PyString_InternFromString (stringobject.c:4169) ==6399== by 0x4A9CBE27: add_operators (typeobject.c:5200) ==6399== by 0x4A9C999C: PyType_Ready (typeobject.c:3147) ==6399== by 0x4A9C9DAF: PyType_Ready (typeobject.c:3115) ==6399== by 0x4A9BCFCF: _Py_ReadyTypes (object.c:1961) ==6399== by 0x4AA1C78B: Py_Initialize (pythonrun.c:172) ==6399== by 0x4AA252AB: Py_Main (main.c:370) ==6399== by 0x8048708: main (ccpython.cc:10) ==6399== by 0x4ABA391D: __libc_start_main (libc- start.c:152) ==6399== by 0x8048630: (within /usr/bin/python) ==6399== Use of uninitialised value of size 4 ==6399== at 0x4A9BDE7E: PyObject_Free (obmalloc.c:711) ==6399== by 0x4A9B8198: dictresize (dictobject.c:477) ==6399== by 0x4A9C1B90: PyString_InternInPlace (stringobject.c:4139) ==6399== by 0x4A9C1C42: PyString_InternFromString (stringobject.c:4169) ==6399== by 0x4A9CBE27: add_operators (typeobject.c:5200) ==6399== by 0x4A9C999C: PyType_Ready (typeobject.c:3147) ==6399== by 0x4A9C9DAF: PyType_Ready (typeobject.c:3115) ==6399== by 0x4A9BCFCF: _Py_ReadyTypes (object.c:1961) ==6399== by 0x4AA1C78B: Py_Initialize (pythonrun.c:172) ==6399== by 0x4AA252AB: Py_Main (main.c:370) ==6399== by 0x8048708: main (ccpython.cc:10) ==6399== by 0x4ABA391D: __libc_start_main (libc- start.c:152) ==6399== by 0x8048630: (within /usr/bin/python) ==6399== Invalid read of size 4 ==6399== at 0x4A9BDE6F: PyObject_Free (obmalloc.c:711) ==6399== by 0x4A9D3564: pmerge (typeobject.c:1177) ==6399== by 0x4A9CCEFC: mro_implementation (typeobject.c:1248) ==6399== by 0x4A9C85A7: mro_internal (typeobject.c:1272) ==6399== by 0x4A9C9A3A: PyType_Ready (typeobject.c:3163) ==6399== by 0x4A9BD031: _Py_ReadyTypes (object.c:1976) ==6399== by 0x4AA1C78B: Py_Initialize (pythonrun.c:172) ==6399== by 0x4AA252AB: Py_Main (main.c:370) ==6399== by 0x8048708: main (ccpython.cc:10) ==6399== by 0x4ABA391D: 
__libc_start_main (libc- start.c:152) ==6399== by 0x8048630: (within /usr/bin/python) ==6399== Address 0x4BEAC010 is not stack'd, malloc'd or free'd and so on and so on. You can find valgrind from http://valgrind.kde.org/ These lines show up in the log when I do valgrind --num-callers=20 --logfile=python python My version of Python is 2.3.2. You can find the original logfile attached. All I did was start python and hit 'ctrl+d' ---------------------------------------------------------------------- >Comment By: Aki Tossavainen (cmouse) Date: 2004-06-07 09:13 Message: Logged In: YES user_id=198027 Nice. Thank you very much ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-06 22:59 Message: Logged In: YES user_id=33168 Checked in some fixes to improve use of valgrind. Most importantly, you should read Misc/README.valgrind * Objects/obmalloc.c 2.52 and 2.53 * Misc/README.valgrind 1.1 * Misc/valgrind-python.supp 1.1 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-01-22 18:11 Message: Logged In: YES user_id=33168 Heh, I recently found I didn't check in my valgrind patch. I don't know what happened. I will do this when CVS comes back. Unfortunately, it seems SF has been having CVS problems (again) for a while now. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-01-22 18:06 Message: Logged In: YES user_id=6656 Hmm, the patch in the mail i referred you too never seems to have got checked in, despite Tim telling Neal to :-) Neal? (can't do anything now, SF developer CVS is down) ---------------------------------------------------------------------- Comment By: Aki Tossavainen (cmouse) Date: 2004-01-22 17:53 Message: Logged In: YES user_id=198027 Could you put into somewhere some suppression file you can use with valgrind... so that no patching is required. If it's not a real problem then it could be just suppressed. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-01-22 13:01 Message: Logged In: YES user_id=6656 Well, yeah, pymalloc engages in behaviour that is not in strict conformance with the ANSI C standard. We knew this already. As yet, no platform has been reported as having problems with this, though. If you want to use valgrind to look for real problems, reading http://mail.python.org/pipermail/python-dev/2003-July/ 036740.html and the containing thread may be of interest (or you could just turn pymalloc off at configure time). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=881641&group_id=5470 From noreply at sourceforge.net Mon Jun 7 03:00:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 03:00:15 2004 Subject: [ python-Bugs-967986 ] file.encoding doesn't apply to file.write Message-ID: Bugs item #967986, was opened at 2004-06-07 00:00 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967986&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Mueller (donut) Assigned to: M.-A. 
Lemburg (lemburg) Summary: file.encoding doesn't apply to file.write Initial Comment: In python2.3 printing unicode to an appropriate terminal actually works. But using sys.stdout.write doesn't. Ex: Python 2.3.4 (#2, May 29 2004, 03:31:27) [GCC 3.3.3 (Debian 20040417)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> sys.stdout.encoding 'UTF-8' >>> u=u'\u3053\u3093\u306b\u3061\u308f' >>> print u こんにちわ >>> sys.stdout.write(u) Traceback (most recent call last): File "", line 1, in ? UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128) The file object docs say: "encoding The encoding that this file uses. When Unicode strings are written to a file, they will be converted to byte strings using this encoding. ..." Which indicates to me that it is supposed to work. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967986&group_id=5470 From noreply at sourceforge.net Mon Jun 7 03:22:32 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 03:22:41 2004 Subject: [ python-Bugs-926910 ] Overenthusiastic check in Swap? Message-ID: Bugs item #926910, was opened at 2004-04-01 05:34 Message generated for change (Comment added) made by mhammond You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=926910&group_id=5470 Category: Threads Group: None Status: Open Resolution: None Priority: 5 Submitted By: benson margulies (benson_basis) Assigned to: Mark Hammond (mhammond) Summary: Overenthusiastic check in Swap? Initial Comment: When Py_DEBUG is turned on, PyThreadState_Swap calls in a fatal error if the two different thread states are ever used for a thread. I submit that this is not a good check. The documentation encourages us to write code that creates and destroys thread states as C threads come and go. Why can't a single C thread make a thread state, release it, and then make another one later? One particular usage pattern: We have an API that initializes embedded python. Then we have another API where the callers are allowed to be in any C thread. The second API has no easy way to tell if a thread used for it happens to be the same thread that made the initialization call. As the code is written now, any code running on the 'main thread' is required to fish out the build-in main-thread thread state. ---------------------------------------------------------------------- >Comment By: Mark Hammond (mhammond) Date: 2004-06-07 17:22 Message: Logged In: YES user_id=14198 That check should not fail if you use the PyGILState APIs - it manages all of this for you. The PyGILState functions were added to handle exactly what you describe as your use case - is there any reason you can't use them? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 15:15 Message: Logged In: YES user_id=80475 Mark, I believe this is your code. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=926910&group_id=5470 From noreply at sourceforge.net Mon Jun 7 05:18:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 05:19:05 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 10:28 Message generated for change (Comment added) made by pmoore You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Documentation Group: Platform-specific Status: Closed Resolution: Fixed Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Tim Peters (tim_one) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-07 10:18 Message: Logged In: YES user_id=113328 FWIW, the documentation for CreateFileMapping (available at http://msdn.microsoft.com/library/en- us/fileio/base/createfilemapping.asp) states under the documentation for the dwMaximumSizeLow parameter that mapping a file of size 0 is invalid. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 17:56 Message: Logged In: YES user_id=31435 Clarified the docs, on HEAD and 2.3 maint: Doc/lib/libmmap.tex new revisions 1.8.24.1 and 1.9. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 17:14 Message: Logged In: YES user_id=31435 No, on Windows it's not sane. While the Python mmap module has a .resize() method, Microsoft's file mapping API does not -- .resize() on Windows is accomplished by throwing away the current mapping and creating a brand new one. On non-Windows systems, it's not guaranteed that Python can do a .resize() at all (it depends on whether the platform C supports mremap() -- if it doesn't, you get a SystemError mmap: resizing not available--no mremap() exception). So "in theory" ignores too much of reality to take seriously. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-06 12:15 Message: Logged In: YES user_id=1054957 tim_one wrote that mapping an empty file as a size-0 file isn't a sane thing to do anyway. I think this is not really true. mmap has a resize method, so, in theory, one can map a size-0 file and let it grow. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 22:46 Message: Logged In: YES user_id=31435 The patch looks good. 
On American Windows, the cryptic error msg is: "WindowsError: [Errno 1006] The volume for a file has been externally altered so that the opened file is no longer valid" Can't find any MS docs on this condition. Then again, mapping an empty file *as* a size-0 file isn't a sane thing to do anyway. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 19:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Mon Jun 7 07:25:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 07:26:07 2004 Subject: [ python-Bugs-967934 ] csv module cannot handle embedded \r Message-ID: Bugs item #967934, was opened at 2004-06-06 23:46 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gregory Bond (gnbond) Assigned to: Andrew McNamara (andrewmcnamara) Summary: csv module cannot handle embedded \r Initial Comment: CSV module cannot handle the case of embedded \r (i.e. carriage return) in a field. As far as I can see, this is hard-coded into the _csv.c file and cannot be fixed with Dialect changes. ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2004-06-07 06:25 Message: Logged In: YES user_id=44345 It certainly intersects with it somehow. ;-) If nothing else, it will serve as a useful test case. ---------------------------------------------------------------------- Comment By: Andrew McNamara (andrewmcnamara) Date: 2004-06-07 00:32 Message: Logged In: YES user_id=698599 I suspect this restriction (CR appearing within a quoted field) is a historical accident and can be safely removed. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:02 Message: Logged In: YES user_id=80475 Skip, does this coincide with your planned switchover to universal newlines? 
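A self-contained way to exercise the embedded carriage return case (the field contents are made up, and the exact failure mode depends on the _csv version in use):

    import csv, StringIO

    # Round-trip a record whose middle field contains a bare carriage return.
    buf = StringIO.StringIO()
    csv.writer(buf).writerow(['a', 'line one\rline two', 'c'])

    buf.seek(0)
    try:
        for row in csv.reader(buf):
            print row
    except csv.Error, exc:
        print 'reader choked on the embedded \\r:', exc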
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967934&group_id=5470 From noreply at sourceforge.net Mon Jun 7 07:32:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 07:32:08 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 20:29 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-07 13:32 Message: Logged In: YES user_id=469548 Sorry, in my haste made a silly mistake. Removed 'self' from is_destination_in_source definition and uploaded new patch to previous location. ---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=2772 I applied the attached patch, and got this exception: >>> shutil.move("a", "a/c") Traceback (most recent call last): File "", line 1, in ? File "/usr/src/cvs-src/python/dist/src/Lib/shutil.py", line 168, in move if is_destination_in_source(src, dst): TypeError: is_destination_in_source() takes exactly 3 arguments (2 given) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 19:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 17:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Mon Jun 7 10:20:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 10:20:22 2004 Subject: [ python-Bugs-926910 ] Overenthusiastic check in Swap? Message-ID: Bugs item #926910, was opened at 2004-03-31 14:34 Message generated for change (Comment added) made by benson_basis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=926910&group_id=5470 Category: Threads Group: None Status: Open Resolution: None Priority: 5 Submitted By: benson margulies (benson_basis) Assigned to: Mark Hammond (mhammond) Summary: Overenthusiastic check in Swap? Initial Comment: When Py_DEBUG is turned on, PyThreadState_Swap calls in a fatal error if the two different thread states are ever used for a thread. I submit that this is not a good check. The documentation encourages us to write code that creates and destroys thread states as C threads come and go. 
Why can't a single C thread make a thread state, release it, and then make another one later? One particular usage pattern: We have an API that initializes embedded python. Then we have another API where the callers are allowed to be in any C thread. The second API has no easy way to tell if a thread used for it happens to be the same thread that made the initialization call. As the code is written now, any code running on the 'main thread' is required to fish out the build-in main-thread thread state. ---------------------------------------------------------------------- >Comment By: benson margulies (benson_basis) Date: 2004-06-07 10:20 Message: Logged In: YES user_id=876734 Somehow, the path I took through the documentation failed to rub my nose in this. It would be good if the language from the original PEP was applied to the APIs that I coded to to warn people to use these other APIs instead. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-06-07 03:22 Message: Logged In: YES user_id=14198 That check should not fail if you use the PyGILState APIs - it manages all of this for you. The PyGILState functions were added to handle exactly what you describe as your use case - is there any reason you can't use them? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 01:15 Message: Logged In: YES user_id=80475 Mark, I believe this is your code. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=926910&group_id=5470 From noreply at sourceforge.net Mon Jun 7 10:24:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 10:24:36 2004 Subject: [ python-Bugs-454086 ] distutils/mingw32 links to dbg libs Message-ID: Bugs item #454086, was opened at 2001-08-22 00:30 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=454086&group_id=5470 Category: Distutils Group: None >Status: Closed >Resolution: Out of Date Priority: 5 Submitted By: Gerhard H?ring (ghaering) Assigned to: Gerhard H?ring (ghaering) Summary: distutils/mingw32 links to dbg libs Initial Comment: When compiling extension modules on Windows with debugging enabled for the native mode of the GNU compilers ("--compiler=mingw32 --debug"), distutils tries to link with the debugging version of the libraries (python21_d.dll, ...). This may be useful for Microsoft Visual C++, but for the GNU compilers, it isn't. GNU tools have a different debugging symbol format than MS tools, so there's no point in doing this. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-07 10:24 Message: Logged In: YES user_id=11375 Closing this bug; there's been no discussion in over a year. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-02-07 17:34 Message: Logged In: YES user_id=33168 Gerhard, is this still a problem? Can this be closed? ---------------------------------------------------------------------- Comment By: Gerhard H?ring (ghaering) Date: 2001-09-18 23:30 Message: Logged In: YES user_id=163326 OK, see patch #462754. ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2001-09-04 15:56 Message: Logged In: YES user_id=11375 Hmm... I can't see any code in cygwinccompiler.py that adds the _d prefix in the current CVS. Can you please track down the code that adds it, and submit a patch to fix the problem? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=454086&group_id=5470 From noreply at sourceforge.net Mon Jun 7 10:54:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 10:54:18 2004 Subject: [ python-Bugs-964876 ] mapping a 0 length file Message-ID: Bugs item #964876, was opened at 2004-06-02 05:28 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 Category: Documentation Group: Platform-specific Status: Closed Resolution: Fixed Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Tim Peters (tim_one) Summary: mapping a 0 length file Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. If I mmap a 0 length file on winnt, I obtain an exception: >>> import mmap, os >>> file = os.open(file_name, os.O_RDWR | os.O_BINARY) >>> buf = mmap.mmap(file, 0, access = map.ACCESS_WRITE) Traceback (most recent call last): File "", line 1, in -toplevel- buf = mmap.mmap(file, 0, access = mmap.ACCESS_WRITE) WindowsError: [Errno 1006] Il volume corrispondente al file ? stato alterato dall'esterno. Il file aperto non ? pi? valido This is a windows problem, but I think it should be at least documented. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-07 10:54 Message: Logged In: YES user_id=31435 So it does! I was looking at old docs. Thanks. At the C level, someone must have used the generic-sounding ERROR_FILE_INVALID without realizing how strange the associated text is in this context. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-07 05:18 Message: Logged In: YES user_id=113328 FWIW, the documentation for CreateFileMapping (available at http://msdn.microsoft.com/library/en- us/fileio/base/createfilemapping.asp) states under the documentation for the dwMaximumSizeLow parameter that mapping a file of size 0 is invalid. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 12:56 Message: Logged In: YES user_id=31435 Clarified the docs, on HEAD and 2.3 maint: Doc/lib/libmmap.tex new revisions 1.8.24.1 and 1.9. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-06 12:14 Message: Logged In: YES user_id=31435 No, on Windows it's not sane. While the Python mmap module has a .resize() method, Microsoft's file mapping API does not -- .resize() on Windows is accomplished by throwing away the current mapping and creating a brand new one. On non-Windows systems, it's not guaranteed that Python can do a .resize() at all (it depends on whether the platform C supports mremap() -- if it doesn't, you get a SystemError mmap: resizing not available--no mremap() exception). So "in theory" ignores too much of reality to take seriously. 
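In application code the restriction is easy to work around before calling mmap. A workaround sketch only -- the helper name and the one-byte padding policy are made up, not part of the mmap module:

    import mmap, os

    def map_for_write(path):
        # Windows refuses to map a zero-length file, so grow an empty file
        # to one byte first; the caller keeps fd so it can close it later.
        fd = os.open(path, os.O_RDWR | getattr(os, 'O_BINARY', 0))
        if os.fstat(fd).st_size == 0:
            os.write(fd, '\0')
        return fd, mmap.mmap(fd, 0, access=mmap.ACCESS_WRITE)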
---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-06 07:15 Message: Logged In: YES user_id=1054957 tim_one wrote that mapping an empty file as a size-0 file isn't a sane thing to do anyway. I think this is not really true. mmap has a resize method, so, in theory, one can map a size-0 file and let it grow. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 17:46 Message: Logged In: YES user_id=31435 The patch looks good. On American Windows, the cryptic error msg is: "WindowsError: [Errno 1006] The volume for a file has been externally altered so that the opened file is no longer valid" Can't find any MS docs on this condition. Then again, mapping an empty file *as* a size-0 file isn't a sane thing to do anyway. ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 14:01 Message: Logged In: YES user_id=113328 Suggested documentation patch: Index: lib/libmmap.tex =================================================================== RCS file: /cvsroot/python/python/dist/src/Doc/lib/libmmap.tex,v retrieving revision 1.8 diff -u -r1.8 libmmap.tex --- lib/libmmap.tex 3 Dec 2001 18:27:22 -0000 1.8 +++ lib/libmmap.tex 5 Jun 2004 18:00:08 -0000 @@ -44,7 +44,9 @@ specified by the file handle \var{fileno}, and returns a mmap object. If \var{length} is \code{0}, the maximum length of the map will be the current size of the file when \function{mmap()} is - called. + called. If \var{length} is \code{0} and the file is 0 bytes long, + Windows will return an error. It is not possible to map a 0-byte + file under Windows. \var{tagname}, if specified and not \code{None}, is a string giving a tag name for the mapping. Windows allows you to have many ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964876&group_id=5470 From noreply at sourceforge.net Mon Jun 7 11:09:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 11:09:40 2004 Subject: [ python-Bugs-845802 ] Python crashes when __init__.py is a directory. Message-ID: Bugs item #845802, was opened at 2003-11-20 14:57 Message generated for change (Settings changed) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=845802&group_id=5470 >Category: None Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Kerim Borchaev (warkid) >Assigned to: Thomas Heller (theller) Summary: Python crashes when __init__.py is a directory. Initial Comment: If package/__init__.py is a directory then this code crashes under Windows XP: ### import sys#or os or maybe something else;-) import package#package/__init__.py is DIRECTORY! '\n'.join([]) ### Test case attached. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-07 17:09 Message: Logged In: YES user_id=11105 For me, it crashes in a debug built (on Windows) after doing 'import package' with a refcount error. I don't think it's windows specific. Fixed in Python/import.c, CVS rev. 2.232 and rev. 2.222.6.3. 
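The attached test case amounts to roughly the following. This is a reconstructed sketch, not the original attachment, and the behaviour described in the comments applies only to builds without the import.c fix:

    import os

    # lay out a package whose __init__.py is (wrongly) a directory
    os.mkdir('package')
    os.mkdir(os.path.join('package', '__init__.py'))

    import sys            # any prior import, as in the report
    try:
        import package    # on affected builds this corrupted interpreter state
    except ImportError, exc:
        print 'import rejected cleanly:', exc
    '\n'.join([])         # on the reporter's Windows box this line then crashed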
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-11-20 23:44 Message: Logged In: YES user_id=33168 I can't duplicate this problem on Linux, so I'm assuming it's Windows specific. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=845802&group_id=5470 From noreply at sourceforge.net Mon Jun 7 11:37:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 11:37:42 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 11:39 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: Neil Schemenauer (nascheme) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-07 11:37 Message: Logged In: YES user_id=31435 Assigned to Neil, as a reminder to attach his patch. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 17:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 15:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. 
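In Python terms the proposed semantics amount to roughly the following. This is a sketch of the intended behaviour only, not the actual C patch, and the function name is made up:

    def checked_float(obj):
        # what PyNumber_Float would do under the proposed fix: call the
        # slot, then refuse anything that is not actually a float
        result = obj.__float__()
        if not isinstance(result, float):
            raise TypeError('__float__ returned non-float (type %s)'
                            % type(result).__name__)
        return result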
---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Mon Jun 7 11:49:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 11:49:58 2004 Subject: [ python-Bugs-968245 ] Python Logging filename & file number Message-ID: Bugs item #968245, was opened at 2004-06-07 10:49 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ryan (superleo303) Assigned to: Nobody/Anonymous (nobody) Summary: Python Logging filename & file number Initial Comment: I use Freebsd and redhat 9.0 linus at work. Using Python Logging filename & file number on freebsd works fine. When i print a log statement, and have my config formatter section set like: [formatter_default] format=%(asctime)s <%(levelname)s> <%(module)s:%(lineno)s> %(message)s datefmt= On bsd my logs look like: stateOnly is: DE MEssages ......... collecting datasource: 1011 Collectors overwrite existing pickle files: 0 Collectors run in multiple threads: 4 sql is: ... 
See, the filename and file number get displayed with each logging call, Now, The same exact code run on the same exact version of python on a linux machine yiedls the lines: <__init__:988> stateOnly is: DE <__init__:988> MEssages <__init__:988> collecting datasource: 1011 <__init__:988> Collectors overwrite existing pickle files: 0 <__init__:988> Collectors run in multiple threads: 4 <__init__:988> sql is: ... So i opened up ./python2.3/logging/__init__.py line 988 and it seems to be where the problem is. Can someone take a look at this asap? I have to run all my code on linux machines, so now i cant see which file and which line is making the logging. To reproduce, get a freebsd and linux machine, then run a simple script that uses logging config files and use the above example as your formatter in the logging confrig file, BSD should show the filenames and numbers, linux should show __init__ 988 instead. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 From noreply at sourceforge.net Mon Jun 7 11:50:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 11:50:08 2004 Subject: [ python-Bugs-967207 ] PythonWin fails reporting "Can not locate pywintypes23.dll Message-ID: Bugs item #967207, was opened at 2004-06-05 20:02 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967207&group_id=5470 Category: Extension Modules Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: puffingbilly (puffingbilly) Assigned to: Nobody/Anonymous (nobody) Summary: PythonWin fails reporting "Can not locate pywintypes23.dll Initial Comment: see attached file for details. Workaround :- copy pywintypes23.dll and pythoncom23.dll to top python directory, possibly Python23 ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-07 17:50 Message: Logged In: YES user_id=11105 This is not a Python bug - please report it to the pywin32 project. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967207&group_id=5470 From noreply at sourceforge.net Mon Jun 7 15:17:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 15:17:34 2004 Subject: [ python-Bugs-231540 ] threads and profiler don't work together Message-ID: Bugs item #231540, was opened at 2001-02-08 10:53 Message generated for change (Settings changed) made by mondragon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=231540&group_id=5470 Category: Threads Group: None Status: Open Resolution: None Priority: 3 Submitted By: Dave Brueck (dbrueck) >Assigned to: Nick Bastin (mondragon) Summary: threads and profiler don't work together Initial Comment: When a new thread is created, it doesn't inherit from the parent thread the trace and profile functions (sys_tracefunc and sys_profilefunc in PyThreadState), so multithreaded programs can't easily be profiled. This may be by design for safety/complexity sake, but the profiler module should still find some way to function correctly. 
A temporary (and performance-killing) workaround is to modify the standard profiler to hook into threads to start a new profiler for each new thread, and then merge the stats from a child thread into the parent's when the child thread ends. Here is sample code that exhibits the problem. Stats are printed only for the main thread because the child thread has no profiling function and therefore collects no stats: import threading, profile, time def yo(): for j in range(5): print j, def go(): threading.Thread(target=yo).start() time.sleep(1) profile.run('go()') ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-08-10 01:31 Message: Logged In: YES user_id=31435 Reassigned to Fred, our current profiler expert. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-02-09 18:57 Message: Assigned to me but reduced the priority. I'll take a look at it, but have to suspect it will get reclassified as a Feature Request and moved into PEP 42. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=231540&group_id=5470 From noreply at sourceforge.net Mon Jun 7 15:19:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 15:19:30 2004 Subject: [ python-Bugs-968245 ] Python Logging filename & file number Message-ID: Bugs item #968245, was opened at 2004-06-07 10:49 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ryan (superleo303) >Assigned to: Vinay Sajip (vsajip) Summary: Python Logging filename & file number Initial Comment: I use Freebsd and redhat 9.0 linus at work. Using Python Logging filename & file number on freebsd works fine. When i print a log statement, and have my config formatter section set like: [formatter_default] format=%(asctime)s <%(levelname)s> <%(module)s:%(lineno)s> %(message)s datefmt= On bsd my logs look like: stateOnly is: DE MEssages ......... collecting datasource: 1011 Collectors overwrite existing pickle files: 0 Collectors run in multiple threads: 4 sql is: ... See, the filename and file number get displayed with each logging call, Now, The same exact code run on the same exact version of python on a linux machine yiedls the lines: <__init__:988> stateOnly is: DE <__init__:988> MEssages <__init__:988> collecting datasource: 1011 <__init__:988> Collectors overwrite existing pickle files: 0 <__init__:988> Collectors run in multiple threads: 4 <__init__:988> sql is: ... So i opened up ./python2.3/logging/__init__.py line 988 and it seems to be where the problem is. Can someone take a look at this asap? I have to run all my code on linux machines, so now i cant see which file and which line is making the logging. To reproduce, get a freebsd and linux machine, then run a simple script that uses logging config files and use the above example as your formatter in the logging confrig file, BSD should show the filenames and numbers, linux should show __init__ 988 instead. 
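For reference, the same formatter can be set up without the config file, which isolates the logging package itself. A minimal sketch -- the logger name and the message are made up:

    import logging

    # hand-built equivalent of the [formatter_default] section from the report
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '%(asctime)s <%(levelname)s> <%(module)s:%(lineno)s> %(message)s'))

    log = logging.getLogger('collector')
    log.addHandler(handler)
    log.setLevel(logging.DEBUG)

    # the <module:lineno> part should name this script, not logging/__init__.py
    log.info('collecting datasource: %s', 1011)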
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 From noreply at sourceforge.net Mon Jun 7 16:34:58 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 16:35:01 2004 Subject: [ python-Bugs-968430 ] error flattening complex smime signed message Message-ID: Bugs item #968430, was opened at 2004-06-07 22:34 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968430&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ludovico Magnocavallo (ludo) Assigned to: Nobody/Anonymous (nobody) Summary: error flattening complex smime signed message Initial Comment: Python 2.3.3 [GCC 3.2.2] on linux2 email version 2.5.5 Complex SMIME signed messages parsed and flattened again do not pass SMIME verification. I have noticed this with messages that have as message/rfc822 attachment another SMIME signed message. A diff between an "original" SMIME signed messaged passign openssl smime -verify verification and the same message parsed (message_from_file) and flattened (as_string(False)) by the email library: diff -bB bugmsg_signed.eml bugmsg_signed_parsed.eml 2c2,3 < Content-Type: multipart/signed; protocol="application/x-pkcs7-signature"; micalg=sha1; boundary="----381546B4549948B9F93D885A82884C49" --- > Content-Type: multipart/signed; protocol="application/x-pkcs7-signature"; > micalg=sha1; boundary="----381546B4549948B9F93D885A82884C49" The email-parsed message splits the signature header into two lines, thus rendering the message non-valid. Attached to this bug a .zip archive with: - msg #1: the non-signed message (with a signed message as attachment) - msg #2: message #1 signed by openssl - msg #3: message #2 parsed and flattened as described above - the CA certificate file used for smime verification openssl command used to verify #2 and #3: openssl smime -verify -in bugmsg_signed.eml -CAfile cacert.pem openssl smime -verify -in bugmsg_signed_parsed.eml -CAfile cacert.pem ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968430&group_id=5470 From noreply at sourceforge.net Mon Jun 7 19:23:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 19:23:10 2004 Subject: [ python-Bugs-952807 ] segfault in subclassing datetime.date & pickling Message-ID: Bugs item #952807, was opened at 2004-05-12 15:30 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Thomas Wouters (twouters) >Assigned to: Tim Peters (tim_one) Summary: segfault in subclassing datetime.date & pickling Initial Comment: datetime.date does not take subclassing into account properly. datetime.date's tp_new has special code for unpickling (the single-string argument) which calls PyObject_New() directly, which doesn't account for the fact that subclasses may participate in cycle-gc (even if datetime.date objects do not.) 
The result is a segfault in code that unpickles instances of subclasses of datetime.date: import pickle, datetime class mydate(datetime.date): pass s = pickle.dumps(mydate.today()) broken = pickle.loads(s) del broken The 'del broken' is what causes the segfault: the 'mydate' class/type is supposed to participate in GC, but because of datetime.date's shortcut, that part of the object is never initialized (nor allocated, I presume.) The 'broken' instance reaches 0 refcounts, the GC gets triggered and it reads garbage memory. To 'prove' that the problem isn't caused by pickle itself: class mydate(datetime.date): pass broken = mydate('\x07\xd4\x05\x0c') del broken causes the same crash, in the GC code. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-07 19:23 Message: Logged In: YES user_id=31435 Applied Jiwon Seo's patch, to HEAD and release23-maint: Lib/test/test_datetime.py, new revisions: 1.45.8.1; 1.47 Misc/ACKS, new revision: 1.243.6.1 Misc/NEWS, new revisions: 1.831.4.119; 1.994 Modules/datetimemodule.c, new revisions: 1.67.8.2; 1.72 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 23:30 Message: Logged In: YES user_id=31435 Thank you! I'm attaching your patch to this report. ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 20:07 Message: Logged In: YES user_id=595483 It was as you expected. =^) I fixed it in the same way. Here is the patch. http://seojiwon.dnip.net:8000/~jiwon/tmp/datetime.diff (too long to copy&paste here) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 18:13 Message: Logged In: YES user_id=31435 I expect that datetime.datetime and datetime.time objects must have the same kind of vulnerability. Jiwon, can you address those too while you're at it? ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-05 10:57 Message: Logged In: YES user_id=595483 Here is the patch of datetimemodule and test code for it. I just read the summary, and made the datetimemodule patch as is said, and added a testcode for it. *** Modules/datetimemodule.c.orig Sat Jun 5 23:49:26 2004 --- Modules/datetimemodule.c Sat Jun 5 23:47:05 2004 *************** *** 2206,2212 **** { PyDateTime_Date *me; ! me = PyObject_New(PyDateTime_Date, type); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); --- 2206,2212 ---- { PyDateTime_Date *me; ! me = (PyDateTime_Date *) (type->tp_alloc(type, 0)); if (me != NULL) { char *pdata = PyString_AS_STRING(state); memcpy(me->data, pdata, _PyDateTime_DATE_DATASIZE); test code patch *** Lib/test/test_datetime.py.orig Sat Jun 5 23:49:44 2004 --- Lib/test/test_datetime.py Sat Jun 5 23:52:52 2004 *************** *** 510,515 **** --- 510,517 ---- dt2 = dt - delta self.assertEqual(dt2, dt - days) + class SubclassDate(date): pass + class TestDate(HarmlessMixedComparison): # Tests here should pass for both dates and datetimes, except for a # few tests that TestDateTime overrides. 
*************** *** 1028,1033 **** --- 1030,1044 ---- self.assertEqual(dt2.extra, 7) self.assertEqual(dt1.toordinal(), dt2.toordinal()) self.assertEqual(dt2.newmeth(-7), dt1.year + dt1.month - 7) + + def test_pickling_subclass_date(self): + + args = 6, 7, 23 + orig = SubclassDate(*args) + for pickler, unpickler, proto in pickle_choices: + green = pickler.dumps(orig, proto) + derived = unpickler.loads(green) + self.assertEqual(orig, derived) def test_backdoor_resistance(self): # For fast unpickling, the constructor accepts a pickle string. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=952807&group_id=5470 From noreply at sourceforge.net Mon Jun 7 22:10:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 22:12:22 2004 Subject: [ python-Bugs-967182 ] file("foo", "wU") is silently accepted Message-ID: Bugs item #967182, was opened at 2004-06-05 12:15 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Skip Montanaro (montanaro) Summary: file("foo", "wU") is silently accepted Initial Comment: PEP 278 says at opening a file with "wU" is illegal, yet file("foo", "wU") passes without complaint. There may be other flags which the PEP disallows with "U" that need to be checked. ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2004-06-07 21:10 Message: Logged In: YES user_id=44345 Turned out not to be obvious at all (and not related to my changes). Here's another patch which is cleaner I think. Would someone take a look at this? My intent is to not let invalid modes pass silently (as "wU" currently does). Have I accounted for all the valid mode strings? It has some semantic changes, so this is not a backport candidate. I'm not sure about how 't' is handled. It's only of use on Windows as I understand it, but I don't see it sucked out of the mode string on non-Windows platforms, so it must be silently accepted on Unix and Mac. (Can someone confirm this?) ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 14:51 Message: Logged In: YES user_id=44345 Here's a first cut patch - test suite fails though - must be something obvious... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 From noreply at sourceforge.net Mon Jun 7 22:54:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 22:54:49 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Neil Schemenauer (nascheme) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... 
>>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-08 02:54 Message: Logged In: YES user_id=35752 Attaching patch. One outstanding issue is that it may make sense to search for and remove unnecessary type checks (e.g. PyNumber_Int followed by PyInt_Check). Also, this change only broke one test case but I have no idea how much user code this might break. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 15:37 Message: Logged In: YES user_id=31435 Assigned to Neil, as a reminder to attach his patch. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 21:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). 
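For contrast with the failing case above, here is a small sketch (the class names are invented for illustration) of the contract float_subtype_new() assumes, namely that __float__ returns an actual float:

    class Price(float):              # a float subclass, like 'f' above
        pass

    class Measurement:
        def __float__(self):
            return 3.5               # well-behaved: returns a real float

    print float(Measurement())       # 3.5
    print Price(Measurement())       # 3.5, via the same subclass construction path

The garbage value reported above only appears when __float__ returns a non-float (a string in the report), which plain float() currently passes through but the subclass path then reinterprets as a PyFloatObject.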
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Mon Jun 7 23:00:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 23:00:14 2004 Subject: [ python-Bugs-472568 ] PyBuffer_New() memory not aligned Message-ID: Bugs item #472568, was opened at 2001-10-18 21:31 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=472568&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Nobody/Anonymous (nobody) >Assigned to: Neil Schemenauer (nascheme) Summary: PyBuffer_New() memory not aligned Initial Comment: Memory buffer areas created by PyBuffer_New are missaligned on 32 bit machines, for doubles (64 bit). This typically generates a bus error crash on most RISC machines when a library function tries to write a double in a memory buffer allocated by PyBuffer_New(). When looking at the bufferobject.c code, this seems to come from the fact that the memory buffer points at 'malloc() + sizeof struct PyBufferObject'; [line: b->b_ptr = (void *)(b + 1); ] To the extent that struct PyBufferObject is not a multiple of sizeof (double), the largest value type; thus the misalignment, and the crashes on most RISC processors. A quick and temporary solution would consist in adding a dummy double field as the last field of the PyBufferObject structure: struct PyBufferObject { ... double dummy; } ; and setting b->b_ptr = (void*)&b->dummy; /*was (void*(b+1)*/ In doing so, b->b_ptr will always by aligned on sizeof (double), on either 32 and 64 bit architectures. Since I'm on the buffer type problem: It would be nice (and probably easy to do) to augment the buffer protocol interface with the ability to specify the basic value type stored in the buffer. A possible list of values (enum ...) would be: - undefined (backward compatibility) - char, unsigned char, short, ....int, ... long, long long, float, double, and void* (memory address). This would enable to check at runtime the type of values stored in a buffer (and prevent missalignement buserrors, as well as catching garbage situations when improper array types are passed by means of the buffer interface [e.g.: int/float/double/short...). Frederic Giacometti Frederic Giacometti ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-08 03:00 Message: Logged In: YES user_id=35752 Added note to documentation as suggested by Armin. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2003-09-22 18:57 Message: Logged In: YES user_id=4771 No more activity around here. I suggest to deprecate this bug report. Here is a note for the docs saying the memory PyBuffer_New() gets you isn't specifically aligned. (cannot attach it, sorry) http://arigo.tunes.org/api_concrete.diff ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-06-17 15:30 Message: Logged In: YES user_id=31435 Unassigned -- I don't expect to work on this. 
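As a side note to the workarounds discussed in the comments below (Numeric and the array module both come up there), malloc-backed, zero-filled double storage can be obtained from the array module without first building a huge Python list; this is only a sketch of that idea, not part of any proposed fix:

    import array

    doubles = array.array('d', [0.0]) * 1000000   # one million zero doubles
    view = buffer(doubles)                        # read-only buffer over the same memory
    addr, nitems = doubles.buffer_info()
    print nitems, addr % 8                        # 1000000, and 0 on typical platforms

Since the storage comes straight from malloc, it carries malloc's alignment guarantee, which is exactly what the offset b_ptr from PyBuffer_New() described above lacks.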
---------------------------------------------------------------------- Comment By: Frederic Giacometti (giacometti) Date: 2001-11-01 17:21 Message: Logged In: YES user_id=93657 The solution Tim proposes (2nd malloc) will work fine for what we do. Here are some more details on what we're doing (and this is a standard operation):
- We want to create an array of double that we pass to a C function, and then return this array to Python as a buffer object (the buffer is passed later on as an arg to other functions using the buffer interface); and do it so that Python takes ownership of the buffer memory management.
- We don't want to require Numerical to operate the package; just for memory allocation.
- We should actually be using the Python array module interface. Unfortunately:
  * the Python array object C definitions are not exported in a .h file
  * the array Python interface does not provide the ability to create a new array of an arbitrary size (and certainly initialized to 0). One has to provide a list or a string to create an array of a given size. This is not workable if we work with large arrays (e.g., an array of 1,000,000 doubles is only 8 MB of RAM ...).
Another solution, then, would consist in extending the array Python interface, so as to enable the creation of arrays of arbitrary sizes (preferably initialized to 0 or to something else with a calloc or a memset). The extension of the array.array() function could be the better solution, taking into account our needs as well as Tim's concerns. FG ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-11-01 01:31 Message: Logged In: YES user_id=31435 The only portable way to fix this (assuming it's broken -- I don't see any alignment promises in the docs, and since we never use it we can't mine the source code for clues either) is to have PyBuffer_New do a second separate malloc (size) and set b_ptr to that. The C std guarantees memory returned by malloc is suitably aligned for all purposes; it doesn't promise that any standard type captures the strictest alignment requirement (indeed, at KSR we returned 128-byte aligned memory from malloc, to cater to our "subpage" type extension). ---------------------------------------------------------------------- Comment By: Frederic Giacometti (giacometti) Date: 2001-11-01 00:31 Message: Logged In: YES user_id=93657 A portable solution (improvement over what I proposed) would consist in declaring 'dummy' with a union type, 'unionizing' all C-ANSI value types (and including 'long long' optionally by means of an #ifdef). { .... union { int Int; long Long; double Double; void* Pvoid ...} dummy; } All (void*)obj->dummy can be replaced with obj->dummy.Pvoid FG ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2001-10-22 20:19 Message: Logged In: YES user_id=31392 Note to whoever takes this bug: PyBuffer_New() is not called anywhere in the Python source tree; nor are there any tests for buffer objects that I'm aware of. A few simple test cases would have caught this bug already. (And for the case of the builtin buffer() call, it might be good if it used PyBuffer_New().) ---------------------------------------------------------------------- Comment By: Frederic Giacometti (giacometti) Date: 2001-10-18 21:35 Message: Logged In: YES user_id=93657 I wasn't logged in when I submitted the item.
Don't think I'm becoming anonymous :)) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=472568&group_id=5470 From noreply at sourceforge.net Mon Jun 7 23:01:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 7 23:02:53 2004 Subject: [ python-Bugs-967182 ] file("foo", "wU") is silently accepted Message-ID: Bugs item #967182, was opened at 2004-06-05 13:15 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Skip Montanaro (montanaro) Summary: file("foo", "wU") is silently accepted Initial Comment: PEP 278 says at opening a file with "wU" is illegal, yet file("foo", "wU") passes without complaint. There may be other flags which the PEP disallows with "U" that need to be checked. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-07 23:01 Message: Logged In: YES user_id=31435 The C standard is open-ended about what a mode string can contain, and Python has historically allowed users to exploit platform-specific extensions here. On Windows, at least 't' and 'c' mean something in mode strings, and 'c' is actually useful (it forces writes to get committed to disk immediately). Most platforms ignore characters in mode strings that they don't understand. This is an exhaustive list of the mode strings a conforming C implementation must support (from C99): """ r open text file for reading w truncate to zero length or create text file for writing a append; open or create text file for writing at end-of-file rb open binary file for reading wb truncate to zero length or create binary file for writing ab append; open or create binary file for writing at end-of-file r+ open text file for update (reading and writing) w+ truncate to zero length or create text file for update a+ append; open or create text file for update, writing at end- of-file r+b or rb+ open binary file for update (reading and writing) w+b or wb+ truncate to zero length or create binary file for update a+b or ab+ append; open or create binary file for update, writing at end-of-file """ Implementations are free to support as many more as they care to. Guido may be in favor of restricting Python (in 2.4 or 2.5) to the set of mode strings required by C99, plus those that make sense with Python's U extension. I think he said something to that effect in person once. But 'c' is in fact useful on Windows, and code will break if it's outlawed. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-07 22:10 Message: Logged In: YES user_id=44345 Turned out not to be obvious at all (and not related to my changes). Here's another patch which is cleaner I think. Would someone take a look at this? My intent is to not let invalid modes pass silently (as "wU" currently does). Have I accounted for all the valid mode strings? It has some semantic changes, so this is not a backport candidate. I'm not sure about how 't' is handled. It's only of use on Windows as I understand it, but I don't see it sucked out of the mode string on non-Windows platforms, so it must be silently accepted on Unix and Mac. (Can someone confirm this?) 
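While the C-level fix is being worked out, a user-level guard is easy to sketch (the helper name is invented, and the exact set of modes Python will eventually reject is still under discussion above):

    def checked_mode(mode):
        # Per PEP 278 as quoted in this report, 'U' only makes sense for
        # reading; combining it with 'w' or 'a' should not pass silently.
        if 'U' in mode and ('w' in mode or 'a' in mode):
            raise ValueError("mode %r: 'U' cannot be combined with writing" % mode)
        return mode

    file("foo", checked_mode("w")).close()   # ordinary modes pass through untouched
    file("foo", checked_mode("wU"))          # raises ValueError instead of being accepted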
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 15:51 Message: Logged In: YES user_id=44345 Here's a first cut patch - test suite fails though - must be something obvious... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 From noreply at sourceforge.net Tue Jun 8 04:30:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 8 04:30:53 2004 Subject: [ python-Feature Requests-935915 ] os.nullfilename Message-ID: Feature Requests item #935915, was opened at 2004-04-15 22:44 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=935915&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Accepted Priority: 5 Submitted By: John Belmonte (jbelmonte) Assigned to: Martin v. L?wis (loewis) Summary: os.nullfilename Initial Comment: Just as the os library provides os.sep, etc., for the current OS, it should provide the name of the null file (e.g., "/dev/null" or "nul:"), so that there is a portable way to open a null file. Use of an object such as class nullFile: def write(self, data): pass is not sufficient because it does not provide a full file object interface (no access to file descriptor, etc.). See discussion at . ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-08 10:30 Message: Logged In: YES user_id=21627 Thanks for the patch. Committed as libos.tex 1.137 macpath.py 1.48 ntpath.py 1.59 os.py 1.77 os2emxpath.py 1.12 posixpath.py 1.66 test_os.py 1.24 NEWS 1.996 ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-06-06 23:36 Message: Logged In: YES user_id=282299 Please mark this as a patch and consider for commit. ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-05-30 01:36 Message: Logged In: YES user_id=282299 Attaching patch against Python HEAD, including docs and test. ---------------------------------------------------------------------- Comment By: John Belmonte (jbelmonte) Date: 2004-05-21 16:46 Message: Logged In: YES user_id=282299 I do intend to make a patch, but it may be some time before I get to it. Please give me a few weeks. If someone else would like to step in, that is fine, just let me know before you start the work. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-21 08:00 Message: Logged In: YES user_id=80475 Consider mentioning this on comp.lang.python. Perhaps someone will volunteer to write a patch. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-05-09 18:08 Message: Logged In: YES user_id=21627 Would you like to work on a patch? ---------------------------------------------------------------------- Comment By: David Albert Torpey (dtorp) Date: 2004-05-09 03:54 Message: Logged In: YES user_id=681258 I like this idea. It is more portable. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2004-04-15 22:52 Message: Logged In: YES user_id=21627 Move to feature requests tracker ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=935915&group_id=5470 From noreply at sourceforge.net Tue Jun 8 04:32:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 8 04:32:43 2004 Subject: [ python-Bugs-474836 ] Tix not included in windows distribution Message-ID: Bugs item #474836, was opened at 2001-10-25 13:22 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 Category: Tkinter Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Martin v. L?wis (loewis) Summary: Tix not included in windows distribution Initial Comment: Although there is a Tix.py available, there is no Tix support in the precomiled Python-distribution for windows. So import Tix works fine, but root = Tix.Tk() results in TclError: package not found. It is possible to circumvent this problem by installing a regular Tcl/Tk distribution (e.g. in c:\programme\tcl) and installing Tix in the regular Tcl-path (i.e. tcl\lib). Mathias ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-08 10:32 Message: Logged In: YES user_id=21627 Thanks to Thomas' efforts, the problem has been eventually resolved for Python 2.3.4. It is not clear at this time whether Tix will also ship with Python 2.4, as that release will be built with Visual Studio .NET 2004; Tix currently does not build with that compiler. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-16 21:27 Message: Logged In: YES user_id=11105 Thanks, Martijn, that helped. The tix-dll is in the directory c:\Python23\DLLs, and the pkgindex.tcl is in the c:\Python23\tcl\tix8.1 directory, so I added lappend dirs [file join [file dirname [info nameofexe]] DLLs] and it works. I still wonder if it would be better to locate the dll relative to the directory of pkgindex.tcl, but I cannot achive this. Better than nothing. I'll check it in and close the bug. (Sidenote: it seems MvL's instructions actually *were* correct, I just missed the 'DLLs]' part which were on the next line ;-) ---------------------------------------------------------------------- Comment By: Martijn Pieters (mjpieters) Date: 2004-04-16 11:08 Message: Logged In: YES user_id=116747 If the tix8184 DLL cannot be found, this is most likely because you are running a Python binary with a different relative path than the bog-standard c:\Python23\python.exe. For example, the Pythonwin package lives in a site-packages subdir! To have Tix work in Pythonwin therefor, you'll have to add another search path to the tix8.1/pkgIndex.tcl file, one which uses the correct relative path for the DLLs dir. I added: lappend dirs [file join [file dirname [info nameofexe]] .. .. .. DLLs] (That's one line, with three ..'s). This'll look for a DLLs dir 3 directories above the dir of the running binary. Voila, it now works in Pythonwin as well as in IDLE and in standalone scripts. Martijn ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-08 21:57 Message: Logged In: YES user_id=11105 Martin, any ideas? 
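A quick way to check whether a given Windows installation actually has a working Tix is to wrap the same 'package require Tix' call that Tix.Tk() performs (see the traceback in the comments below):

    import Tkinter, Tix

    try:
        root = Tix.Tk()                             # does 'package require Tix' internally
        print root.tk.eval('package require Tix')   # prints the Tix version string
    except Tkinter.TclError, e:
        print "Tix is not usable here:", e          # "package not found", missing tix8184.dll, etc.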
---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-02 18:48 Message: Logged In: YES user_id=11105 Problem seems to be that tix8184.dll is not found, and neither of the entries in tcl\tix8.1\pkgIndex.tcl seem to work. Adding a line lappend dirs /python23/DLLs helps when the whole stuff is installed in c:\Python23, but this cannot be the solution. OTOH, I don't know anything of tcl, so I cannot proceed. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-01 22:16 Message: Logged In: YES user_id=11105 I've built according to your instructions (slightly adjusted), then copied tix8184.dll to the DLLs directory (of a Python 2.3.3, installed with the windows installer), and the tix8.1 directory into the tcl directory (sibling of tcl8.4, tk8.4 and so on). Demo\tixwidgets.py complains: Traceback (most recent call last): File "c:\sf\python\dist23\src\Demo\tix\tixwidgets.py", line 1002, in ? root = Tix.Tk() File "C:\Python23\lib\lib-tk\Tix.py", line 210, in __init__ self.tk.eval('package require Tix') _tkinter.TclError: couldn't load library "tix8184.dll": this library or a dependent library could not be found in library path Any advise? ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-31 22:03 Message: Logged In: YES user_id=21627 The instructions from 2003-04-26/2003-06-15 should still be valid. For 2.4, the story will be different, as Tix does not currently build with VC7. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-03-31 21:43 Message: Logged In: YES user_id=11105 I'm willing to do some work to include tix in Python 2.3.4, if someone can update the instructions. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-02-24 18:16 Message: Logged In: YES user_id=764593 Note that the problem is still there in 2.3.3; if it can't be fixed, could the documentation at least mention that Tix requires 3rd-party libraries on Windows? ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2003-10-03 15:15 Message: Logged In: YES user_id=21627 Reassigning to Thomas, who is doing Windows releases these days. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2003-10-03 15:13 Message: Logged In: YES user_id=31435 Unassigned (doesn't look like I'll ever get time for this). ---------------------------------------------------------------------- Comment By: Aaron Optimizer Digulla (digulla) Date: 2003-10-03 13:10 Message: Logged In: YES user_id=606 loewis, when will your package show up in the official Python distribution? It's still not there in 2.3.2 :-( ---------------------------------------------------------------------- Comment By: Aaron Optimizer Digulla (digulla) Date: 2003-07-28 15:22 Message: Logged In: YES user_id=606 The Tix8184.dll is still missing in Python 2.3c2. The included Tix8183.dll (which is in the directory tcl\tix8.1\ along with a couple of other dlls -> can't be found by Python) is linked against Tcl/Tk 8.3. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2003-06-15 14:40 Message: Logged In: YES user_id=21627 I found that the instructions need slight modification: In step 2, use tk...\mkd.bat for mkdir. Apart from that, these instructions work fine for me, now. I have made a binary release of tix8.1 for Python 2.3 at http://www.dcl.hpi.uni-potsdam.de/home/loewis/tix8.1.zip The tix8184.dll goes to DLLs, the tix8.1 subdirectory goes to tcl. It differs from the standard tix8.1 subdirectory only in fixing the path to the DLLs directory. To test whether this works, execute Demo/tix/tixwidgets.py. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2003-04-26 12:22 Message: Logged In: YES user_id=21627 I still think Python should include Tix. Here are some instructions on how to make Tix 8.1.4 work: 1. Unpack as a sibling of tcl8.4.1 and tk8.4.1 2. Edit win\common.mk, to set the following variables TCL_VER=8.4 INSTALLDIR= MKDIR=mkdir 3. Edit win\makefile.vc, to set the following variables TOOLS32= TOOLS32_rc= 4. Edit win\tk\pkgindex.tcl, to replace lappend dirs ../../Dlls with lappend dirs [file join [file dirname [info nameofexe]] Dlls] 5. nmake -f makefile.vc 6. nmake -f makefile.vc install 7. Copy INSTALLDIR\bin\tix8184.dll to \DLLs 8. Optionally copy tix8184.lib somewhere 9. copy INSTALLDIR\lib\tix8.1 into \tcl With these instructions, invoking t.tk.eval("package require Tix") succeeds. For some reason, Tix won't still find any of the commands; I'll investigate this later. ---------------------------------------------------------------------- Comment By: Internet Discovery (idiscovery) Date: 2002-12-11 10:14 Message: Logged In: YES user_id=33229 My you're courageous - going with a version of Tcl that doesn't even pass its own tests :-) Been there, seen it, done it .... 8.1.4 will be out this week, which compiles with 8.4 but I don't expect it to "support" 8.4 for a while yet (they added more problems in 8.4.1). 8.3.5 is definitely "supported". Check back with me before 2.3 goes into beta and I'll do another minor release if necessary. Maybe Tk will test clean then. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-11-20 02:36 Message: Logged In: YES user_id=31435 Parents shouldn't disagree in front of their children . Not all the Tcl or Tk tests (their tests, not ours) passed when I built 8.4.1, but I couldn't (and can't) make time to dig into that, and all the Python stuff I tried worked fine. So I don't fear 8.4, and am inclined to accept Martin's assurance that 8.4 is best for Python. We intend to put out the first 2.3 Python alpha by the end of the year, and my bet is it won't be a minute before that. If Tix 8.1.4 is at RC3 now, I'd *guess* you'll have a final release out well before then. Yes? No? ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-11-19 23:12 Message: Logged In: YES user_id=21627 I think the recommendation cannot apply to Python; I'm very much in favour of releasing Python 2.3 with Tk 8.4.x. So the question then is whether Python 2.3 should include Tix 8.1.4 or no Tix at all, and at what time Tix 8.1.4 can be expected. ---------------------------------------------------------------------- Comment By: Internet Discovery (idiscovery) Date: 2002-11-19 20:10 Message: Logged In: YES user_id=33229 Look on http://tix.sourceforge.net/download.shtml for Tix 8.1.4 RC3. 
It works with Tk 8.4.1 and passes the test suite, but there are still issues with Tk 8.4 and it has not been widely tested with yet with 8.4.1, so we still recommend 8.3.5. (Tcl major releases often aren't stable until patch .3 or so.) If you have any problems let me know directly by email and I'll try and help. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-11-18 17:35 Message: Logged In: YES user_id=31435 Does Tix 8.1.3 play with Tcl/Tk 8.4.1? The 2.3. Windows distro is set up to include the latter now. The win\common.mak file from Tix 8.1.3 doesn't have a section for Tcl/Tk 8.4, though. There appear to be several reasons Tix won't compile on my box anyway without fiddling the Tix makefiles (e.g., my VC doesn't live in \DevStudio), so before spending more time on that I'd like to know whether it's doomed. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-11-07 15:52 Message: Logged In: YES user_id=6380 I support this. Tim, I know you're not a big Tk user (to say the least). I'll offer to help in person. ---------------------------------------------------------------------- Comment By: Internet Discovery (idiscovery) Date: 2002-11-03 08:30 Message: Logged In: YES user_id=33229 I would really like to see Tix in 2.3 and will be glad to help. AFAIK there are no major issues with tix-8.1.3 and Python 2.x and it should be a simple drop in of a cannonically compiled Tix. If there are any issues that need dealing with at Tix's end, I'll be glad to put out a new minor release of Tix to address them. On Python's end I've suggested a fix for http://python.org/sf/564729 FYI, please also see my comments for bug 632323. ---------------------------------------------------------------------- Comment By: Internet Discovery (idiscovery) Date: 2002-11-03 06:34 Message: Logged In: YES user_id=33229 I would really like to see Tix in 2.3 and will be glad to help. AFAIK there are no major issues with tix-8.1.3 and Python 2.x and it should be a simple drop in of a cannonically compiled Tix. If there are any issues that need dealing with at Tix's end, I'll be glad to put out a new minor release of Tix to address them. On Python's end I've suggested a fix for http://python.org/sf/564729 FYI, please also see my comments for bug 632323. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-03-23 04:34 Message: Logged In: YES user_id=6380 Yes, for 2.3. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-03-10 02:48 Message: Logged In: YES user_id=31435 Guido, do you want me to spend time on this? ---------------------------------------------------------------------- Comment By: Mathias Palm (monos) Date: 2002-03-07 14:38 Message: Logged In: YES user_id=361926 Thanks. Mathias ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-02-25 13:57 Message: Logged In: YES user_id=21627 The zip file is slightly too large for SF, so it is now at http://www.informatik.hu- berlin.de/~loewis/python/tix813win.zip ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2002-02-25 13:56 Message: Logged In: YES user_id=21627 Building Tix from sources is non-trivial, and I could not find any recent Windows binary distribution (based on Tix 8.1). So I'll attach a build of Tix 8.1.3 for Tcl/Tk 8.3, as a drop-in into the Python binary distribution. Compared to the original distribution, only tix8.1 \pkgIndex.tcl required tweaking, to tell it that tix8183.dll can be found in the DLLs subdirectory. Also, unless TIX_LIBRARY is set, the Tix tcl files *must* live in tcl\tix8.1, since tix8183.dll will look in TCL_LIBRARY\..\tix (among other locations). If a major Tcl release happens before Python 2.3 is released (and it is then still desirable to distribute Python with Tix), these binaries need to be regenerated. Would these instructions (unpack zip file into distribution tree) be precise enough to allow incorporation into the windows installer? ---------------------------------------------------------------------- Comment By: Mathias Palm (monos) Date: 2001-10-29 12:53 Message: Logged In: YES user_id=361926 As mentioned in the mail above (by me, Mathias), Tix is a package belonging to Tcl/Tk (to be found on sourceforge: tix.sourceforge.net, or via the Python home page - tkinter link). Everything needed can be found there, just read about it (and dont forget about the winking, eyes might be getting dry) Mathias ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-10-25 20:26 Message: Logged In: YES user_id=31435 I don't know anything about Tix, so if somebody wants this in the Windows installer, they're going to have to explain exactly (by which I mean exactly <0.5 wink>) what's needed. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 From noreply at sourceforge.net Tue Jun 8 04:48:51 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 8 04:48:59 2004 Subject: [ python-Bugs-777597 ] socketmodule.c connection handling incorect on windows Message-ID: Bugs item #777597, was opened at 2003-07-25 17:01 Message generated for change (Comment added) made by troels You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777597&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Garth Bushell (garth42) Assigned to: Nobody/Anonymous (nobody) Summary: socketmodule.c connection handling incorect on windows Initial Comment: The socketmodule.c code does not handle connection refused correctly. This is due to a different of operation on windows of select. The offending code is in internal_connect in the MS_WINDOWS ifdef. The code in should test exceptfds to check for connecttion refused. If this is so itshould call getsockopt(SOL_SOCKET, SO_ERROR,..) to get the error status. (Source microsoft Platform SDK) The suggested fix is shown below (untested) #ifdef MS_WINDOWS f (s->sock_timeout > 0.0) { if (res < 0 && WSAGetLastError() == WSAEWOULDBLOCK) { /* This is a mess. 
Best solution: trust select */ fd_set exfds; struct timeval tv; tv.tv_sec = (int)s->sock_timeout; tv.tv_usec = (int)((s->sock_timeout - tv.tv_sec) * 1e6); FD_ZERO(&exfds); FD_SET(s->sock_fd, &exfds); /* Platform SDK says so */ res = select(s->sock_fd+1, NULL, NULL, &exfds, &tv); if (res > 0) { if( FD_ISSET( &exfds ) ) { /* Get the real reason */ getsockopt(s->sock_fd,SOL_SOCKET,SO_ERROR,(char*)&res,sizeof(res)); } else { /* God knows how we got here */ res = 0; } } else if( res == 0 ) { res = WSAEWOULDBLOCK; } else { /* Not sure we should return the erro from select? */ res = WSAGetLastError(); } } } else if (res < 0) res = WSAGetLastError(); #else ---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-08 10:48 Message: Logged In: YES user_id=32863 http://python.org/sf/965036 has been updated with a fixed and tested patch. Could somebody review and apply it? Thanks! ---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-02 15:59 Message: Logged In: YES user_id=32863 I have turned Garth's code into a patch versus Python 2.3.4. I don't believe the fix is correct and complete, but it should serve as a starting point. Patch is in http://python.org/sf/965036 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-07-29 00:00 Message: Logged In: YES user_id=33168 Garth could you produce a patch against 2.3c2 with your selected change and test it? It would help us a lot as we are all very overloaded. Thanks. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777597&group_id=5470 From noreply at sourceforge.net Tue Jun 8 09:44:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 8 09:44:59 2004 Subject: [ python-Bugs-730467 ] Not detecting AIX_GENUINE_CPLUSPLUS Message-ID: Bugs item #730467, was opened at 2003-04-30 17:22 Message generated for change (Comment added) made by mjarvis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=730467&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Patrick Miller (patmiller) Assigned to: Nobody/Anonymous (nobody) Summary: Not detecting AIX_GENUINE_CPLUSPLUS Initial Comment: PYthon2.2.2 and Python2.3 AIX xlC Need to update the code in configure.in to remove the hardcoded path for load where we detect AIX_GENUINE_CPLUSPLUS The current version looks for /usr/lpp/xlC/include/load.h but it should just use Without it, Python uses load() for dynamic load instead of loadAndInit() in dynload_aix.c Patch attached # checks for system dependent C++ extensions support case "$ac_sys_system" in AIX*) AC_MSG_CHECKING(for genuine AIX C++ extensions support) AC_TRY_LINK([#include "], [loadAndInit("", 0, "")], [AC_DEFINE(AIX_GENUINE_CPLUSPLUS) AC_MSG_RESULT(yes)], [AC_MSG_RESULT(no)]);; *) ;; esac ---------------------------------------------------------------------- Comment By: Michael Jarvis (mjarvis) Date: 2004-06-08 08:44 Message: Logged In: YES user_id=108945 This is still a problem with Python 2.3.4, FYI ---------------------------------------------------------------------- Comment By: Patrick Miller (patmiller) Date: 2003-07-16 17:23 Message: Logged In: YES user_id=30074 Martin also wrote: > will the new code still run on the old versions 
of I will upload a patch that looks in all the known directories. ---------------------------------------------------------------------- Comment By: Patrick Miller (patmiller) Date: 2003-07-16 17:19 Message: Logged In: YES user_id=30074 Martin writes: > When you say "update", do you mean that the code was correct The old code was correct because the xlC code was not in the main include directory but rather in the /usr/lpp/xlC directory. IBM apparently moved the file around during various releases of AIX though it settled a few versions ago in /usr/include. Perhaps the proper tactic is to look in the various directories like the perl configuration does: % more ./ext/DynaLoader/hints/aix.pl # See dl_aix.xs for details. use Config; if ($Config{libs} =~ /-lC/ && -f '/lib/libC.a') { $self->{CCFLAGS} = $Config{ccflags} . ' -DUSE_libC'; if (-f '/usr/vacpp/include/load.h') { $self->{CCFLAGS} .= ' -DUSE_vacpp_load_h'; } elsif (-f '/usr/ibmcxx/include/load.h') { $self->{CCFLAGS} .= ' -DUSE_ibmcxx_load_h'; } elsif (-f '/usr/lpp/xlC/include/load.h') { $self->{CCFLAGS} .= ' -DUSE_xlC_load_h'; } elsif (-f '/usr/include/load.h') { $self->{CCFLAGS} .= ' -DUSE_load_h'; } } ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2003-05-04 07:33 Message: Logged In: YES user_id=21627 When you say "update", do you mean that the code was correct for earlier versions of some software (which versions of which software?), and is now incorrect? With the proposed change, will the new code still run on the old versions of the software? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=730467&group_id=5470 From noreply at sourceforge.net Wed Jun 9 02:54:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 02:54:26 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-08 23:54 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) Assigned to: Nobody/Anonymous (nobody) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. 
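A quick interactive check of what the lookup machinery accepts (the names below are taken from the list in this report); as the follow-ups note, hyphens are translated to underscores during encoding lookup, so both spellings reach the same codec:

    import codecs

    for name in ('euc_jp', 'euc-jp', 'EUC-JP', 'shift-jis', 'iso-2022-jp'):
        codecs.lookup(name)                 # none of these raises LookupError
    print u'\u3042'.encode('euc-jp') == u'\u3042'.encode('euc_jp')   # True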
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Wed Jun 9 03:01:20 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 03:01:33 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 15:54 Message generated for change (Comment added) made by perky You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Mike Brown (mike_j_brown) >Assigned to: Hye-Shik Chang (perky) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 16:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Wed Jun 9 03:10:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 03:10:21 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 15:54 Message generated for change (Comment added) made by perky You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 >Status: Open >Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) >Assigned to: Nobody/Anonymous (nobody) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 16:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. 
iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 16:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Wed Jun 9 04:25:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 04:25:17 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-08 23:54 Message generated for change (Comment added) made by mike_j_brown You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) Assigned to: Nobody/Anonymous (nobody) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 01:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 00:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 00:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Wed Jun 9 05:26:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 05:27:03 2004 Subject: [ python-Bugs-968245 ] Python Logging filename & file number Message-ID: Bugs item #968245, was opened at 2004-06-07 15:49 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ryan (superleo303) Assigned to: Vinay Sajip (vsajip) Summary: Python Logging filename & file number Initial Comment: I use Freebsd and redhat 9.0 linus at work. Using Python Logging filename & file number on freebsd works fine. When i print a log statement, and have my config formatter section set like: [formatter_default] format=%(asctime)s <%(levelname)s> <%(module)s:%(lineno)s> %(message)s datefmt= On bsd my logs look like: stateOnly is: DE MEssages ......... collecting datasource: 1011 Collectors overwrite existing pickle files: 0 Collectors run in multiple threads: 4 sql is: ... See, the filename and file number get displayed with each logging call, Now, The same exact code run on the same exact version of python on a linux machine yiedls the lines: <__init__:988> stateOnly is: DE <__init__:988> MEssages <__init__:988> collecting datasource: 1011 <__init__:988> Collectors overwrite existing pickle files: 0 <__init__:988> Collectors run in multiple threads: 4 <__init__:988> sql is: ... So i opened up ./python2.3/logging/__init__.py line 988 and it seems to be where the problem is. Can someone take a look at this asap? I have to run all my code on linux machines, so now i cant see which file and which line is making the logging. To reproduce, get a freebsd and linux machine, then run a simple script that uses logging config files and use the above example as your formatter in the logging confrig file, BSD should show the filenames and numbers, linux should show __init__ 988 instead. ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 09:26 Message: Logged In: YES user_id=308438 The problem appears to be related to some underlying problem with sys._getframe(). Can you please download the attached stackwalk.tar.gz and run it on both FreeBSD and Linux? Please post your findings here, thanks. Note that the stackwalk stuff contains no calls to the logging package. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 From noreply at sourceforge.net Wed Jun 9 05:28:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 05:28:33 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 21:51 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. 
(fdrake) Assigned to: Vinay Sajip (vsajip) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 09:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 19:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Wed Jun 9 05:39:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 05:39:55 2004 Subject: [ python-Bugs-964949 ] Ctrl-C causes odd behaviour Message-ID: Bugs item #964949, was opened at 2004-06-02 13:17 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964949&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Michael Bax (mrbax) >Assigned to: Nobody/Anonymous (nobody) Summary: Ctrl-C causes odd behaviour Initial Comment: With various versions of console Python 2.3.x under Windows 2000, executed using the "Python (command- line)" Start Menu shortcut, I have noticed the following intermittent errors: 1. When pressing Ctrl-C at the prompt, Python terminates. 2. When pressing Ctrl-C during a raw_input, Python raises an EOFError instead of KeyboardInterrupt. I usually cannot duplicate this behaviour by repeatedly pressing Ctrl-C or repeating the steps that led to it. 
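Until the cause is pinned down, a defensive guard for interactive scripts on the affected consoles is to treat both outcomes the same way; this is only a workaround sketch, not a fix:

# On the consoles described above, Ctrl-C during raw_input() sometimes
# surfaces as EOFError rather than KeyboardInterrupt, so catch both.
try:
    line = raw_input('> ')
except (KeyboardInterrupt, EOFError):
    print 'interrupted'
else:
    print 'read:', line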
---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-09 11:39 Message: Logged In: YES user_id=11105 I cannot reproduce this behaviour, so I cannot do anything on this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964949&group_id=5470 From noreply at sourceforge.net Wed Jun 9 06:03:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 06:03:25 2004 Subject: [ python-Bugs-969492 ] Python hangs up on I/O operations on the latest FreeBSD 4.10 Message-ID: Bugs item #969492, was opened at 2004-06-09 17:03 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969492&group_id=5470 Category: None Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: _Iww_ (iww) Assigned to: Nobody/Anonymous (nobody) Summary: Python hangs up on I/O operations on the latest FreeBSD 4.10 Initial Comment: Hello, friends! Here is my sample code, which works perfectly on other systems, but not the FreeBSD 4.10-STABLE I got today by cvsupping. #!/usr/local/bin/python from threading import Thread class Reading(Thread): def __init__(self): Thread.__init__(self) def run(self): print "Start!" z = 1 while 1: print z z += 1 fl = open('blah.txt') fl.read() fl.close() for i in range(10): print "i:", i zu = open('bzzz.txt') print "|->", zu.read() bzz = Reading() bzz.start() #--- I have tested this on Python 2.3.3, 2.3.4 and 2.4a0 from CVS. The interpretar falls in the infinite loop and stays in the poll-state. You can see it in the top: 34446 goga 2 0 3328K 2576K poll 0:00 0.00% 0.00% python I think it has some connection to the latest bug, found in the select() function (http://www.securityfocus.com/bid/10455) and its fix on BSD. Best regards, _Iww_ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969492&group_id=5470 From noreply at sourceforge.net Wed Jun 9 06:21:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 06:21:38 2004 Subject: [ python-Bugs-964949 ] Ctrl-C causes odd behaviour Message-ID: Bugs item #964949, was opened at 2004-06-02 04:17 Message generated for change (Comment added) made by mrbax You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964949&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Michael Bax (mrbax) Assigned to: Nobody/Anonymous (nobody) Summary: Ctrl-C causes odd behaviour Initial Comment: With various versions of console Python 2.3.x under Windows 2000, executed using the "Python (command- line)" Start Menu shortcut, I have noticed the following intermittent errors: 1. When pressing Ctrl-C at the prompt, Python terminates. 2. When pressing Ctrl-C during a raw_input, Python raises an EOFError instead of KeyboardInterrupt. I usually cannot duplicate this behaviour by repeatedly pressing Ctrl-C or repeating the steps that led to it. ---------------------------------------------------------------------- >Comment By: Michael Bax (mrbax) Date: 2004-06-09 03:21 Message: Logged In: YES user_id=1055057 It *is* intermittent. Try entering the tutorial examples (copy and paste) and pressing Ctrl-C at random. That works for me. 
Here's another example: today I rebooted, then later clicked on the "Python (command-line)" shortcut. I typed 1 and pressed ENTER. I then pressed Ctrl-C. Boom. Window disappears. I tried again around 20 times -- it gave a KeyboardInterrupt each time, as it should. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-09 02:39 Message: Logged In: YES user_id=11105 I cannot reproduce this behaviour, so I cannot do anything on this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964949&group_id=5470 From noreply at sourceforge.net Wed Jun 9 09:00:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 09:00:07 2004 Subject: [ python-Bugs-969574 ] BSD restartable signals not correctly disabled Message-ID: Bugs item #969574, was opened at 2004-06-09 22:59 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969574&group_id=5470 Category: Python Interpreter Core Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Luke Mewburn (lukem) Assigned to: Nobody/Anonymous (nobody) Summary: BSD restartable signals not correctly disabled Initial Comment: I noticed a problem with some python scripts not performing correctly when ^C (SIGINT) was pressed. I managed to isolate it to the following test case: import sys foo=sys.stdin.read() Once that's executed, you need to press ^C _twice_ for KeyboardInterrupt to be raised; the first ^C is ignored. If you manually enter that into an interactive python session this behaviour also occurs, although it appears to function correctly if you run the foo=sys.stdin.read() a second time (only one ^C is needed). This occurs on NetBSD 1.6 and 2.0, with python 2.1, 2.2 and 2.3, configured with and without pthreads. It also occurs on FreeBSD 4.8 with python 2.2.2. It does not occur on various Linux systems I asked people to test for me. (I have a NetBSD problem report about this issue: http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=24797 ) I traced the process and noticed that the read() system call was being restarted when the first SIGINT was received. This hint, and the fact that Linux was unaffected indicated that python was probably not expecting restartable signal behaviour, and that behaviour is the default in BSD systems for signal(3) (but not sigaction(2) or the other POSIX signal calls). After doing some research in the python 2.3.4 source it appeared to me that the intent was to disable restartable signals, but that was not what was happening in practice. I noticed the following code issues: * not all code had been converted from using signal(3) to PyOS_getsig() and PyOS_setsig(). This is contrary to the HISTORY for python 2.0beta2. * siginterrupt(2) (an older BSD function) was being used in places in an attempt to ensure that signals were not restartable. However, in some cases this function was called _before_ signal(3)/PyOS_setsig(), which would mean that the call may be ineffective if PyOS_setsig() was implemented using signal(3) and not sigaction(2) * PyOS_setsig() was using sigaction(2) suboptimally, iand inheriting the sa_flags from the existing handler. If SA_RESTART happened to be already set for the signal, it would be copied. 
I provide the following patch, which does: * converts a few remaining signal(3) calls to PyOS_setsig(). There should be none left in a build on a UNIX system, although there may be on other systems. Converting any remaining calls to signal(3) is left as an exercise :) * moves siginterrupt(2) to PyOS_setsig() when the latter is implemented using signal(3) instead of sigaction(2). * when implementing PyOS_setsig() in terms of sigaction(2), use sigaction(2) in a more portable and "common" manner that explicitly clears the flags for the new handler, thus preventing SA_RESTART from being implicitly copied. With this patch applied, python passes all the same regression tests as without it, and my test case now exits on the first ^C as expected. Also, it is possible that this patch may also fix other outstanding signal issues on systems with BSD restartable signal semantics. Cheers, Luke. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969574&group_id=5470 From noreply at sourceforge.net Wed Jun 9 10:16:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 10:17:15 2004 Subject: [ python-Feature Requests-967161 ] pty.spawn() enhancements Message-ID: Feature Requests item #967161, was opened at 2004-06-06 00:29 Message generated for change (Comment added) made by jhenstridge You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=967161&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) Assigned to: Nobody/Anonymous (nobody) Summary: pty.spawn() enhancements Initial Comment: (Originally suggested by James Henstridge in bug #897935) There are also a few changes that would be nice to see in pty.spawn: 1) get the exit status of the child. Could be fixed by adding the following to the end of the function: pid, status = os.waitpid(pid, 0) return status 2) set master_fd to non-blocking mode, so that the output is printed to the screen at the speed it is produced by the child. ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2004-06-09 22:16 Message: Logged In: YES user_id=146903 Since filing the original bug report, I got reports that simply setting the fds to non-blocking caused problems under Solaris. Some details are available in this bug report: http://bugzilla.gnome.org/show_bug.cgi?id=139168 The _copy() function never raised an IOError or OSError, so it never exited. I'd imagine that EOF could be detected by getting back then empty string when reading from the fd when select() says it is ready for reading, but I haven't checked whether this works. 
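The EOF-by-empty-read idea at the end of that comment would look roughly like the following. This is only a sketch of the suggestion, not the actual pty._copy() code; the helper name copy_until_eof is invented for illustration:

import os, select

def copy_until_eof(master_fd, out_fd=1):
    # Wait for the pty master to become readable, then treat either an
    # empty read or an OSError as end-of-file and leave the loop.
    while 1:
        rfds, wfds, xfds = select.select([master_fd], [], [])
        if master_fd in rfds:
            try:
                data = os.read(master_fd, 1024)
            except OSError:      # some platforms report EOF this way
                break
            if not data:         # others return an empty string
                break
            os.write(out_fd, data)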
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=967161&group_id=5470 From noreply at sourceforge.net Wed Jun 9 11:23:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 11:23:33 2004 Subject: [ python-Bugs-891930 ] configure argument --libdir is ignored Message-ID: Bugs item #891930, was opened at 2004-02-06 17:54 Message generated for change (Comment added) made by goeran You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=891930&group_id=5470 Category: Build Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Göran Uddeborg (goeran) Assigned to: Nobody/Anonymous (nobody) Summary: configure argument --libdir is ignored Initial Comment: I wanted to place the LIBDIR/python2.2 hierarchy in an alternate location, so I tried giving the --libdir command line option to configure. The argument is accepted, but apparently it is ignored during the build. Apparently, this directory is hard coded to be PREFIX/lib/python2.2 You could argue whether this is a bug report or an enhancement request. Since "configure --help" does mention this option, I put it here. In either case I would consider it a good improvement to honour the --libdir configure option. ---------------------------------------------------------------------- >Comment By: Göran Uddeborg (goeran) Date: 2004-06-09 17:23 Message: Logged In: YES user_id=55884 lib/pythonVERSION contains not only .py files. It also contains .so modules, which obviously are platform dependent. So LIBDIR does look appropriate to me. Do I miss something? I wasn't aware there were so many dependencies on this. Red Hat has done that in their build of Python for the x86_64 platform. I enclose the patch they use for this from the source RPM for Fedora Core 2. It is only 232 lines long. I enclose it for your convenience. I would have thought supporting --libdir would be of the same order of magnitude. Do I miss something here too? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 19:42 Message: Logged In: YES user_id=1057404 There are many, many assumptions in the code (and probably elsewhere by now) that things go in ''' PREFIX "lib/python" VERSION '''. The correct variable to change /would be/ DATADIR, as it is for non-platform-dependent files (which .pys are, surely?). This is also not honoured. It doesn't appear possible to remove the options from configure's output without making changes to autoconf.
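For anyone checking where a particular build actually put its library directory, the interpreter can report what it was configured with; an illustrative snippet only (it shows the symptom, it does not change it):

import sys
from distutils import sysconfig

# On an unpatched build this reports <prefix>/lib/pythonX.Y regardless
# of any --libdir value that was passed to configure.
print sys.prefix
print sysconfig.get_python_lib(standard_lib=True)   # standard library
print sysconfig.get_python_lib(plat_specific=True)  # platform-specific site-packages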
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=891930&group_id=5470 From noreply at sourceforge.net Wed Jun 9 11:56:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 11:56:52 2004 Subject: [ python-Bugs-969718 ] BASECFLAGS are not passed to module build line Message-ID: Bugs item #969718, was opened at 2004-06-09 10:56 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969718&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jason Beardsley (vaxhacker) Assigned to: Nobody/Anonymous (nobody) Summary: BASECFLAGS are not passed to module build line Initial Comment: The value of BASECFLAGS from /prefix/lib/pythonver/config/Makefile is not present on the compile command for modules being built by distutils ("python setup.py build"). It seems that only the value of OPT is passed along. This is insufficient when BASECFLAGS contains "-fno-static-aliasing", since recent versions of gcc will emit incorrect (crashing) code if this flag is not provided, when compiling certain modules (the mx products from egenix, for example). I did try to set CFLAGS in my environment, as directed by documentation, but this also had zero effect on the final build command. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969718&group_id=5470 From noreply at sourceforge.net Wed Jun 9 12:01:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 12:01:44 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 17:51 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-09 12:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 05:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? 
Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 15:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Wed Jun 9 12:59:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 12:59:40 2004 Subject: [ python-Bugs-969757 ] function and method objects confounded in Tutorial Message-ID: Bugs item #969757, was opened at 2004-06-09 16:59 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969757&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Mark Jackson (mjackson) Assigned to: Nobody/Anonymous (nobody) Summary: function and method objects confounded in Tutorial Initial Comment: In Section 9.3.2 (Class Objects) we find, right after the MyClass example code: "then MyClass.i and MyClass.f are valid attribute references, returning an integer and a method object, respectively." However, at the end of Section 9.3.3 (Instance Objects) we find, referring to the same example: "But x.f is not the same thing as MyClass.f - it is a method object, not a function object." There are references to MyClass.f as a function or function object in Section 9.3.4 as well. Although Python terminology doesn't seem to be completely consistent around this point (in the Python 2.1.3 interpreter MyClass.f describes itself as an "unbound method") iit seems clear that calling MyClass.f a method object in Section 9.3.2 is, in this context, an error. Should be changed to "function object." 
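The distinction the report asks the Tutorial to respect can be seen directly at the interpreter prompt; a small sketch using the same shape of class as the Tutorial example:

class MyClass:
    i = 12345
    def f(self):
        return 'hello world'

x = MyClass()
print type(MyClass.f)               # unbound method (a method object)
print type(x.f)                     # bound method (also a method object)
print type(MyClass.__dict__['f'])   # the plain function object underneath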
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969757&group_id=5470 From noreply at sourceforge.net Wed Jun 9 17:45:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 17:45:34 2004 Subject: [ python-Bugs-969938 ] pydoc ignores module's __all__ attributes Message-ID: Bugs item #969938, was opened at 2004-06-09 16:45 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969938&group_id=5470 Category: Demos and Tools Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Skip Montanaro (montanaro) Summary: pydoc ignores module's __all__ attributes Initial Comment: The pydoc tool ignores the contents of a module's __all__ list. If the programmer has gone to the trouble of creating __all__ then pydoc should only generate/display documentation for objects it lists. The problem can be demonstrated by executing: pydoc csv Scroll down and note that StringIO is described in the Functions section. Someone executing "from csv import *" would not get a StringIO function added to their namespace. I've no time to create a patch right this minute, but if you come up with one, feel free to steal this bug report from me. ;-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969938&group_id=5470 From noreply at sourceforge.net Wed Jun 9 20:35:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 9 20:35:56 2004 Subject: [ python-Bugs-970042 ] lfcntl.lockf() signature uses len, doc refers to length Message-ID: Bugs item #970042, was opened at 2004-06-10 00:35 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970042&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Clinton Roy (clintonroy) Assigned to: Nobody/Anonymous (nobody) Summary: lfcntl.lockf() signature uses len, doc refers to length Initial Comment: The documentation has the signature: lockf(fd, operation, [len, [start, [whence]]]) but the description refers to the length parameter. Obviously very minor. Personally, I'd be happier to see the signature changed, rather than the documentation. 
cheers, ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970042&group_id=5470 From noreply at sourceforge.net Thu Jun 10 07:15:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 07:17:49 2004 Subject: [ python-Bugs-876637 ] Random stack corruption from socketmodule.c Message-ID: Bugs item #876637, was opened at 2004-01-14 07:41 Message generated for change (Comment added) made by troels You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=876637&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Mike Pall (mikesfpy) Assigned to: Nobody/Anonymous (nobody) Summary: Random stack corruption from socketmodule.c Initial Comment: THE PROBLEM: The implementation of the socket_object.settimeout() method (socketmodule.c, function internal_select()) uses the select() system call with an unbounded file descriptor number. This will cause random stack corruption if fd>=FD_SETSIZE. This took me ages to track down! It happened with a massively multithreaded and massively connection-swamped network server. Basically most of the descriptors did not use that routine (because they were either pure blocking or pure non-blocking). But one module used settimeout() and with a little bit of luck got an fd>=FD_SETSIZE and with even more luck corrupted the stack and took down the whole server process. Demonstration script appended. THE SOLUTION: The solution is to use poll() and to favour poll() even if select() is available on a platform. The current trend in modern OS+libc combinations is to emulate select() in libc and call kernel-level poll() anyway. And this emulation is costly (both for the caller and for libc). Not so the other way round (only some systems of historical interest do that BTW), so we definitely want to use poll() if it's available (even if it's an emulation). And if select() is your only choice, then check for FD_SETSIZE before using the FD_SET macro (and raise some strange exception if that fails). [ I should note that using SO_RCVTIMEO and SO_SNDTIMEO would be a lot more efficient (kernel-wise at least). Unfortunately they are not universally available (though defined by most system header files). But a simple runtime test with a fallback to poll()/select() would do. ] A PATCH, A PATCH? Well, the check for FD_SETSIZE is left as an exercise for the reader. :-) Don't forget to merge this with the stray select() way down by adding a return value to internal_select(). But yes, I can do a 'real' patch with poll() [and even one with the SO_RCVTIMEO trick if you are adventurous]. But, I can't test it with dozens of platforms, various include files, compilers and so on. So, dear Python core developers: Please discuss this and tell me, if you want a patch, then you'll get one ASAP. Thank you for your time! ---------------------------------------------------------------------- Comment By: Troels Walsted Hansen (troels) Date: 2004-06-10 13:15 Message: Logged In: YES user_id=32863 I have created a patch to make socketmodule use poll() when available. See http://python.org/sf/970288 (I'm not allowed to attach patches to this bug item.) 
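The same prefer-poll-over-select idiom argued for in the report can be expressed in pure Python; a hedged sketch, where the helper name wait_readable is invented for illustration:

import select

def wait_readable(fd, timeout):
    # Prefer poll() where it exists: it takes the fd value directly and
    # is not limited by FD_SETSIZE the way select()'s fd_set is.
    if hasattr(select, 'poll'):
        p = select.poll()
        p.register(fd, select.POLLIN)
        return bool(p.poll(int(timeout * 1000)))    # poll() wants milliseconds
    rfds, wfds, xfds = select.select([fd], [], [], timeout)
    return bool(rfds)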
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=876637&group_id=5470 From noreply at sourceforge.net Thu Jun 10 08:46:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 08:46:28 2004 Subject: [ python-Bugs-970334 ] 2.3.4 fails build on solaris 10 - complexobject.c Message-ID: Bugs item #970334, was opened at 2004-06-10 07:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970334&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: JD Bronson (lonebandit) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.4 fails build on solaris 10 - complexobject.c Initial Comment: this has been an ongoing issue: gcc -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/classobject.o Objects/classobject.c gcc -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/cobject.o Objects/cobject.c gcc -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/complexobject.o Objects/complexobject.c Objects/complexobject.c: In function `complex_pow': Objects/complexobject.c:469: error: invalid operands to binary == Objects/complexobject.c:469: error: wrong type argument to unary minus Objects/complexobject.c:469: error: invalid operands to binary == Objects/complexobject.c:469: error: wrong type argument to unary minus make: *** [Objects/complexobject.o] Error 1 ..It fails at that point with no workaround. Jeff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970334&group_id=5470 From noreply at sourceforge.net Thu Jun 10 10:58:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 10:58:16 2004 Subject: [ python-Bugs-970459 ] Generators produce wrong exception Message-ID: Bugs item #970459, was opened at 2004-06-10 16:58 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anders Lehmann (anders_lehmann) Assigned to: Nobody/Anonymous (nobody) Summary: Generators produce wrong exception Initial Comment: The following script : def f(): yield "%s" %('Et','to') for i in f(): print i will produce the following traceback in Python 2.3.4 Traceback (most recent call last): File "python_generator_bug.py", line 6, in ? b+=f() TypeError: argument to += must be iterable Where I would expect a: TypeError : not all arguments converted during string formatting. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 From noreply at sourceforge.net Thu Jun 10 11:08:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 11:12:08 2004 Subject: [ python-Bugs-970334 ] 2.3.4 fails build on solaris 10 - complexobject.c Message-ID: Bugs item #970334, was opened at 2004-06-10 08:46 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970334&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: JD Bronson (lonebandit) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.4 fails build on solaris 10 - complexobject.c Initial Comment: this has been an ongoing issue: gcc -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/classobject.o Objects/classobject.c gcc -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/cobject.o Objects/cobject.c gcc -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/complexobject.o Objects/complexobject.c Objects/complexobject.c: In function `complex_pow': Objects/complexobject.c:469: error: invalid operands to binary == Objects/complexobject.c:469: error: wrong type argument to unary minus Objects/complexobject.c:469: error: invalid operands to binary == Objects/complexobject.c:469: error: wrong type argument to unary minus make: *** [Objects/complexobject.o] Error 1 ..It fails at that point with no workaround. Jeff ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-10 11:08 Message: Logged In: YES user_id=31435 See comments about Py_HUGE_VAL in pyport.h. Possible that this platform defines C's HUGE_VAL incorrectly, in which case config for this platfrom needs to set Py_HUGE_VAL to a correct expansion. You could also try compiling complexobject.c without optimization. In at least one prior case, the platform HUGE_VAL was correct, but expanded to such a hairy expression that it confused the platform C compiler when optimization was cranked up. Finally, you didn't say which version of gcc and its libraries you're using. It's possible that another version would work. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970334&group_id=5470 From noreply at sourceforge.net Thu Jun 10 11:38:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 11:38:46 2004 Subject: [ python-Bugs-970459 ] Generators produce wrong exception Message-ID: Bugs item #970459, was opened at 2004-06-10 10:58 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anders Lehmann (anders_lehmann) Assigned to: Nobody/Anonymous (nobody) Summary: Generators produce wrong exception Initial Comment: The following script : def f(): yield "%s" %('Et','to') for i in f(): print i will produce the following traceback in Python 2.3.4 Traceback (most recent call last): File "python_generator_bug.py", line 6, in ? b+=f() TypeError: argument to += must be iterable Where I would expect a: TypeError : not all arguments converted during string formatting. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-10 11:38 Message: Logged In: YES user_id=31435 The script you gave does produce the message you expect: >>> def f(): ... yield "%s" %('Et','to') ... >>> for i in f(): print i ... Traceback (most recent call last): File "", line 1, in ? File "", line 2, in f TypeError: not all arguments converted during string formatting >>> The traceback you gave contains the line b+=f() which doesn't appear in the script you gave. If the script you *actually* ran had, for example, >>> b = [] >>> b += f() Then Traceback (most recent call last): File "", line 1, in ? TypeError: argument to += must be iterable >>> is an appropriate exception. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 From noreply at sourceforge.net Thu Jun 10 12:31:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 12:32:04 2004 Subject: [ python-Bugs-966431 ] import x.y inside of module x.y Message-ID: Bugs item #966431, was opened at 2004-06-04 19:58 Message generated for change (Comment added) made by jiwon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966431&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: import x.y inside of module x.y Initial Comment: To get to the module object from the body of the module itself, the usual trick is to import it from itself, as in: x.py: import x do_stuff_with(x) This fails strangely if x is in a package: package/x.py: import package.x do_stuff_with(package.x) The last line triggers an AttributeError: 'module' object has no attribute 'x'. In other words, the import succeeds but the expression 'package.x' still isn't valid after it. ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-11 01:31 Message: Logged In: YES user_id=595483 The behavior is definitely due to the calling sequence of add_submodule and load_module in import_submodule. (like following) ... 
m = load_module(fullname, fp, buf, fdp->type, loader); Py_XDECREF(loader); if (fp) fclose(fp); if (!add_submodule(mod, m, fullname, subname, modules)) { Py_XDECREF(m); m = NULL; } ... For "importing package.x;do_something(package.x)" from within package.x to be possible, add_submodule should be done before load_module, since in load_module, not only the module is load, but also executed by PyImport_ExecCodeModuleEx. So, if we make a module and call add_submodule before load_module, import ing package.x and using it is possible. (like following) m = PyImport_AddModule(fullname); if (!m) { return NULL; } if (!add_submodule(mod, m, fullname, subname, modules)) { Py_XDECREF(m); return NULL; } m = load_module(mod, fullname, fp, buf, fdp->type, loader); Py_XDECREF(loader); but above would make test_importhook fail because in IMP_HOOK case, module object is created by PyObject_CallMethod(... "load_module"..), not by calling PyImport_AddModule. So, we cannot know about the module before that method calling is returned. Thus, in IMP_HOOK case, load_module would not use the already-created module by PyImport_AddModule, but would make a new one, which is not added as submodule to its parent. Anyway, adding another add_submodule after load_module would make import-hook test code to be passed, but it's a lame patch since in IMP_HOOK case, import package.x in package/x.py cannot be done. So, for the behavior to be possible, I think load_module should be explicitly separated into two function - load_module, execute_module. And then we'll load_module, add_submodule itself to its parent and then execute_module. There does not seem to be any hack that touches only limited places, so I think this bug(?) will stay open for quite long time. =) ---------------------------------------------------------------------- Comment By: Jiwon Seo (jiwon) Date: 2004-06-06 08:18 Message: Logged In: YES user_id=595483 The error seems to be due to the calling sequence of add_submodule and loadmodule in import.c:import_submodule. If load_module(..) is called after add_submodule(...) gets called, the above does not trigger Attribute Error. I made a patch that does it, but there is a problem... Currently, when import produces errors, sys.modules have the damaged module, but the patch does not. (That's why it cannot pass the test_pkgimport.py unittest, I think.) Someone who knows more about import.c could fix the patch to behave like that. The patch is in http://seojiwon.dnip.net:8000/~jiwon/tmp/import.diff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966431&group_id=5470 From noreply at sourceforge.net Thu Jun 10 12:50:20 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 12:50:27 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 17:51 Message generated for change (Comment added) made by fdrake You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. 
This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-10 12:50 Message: Logged In: YES user_id=3066 Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-09 12:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 05:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 15:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Thu Jun 10 13:18:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 13:19:03 2004 Subject: [ python-Bugs-968245 ] Python Logging filename & file number Message-ID: Bugs item #968245, was opened at 2004-06-07 10:49 Message generated for change (Comment added) made by superleo303 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ryan (superleo303) Assigned to: Vinay Sajip (vsajip) Summary: Python Logging filename & file number Initial Comment: I use Freebsd and redhat 9.0 linus at work. Using Python Logging filename & file number on freebsd works fine. 
When i print a log statement, and have my config formatter section set like: [formatter_default] format=%(asctime)s <%(levelname)s> <%(module)s:%(lineno)s> %(message)s datefmt= On bsd my logs look like: stateOnly is: DE MEssages ......... collecting datasource: 1011 Collectors overwrite existing pickle files: 0 Collectors run in multiple threads: 4 sql is: ... See, the filename and file number get displayed with each logging call, Now, The same exact code run on the same exact version of python on a linux machine yiedls the lines: <__init__:988> stateOnly is: DE <__init__:988> MEssages <__init__:988> collecting datasource: 1011 <__init__:988> Collectors overwrite existing pickle files: 0 <__init__:988> Collectors run in multiple threads: 4 <__init__:988> sql is: ... So i opened up ./python2.3/logging/__init__.py line 988 and it seems to be where the problem is. Can someone take a look at this asap? I have to run all my code on linux machines, so now i cant see which file and which line is making the logging. To reproduce, get a freebsd and linux machine, then run a simple script that uses logging config files and use the above example as your formatter in the logging confrig file, BSD should show the filenames and numbers, linux should show __init__ 988 instead. ---------------------------------------------------------------------- >Comment By: Ryan (superleo303) Date: 2004-06-10 12:18 Message: Logged In: YES user_id=1058618 vsajip, i tested on both machines, both machines returned the same results: STACKWALKER RESULTS RUN ON LINUX RED HAT 9.0 [rsmith@marge]:~/ned$ cd bin/testing/stackwalk/ [rsmith@marge]:~/ned/bin/testing/stackwalk$ python sw1.py [rsmith@marge]:~/ned/bin/testing/stackwalk$ python sw2.py [rsmith@marge]:~/ned/bin/testing/stackwalk$ python sw3.py [rsmith@marge]:~/ned/bin/testing/stackwalk$ python sw4.py [rsmith@marge]:~/ned/bin/testing/stackwalk$ python stackwalk.py [rsmith@marge]:~/ned/bin/testing/stackwalk$ python getcaller.py /home/rsmith/ned/bin/testing/stackwalk/sw1.py(4) /home/rsmith/ned/bin/testing/stackwalk/sw2.py(5) /home/rsmith/ned/bin/testing/stackwalk/sw3.py(6) /home/rsmith/ned/bin/testing/stackwalk/sw4.py(7) getcaller.py(3) [rsmith@marge]:~/ned/bin/testing/stackwalk$ ------------------------------------------------------------------------- STACKWALKER RESULTS RUN ON FREEBSD 5.2.1-RELEASE-p8 [ryan@dev2]:~/ned$ cd bin/testing/stackwalk/ [ryan@dev2]:~/ned/bin/testing/stackwalk$ python sw1.py [ryan@dev2]:~/ned/bin/testing/stackwalk$ python sw2.py [ryan@dev2]:~/ned/bin/testing/stackwalk$ python sw3.py [ryan@dev2]:~/ned/bin/testing/stackwalk$ python sw4.py [ryan@dev2]:~/ned/bin/testing/stackwalk$ python stackwalk.py [ryan@dev2]:~/ned/bin/testing/stackwalk$ python getcaller.py /usr/home/ryan/ned/bin/testing/stackwalk/sw1.py(4) /usr/home/ryan/ned/bin/testing/stackwalk/sw2.py(5) /usr/home/ryan/ned/bin/testing/stackwalk/sw3.py(6) /usr/home/ryan/ned/bin/testing/stackwalk/sw4.py(7) getcaller.py(3) [ryan@dev2]:~/ned/bin/testing/stackwalk$ ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 04:26 Message: Logged In: YES user_id=308438 The problem appears to be related to some underlying problem with sys._getframe(). Can you please download the attached stackwalk.tar.gz and run it on both FreeBSD and Linux? Please post your findings here, thanks. Note that the stackwalk stuff contains no calls to the logging package. 
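The stackwalk.tar.gz attachment is not reproduced here, but judging from the output above it presumably does something close to the following: walk the frame chain with sys._getframe() and print each frame's file and line, which is the same information logging.Logger.findCaller depends on. A guess at the shape of such a check:

import sys

def print_stack():
    # Start one frame above this helper and walk outwards, printing the
    # same "filename(lineno)" form shown in the results above.
    f = sys._getframe(1)
    while f is not None:
        print '%s(%d)' % (f.f_code.co_filename, f.f_lineno)
        f = f.f_back

print_stack()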
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=968245&group_id=5470 From noreply at sourceforge.net Thu Jun 10 18:51:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 18:51:45 2004 Subject: [ python-Bugs-970783 ] PyObject_GenericGetAttr is undocumented Message-ID: Bugs item #970783, was opened at 2004-06-10 15:51 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970783&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Eric Huss (ehuss) Assigned to: Nobody/Anonymous (nobody) Summary: PyObject_GenericGetAttr is undocumented Initial Comment: The Python/C API documentation references the PyObject_GenericGetAttr function in a few places, but doesn't actually document what it does. Same with PyObject_GenericSetAttr. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970783&group_id=5470 From noreply at sourceforge.net Thu Jun 10 19:42:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 10 19:42:15 2004 Subject: [ python-Bugs-970799 ] Pyton 2.3.4 Make Test Fails on Mac OS X Message-ID: Bugs item #970799, was opened at 2004-06-10 16:42 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: D. Evan Kiefer (dekiefer) Assigned to: Nobody/Anonymous (nobody) Summary: Pyton 2.3.4 Make Test Fails on Mac OS X Initial Comment: Under Mac OSX 10.3.4 with latest security update. Power Mac G4 733MHz Trying to install Zope 2.7.0 with Python 2.3.4. I first used fink to install 2.3.4 but Zope could find module 'os' to import. I then followed the instructions at: http://zope.org/Members/jens/docs/Document.2003-12-27.2431/ document_view to install Python with the default configure. Unlike under fink, doing this allowed me to run 'make test'. It too noted import problems for 'os' and 'site'. test_tempfile 'import site' failed; use -v for traceback Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/tf_inherit_check.py", line 6, in ? import os ImportError: No module named os test test_tempfile failed -- Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/test_tempfile.py", line 307, in test_noinherit self.failIf(retval > 0, "child process reports failure") File "/Volumes/Spielen/Python-2.3.4/Lib/unittest.py", line 274, in failIf if expr: raise self.failureException, msg AssertionError: child process reports failure test_atexit 'import site' failed; use -v for traceback Traceback (most recent call last): File "@test.py", line 1, in ? import atexit ImportError: No module named atexit test test_atexit failed -- '' == "handler2 (7,) {'kw': 'abc'}\nhandler2 () {}\nhandler1\n" test_audioop ---------- test_poll skipped -- select.poll not defined -- skipping test_poll test_popen 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback test_popen2 ------------------- 229 tests OK. 
2 tests failed: test_atexit test_tempfile 24 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_dl test_email_codecs test_gl test_imgfile test_largefile test_linuxaudiodev test_locale test_nis test_normalization test_ossaudiodev test_pep277 test_poll test_socket_ssl test_socketserver test_sunaudiodev test_timeout test_urllibnet test_winreg test_winsound Those skips are all expected on darwin. make: *** [test] Error 1 deksmacintosh:3-> ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 From noreply at sourceforge.net Fri Jun 11 00:46:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 00:46:42 2004 Subject: [ python-Bugs-969938 ] pydoc ignores module's __all__ attributes Message-ID: Bugs item #969938, was opened at 2004-06-09 16:45 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969938&group_id=5470 Category: Demos and Tools Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Skip Montanaro (montanaro) Summary: pydoc ignores module's __all__ attributes Initial Comment: The pydoc tool ignores the contents of a module's __all__ list. If the programmer has gone to the trouble of creating __all__ then pydoc should only generate/display documentation for objects it lists. The problem can be demonstrated by executing: pydoc csv Scroll down and note that StringIO is described in the Functions section. Someone executing "from csv import *" would not get a StringIO function added to their namespace. I've no time to create a patch right this minute, but if you come up with one, feel free to steal this bug report from me. ;-) ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2004-06-10 23:46 Message: Logged In: YES user_id=44345 fixed in 1.92. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969938&group_id=5470 From noreply at sourceforge.net Fri Jun 11 01:44:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 01:45:07 2004 Subject: [ python-Bugs-967182 ] file("foo", "wU") is silently accepted Message-ID: Bugs item #967182, was opened at 2004-06-05 12:15 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Skip Montanaro (montanaro) Summary: file("foo", "wU") is silently accepted Initial Comment: PEP 278 says at opening a file with "wU" is illegal, yet file("foo", "wU") passes without complaint. There may be other flags which the PEP disallows with "U" that need to be checked. ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2004-06-11 00:44 Message: Logged In: YES user_id=44345 So this means I can't be explicit about what to accept, only about what to reject. Simpler patch attached... 
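Stated in PEP 278 terms, the rule being enforced is simply that 'U' is a read-only feature. A rough Python-level sketch of the check (the real fix belongs in fileobject.c, and the function name check_file_mode is invented here):

def check_file_mode(mode):
    # Universal-newline mode only makes sense for reading; combining it
    # with write, append or update modes should be rejected, not ignored.
    if 'U' in mode and ('w' in mode or 'a' in mode or '+' in mode):
        raise ValueError("universal newline mode can only be used "
                         "with modes starting with 'r'")
    return mode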
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 22:01 Message: Logged In: YES user_id=31435 The C standard is open-ended about what a mode string can contain, and Python has historically allowed users to exploit platform-specific extensions here. On Windows, at least 't' and 'c' mean something in mode strings, and 'c' is actually useful (it forces writes to get committed to disk immediately). Most platforms ignore characters in mode strings that they don't understand. This is an exhaustive list of the mode strings a conforming C implementation must support (from C99): """ r open text file for reading w truncate to zero length or create text file for writing a append; open or create text file for writing at end-of-file rb open binary file for reading wb truncate to zero length or create binary file for writing ab append; open or create binary file for writing at end-of-file r+ open text file for update (reading and writing) w+ truncate to zero length or create text file for update a+ append; open or create text file for update, writing at end- of-file r+b or rb+ open binary file for update (reading and writing) w+b or wb+ truncate to zero length or create binary file for update a+b or ab+ append; open or create binary file for update, writing at end-of-file """ Implementations are free to support as many more as they care to. Guido may be in favor of restricting Python (in 2.4 or 2.5) to the set of mode strings required by C99, plus those that make sense with Python's U extension. I think he said something to that effect in person once. But 'c' is in fact useful on Windows, and code will break if it's outlawed. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-07 21:10 Message: Logged In: YES user_id=44345 Turned out not to be obvious at all (and not related to my changes). Here's another patch which is cleaner I think. Would someone take a look at this? My intent is to not let invalid modes pass silently (as "wU" currently does). Have I accounted for all the valid mode strings? It has some semantic changes, so this is not a backport candidate. I'm not sure about how 't' is handled. It's only of use on Windows as I understand it, but I don't see it sucked out of the mode string on non-Windows platforms, so it must be silently accepted on Unix and Mac. (Can someone confirm this?) ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2004-06-05 14:51 Message: Logged In: YES user_id=44345 Here's a first cut patch - test suite fails though - must be something obvious... 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967182&group_id=5470 From noreply at sourceforge.net Fri Jun 11 08:26:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 08:26:39 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 00:28 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 11:10 Message: Logged In: YES user_id=6656 Bah. 
I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 08:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 17:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 10:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 02:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Fri Jun 11 08:29:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 08:30:02 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 00:28 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... 
the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 11:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 08:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 17:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 10:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 02:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? 
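For concreteness, the shape of the fix proposed in this item, as a stripped-down editorial sketch against the 2.3-era cmd module (not the attached patch; intro, use_rawinput and completekey handling are omitted, and the subclass name is invented):

    import cmd

    class SafeCmd(cmd.Cmd):
        def cmdloop(self, intro=None):
            self.preloop()          # in 2.3 this installs the readline completer
            try:
                while 1:
                    try:
                        line = raw_input(self.prompt)
                    except EOFError:
                        break       # simplified; the real loop dispatches 'EOF'
                    line = self.precmd(line)
                    if self.postcmd(self.onecmd(line), line):
                        break
            finally:
                # Runs even when the loop body raises, so the old completer is
                # always restored and the instance is not kept alive by readline.
                self.postloop()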
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Fri Jun 11 08:48:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 08:48:48 2004 Subject: [ python-Bugs-971101 ] Comparisons of unicode and strings shouldn Message-ID: Bugs item #971101, was opened at 2004-06-11 14:48 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971101&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings shouldn Initial Comment: Errors in implicit conversion of strings to unicode in comp ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971101&group_id=5470 From noreply at sourceforge.net Fri Jun 11 09:07:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 09:07:44 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdfö" Traceback (most recent call last): File "<stdin>", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdfö" False >>> e1 == u"asdfö" False >>> ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Fri Jun 11 09:11:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 09:11:56 2004 Subject: [ python-Bugs-971101 ] Comparisons of unicode and strings shouldn Message-ID: Bugs item #971101, was opened at 2004-06-11 14:48 Message generated for change (Settings changed) made by ctheune You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971101&group_id=5470 Category: Unicode Group: Python 2.3 >Status: Deleted Resolution: None Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A.
Lemburg (lemburg) Summary: Comparisons of unicode and strings shouldn Initial Comment: Errors in implicit conversion of strings to unicode in comp ---------------------------------------------------------------------- >Comment By: Christian Theune (ctheune) Date: 2004-06-11 15:11 Message: Logged In: YES user_id=100622 dupe. sorry. mozilla choked. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971101&group_id=5470 From noreply at sourceforge.net Fri Jun 11 10:03:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 10:03:46 2004 Subject: [ python-Bugs-966623 ] execfile -> type() created objects w/ no __module__ error Message-ID: Bugs item #966623, was opened at 2004-06-04 16:46 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: execfile -> type() created objects w/ no __module__ error Initial Comment: Apologies for the imprecise summary - I have no idea where the problem is here. Thanks to JP Calderone for this little horror. (distilled down from his example) bonanza% cat foo.py print type('F', (object,), {})().__class__.__module__ bonanza% python2.3 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set bonanza% python2.4 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-11 15:03 Message: Logged In: YES user_id=6656 I don't think there's an answer to that. OTOH, I think it's more important that this gets fixed than that it gets fixed 100% perfectly. IOW, do what you like, but please do something :-) ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-04 17:49 Message: Logged In: YES user_id=29957 Is it better to fix this here, or in the type() call to make sure there's always a __module__ ? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-04 17:36 Message: Logged In: YES user_id=6656 Ah, I was about to attach the same test :-) Do add a test and use PEP 7 code if you check it in... ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-04 17:27 Message: Logged In: YES user_id=29957 The attached patch fixes this to raise an AttributeError if the object has no __module__. The other approach to fixing it would be to make sure that the object created always gets a __module__, but I have no idea in that case what a reasonable fix would be. 
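For readers wondering why the class lacks __module__ at all, a small editorial illustration (Python 2.3/2.4 era; it relies on type() copying __module__ from the calling frame's __name__ global when the class dict does not supply one, which is exactly what an empty globals dict cannot provide):

    empty_globals = {}
    exec "F = type('F', (object,), {})" in empty_globals   # no __name__ to copy
    F = empty_globals['F']
    print '__module__' in F.__dict__.keys()                # False
    # Reading F().__class__.__module__ at this point is what used to surface
    # as the bare SystemError; with the patch discussed above it raises
    # AttributeError instead.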
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:13:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:13:09 2004 Subject: [ python-Bugs-966623 ] execfile -> type() created objects w/ no __module__ error Message-ID: Bugs item #966623, was opened at 2004-06-05 01:46 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) >Assigned to: Anthony Baxter (anthonybaxter) Summary: execfile -> type() created objects w/ no __module__ error Initial Comment: Apologies for the imprecise summary - I have no idea where the problem is here. Thanks to JP Calderone for this little horror. (distilled down from his example) bonanza% cat foo.py print type('F', (object,), {})().__class__.__module__ bonanza% python2.3 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set bonanza% python2.4 -c "execfile('foo.py', {})" Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 1, in ? print type('F', (object,), {})().__class__.__module__ SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 01:13 Message: Logged In: YES user_id=29957 Fix checked in, will be in 2.4a1 and 2.3.5. Objects/typeobject 2.259, 2.241.6.10 Misc/NEWS 1.999, 1.831.4.120 ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-12 00:03 Message: Logged In: YES user_id=6656 I don't think there's an answer to that. OTOH, I think it's more important that this gets fixed than that it gets fixed 100% perfectly. IOW, do what you like, but please do something :-) ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-05 02:49 Message: Logged In: YES user_id=29957 Is it better to fix this here, or in the type() call to make sure there's always a __module__ ? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-05 02:36 Message: Logged In: YES user_id=6656 Ah, I was about to attach the same test :-) Do add a test and use PEP 7 code if you check it in... ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-05 02:27 Message: Logged In: YES user_id=29957 The attached patch fixes this to raise an AttributeError if the object has no __module__. The other approach to fixing it would be to make sure that the object created always gets a __module__, but I have no idea in that case what a reasonable fix would be. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966623&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:19:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:19:11 2004 Subject: [ python-Bugs-971200 ] bizarro test_asynchat failure Message-ID: Bugs item #971200, was opened at 2004-06-11 16:19 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: bizarro test_asynchat failure Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:30:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:30:27 2004 Subject: [ python-Bugs-971213 ] another threads+readline+signals nasty Message-ID: Bugs item #971213, was opened at 2004-06-12 01:30 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: another threads+readline+signals nasty Initial Comment: python -c "import time, readline, thread; thread.start_new_thread(raw_input, ()); time.sleep(2)" Segfaults on ^C Fails on Linux, freebsd. On linux (FC - using kernel 2.6.1, glibc 2.3.3, gcc-3.3.3) (gdb) where #0 0x002627a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x008172b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0011280b in time_sleep (self=0x0, args=0xb7fe17ac) at Modules/timemodule.c:815 on FreeBSD 5.2.1-RC, a different error. Fatal error 'longjmp()ing between thread contexts is undefined by POSIX 1003.1' at line 72 in file /usr/src/lib/libc_r/uthread/uthread_jmp.c (errno = 2) Abort (core dumped) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:38:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:38:18 2004 Subject: [ python-Bugs-971213 ] another threads+readline+signals nasty Message-ID: Bugs item #971213, was opened at 2004-06-11 16:30 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: another threads+readline+signals nasty Initial Comment: python -c "import time, readline, thread; thread.start_new_thread(raw_input, ()); time.sleep(2)" Segfaults on ^C Fails on Linux, freebsd. 
On linux (FC - using kernel 2.6.1, glibc 2.3.3, gcc-3.3.3) (gdb) where #0 0x002627a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x008172b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0011280b in time_sleep (self=0x0, args=0xb7fe17ac) at Modules/timemodule.c:815 on FreeBSD 5.2.1-RC, a different error. Fatal error 'longjmp()ing between thread contexts is undefined by POSIX 1003.1' at line 72 in file /usr/src/lib/libc_r/uthread/uthread_jmp.c (errno = 2) Abort (core dumped) ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-11 16:38 Message: Logged In: YES user_id=6656 Hmm. Doesn't crash on OS X. Messes the terminal up good and proper, though. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:39:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:39:15 2004 Subject: [ python-Bugs-971213 ] another threads+readline+signals nasty Message-ID: Bugs item #971213, was opened at 2004-06-12 01:30 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: another threads+readline+signals nasty Initial Comment: python -c "import time, readline, thread; thread.start_new_thread(raw_input, ()); time.sleep(2)" Segfaults on ^C Fails on Linux, freebsd. On linux (FC - using kernel 2.6.1, glibc 2.3.3, gcc-3.3.3) (gdb) where #0 0x002627a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x008172b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0011280b in time_sleep (self=0x0, args=0xb7fe17ac) at Modules/timemodule.c:815 on FreeBSD 5.2.1-RC, a different error. Fatal error 'longjmp()ing between thread contexts is undefined by POSIX 1003.1' at line 72 in file /usr/src/lib/libc_r/uthread/uthread_jmp.c (errno = 2) Abort (core dumped) ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 01:39 Message: Logged In: YES user_id=29957 The patch in #960406 doesn't help here. The FC test system also has readline-4.3, if it helps, as does the FreeBSD box. It apparently doesn't crash on OSX. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-12 01:38 Message: Logged In: YES user_id=6656 Hmm. Doesn't crash on OS X. Messes the terminal up good and proper, though. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:43:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:43:34 2004 Subject: [ python-Bugs-971213 ] another threads+readline+signals nasty Message-ID: Bugs item #971213, was opened at 2004-06-11 17:30 Message generated for change (Comment added) made by mpasternak You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: another threads+readline+signals nasty Initial Comment: python -c "import time, readline, thread; thread.start_new_thread(raw_input, ()); time.sleep(2)" Segfaults on ^C Fails on Linux, freebsd. On linux (FC - using kernel 2.6.1, glibc 2.3.3, gcc-3.3.3) (gdb) where #0 0x002627a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x008172b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0011280b in time_sleep (self=0x0, args=0xb7fe17ac) at Modules/timemodule.c:815 on FreeBSD 5.2.1-RC, a different error. Fatal error 'longjmp()ing between thread contexts is undefined by POSIX 1003.1' at line 72 in file /usr/src/lib/libc_r/uthread/uthread_jmp.c (errno = 2) Abort (core dumped) ---------------------------------------------------------------------- Comment By: Michal Pasternak (mpasternak) Date: 2004-06-11 17:43 Message: Logged In: YES user_id=799039 readline used on FreeBSD was readline-4.3pl5; everything else: gcc 3.3.3, ncurses, libc were standard from 5.2.1. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-11 17:39 Message: Logged In: YES user_id=29957 The patch in #960406 doesn't help here. The FC test system also has readline-4.3, if it helps, as does the FreeBSD box. It apparently doesn't crash on OSX. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 17:38 Message: Logged In: YES user_id=6656 Hmm. Doesn't crash on OS X. Messes the terminal up good and proper, though. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 From noreply at sourceforge.net Fri Jun 11 11:58:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 11:58:57 2004 Subject: [ python-Bugs-971200 ] marshalling infinities Message-ID: Bugs item #971200, was opened at 2004-06-11 16:19 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) >Summary: marshalling infinities Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. 
---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-11 16:58 Message: Logged In: YES user_id=6656 Oh my: >>> 1e309 Inf [40577 refs] >>> marshal.loads(marshal.dumps(1e309)) 0.0 [40577 refs] this must be the new "LC_NUMERIC agnostic" stuff, right? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Fri Jun 11 12:07:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 12:07:20 2004 Subject: [ python-Bugs-971238 ] test_timeout failure on trunk Message-ID: Bugs item #971238, was opened at 2004-06-12 02:07 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971238&group_id=5470 Category: Extension Modules Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: test_timeout failure on trunk Initial Comment: On current-CVS trunk, Fedora Core 2: testBlockingThenTimeout (__main__.CreationTestCase) ... ok testFloatReturnValue (__main__.CreationTestCase) ... ok testObjectCreation (__main__.CreationTestCase) ... ok testRangeCheck (__main__.CreationTestCase) ... ok testReturnType (__main__.CreationTestCase) ... ok testTimeoutThenBlocking (__main__.CreationTestCase) ... ok testTypeCheck (__main__.CreationTestCase) ... ok testAcceptTimeout (__main__.TimeoutTestCase) ... ok testConnectTimeout (__main__.TimeoutTestCase) ... FAIL testRecvTimeout (__main__.TimeoutTestCase) ... ok testRecvfromTimeout (__main__.TimeoutTestCase) ... ok testSend (__main__.TimeoutTestCase) ... ok testSendall (__main__.TimeoutTestCase) ... ok testSendto (__main__.TimeoutTestCase) ... ok ====================================================================== FAIL: testConnectTimeout (__main__.TimeoutTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "Lib/test/test_timeout.py", line 121, in testConnectTimeout "timeout (%g) is more than %g seconds more than expected (%g)" AssertionError: timeout (13.3079) is more than 2 seconds more than expected (0.001) ---------------------------------------------------------------------- Ran 14 tests in 36.437s FAILED (failures=1) Traceback (most recent call last): File "Lib/test/test_timeout.py", line 192, in ? 
test_main() File "Lib/test/test_timeout.py", line 189, in test_main test_support.run_unittest(CreationTestCase, TimeoutTestCase) File "/home/anthony/src/py/pyhead/dist/src/Lib/test/test_support.py", line 290, in run_unittest run_suite(suite, testclass) File "/home/anthony/src/py/pyhead/dist/src/Lib/test/test_support.py", line 275, in run_suite raise TestFailed(err) test.test_support.TestFailed: Traceback (most recent call last): File "Lib/test/test_timeout.py", line 121, in testConnectTimeout "timeout (%g) is more than %g seconds more than expected (%g)" AssertionError: timeout (13.3079) is more than 2 seconds more than expected (0.001) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971238&group_id=5470 From noreply at sourceforge.net Fri Jun 11 12:46:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 12:46:17 2004 Subject: [ python-Bugs-967657 ] PyInt_FromString failed with certain hex/oct Message-ID: Bugs item #967657, was opened at 2004-06-07 02:09 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 Category: Parser/Compiler Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Qian Wenjie (qwj) Assigned to: Nobody/Anonymous (nobody) Summary: PyInt_FromString failed with certain hex/oct Initial Comment: When numbers are 0x80000000 through 0xffffffff and 020000000000 through 037777777777, it will translate into negative. Example: >>> 030000000000 -1073741824 >>> int('030000000000',0) -1073741824 patches to Python 2.3.4: Python/compile.c 1259c1259 < x = (long) PyOS_strtoul(s, &end, 0); --- > x = (long) PyOS_strtol(s, &end, 0); Objects/intobject.c 293c293 < x = (long) PyOS_strtoul(s, &end, base); --- > x = (long) PyOS_strtol(s, &end, base); ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 02:46 Message: Logged In: YES user_id=29957 Closing. This is not going to change in 2.3, but is fixed in 2.4. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 13:27 Message: Logged In: YES user_id=31435 It's not a bug -- Python has worked this way for more than a decade, and changing documented behavior is a slow process. This change is part of those discussed in PEP 237, which is in its 3rd year(!) of implementation: http://www.python.org/peps/pep-0237.html Do read the PEP. Costs here aren't implementation effort, they're end-user costs (changes in what Python does require users to change their programs, and that's necessarily a drawn-out process). ---------------------------------------------------------------------- Comment By: Qian Wenjie (qwj) Date: 2004-06-07 13:17 Message: Logged In: YES user_id=1057975 I am wondering why should we wait for python 2.4 to fix this bug. It just costs two lines changes. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 13:07 Message: Logged In: YES user_id=31435 Python is supposed to act this way in 2.3. It's supposed to act the way you want in 2.4. You didn't say which version of Python you're using. 
If you used 2.3.4, I'm surprised your output didn't contain messages warning that this behavior is going to change: Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 030000000000 :1: FutureWarning: hex/oct constants > sys.maxint will return positive values in Python 2.4 and up -1073741824 >>> int('030000000000',0) __main__:1: FutureWarning: int('0...', 0): sign will change in Python 2.4 -1073741824 >>> Which version of Python were you using, and under which OS? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967657&group_id=5470 From noreply at sourceforge.net Fri Jun 11 12:47:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 12:47:43 2004 Subject: [ python-Bugs-971200 ] marshalling infinities Message-ID: Bugs item #971200, was opened at 2004-06-11 11:19 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: marshalling infinities Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-11 12:47 Message: Logged In: YES user_id=31435 Well, what marshal (or pickle) do with an infinity (or NaN, or the sign of a signed zero) is a platform accident. Here with the released 2.3.4 on Windows (which doesn't have any LC_NUMERIC changes): Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1e309 1.#INF >>> import marshal >>> marshal.loads(marshal.dumps(1e309)) 1.0 >>> So simply can't use a literal 1e309 in compiled code. There's no portable way to spell infinity in Python. PEP 754 would introduce a reasonably portable way, were it accepted. Before then, 1e200*1e200 is probably the easiest reasonably portable way -- but since behavior in the presence of an infinity is accidental anyway, much better to avoid using infinity at all in the libraries. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 11:58 Message: Logged In: YES user_id=6656 Oh my: >>> 1e309 Inf [40577 refs] >>> marshal.loads(marshal.dumps(1e309)) 0.0 [40577 refs] this must be the new "LC_NUMERIC agnostic" stuff, right? 
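A short editorial demonstration of the platform-accident point made above; the printed values vary with the C library and with how marshal treats non-finite doubles on the platform at hand:

    import marshal

    inf = 1e200 * 1e200                       # overflows to +Inf on IEEE-754 platforms
    print inf                                 # 'inf', 'Inf' or '1.#INF', platform-dependent
    print marshal.loads(marshal.dumps(inf))   # may round-trip, or come back as 1.0 or 0.0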
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Fri Jun 11 12:55:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 12:55:21 2004 Subject: [ python-Bugs-962226 ] Python 2.3.4 on Linux: test test_grp failed Message-ID: Bugs item #962226, was opened at 2004-05-29 00:04 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962226&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Michael Ströder (stroeder) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.3.4 on Linux: test test_grp failed Initial Comment: Build on SuSE Linux 9.0: test test_grp failed -- Traceback (most recent call last): File "/home/michael/src/Python-2.3.4/Lib/test/test_grp.py", line 71, in test_errors self.assertRaises(KeyError, grp.getgrnam, fakename) File "/home/michael/src/Python-2.3.4/Lib/unittest.py", line 295, in failUnlessRaises raise self.failureException, excName AssertionError: KeyError ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 02:55 Message: Logged In: YES user_id=29957 Can you see if the patch on http://www.python.org/sf/775964 fixes this? I have had this patch sitting in my source tree for Some Time, waiting for time to test it. :-/ ---------------------------------------------------------------------- Comment By: Michael Ströder (stroeder) Date: 2004-05-29 00:43 Message: Logged In: YES user_id=64920 I do not use NIS. But I don't know what SuSE really puts into the "compat" NSS module. From my /etc/nsswitch.conf: passwd: compat group: compat ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-05-29 00:26 Message: Logged In: YES user_id=29957 Do you use YP/NIS? If so, this is a known problem, and is documented on the bugs page for 2.3.4. I really should get around to checking those fixes in... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=962226&group_id=5470 From noreply at sourceforge.net Fri Jun 11 13:03:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 13:03:32 2004 Subject: [ python-Bugs-971200 ] asyncore sillies Message-ID: Bugs item #971200, was opened at 2004-06-11 16:19 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) >Summary: asyncore sillies Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-11 18:03 Message: Logged In: YES user_id=6656 actually, i think the summary is that the most recent change to asyncore is just broken.
blaming the recent changes around LC_NUMERIC and their effect or non-effect on marshal was a red herring. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-11 17:47 Message: Logged In: YES user_id=31435 Well, what marshal (or pickle) do with an infinity (or NaN, or the sign of a signed zero) is a platform accident. Here with the released 2.3.4 on Windows (which doesn't have any LC_NUMERIC changes): Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1e309 1.#INF >>> import marshal >>> marshal.loads(marshal.dumps(1e309)) 1.0 >>> So simply can't use a literal 1e309 in compiled code. There's no portable way to spell infinity in Python. PEP 754 would introduce a reasonably portable way, were it accepted. Before then, 1e200*1e200 is probably the easiest reasonably portable way -- but since behavior in the presence of an infinity is accidental anyway, much better to avoid using infinity at all in the libraries. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 16:58 Message: Logged In: YES user_id=6656 Oh my: >>> 1e309 Inf [40577 refs] >>> marshal.loads(marshal.dumps(1e309)) 0.0 [40577 refs] this must be the new "LC_NUMERIC agnostic" stuff, right? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Fri Jun 11 13:18:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 13:18:53 2004 Subject: [ python-Bugs-957381 ] bdist_rpm fails on redhat9, fc1, fc2 Message-ID: Bugs item #957381, was opened at 2004-05-20 23:05 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957381&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) >Assigned to: Anthony Baxter (anthonybaxter) Summary: bdist_rpm fails on redhat9, fc1, fc2 Initial Comment: distutils bdist_rpm has long been broken for recent versions of RPM (RedHat 9, Fedora Core 1 and 2) https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=88616 When an RPM contains an executable or shared library, a "-debuginfo" rpm is generated. For instance, the following two packages would be generated: foo-1.0.0-1.i386.rpm foo-debuginfo-1.0.0.1-i386.rpm When distutils is faced with this problem, it prints an error like AssertionError: unexpected number of RPM files found: ['build/bdist.linux-i686/rpm/RPMS/i386/foo-1.0.0-1.i386.rpm', 'build/bdist.linux-i686/rpm/RPMS/i386/foo-debuginfo-1.0.0-1.i386.rpm'] The bugzilla bug contains a proposed patch, but redhat/fedora developers chose not to accept it for their own build of Python. ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 03:18 Message: Logged In: YES user_id=29957 Applied to trunk. Will backport to the 2.3 branch. ---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-05-21 21:46 Message: Logged In: YES user_id=2772 the "WONTFIX" closure of bugzilla 88616 was already from Fedora days (2004-04-05), so opening a new bug report is unlikely to do much on its own.
(in fact, I don't know if it's likely to do more than get closed as a duplicate) Of course, I don't speak for Fedora. If a fix for this new RPM feature is included in Python (for 2.3.5 and 2.4) then I'd guess it's more likely to be added as a patch for a subsequent 2.3.3 or 2.3.4-based Python package, but again I don't speak for the Fedora developers. ---------------------------------------------------------------------- Comment By: Jeremy Sanders (jeremysanders) Date: 2004-05-21 18:46 Message: Logged In: YES user_id=8953 I've opened a bugzilla report for fedora 2 See https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=123598 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=957381&group_id=5470 From noreply at sourceforge.net Fri Jun 11 14:12:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 14:12:09 2004 Subject: [ python-Bugs-971330 ] test_signal sucks Message-ID: Bugs item #971330, was opened at 2004-06-11 19:12 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971330&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: test_signal sucks Initial Comment: It spawns a shell script for no apparent reason. It doesn't use unit test. It's generally horrible (though slightly less so than a few minutes ago). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971330&group_id=5470 From noreply at sourceforge.net Fri Jun 11 16:15:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 11 16:15:42 2004 Subject: [ python-Bugs-971395 ] thread.name crashes interpreter Message-ID: Bugs item #971395, was opened at 2004-06-11 20:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 Category: Threads Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jonathan Ellis (ellisj) Assigned to: Nobody/Anonymous (nobody) Summary: thread.name crashes interpreter Initial Comment: I changed the __repr__ method of the cookbook Future class -- http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/84317 -- as follows: def __repr__(self): return '<%s at %s:%s>' % (self.__T.name, hex(id(self)), self.__status) this caused obscure crashes with the uninformative message Fatal Python error: PyEval_SaveThread: NULL tstate changing to __T.getName() fixed the crashing. It seems to me that thread.name should be __name or _name to help novices not shoot themselves in the foot. 
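The workaround mentioned in the report, spelled out as a minimal editorial sketch (the Future class here is a simplified stand-in for the cookbook recipe, and the __status field is omitted):

    import threading

    class Future:
        def __init__(self):
            self.__T = threading.Thread(target=lambda: None)
        def __repr__(self):
            # getName() is the documented accessor in the 2.3-era threading
            # API; a bare .name attribute is not part of it.
            return '<%s at %s>' % (self.__T.getName(), hex(id(self)))

    print repr(Future())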
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:02:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:02:43 2004 Subject: [ python-Bugs-971200 ] asyncore sillies Message-ID: Bugs item #971200, was opened at 2004-06-11 10:19 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None >Priority: 7 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: asyncore sillies Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 12:03 Message: Logged In: YES user_id=6656 actually, i think the summary is that the most recent change to asyncore is just broken. blaming the recent changes around LC_NUMERIC and their effect or non-effect on marshal was a read herring. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-11 11:47 Message: Logged In: YES user_id=31435 Well, what marshal (or pickle) do with an infinity (or NaN, or the sign of a signed zero) is a platform accident. Here with the released 2.3.4 on Windows (which doesn't have any LC_NUMERIC changes): Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1e309 1.#INF >>> import marshal >>> marshal.loads(marshal.dumps(1e309)) 1.0 >>> So simply can't use a literal 1e309 in compiled code. There's no portable way to spell infinity in Python. PEP 754 would introduce a reasonably portable way, were it accepted. Before then, 1e200*1e200 is probably the easiest reasonably portable way -- but since behavior in the presence of an infinity is accidental anyway, much better to avoid using infinity at all in the libraries. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 10:58 Message: Logged In: YES user_id=6656 Oh my: >>> 1e309 Inf [40577 refs] >>> marshal.loads(marshal.dumps(1e309)) 0.0 [40577 refs] this must be the new "LC_NUMERIC agnostic" stuff, right? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:03:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:03:59 2004 Subject: [ python-Bugs-970459 ] Generators produce wrong exception Message-ID: Bugs item #970459, was opened at 2004-06-10 09:58 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anders Lehmann (anders_lehmann) Assigned to: Nobody/Anonymous (nobody) Summary: Generators produce wrong exception Initial Comment: The following script : def f(): yield "%s" %('Et','to') for i in f(): print i will produce the following traceback in Python 2.3.4 Traceback (most recent call last): File "python_generator_bug.py", line 6, in ? b+=f() TypeError: argument to += must be iterable Where I would expect a: TypeError : not all arguments converted during string formatting. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:03 Message: Logged In: YES user_id=80475 Anders, if this resolves your issue, please close this bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-10 10:38 Message: Logged In: YES user_id=31435 The script you gave does produce the message you expect: >>> def f(): ... yield "%s" %('Et','to') ... >>> for i in f(): print i ... Traceback (most recent call last): File "<stdin>", line 1, in ? File "<stdin>", line 2, in f TypeError: not all arguments converted during string formatting >>> The traceback you gave contains the line b+=f() which doesn't appear in the script you gave. If the script you *actually* ran had, for example, >>> b = [] >>> b += f() Then Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: argument to += must be iterable >>> is an appropriate exception. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:05:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:05:08 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 01:54 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) >Assigned to: M.-A. Lemburg (lemburg) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7.html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html.
In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022-jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso-2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one. ---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 03:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 02:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 02:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:11:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:11:30 2004 Subject: [ python-Bugs-924703 ] test_unicode_file fails on Win98SE Message-ID: Bugs item #924703, was opened at 2004-03-27 20:48 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 Category: Unicode Group: Python 2.4 Status: Open Resolution: None >Priority: 7 Submitted By: Tim Peters (tim_one) Assigned to: Martin v. Löwis (loewis) Summary: test_unicode_file fails on Win98SE Initial Comment: In current CVS, test_unicode_file fails on Win98SE. This has been going on for some time, actually.
ERROR: test_single_files (__main__.TestUnicodeFiles)
Traceback (most recent call last):
  File ".../lib/test/test_unicode_file.py", line 162, in test_single_files
    self._test_single(TESTFN_UNICODE)
  File ".../lib/test/test_unicode_file.py", line 136, in _test_single
    self._do_single(filename)
  File ".../lib/test/test_unicode_file.py", line 49, in _do_single
    new_base = unicodedata.normalize("NFD", new_base)
TypeError: normalized() argument 2 must be unicode, not str

At this point:
  filename is TESTFN_UNICODE is u'@test-\xe0\xf2'
  os.path.abspath(filename) is 'C:\Code\python\PC\VC6\@test-\xe0\xf2'
  new_base is '@test-\xe0\xf2'

So abspath() removed the "Unicodeness" of filename, and new_base is indeed not a Unicode string at this point. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:11 Message: Logged In: YES user_id=80475 This is still failing. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-30 00:44 Message: Logged In: YES user_id=31435 Just a guess: the os.path functions are generally just string manipulation, and on Windows I sometimes import posixpath.py directly to do Unixish path manipulations. So it's conceivable that someone (not me) on a non-Windows box imports ntpath directly to manipulate Windows paths. In fact, I see that Fredrik's "Python Standard Library" book explicitly mentions this use case for importing ntpath directly. So maybe he actually did it -- once. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-03-30 00:25 Message: Logged In: YES user_id=21627 I see. I'll look into changing _getfullpathname to return Unicode output for Unicode input even if unicode_file_names() is false. However, I do wonder what the purpose of _abspath then is: On what system would it be used??? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-29 18:11 Message: Logged In: YES user_id=31435 Nope, that can't help -- ntpath.py's _abspath doesn't exist on Win98SE (the "from nt import _getfullpathname" succeeds, so _abspath is never defined). It's _getfullpathname() that's taking a Unicode input and returning a str output here. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-03-29 17:17 Message: Logged In: YES user_id=21627 abspath(unicode) should return a Unicode path.
Does it help if _abspath (in ntpath.py) is changed to contain

    if not isabs(path):
        if isinstance(path, unicode):
            cwd = os.getcwdu()
        else:
            cwd = os.getcwd()
        path = join(cwd, path)

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:20:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:21:11 2004 Subject: [ python-Bugs-917055 ] add a stronger PRNG Message-ID: Bugs item #917055, was opened at 2004-03-15 21:46 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 Category: Python Library Group: Feature Request >Status: Closed Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: add a stronger PRNG Initial Comment: The default Mersenne Twister algorithm in the Random module is very fast but makes no serious attempt to generate output that stands up to adversarial analysis. Besides cryptography applications, this can be a serious problem in areas like computer games. Sites like www.partypoker.com routinely run online tournaments with prize funds of 100K USD or more. There's big financial incentives to find ways of guessing your opponent's cards with better than random chance probability. See bug #901285 for some discussion of possible correlations in Mersenne Twister's output. Teukolsky et al discuss PRNG issues at some length in their book "Numerical Recipes". The original edition of Numerical Recipes had a full blown version of the FIPS Data Encryption Standard implemented horrendously in Fortran, as a way of making a PRNG with no easily discoverable output correlations. Later editions replaced the DES routine with a more efficient one based on similar principles. Python already has an SHA module that makes a dandy PRNG. I coded a sample implementation: http://www.nightsong.com/phr/python/sharandom.py I'd like to ask that the Python lib include something like this as an alternative to MT. It would be similar to the existing whrandom module in that it's an alternative subclass to the regular Random class. The existing Random module wouldn't have to be changed. I don't propose directly including the module above, since I think the Random API should also be extended to allow directly requesting pseudo-random integers from the generator subclass, rather than making them from floating-point output. That would allow making the above subclass work more cleanly. I'll make a separate post about this, but first will have to examine the Random module source code. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:20 Message: Logged In: YES user_id=80475 Closing due to lack of interest/progress, etc. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:29 Message: Logged In: YES user_id=80475 Can we close this one (while leaving open the patch for an entropy module)? Essentially, it provides nothing that couldn't be contributed as a short recipe for those interested in such things. While an alternate RNG would be nice, turning this into a crypto project is probably not a great idea.
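[For readers of the archive: phr's sharandom.py link above no longer resolves, but the general shape of the generator being requested -- hash a per-instance state plus a counter and expose the result through the standard Random subclassing API -- can be sketched in a few lines. The sketch below is illustrative only: the class name, attributes and block layout are invented, it targets a modern interpreter with hashlib rather than the old sha module, and it has had no cryptographic review. It is not the patch Raymond attaches later in the thread.]

    import hashlib
    import random

    class SHA1Random(random.Random):
        """Counter-mode SHA-1 generator behind the standard Random API."""

        def __init__(self, seed=None):
            self._state = b"\0" * 20
            self._counter = 0
            super().__init__(seed)

        def seed(self, a=None, version=2):
            if a is None:
                import time
                a = time.time()   # weak default seed, as criticized elsewhere in this thread
            self._state = hashlib.sha1(repr(a).encode()).digest()
            self._counter = 0
            self.gauss_next = None

        def getrandbits(self, k):
            out = b""
            while 8 * len(out) < k:
                self._counter += 1
                out += hashlib.sha1(self._state +
                                    self._counter.to_bytes(8, "big")).digest()
            return int.from_bytes(out, "big") >> (8 * len(out) - k)

        def random(self):
            # 53 bits is the full precision of an IEEE double in [0.0, 1.0).
            return self.getrandbits(53) / (1 << 53)

        def getstate(self):
            return self._state, self._counter, self.gauss_next

        def setstate(self, state):
            self._state, self._counter, self.gauss_next = state

The derived methods (randrange(), choice(), shuffle(), and so on) then come for free from the base class, which is the property the submitter is after.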
---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-04-14 00:33 Message: Logged In: YES user_id=72053 trevp, the jumpahead operation lets you stir in new entropy. Jeremy, I'll see if I can write some docs for it, and will attempt a concrete security proof. I don't think we should need to say no references were found for using sha1 as a prng. The randomness assumption is based on the Krawczyk-Bellare-Rogaway result that's cited somewhere down the page or in the clpy thread. I'll include a cite in the doc/rationale. I hope that the entropy module is accepted, assuming it works. The entropy module is quite a bit more important than the deterministic PRNG module, since the application can easily supply the DPRNG but can't always easily supply the entropy module. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-13 23:10 Message: Logged In: YES user_id=973611 I submitted a patch for an entropy module, as was discussed below. See patch #934711. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-11 22:35 Message: Logged In: YES user_id=31392 The current patch doesn't address any of my concerns about documentation or rationale. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-09 23:22 Message: Logged In: YES user_id=973611 We should probably clarify the requirements. If we just want to use SHA1 to produce an RNG suitable for Monte Carlo etc., then we could do something simpler and faster than what we're doing. In particular, there's no need for state update, we could just generate outputs by SHA1(seed + counter). This is mentioned in "Applied Cryptography", 17.14. If we want it to "stand up to adversarial analysis" and be usable for cryptography, then I think we need to put a little more into it - in particular, the ability to mix new randomness into the generator state becomes important, and it becomes preferable to use a block cipher construction, not because the SHA1 construction is insecure, but so we can point to something like Yarrow. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-09 23:01 Message: Logged In: YES user_id=31392 Much earlier in the discussion Raymond wrote: "The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious)." I don't see any documentation of this sort in the current patch. I also think it would be helpful if the documentation made some mention of why this generator would be useful. In particular, I suspect some users may by confused by the mention of SHA and be lead to believe that this is CSPRNG, when it is not; perhaps a mention of Yarrow and other approaches for cryptographic applications would be enough to clarify. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-09 22:24 Message: Logged In: YES user_id=80475 Thanks for the detailed comments. 
1) Will add the references we have but need to make it clear that this exact implementation is not studied and does not guarantee cryptographic security. 2) The API is clear, seed() overwrites and jumpahead() updates. Besides, the goal is to provide a good alternative random number generator. If someone needs real crypto, they should use that. Tossing in ad hoc pieces to "make it harder" is a sure sign of venturing too far from theoretical underpinnings. 3) Good observation. I don't think a change is necessary. The docs do not promise that asking for 160 gives the same as 96 and 64 back to back. The Mersenne Twister has similar behavior. 4) Let's not gum up the API because we have encryption and text API's on the brain. The module is about random numbers not byte strings. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-09 19:52 Message: Logged In: YES user_id=973611 Hi Raymond, here's some lengthy though not critically important comments (i.e, I'm okay with the latest diff). 1) With regards to this documentation: "No references were found which document the properties of SHA-1 as a random number generator". I can't find anything that documents exactly what we're doing. This type of generator is similar to Bruce Schneier's Yarrow and its offspring (Fortuna and Tiny). However, those use block-ciphers in counter mode, instead of SHA-1. According to the recent Tiny paper: "Cryptographic hash functions can also be a good foundation for a PRNG. Many constructs have used MD5 or SHA1 in this capacity, but the constructions are often ad-hoc. When using a hash function, we would recommend HMAC in CTR mode (i.e., one MACs counter for each successive output block). Ultimately, we prefer the use of block ciphers, as they are generally better-studied constructs." http://www.acsac.org/2003/papers/79.pdf Using HMAC seems like overkill to me, and would slow things down. However, if there's any chance Python will add block ciphers in the future, it might be worth waiting for, so we could implement one of the well-documented block cipher PRNGs. 2) Most cryptographic PRNGs allow for mixing new entropy into the generator state. The idea is you harvest entropy in the background, and once you've got a good chunk (say 128+ bits) you add it in. This makes cryptanalysis of the output harder, and allows you to recover even if the generator state is compromised. We could change the seed() method so it updates the state instead of overwriting it: def __init__(self): self.cnt = 0 self.s0 = '\0' * 20 self.gauss_next = None def seed(self, a=None): if a is None: # Initialize from current time import time a = time.time() b = sha.new(repr(a)).digest() self.s0 = sha.new(self.s0 + b).digest() 'b' may not be necessary, I'm not sure, though it's similar to how some other PRNGs handle seed inputs. If we were using a block cipher PRNG, it would be more obvious how to do this. jumpahead() could also be used instead of seed(). 3) The current generators (not just SHA1Random) won't always return the same sequence of bits from the same state. For example, if I call SHA1Random.getrandbits() asking for 160 bits they'll come from the same block, but if I ask for 96 and 64 bits, they'll come from different blocks. I suggest buffering the output, so getting 160 bits or 96+64 gets the same bits. Changing getrandbits() to getrandbytes () would avoid the need for bit-level buffering. 
4) I still think a better interface would only require new generators to return a byte string. That would be easier for SHA1Random, and easier for other generators based on cross- platform entropy sources. I.e., in place of random() and getrandbits(), SHA1Random would only have to implement: def getrandbytes(self, n): while len(buffer) < n: self.cnt += 1 self.s0 = sha.new(self.s0 + hex (self.cnt)).digest() self.buffer += sha.new(self.s0).digest() retVal = self.buffer[:n] self.buffer = self.buffer[n:] return retVal The superclass would call this to get the required number of bytes, and convert them as needed (for converting them to numbers it could use the 'long(s, 256)' patch I submitted. Besides making it easier to add new generators, this would provide a useful function to users of these generators. getrandbits() is less useful, and it's harder to go from a long- integer to a byte-string than vice versa, because you may have to zero-pad if the long-integer is small. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-08 22:41 Message: Logged In: YES user_id=80475 Attaching a revised patch. If there are no objections, I will add it to the library (after factoring the unittests and adding docs). ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-07 01:56 Message: Logged In: YES user_id=973611 Comments on shardh.py - SHA1Random2 seems more complex than it needs to be. Comparing with figure 19 of [1], I believe that s1 does not need to be kept as state, so you could replace this: self.s1 = s1 = sha.new(s0 + self.s1).hexdigest() with this: s1 = sha.new(s0).hexdigest() If there's concern about the low hamming-distance between counter values, you could simply hash the counter before feeding it in (or use M-T instead of the counter). Instead of updating s0 every block, you could update it every 10th block or so. This would slightly increase the number of old values an attacker could recover, upon compromising the generator state, but it could be a substantial speedup. SHA1Random1 depends on M-T for some of its security properties. In particular, if I discover the generator state, can I run it backwards and determine previous values? I don't know, it depends on M-T. Unless we know more about the properties of M-T, I think it would be preferable to use M- T only in place of the counter in the SHA1Random2 construction (if at all), *NOT* as the sole repository of PRNG state. [1] http://www.cypherpunks.to/~peter/06_random.pdf ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-31 21:48 Message: Logged In: YES user_id=72053 FYI, this was just announced on the python-crypto list. It's a Python wrapper for EGADS, a cross platform entropy-gathering RNG. I haven't looked at the code for it and have no opinion about it. http://wiki.osafoundation.org/twiki/bin/view/Chandler/PyEgads ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-25 19:36 Message: Logged In: YES user_id=72053 1. OK, though the docs should mention this. 2. One-way just means collision resistant, and we're trying to make something without a distinguisher, which is a stronger criterion. I'm not saying there's any problem with a single hash; I just don't see an obvious proof that there's not a problem. 
Also it makes me cringe slightly to keep the seed around as plaintext even temporarily. The PC that I'm using (midrange Athlon) can do about a million sha1 hashes per second, so an extra hash per reseed "multiple" times per second shouldn't be noticible, for reasonable values of "multiple". 3. Since most of the real computation of this function is in the sha1 compression, the timing indicates that evaluation is dominated by interpreter overhead rather than by hashing. I presume you're using CPython. The results may be different in Jython or with Psyco and different again under PyPy once that becomes real. I think we should take the view that we're designing a mathematical function that exists independently of any specific implementation, then figure out what characteristics it should have and implement those, rather than tailoring it to the peculiarities of CPython. If people are really going to be using this function in 2010, CPython will hopefully be dead and gone (replaced by PyPy) by then, so that's all the more reason to not worry about small CPython-specific effects since the function will outlast CPython. Maybe also sometime between now and then, these libraries can be compiled with psyco. 4. OK 5. OK. Would be good to also change %s for cnt in setstate to %x. 6. Synchronization can be avoided by hashing different fixed strings into s0 and s1 at each rehash (I did that in my version). I think it's worth doing that just to kick the hash function away from standard sha. I actually don't see much need for the counter in either hash, but you were concerned about hitting possible short cycles in sha. 7. OK. WHrandom is already non-threadsafe, so there's precedent. I do have to wonder if the 160 bit arithmetic is slowing things down. If we don't care about non-IEEE doubles, we're ok with 53 bits. Hmm, I also wonder whether the 160 bit int to float conversion is precisely specified even for IEEE and isn't an artifact of Python's long int implementation. But I guess it doesn't matter, since we're never hashing those floats. Re bugs til 2010: oh let's have more confidence than that :). I think if we're careful and get the details correct before deployment, we shouldn't have any problems. This is just one screenful of code or so, not complex by most reasonable standards. However, we might want post the algorithm on sci.crypt for comments, since there's some knowledgeable people there. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 04:08 Message: Logged In: YES user_id=80475 Took another look at #5 and will change str() to hex(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 03:25 Message: Logged In: YES user_id=80475 1. We don't care about non IEEE. 2. One-way is one-way, so double hashing is unnecessary. Also, I've fielded bug reports from existing user apps that re-seed very frequently (multiple times per second). 3. The implementations reflect the results of timing experiments which showed that the fastest intermediate representation was hex. 4. ['0x0'] is necessary to hanlde the case where n==0. int('', 16) raises a ValueError while int('0x0', 16) does not. 5. random() and getrandbits() do not have to go through the same intermediate steps (it's okay for one to use hex and the other to use str) -- speed and space issues dominate. 
0x comes up enough in Python, there is little use in tossing it away for an obscure, hypothetical micro-controller implementation. 6. Leaving cnt out of the s1 computation guarantees that it will never track the updates of s0 -- any syncronization would be a disaster. Adding a count or some variant smacks of desperation rather than reliance on proven hash properties. 7. The function is already 100 times slower than MT. Adding locks will make the situation worse. It is better to simply document it as being non-threadsafe. Look at back at the mt/sha version. Its code is much cleaner, faster, and threadsafe. It goes a long way towards meeting your request and serving as an alternate generator to validate simulation results. If we use the sha/sha version, I'm certain that we will be fielding bug reports on this through 2010. It is also sufficiently complex that it will spawn lengthy, wasteful discussions and it will create a mine-field for future maintainers. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-25 00:15 Message: Logged In: YES user_id=72053 I'm not sure why the larger state space of mt/sha1 is supposed to be an advantage, but the other points are reasonable. I like the new sha1/sha1 implementation except for a few minor points (below). I made the old version hopefully threadsafe by adding a threading.Lock() object to the instance and locking it on update. I generally like your version better so maybe the lock can be added. Of course that slows it down even further, but in the context of an interpreted Python program I feel that the generator is still fast enough compared to the other stuff the program is likely to be doing. Re the new attachment, some minor issues: 1) The number 2.0**-160 is < 10**-50. This is a valid IEEE double but on some non-IEEE machines it may be a floating underflow or even equal to zero. I don't know if this matters. 2) Paranoia led me to hash the seed twice in the seed operation in my version, to defeat unlikely message-extension attacks against the hash function. I figured reseeding is infrequent enough that an extra hash operation doesn't matter. 3) Storing s1 as a string of 40 hex digits in SHARandom2 means that s1+s2 is 60 characters, which means hashing it will need two sha1 compression operations, slowing it down some. 4) the intiialization of ciphertxt to ["0x0"] instead of [] doesn't seem to do anything useful. int('123abc', 16) is valid without the 0x prefix. 5) random() uses hex(cnt) while getrandbits uses str(cnt) (converting to decimal instead of hex). I think it's better to use hex and remove the 0x prefix from the output, which is cleanest, and simpler to implement on some targets (embedded microcontroller). The same goes for the %s conversion in jumpahead (my version also uses %s there). 6) It may be worthwhile to include cnt in both the s0 and s1 hash updates. That guarantees the s1 hash never gets the same input twice. 7) The variable "s1" in getrandbits (instead of self.s1) is set but never used. Note in my version of sharandom.py, I didn't put a thread lock around the tuple assignment in setstate(). I'm not sure if that's safe or not. But it looks to me like random.py in the CVS does the same thing, so maybe both are unsafe. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-24 06:31 Message: Logged In: YES user_id=80475 I attached a new version of the mt/sha1 combination. 
Here are the relative merits as compared sha1/sha1 approach: * simpiler to implement/maintain since state tracking is builtin * larger state space (2**19937 vs 2**160) * faster * threadsafe Favoring sha1/sha1: * uses only one primitive * easier to replace in situations where MT is not available ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-21 19:56 Message: Logged In: YES user_id=72053 Two small corrections to below: 1) "in favor of an entropy" is an editing error--the intended meaning should be obvious. 2) I meant Bryan Mongeau, not Eric Mongeau. Bryan's lib is at . Sorry for any confusion. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-21 19:49 Message: Logged In: YES user_id=72053 I'm very much in favor of an entropy and Bram's suggested interface of entropy(x) supplying x bytes of randomness is fine. Perhaps it should really live in a cryptography API rather than in the Random API, but either way is ok. Mark Moraes wrote a Windows module that gets random numbers from the Windows CAPI; I put up a copy at http://www.nightsong.com/phr/python/winrand.module For Linux, Cygwin, and *BSD (including MacOS 10, I think), just read /dev/urandom for random bytes. However, various other systems (I think including Solaris) don't include anything like this. OpenSSL has an entropy gathering daemon that might be of some use in that situation. There's also the Yarrow generator (http://www.schneier.com/yarrow.html) and Eric Mongeau(?) wrote a pure-Python generator a while back that tried to gather entropy from thread racing, similar to Java's default SecureRandom class (I consider that method to be a bit bogus in both Python and Java). I think, though, simply supporting /dev/*random and the Windows CAPI is a pretty good start, even if other OS's aren't supported. Providing that function in the Python lib will make quite a few people happy. A single module integrating both methods would be great. I don't have any Windows dev tools so can't test any wrappers for Mark Moraes's function but maybe one of you guys can do it. I'm not too keen on the md5random.py patch for reasons discussed in the c.l.py thread. It depends too deeply on the precise characteristics of both md5 and Mersenne Twister. I think for a cryptography-based generator, it's better to stick to one cryptography-based primitive, and to use sha instead of md5. That also helps portability since it means other environments (maybe including Jython) can reproduce the PRNG stream without having to re-implement MT, as long as they have SHA (which is a US federal standard). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 14:50 Message: Logged In: YES user_id=80475 Bram, if you have a patch, I would be happy to look at it. Please make it as platform independent as possible (its okay to have it conditionally compile differently so long as the API stays the same). Submit it as a separate patch and assign to me -- it doesn't have to be orginal, you can google around to determine prior art. ---------------------------------------------------------------------- Comment By: Bram Cohen (bram_cohen) Date: 2004-03-21 14:34 Message: Logged In: YES user_id=52561 The lack of a 'real' entropy source is the gap which can't be fixed with an application-level bit of code. 
I think there are simple hooks for this on all systems, such as /dev/random on linux, but they aren't cross platform. A unified API which always calls the native entropy hook would be a very good thing. An example of a reasonable API would be to have a module named entropy, with a single function entropy(x) which returns a random string of length x. This is a problem which almost anyone writing a security-related application runs into, and lots of time is wasted writing dubious hacks to harvest entropy when a single simple library could magically solve it the right way. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 12:33 Message: Logged In: YES user_id=80475 Attaching my alternative. If it fulfills your use case, let me know and I'll apply it. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 03:51 Message: Logged In: YES user_id=72053 Updated version of sharandom.py is at same url. It folds a counter into the hash and also includes a getrandbits method. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 01:27 Message: Logged In: YES user_id=72053 I don't mind dropping the time() auto-seed but I think that means eliminating any auto-seed, and requiring a user-supplied seed. There is no demonstrable minimum period for the SHA-OFB and it would be bad if there was, since it would then no longer act like a PRF. Note that the generator in the sample code actually comes from two different SHA instances and thus its expected period is about 2**160. Anyway, adding a simple counter (incrementing by 1 on every SHA call) to the SHA input removes any lingering chance of a repeating sequence. I'll update the code to do that. It's much less ugly than stirring in Mersenne Twister output. I don't have Python 2.4 yet but will look at it when I get a chance. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-17 06:27 Message: Logged In: YES user_id=80475 It has some potential for being added. That being said, try to avoid a religious fervor. I would like to see considerably more discussion and evolution first. The auto-seeding with time *must* be dropped -- it is not harmonious with the goal of creating a secure random sequence. It is okay for the a subclass to deviate in this way. Also, I was soliciting references stronger than (I don't know of any results ... It is generally considered ... ). If we put this in, people are going to rely on it. The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious). Not having a demonstrable minimum period is also bad. Nothing in the discussion so far precludes the existence of a bad seed that has a period of only 1 or 2. See my suggestion on comp.lang.python for a means of mitigating this issue. With respect to the randint question, be sure to look at the current Py2.4 source for random.py. The API is expanded to include and an option method, getrandbits(). That in turn feeds the other integer methods without the int to float to int dance. 
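[A minimal sketch of the entropy(x) interface Bram describes. os.urandom(), added in Python 2.4 shortly after this exchange, already wraps the two sources phr mentions -- /dev/urandom on Unix-like systems and the Windows crypto API -- so the module reduces to a thin wrapper; the function name entropy comes from Bram's proposal, not from any shipped module.]

    import os

    def entropy(nbytes):
        """Return nbytes of OS-supplied randomness as a byte string."""
        try:
            return os.urandom(nbytes)
        except NotImplementedError:
            # Older interpreters raised this when no OS source was found.
            raise RuntimeError("no OS entropy source available on this platform")

    # Example: collect 16 bytes of real entropy to seed some other generator.
    seed_material = entropy(16)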
Upon further consideration, I think the export control question is moot since we're using an existing library function and not really expressing new art. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-17 04:05 Message: Logged In: YES user_id=72053 There are no research results that I know of that can distinguish the output of sha1-ofb from random output in any practical way. To find such a distinguisher would be a significant result. It's a safe bet that cryptographers have searched for distinguishers, though I don't promise to have heard of every result that's been published. I'll ask on sci.crypt if anyone else has heard of such a thing, if you want. SHA1-HMAC is generally considered to be indistinguishable from a PRF (pseudorandom function, i.e. a function selected at random from the space of all functions from arbitrary strings to 160-bit outputs; this term is from concrete security theory). MD5 is deprecated these days and there's no point to using it instead of sha1 for this. I'm not sure what happens if randint is added to the API. If you subclass Random and don't provide a randint method, you inherit from the base class, which can call self.random() to get floats to make the ints from. US export restrictions basically don't exist any more. In principle, if you want to export something, you're supposed to send an email to an address at the commerce department, saying the name of the program and the url where it can be obtained and a few similar things. In practice, email to that address is ignored, they never check anything. I heard the address even stopped working for a while, though they may have fixed it since then. See http://www.bxa.doc.gov/Encryption/ for info. I've emailed notices to the address a few times and never heard back anything. Anyway, I don't think this should count as cryptography; it's simply using a cryptographic hash function as an PRNG to avoid the correlations in other PRNG's; scientific rationale for doing that is given in the Numerical Recipes book mentioned above. The code that I linked uses the standard API but I wonder if the floating point output is optimally uniform, i.e. the N/2**56 calculation may not be exactly the right thing for an ieee float64. Using the time of day is what the Random docs say to do by default. You're correct that any security application needs to supply a higher entropy seed. I would like it very much if the std lib included a module that read some random bytes from the OS for OS's that support it. That means reading /dev/urandom on recent Un*x-ish systems or Cygwin, or calling CryptGenRandom on Windows. Reading /dev/urandom is trivial, and there's a guy on the pycrypt list who wrote a Windows function to call CryptGenRandom and return the output through the Python API. I forwarded the function to Guido with the author's permission but nothing seemed to happen with it. However, this gets away from this sharandom subclass. I'd like to make a few more improvements to the module but after that I think it can be dropped into the lib. Let me know what you think. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 06:52 Message: Logged In: YES user_id=80475 One other thought: if cryptographic strength is a goal, then seeding absolutely should require a long seed (key) as an input and the time should *never* be used. 
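[To illustrate the "without the int to float to int dance" point: an integer method can sit directly on getrandbits() using ordinary rejection sampling. This is a generic sketch with an invented helper name, not a copy of the random.py internals.]

    def randbelow(rng, n):
        """Uniform integer in [0, n) drawn only from rng.getrandbits(), no floats."""
        if n <= 0:
            raise ValueError("n must be positive")
        k = n.bit_length()          # bits needed to cover 0 .. n-1
        r = rng.getrandbits(k)
        while r >= n:               # reject and redraw to keep the result unbiased
            r = rng.getrandbits(k)
        return r

    # Example with the stock Mersenne Twister:
    import random
    print(randbelow(random.Random(1234), 10))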
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 06:49 Message: Logged In: YES user_id=80475 It would have been ideal if the Random API had been designed with an integer generator at the core and floating point as a computed value, but that it the way it has been for a long time and efforts to switch it over would likely result in either incompatibility with existing subclasses or introducing new complexity (making it even harder to subclass). I think the API should be left alone until Py3.0. The attached module would make a good recipe on ASPN where improvements and critiques can be posted. Do you have links to research showing that running SHA-1 in a cipher block feedback mode results in a cryptographically strong random number generator -- the result seems likely, but a research link would be great. Is there a link to research showing the RNG properties of the resulting generator (period, equidistribution, passing tests for randomness, etc)? Also, is there research showing the relative merits of this approach vs MD5, AES, or DES? If something like this gets added to the library, I prefer it to be added to random.py using the existing API. Adding yet another random module would likely do more harm than good. One other question (I don't know the answer to this): would including a cryptographically strong RNG trigger US export restrictions on the python distribution? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:22:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:22:54 2004 Subject: [ python-Bugs-892939 ] Race condition in popen2 Message-ID: Bugs item #892939, was opened at 2004-02-08 12:31 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=892939&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None >Priority: 6 Submitted By: Ken McNeil (kenmcneil) Assigned to: Nobody/Anonymous (nobody) Summary: Race condition in popen2 Initial Comment: The "fix" for bug #761888 created a race condition in Popen3 . The interaction between wait and _cleanup is the root of the problem. def wait(self): """Wait for and return the exit status of the child process.""" if self.sts < 0: pid, sts = os.waitpid(self.pid, 0) if pid == self.pid: self.sts = sts return self.sts def _cleanup(): for inst in _active[:]: inst.poll() In wait, between the check of self.sts and the call to os.waitpid a new Popen3 object can be created in another thread which will trigger a call to _cleanup. Since the call to _cleanup polls the process, when the thread running wait starts back up again it will try to poll the process using os.waitpid, which will throw an OSError because os.waitpid has already been called for the PID indirectly in _cleanup. A work around is for the caller of wait to catch the OSError and check the sts field, and if sts is non-negative then the OSError is most likely because of this problem and can be ignored. However, sts is undocumented and should probably stay that way. My suggestion is that the patch that added _active, _cleanup, and all be removed and a more suitable mechanism for fixing bug #761888 be found. 
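[The caller-side workaround described earlier in this report -- catch the OSError from wait() and fall back to the status that _cleanup() already collected -- might look roughly like this on a Python 2.3 installation; safe_wait is an illustrative name, not part of the popen2 module.]

    import popen2

    def safe_wait(proc):
        try:
            return proc.wait()
        except OSError:
            if proc.sts >= 0:    # another thread's _cleanup() already reaped the child
                return proc.sts
            raise

    proc = popen2.Popen3("echo hello")
    status = safe_wait(proc)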
As it has been said in the discussion of bug #761888, magically closing FDs is not a "good thing". It seems to me that surrounding the call to os.fork with a try/except, and closing the pipes in the except, would be suitable but I don't know how this would interact with a failed call to fork, therefore I wont provide a patch. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=892939&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:27:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:27:32 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Raymond Hettinger (rhettinger) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. 
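[As an aside for readers of the archive: this is essentially what later releases ended up allowing -- eval() accepts an arbitrary mapping for its locals argument -- so the submitter's spreadsheet example works once the instance is passed as locals alongside an empty globals dict. A lightly modernized sketch, reusing the names from the original report:]

    class SpreadSheet:
        def __init__(self):
            self._cells = {}

        def __setitem__(self, key, formula):
            self._cells[key] = formula

        def __getitem__(self, key):
            # Empty globals, the sheet itself as locals: name lookups in the
            # formula come back through __getitem__ and evaluate recursively.
            return eval(self._cells[key], {}, self)

    ss = SpreadSheet()
    ss['a1'] = '5'
    ss['a2'] = 'a1 * 5'
    print(ss['a2'])        # 25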
Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inherited a dictionary. I want to use the eval() function as a simple expression evaluator. I have the following dictionary:

    d['a']='1'
    d['b']='2'
    d['c']='a+b'

I want the following results:

    d[a] -> 1
    d[b] -> 2
    d[c] -> 3

To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionary:

    class MyDict( dict ) :
        def __getitem__( self, key ):
            print "__getitem__", key
            val = dict.__getitem__( self, key )
            print "val = '%s'" % val
            return eval( val , self )

But it does not work:

    d[a]: __getitem__ a  val = '1'  -> 1
    d[b]: __getitem__ b  val = '2'  -> 2
    d[c]: __getitem__ c  val = 'e+1'  ERROR

    Traceback (most recent call last):
      File "test_parse_jaycos_config.py", line 83, in testMyDict
        self.assertEquals( d['c'], 2 )
      File "parse_config_file.py", line 10, in __getitem__
        return eval( val , self )
      File "", line 0, in ?
    TypeError: cannot concatenate 'str' and 'int' objects

d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double-underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions.
Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Sat Jun 12 03:28:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 03:28:13 2004 Subject: [ python-Bugs-963321 ] Acroread aborts printing PDF documentation Message-ID: Bugs item #963321, was opened at 2004-05-30 18:50 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963321&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Howard B. Golden (hgolden) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Acroread aborts printing PDF documentation Initial Comment: Using acroread 5.08 under Linux 2.6.5, the May 20, 2004 version of the documentation (release 2.3.4) is displayed successfully. However, when I attempt to print the documentation, acroread aborts (prints "Aborted" on standard error). (It is possible that this is an acroread bug, but I'm reporting it here in case it indicates a problem with PDF files generated by the documentation building process.) 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=963321&group_id=5470 From noreply at sourceforge.net Sat Jun 12 04:15:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 04:15:22 2004 Subject: [ python-Bugs-970459 ] Generators produce wrong exception Message-ID: Bugs item #970459, was opened at 2004-06-10 16:58 Message generated for change (Comment added) made by anders_lehmann You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anders Lehmann (anders_lehmann) Assigned to: Nobody/Anonymous (nobody) Summary: Generators produce wrong exception Initial Comment: The following script : def f(): yield "%s" %('Et','to') for i in f(): print i will produce the following traceback in Python 2.3.4 Traceback (most recent call last): File "python_generator_bug.py", line 6, in ? b+=f() TypeError: argument to += must be iterable Where I would expect a: TypeError : not all arguments converted during string formatting. ---------------------------------------------------------------------- >Comment By: Anders Lehmann (anders_lehmann) Date: 2004-06-12 10:15 Message: Logged In: YES user_id=1060856 I am sorry that I overoptimized the script that should demonstrate the error. The problem was discovered with the : b=[] b+=f() I find it very confusing that the reported error is that the function is not iterable ( which isnt due to a format error ). When the generator is more complex than the example the exception doesn't help in locating the bug. But if Tim thinks it is the correct behaviour I will close the bug. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:03 Message: Logged In: YES user_id=80475 Ander, if this resolves your issue, please close this bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-10 17:38 Message: Logged In: YES user_id=31435 The script you gave does produce the message you expect: >>> def f(): ... yield "%s" %('Et','to') ... >>> for i in f(): print i ... Traceback (most recent call last): File "", line 1, in ? File "", line 2, in f TypeError: not all arguments converted during string formatting >>> The traceback you gave contains the line b+=f() which doesn't appear in the script you gave. If the script you *actually* ran had, for example, >>> b = [] >>> b += f() Then Traceback (most recent call last): File "", line 1, in ? TypeError: argument to += must be iterable >>> is an appropriate exception. 
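[For anyone skimming the archive, the timing point Tim makes can be shown in a couple of lines: the bad format string only blows up when the generator body runs, i.e. at iteration time, and on a current interpreter the original TypeError propagates through += unchanged (per the follow-up below, the misleading "+= must be iterable" message was already gone in 2.4a0).]

    def f():
        yield "%s" % ('Et', 'to')

    g = f()            # no error yet: the generator body has not run
    b = []
    try:
        b += g         # iteration starts the body, which raises
    except TypeError as exc:
        print(exc)     # not all arguments converted during string formatting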
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 From noreply at sourceforge.net Sat Jun 12 04:15:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 04:15:33 2004 Subject: [ python-Bugs-970459 ] Generators produce wrong exception Message-ID: Bugs item #970459, was opened at 2004-06-10 16:58 Message generated for change (Settings changed) made by anders_lehmann You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed Resolution: None Priority: 5 Submitted By: Anders Lehmann (anders_lehmann) Assigned to: Nobody/Anonymous (nobody) Summary: Generators produce wrong exception Initial Comment: The following script : def f(): yield "%s" %('Et','to') for i in f(): print i will produce the following traceback in Python 2.3.4 Traceback (most recent call last): File "python_generator_bug.py", line 6, in ? b+=f() TypeError: argument to += must be iterable Where I would expect a: TypeError : not all arguments converted during string formatting. ---------------------------------------------------------------------- Comment By: Anders Lehmann (anders_lehmann) Date: 2004-06-12 10:15 Message: Logged In: YES user_id=1060856 I am sorry that I overoptimized the script that should demonstrate the error. The problem was discovered with the : b=[] b+=f() I find it very confusing that the reported error is that the function is not iterable ( which isnt due to a format error ). When the generator is more complex than the example the exception doesn't help in locating the bug. But if Tim thinks it is the correct behaviour I will close the bug. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:03 Message: Logged In: YES user_id=80475 Ander, if this resolves your issue, please close this bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-10 17:38 Message: Logged In: YES user_id=31435 The script you gave does produce the message you expect: >>> def f(): ... yield "%s" %('Et','to') ... >>> for i in f(): print i ... Traceback (most recent call last): File "", line 1, in ? File "", line 2, in f TypeError: not all arguments converted during string formatting >>> The traceback you gave contains the line b+=f() which doesn't appear in the script you gave. If the script you *actually* ran had, for example, >>> b = [] >>> b += f() Then Traceback (most recent call last): File "", line 1, in ? TypeError: argument to += must be iterable >>> is an appropriate exception. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 From noreply at sourceforge.net Sat Jun 12 04:54:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 04:55:01 2004 Subject: [ python-Bugs-970459 ] Generators produce wrong exception Message-ID: Bugs item #970459, was opened at 2004-06-10 09:58 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Closed >Resolution: Duplicate Priority: 5 Submitted By: Anders Lehmann (anders_lehmann) Assigned to: Nobody/Anonymous (nobody) Summary: Generators produce wrong exception Initial Comment: The following script : def f(): yield "%s" %('Et','to') for i in f(): print i will produce the following traceback in Python 2.3.4 Traceback (most recent call last): File "python_generator_bug.py", line 6, in ? b+=f() TypeError: argument to += must be iterable Where I would expect a: TypeError : not all arguments converted during string formatting. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 03:54 Message: Logged In: YES user_id=80475 Okay, I was able to reproduce this under Py2.3.4 but not Py2.4a0 where it appears to have already been fixed. Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. >>> def f(): yield "%s" %('Et','to') >>> b = [] >>> b += f() Traceback (most recent call last): File "", line 1, in -toplevel- b += f() TypeError: argument to += must be iterable While this particular example has been fixed, others similar situations exist. Fixing them is not easy (there are many places where a one error message gets trounced by another message from an enclosing function which is trying to provide more information). Leaving this bug report as closed because the specific case has been fixed and because there is another open bug report for the more general case. ---------------------------------------------------------------------- Comment By: Anders Lehmann (anders_lehmann) Date: 2004-06-12 03:15 Message: Logged In: YES user_id=1060856 I am sorry that I overoptimized the script that should demonstrate the error. The problem was discovered with the : b=[] b+=f() I find it very confusing that the reported error is that the function is not iterable ( which isnt due to a format error ). When the generator is more complex than the example the exception doesn't help in locating the bug. But if Tim thinks it is the correct behaviour I will close the bug. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:03 Message: Logged In: YES user_id=80475 Ander, if this resolves your issue, please close this bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-10 10:38 Message: Logged In: YES user_id=31435 The script you gave does produce the message you expect: >>> def f(): ... yield "%s" %('Et','to') ... >>> for i in f(): print i ... Traceback (most recent call last): File "", line 1, in ? 
File "", line 2, in f TypeError: not all arguments converted during string formatting >>> The traceback you gave contains the line b+=f() which doesn't appear in the script you gave. If the script you *actually* ran had, for example, >>> b = [] >>> b += f() Then Traceback (most recent call last): File "", line 1, in ? TypeError: argument to += must be iterable >>> is an appropriate exception. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970459&group_id=5470 From noreply at sourceforge.net Sat Jun 12 07:54:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 07:54:53 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 08:54 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) Assigned to: M.-A. Lemburg (lemburg) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:54 Message: Logged In: YES user_id=21627 It is just not feasible to list all recognized aliases. For example, for ISO-8859-1, there are trivial 31 aliases, including Iso_8859-1 and iSO-8859_1. For shift_jisx0213, there are 1023 trivial aliases. The aliases column in the documentation should only list non-trivial aliases, and for these, it should list a form that people are most likely to encounter. So if "s-jis" would be more common than "s_jis", this is what should be listed. If s-JIS is even more common, this should be listed. The top of the page should say that case in encoding names does not matter, and that _ and - can be freely substituted. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one. ---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 10:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. 
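[A quick version of the codecs.lookup() check the submitter mentions, run on a current interpreter; the canonical names printed are simply whatever CodecInfo.name reports.]

    import codecs

    # Hyphenated, underscored and mixed-case spellings normalize to the same
    # codec, which is why the hyphenated aliases work even though the table
    # lists only one spelling.
    for alias in ('euc-jp', 'euc_jp', 'EUC-JP', 'shift-jis', 'Shift_JIS'):
        print(alias, '->', codecs.lookup(alias).name)

    assert codecs.lookup('euc-jp').name == codecs.lookup('euc_jp').name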
---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Sat Jun 12 07:59:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 07:59:29 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 08:54 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) Assigned to: M.-A. Lemburg (lemburg) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:59 Message: Logged In: YES user_id=21627 Actually, the top of the page does already say Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:54 Message: Logged In: YES user_id=21627 It is just not feasible to list all recognized aliases. For example, for ISO-8859-1, there are trivial 31 aliases, including Iso_8859-1 and iSO-8859_1. For shift_jisx0213, there are 1023 trivial aliases. The aliases column in the documentation should only list non-trivial aliases, and for these, it should list a form that people are most likely to encounter. So if "s-jis" would be more common than "s_jis", this is what should be listed. If s-JIS is even more common, this should be listed. The top of the page should say that case in encoding names does not matter, and that _ and - can be freely substituted. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one. 
---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 10:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Sat Jun 12 08:04:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 08:04:09 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 08:54 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) Assigned to: M.-A. Lemburg (lemburg) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-12 14:04 Message: Logged In: YES user_id=38388 I think that it might be a good idea to document of how the standard search function of the encodings package work at the top of that page, namely to normalize encoding names before doing the lookup: """ Normalization works as follows: all non-alphanumeric characters except the dot used for Python package names are collapsed and replaced with a single underscore, e.g. ' -;#' becomes '_'. Leading and trailing underscores are removed. 
Note that encoding names should be ASCII only; if they do use non-ASCII characters, these must be Latin-1 compatible. """ The table should then only list normalized encoding names (which I think is already the case). ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:59 Message: Logged In: YES user_id=21627 Actually, the top of the page does already say Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:54 Message: Logged In: YES user_id=21627 It is just not feasible to list all recognized aliases. For example, for ISO-8859-1, there are trivial 31 aliases, including Iso_8859-1 and iSO-8859_1. For shift_jisx0213, there are 1023 trivial aliases. The aliases column in the documentation should only list non-trivial aliases, and for these, it should list a form that people are most likely to encounter. So if "s-jis" would be more common than "s_jis", this is what should be listed. If s-JIS is even more common, this should be listed. The top of the page should say that case in encoding names does not matter, and that _ and - can be freely substituted. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one. ---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 10:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Sat Jun 12 08:04:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 08:04:46 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 08:54 Message generated for change (Settings changed) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) >Assigned to: Raymond Hettinger (rhettinger) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-12 14:04 Message: Logged In: YES user_id=38388 I think that it might be a good idea to document of how the standard search function of the encodings package work at the top of that page, namely to normalize encoding names before doing the lookup: """ Normalization works as follows: all non-alphanumeric characters except the dot used for Python package names are collapsed and replaced with a single underscore, e.g. ' -;#' becomes '_'. Leading and trailing underscores are removed. Note that encoding names should be ASCII only; if they do use non-ASCII characters, these must be Latin-1 compatible. """ The table should then only list normalized encoding names (which I think is already the case). ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:59 Message: Logged In: YES user_id=21627 Actually, the top of the page does already say Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:54 Message: Logged In: YES user_id=21627 It is just not feasible to list all recognized aliases. For example, for ISO-8859-1, there are trivial 31 aliases, including Iso_8859-1 and iSO-8859_1. For shift_jisx0213, there are 1023 trivial aliases. The aliases column in the documentation should only list non-trivial aliases, and for these, it should list a form that people are most likely to encounter. So if "s-jis" would be more common than "s_jis", this is what should be listed. If s-JIS is even more common, this should be listed. The top of the page should say that case in encoding names does not matter, and that _ and - can be freely substituted. 
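A rough sketch of the normalization rule being described (an illustration of the behaviour only, not the actual search function in the encodings package):

    import re

    def normalize_encoding(name):
        # Collapse every run of non-alphanumeric characters except '.' into a
        # single underscore, strip leading/trailing underscores, and ignore
        # case, roughly as described in the comments above.
        return re.sub(r'[^0-9A-Za-z.]+', '_', name).strip('_').lower()

    # normalize_encoding('EUC-JP')        -> 'euc_jp'
    # normalize_encoding(' iso 2022-jp ') -> 'iso_2022_jp'

Under that rule the documentation only needs to list each normalized name once, since all the "trivial" case and hyphen variants collapse to it.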
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one. ---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 10:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Sat Jun 12 11:21:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 11:21:30 2004 Subject: [ python-Bugs-917055 ] add a stronger PRNG Message-ID: Bugs item #917055, was opened at 2004-03-16 02:46 Message generated for change (Comment added) made by phr You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 Category: Python Library Group: Feature Request Status: Closed Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: add a stronger PRNG Initial Comment: The default Mersenne Twister algorithm in the Random module is very fast but makes no serious attempt to generate output that stands up to adversarial analysis. Besides cryptography applications, this can be a serious problem in areas like computer games. Sites like www.partypoker.com routinely run online tournaments with prize funds of 100K USD or more. There's big financial incentives to find ways of guessing your opponent's cards with better than random chance probability. See bug #901285 for some discussion of possible correlations in Mersenne Twister's output. Teukolsky et al discuss PRNG issues at some length in their book "Numerical Recipes". The original edition of Numerical Recipes had a full blown version of the FIPS Data Encryption Standard implemented horrendously in Fortran, as a way of making a PRNG with no easily discoverable output correlations. Later editions replaced the DES routine with a more efficient one based on similar principles. Python already has an SHA module that makes a dandy PRNG. 
I coded a sample implementation: http://www.nightsong.com/phr/python/sharandom.py I'd like to ask that the Python lib include something like this as an alternative to MT. It would be similar to the existing whrandom module in that it's an alternative subclass to the regular Random class. The existing Random module wouldn't have to be changed. I don't propose directly including the module above, since I think the Random API should also be extended to allow directly requesting pseudo-random integers from the generator subclass, rather than making them from floating-point output. That would allow making the above subclass work more cleanly. I'll make a separate post about this, but first will have to examine the Random module source code. ---------------------------------------------------------------------- >Comment By: paul rubin (phr) Date: 2004-06-12 15:21 Message: Logged In: YES user_id=72053 Sorry about not responding earlier. I thought the current patch was ok and I needed to get around to writing a doc for it, which if it covers all the proofs involved, starts getting to be a bit scary--I've never written anything like that but have been wanting to. I do think the patch should be preserved and made available, if not in the python lib then maybe in Vaults of Parnassus, or ASPN Cookbook or someplace like that. It's not a "crypto project", it's just an attempt at writing a PRNG that meets modern criteria. However, it's considerably less important for the Python lib than the entropy module is, since applications can generally paste it from somewhere. I didn't realize it was possible to close a bug and leave its patches open. Should we open a separate bug for the entropy module? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 07:20 Message: Logged In: YES user_id=80475 Closing due to lack of interest/progress, etc. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 05:29 Message: Logged In: YES user_id=80475 Can we close this one (while leaving open the patch for an entropy module)? Essentially, it provides nothing that couldn't be contributed as a short recipe for those interested in such things. While an alternate RNG would be nice, turning this into a crypto project is probably not a great idea. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-04-14 05:33 Message: Logged In: YES user_id=72053 trevp, the jumpahead operation lets you stir in new entropy. Jeremy, I'll see if I can write some docs for it, and will attempt a concrete security proof. I don't think we should need to say no references were found for using sha1 as a prng. The randomness assumption is based on the Krawczyk-Bellare-Rogaway result that's cited somewhere down the page or in the clpy thread. I'll include a cite in the doc/rationale. I hope that the entropy module is accepted, assuming it works. The entropy module is quite a bit more important than the deterministic PRNG module, since the application can easily supply the DPRNG but can't always easily supply the entropy module. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-14 04:10 Message: Logged In: YES user_id=973611 I submitted a patch for an entropy module, as was discussed below. See patch #934711. 
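The interface being discussed is roughly the following (a sketch only -- it is not the code in patch #934711, and Windows support via CryptGenRandom would need a C extension, which is not shown):

    def entropy(nbytes):
        """Return nbytes of randomness from the operating system."""
        try:
            # Linux, *BSD, Mac OS X and Cygwin expose a kernel entropy pool here.
            f = open('/dev/urandom', 'rb')
        except IOError:
            raise NotImplementedError('no OS entropy source available on this platform')
        try:
            return f.read(nbytes)
        finally:
            f.close()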
---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-12 03:35 Message: Logged In: YES user_id=31392 The current patch doesn't address any of my concerns about documentation or rationale. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-10 04:22 Message: Logged In: YES user_id=973611 We should probably clarify the requirements. If we just want to use SHA1 to produce an RNG suitable for Monte Carlo etc., then we could do something simpler and faster than what we're doing. In particular, there's no need for state update, we could just generate outputs by SHA1(seed + counter). This is mentioned in "Applied Cryptography", 17.14. If we want it to "stand up to adversarial analysis" and be usable for cryptography, then I think we need to put a little more into it - in particular, the ability to mix new randomness into the generator state becomes important, and it becomes preferable to use a block cipher construction, not because the SHA1 construction is insecure, but so we can point to something like Yarrow. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-10 04:01 Message: Logged In: YES user_id=31392 Much earlier in the discussion Raymond wrote: "The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious)." I don't see any documentation of this sort in the current patch. I also think it would be helpful if the documentation made some mention of why this generator would be useful. In particular, I suspect some users may by confused by the mention of SHA and be lead to believe that this is CSPRNG, when it is not; perhaps a mention of Yarrow and other approaches for cryptographic applications would be enough to clarify. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-10 03:24 Message: Logged In: YES user_id=80475 Thanks for the detailed comments. 1) Will add the references we have but need to make it clear that this exact implementation is not studied and does not guarantee cryptographic security. 2) The API is clear, seed() overwrites and jumpahead() updates. Besides, the goal is to provide a good alternative random number generator. If someone needs real crypto, they should use that. Tossing in ad hoc pieces to "make it harder" is a sure sign of venturing too far from theoretical underpinnings. 3) Good observation. I don't think a change is necessary. The docs do not promise that asking for 160 gives the same as 96 and 64 back to back. The Mersenne Twister has similar behavior. 4) Let's not gum up the API because we have encryption and text API's on the brain. The module is about random numbers not byte strings. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-10 00:52 Message: Logged In: YES user_id=973611 Hi Raymond, here's some lengthy though not critically important comments (i.e, I'm okay with the latest diff). 1) With regards to this documentation: "No references were found which document the properties of SHA-1 as a random number generator". 
I can't find anything that documents exactly what we're doing. This type of generator is similar to Bruce Schneier's Yarrow and its offspring (Fortuna and Tiny). However, those use block-ciphers in counter mode, instead of SHA-1. According to the recent Tiny paper: "Cryptographic hash functions can also be a good foundation for a PRNG. Many constructs have used MD5 or SHA1 in this capacity, but the constructions are often ad-hoc. When using a hash function, we would recommend HMAC in CTR mode (i.e., one MACs counter for each successive output block). Ultimately, we prefer the use of block ciphers, as they are generally better-studied constructs." http://www.acsac.org/2003/papers/79.pdf Using HMAC seems like overkill to me, and would slow things down. However, if there's any chance Python will add block ciphers in the future, it might be worth waiting for, so we could implement one of the well-documented block cipher PRNGs. 2) Most cryptographic PRNGs allow for mixing new entropy into the generator state. The idea is you harvest entropy in the background, and once you've got a good chunk (say 128+ bits) you add it in. This makes cryptanalysis of the output harder, and allows you to recover even if the generator state is compromised. We could change the seed() method so it updates the state instead of overwriting it: def __init__(self): self.cnt = 0 self.s0 = '\0' * 20 self.gauss_next = None def seed(self, a=None): if a is None: # Initialize from current time import time a = time.time() b = sha.new(repr(a)).digest() self.s0 = sha.new(self.s0 + b).digest() 'b' may not be necessary, I'm not sure, though it's similar to how some other PRNGs handle seed inputs. If we were using a block cipher PRNG, it would be more obvious how to do this. jumpahead() could also be used instead of seed(). 3) The current generators (not just SHA1Random) won't always return the same sequence of bits from the same state. For example, if I call SHA1Random.getrandbits() asking for 160 bits they'll come from the same block, but if I ask for 96 and 64 bits, they'll come from different blocks. I suggest buffering the output, so getting 160 bits or 96+64 gets the same bits. Changing getrandbits() to getrandbytes () would avoid the need for bit-level buffering. 4) I still think a better interface would only require new generators to return a byte string. That would be easier for SHA1Random, and easier for other generators based on cross- platform entropy sources. I.e., in place of random() and getrandbits(), SHA1Random would only have to implement: def getrandbytes(self, n): while len(buffer) < n: self.cnt += 1 self.s0 = sha.new(self.s0 + hex (self.cnt)).digest() self.buffer += sha.new(self.s0).digest() retVal = self.buffer[:n] self.buffer = self.buffer[n:] return retVal The superclass would call this to get the required number of bytes, and convert them as needed (for converting them to numbers it could use the 'long(s, 256)' patch I submitted. Besides making it easier to add new generators, this would provide a useful function to users of these generators. getrandbits() is less useful, and it's harder to go from a long- integer to a byte-string than vice versa, because you may have to zero-pad if the long-integer is small. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-09 03:41 Message: Logged In: YES user_id=80475 Attaching a revised patch. 
If there are no objections, I will add it to the library (after factoring the unittests and adding docs). ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-07 06:56 Message: Logged In: YES user_id=973611 Comments on shardh.py - SHA1Random2 seems more complex than it needs to be. Comparing with figure 19 of [1], I believe that s1 does not need to be kept as state, so you could replace this: self.s1 = s1 = sha.new(s0 + self.s1).hexdigest() with this: s1 = sha.new(s0).hexdigest() If there's concern about the low hamming-distance between counter values, you could simply hash the counter before feeding it in (or use M-T instead of the counter). Instead of updating s0 every block, you could update it every 10th block or so. This would slightly increase the number of old values an attacker could recover, upon compromising the generator state, but it could be a substantial speedup. SHA1Random1 depends on M-T for some of its security properties. In particular, if I discover the generator state, can I run it backwards and determine previous values? I don't know, it depends on M-T. Unless we know more about the properties of M-T, I think it would be preferable to use M- T only in place of the counter in the SHA1Random2 construction (if at all), *NOT* as the sole repository of PRNG state. [1] http://www.cypherpunks.to/~peter/06_random.pdf ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-04-01 02:48 Message: Logged In: YES user_id=72053 FYI, this was just announced on the python-crypto list. It's a Python wrapper for EGADS, a cross platform entropy-gathering RNG. I haven't looked at the code for it and have no opinion about it. http://wiki.osafoundation.org/twiki/bin/view/Chandler/PyEgads ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-26 00:36 Message: Logged In: YES user_id=72053 1. OK, though the docs should mention this. 2. One-way just means collision resistant, and we're trying to make something without a distinguisher, which is a stronger criterion. I'm not saying there's any problem with a single hash; I just don't see an obvious proof that there's not a problem. Also it makes me cringe slightly to keep the seed around as plaintext even temporarily. The PC that I'm using (midrange Athlon) can do about a million sha1 hashes per second, so an extra hash per reseed "multiple" times per second shouldn't be noticible, for reasonable values of "multiple". 3. Since most of the real computation of this function is in the sha1 compression, the timing indicates that evaluation is dominated by interpreter overhead rather than by hashing. I presume you're using CPython. The results may be different in Jython or with Psyco and different again under PyPy once that becomes real. I think we should take the view that we're designing a mathematical function that exists independently of any specific implementation, then figure out what characteristics it should have and implement those, rather than tailoring it to the peculiarities of CPython. If people are really going to be using this function in 2010, CPython will hopefully be dead and gone (replaced by PyPy) by then, so that's all the more reason to not worry about small CPython-specific effects since the function will outlast CPython. Maybe also sometime between now and then, these libraries can be compiled with psyco. 4. OK 5. OK. 
Would be good to also change %s for cnt in setstate to %x. 6. Synchronization can be avoided by hashing different fixed strings into s0 and s1 at each rehash (I did that in my version). I think it's worth doing that just to kick the hash function away from standard sha. I actually don't see much need for the counter in either hash, but you were concerned about hitting possible short cycles in sha. 7. OK. WHrandom is already non-threadsafe, so there's precedent. I do have to wonder if the 160 bit arithmetic is slowing things down. If we don't care about non-IEEE doubles, we're ok with 53 bits. Hmm, I also wonder whether the 160 bit int to float conversion is precisely specified even for IEEE and isn't an artifact of Python's long int implementation. But I guess it doesn't matter, since we're never hashing those floats. Re bugs til 2010: oh let's have more confidence than that :). I think if we're careful and get the details correct before deployment, we shouldn't have any problems. This is just one screenful of code or so, not complex by most reasonable standards. However, we might want post the algorithm on sci.crypt for comments, since there's some knowledgeable people there. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 09:08 Message: Logged In: YES user_id=80475 Took another look at #5 and will change str() to hex(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 08:25 Message: Logged In: YES user_id=80475 1. We don't care about non IEEE. 2. One-way is one-way, so double hashing is unnecessary. Also, I've fielded bug reports from existing user apps that re-seed very frequently (multiple times per second). 3. The implementations reflect the results of timing experiments which showed that the fastest intermediate representation was hex. 4. ['0x0'] is necessary to hanlde the case where n==0. int('', 16) raises a ValueError while int('0x0', 16) does not. 5. random() and getrandbits() do not have to go through the same intermediate steps (it's okay for one to use hex and the other to use str) -- speed and space issues dominate. 0x comes up enough in Python, there is little use in tossing it away for an obscure, hypothetical micro-controller implementation. 6. Leaving cnt out of the s1 computation guarantees that it will never track the updates of s0 -- any syncronization would be a disaster. Adding a count or some variant smacks of desperation rather than reliance on proven hash properties. 7. The function is already 100 times slower than MT. Adding locks will make the situation worse. It is better to simply document it as being non-threadsafe. Look at back at the mt/sha version. Its code is much cleaner, faster, and threadsafe. It goes a long way towards meeting your request and serving as an alternate generator to validate simulation results. If we use the sha/sha version, I'm certain that we will be fielding bug reports on this through 2010. It is also sufficiently complex that it will spawn lengthy, wasteful discussions and it will create a mine-field for future maintainers. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-25 05:15 Message: Logged In: YES user_id=72053 I'm not sure why the larger state space of mt/sha1 is supposed to be an advantage, but the other points are reasonable. 
I like the new sha1/sha1 implementation except for a few minor points (below). I made the old version hopefully threadsafe by adding a threading.Lock() object to the instance and locking it on update. I generally like your version better so maybe the lock can be added. Of course that slows it down even further, but in the context of an interpreted Python program I feel that the generator is still fast enough compared to the other stuff the program is likely to be doing. Re the new attachment, some minor issues: 1) The number 2.0**-160 is < 10**-50. This is a valid IEEE double but on some non-IEEE machines it may be a floating underflow or even equal to zero. I don't know if this matters. 2) Paranoia led me to hash the seed twice in the seed operation in my version, to defeat unlikely message-extension attacks against the hash function. I figured reseeding is infrequent enough that an extra hash operation doesn't matter. 3) Storing s1 as a string of 40 hex digits in SHARandom2 means that s1+s2 is 60 characters, which means hashing it will need two sha1 compression operations, slowing it down some. 4) the intiialization of ciphertxt to ["0x0"] instead of [] doesn't seem to do anything useful. int('123abc', 16) is valid without the 0x prefix. 5) random() uses hex(cnt) while getrandbits uses str(cnt) (converting to decimal instead of hex). I think it's better to use hex and remove the 0x prefix from the output, which is cleanest, and simpler to implement on some targets (embedded microcontroller). The same goes for the %s conversion in jumpahead (my version also uses %s there). 6) It may be worthwhile to include cnt in both the s0 and s1 hash updates. That guarantees the s1 hash never gets the same input twice. 7) The variable "s1" in getrandbits (instead of self.s1) is set but never used. Note in my version of sharandom.py, I didn't put a thread lock around the tuple assignment in setstate(). I'm not sure if that's safe or not. But it looks to me like random.py in the CVS does the same thing, so maybe both are unsafe. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-24 11:31 Message: Logged In: YES user_id=80475 I attached a new version of the mt/sha1 combination. Here are the relative merits as compared sha1/sha1 approach: * simpiler to implement/maintain since state tracking is builtin * larger state space (2**19937 vs 2**160) * faster * threadsafe Favoring sha1/sha1: * uses only one primitive * easier to replace in situations where MT is not available ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-22 00:56 Message: Logged In: YES user_id=72053 Two small corrections to below: 1) "in favor of an entropy" is an editing error--the intended meaning should be obvious. 2) I meant Bryan Mongeau, not Eric Mongeau. Bryan's lib is at . Sorry for any confusion. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-22 00:49 Message: Logged In: YES user_id=72053 I'm very much in favor of an entropy and Bram's suggested interface of entropy(x) supplying x bytes of randomness is fine. Perhaps it should really live in a cryptography API rather than in the Random API, but either way is ok. 
Mark Moraes wrote a Windows module that gets random numbers from the Windows CAPI; I put up a copy at http://www.nightsong.com/phr/python/winrand.module For Linux, Cygwin, and *BSD (including MacOS 10, I think), just read /dev/urandom for random bytes. However, various other systems (I think including Solaris) don't include anything like this. OpenSSL has an entropy gathering daemon that might be of some use in that situation. There's also the Yarrow generator (http://www.schneier.com/yarrow.html) and Eric Mongeau(?) wrote a pure-Python generator a while back that tried to gather entropy from thread racing, similar to Java's default SecureRandom class (I consider that method to be a bit bogus in both Python and Java). I think, though, simply supporting /dev/*random and the Windows CAPI is a pretty good start, even if other OS's aren't supported. Providing that function in the Python lib will make quite a few people happy. A single module integrating both methods would be great. I don't have any Windows dev tools so can't test any wrappers for Mark Moraes's function but maybe one of you guys can do it. I'm not too keen on the md5random.py patch for reasons discussed in the c.l.py thread. It depends too deeply on the precise characteristics of both md5 and Mersenne Twister. I think for a cryptography-based generator, it's better to stick to one cryptography-based primitive, and to use sha instead of md5. That also helps portability since it means other environments (maybe including Jython) can reproduce the PRNG stream without having to re-implement MT, as long as they have SHA (which is a US federal standard). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 19:50 Message: Logged In: YES user_id=80475 Bram, if you have a patch, I would be happy to look at it. Please make it as platform independent as possible (its okay to have it conditionally compile differently so long as the API stays the same). Submit it as a separate patch and assign to me -- it doesn't have to be orginal, you can google around to determine prior art. ---------------------------------------------------------------------- Comment By: Bram Cohen (bram_cohen) Date: 2004-03-21 19:34 Message: Logged In: YES user_id=52561 The lack of a 'real' entropy source is the gap which can't be fixed with an application-level bit of code. I think there are simple hooks for this on all systems, such as /dev/random on linux, but they aren't cross platform. A unified API which always calls the native entropy hook would be a very good thing. An example of a reasonable API would be to have a module named entropy, with a single function entropy(x) which returns a random string of length x. This is a problem which almost anyone writing a security-related application runs into, and lots of time is wasted writing dubious hacks to harvest entropy when a single simple library could magically solve it the right way. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 17:33 Message: Logged In: YES user_id=80475 Attaching my alternative. If it fulfills your use case, let me know and I'll apply it. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 08:51 Message: Logged In: YES user_id=72053 Updated version of sharandom.py is at same url. It folds a counter into the hash and also includes a getrandbits method. 
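The counter-folded construction being discussed looks roughly like this (an illustrative sketch under those assumptions, not phr's actual sharandom.py; the class and method names are invented for the example):

    import sha   # the Python 2.x sha module; hashlib.sha1() in later versions

    class SHA1Stream:
        def __init__(self, seed):
            # Internal state is never output directly; outputs are hashes of it.
            self.state = sha.new('seed:' + repr(seed)).digest()
            self.count = 0
        def nextblock(self):
            # Fold an ever-increasing counter into each state update so the
            # hash input can never repeat, ruling out short cycles.
            self.count += 1
            self.state = sha.new('%x:' % self.count + self.state).digest()
            return sha.new('out:' + self.state).digest()   # 20 output bytes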
---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 06:27 Message: Logged In: YES user_id=72053 I don't mind dropping the time() auto-seed but I think that means eliminating any auto-seed, and requiring a user-supplied seed. There is no demonstrable minimum period for the SHA-OFB and it would be bad if there was, since it would then no longer act like a PRF. Note that the generator in the sample code actually comes from two different SHA instances and thus its expected period is about 2**160. Anyway, adding a simple counter (incrementing by 1 on every SHA call) to the SHA input removes any lingering chance of a repeating sequence. I'll update the code to do that. It's much less ugly than stirring in Mersenne Twister output. I don't have Python 2.4 yet but will look at it when I get a chance. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-17 11:27 Message: Logged In: YES user_id=80475 It has some potential for being added. That being said, try to avoid a religious fervor. I would like to see considerably more discussion and evolution first. The auto-seeding with time *must* be dropped -- it is not harmonious with the goal of creating a secure random sequence. It is okay for the a subclass to deviate in this way. Also, I was soliciting references stronger than (I don't know of any results ... It is generally considered ... ). If we put this in, people are going to rely on it. The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious). Not having a demonstrable minimum period is also bad. Nothing in the discussion so far precludes the existence of a bad seed that has a period of only 1 or 2. See my suggestion on comp.lang.python for a means of mitigating this issue. With respect to the randint question, be sure to look at the current Py2.4 source for random.py. The API is expanded to include and an option method, getrandbits(). That in turn feeds the other integer methods without the int to float to int dance. Upon further consideration, I think the export control question is moot since we're using an existing library function and not really expressing new art. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-17 09:05 Message: Logged In: YES user_id=72053 There are no research results that I know of that can distinguish the output of sha1-ofb from random output in any practical way. To find such a distinguisher would be a significant result. It's a safe bet that cryptographers have searched for distinguishers, though I don't promise to have heard of every result that's been published. I'll ask on sci.crypt if anyone else has heard of such a thing, if you want. SHA1-HMAC is generally considered to be indistinguishable from a PRF (pseudorandom function, i.e. a function selected at random from the space of all functions from arbitrary strings to 160-bit outputs; this term is from concrete security theory). MD5 is deprecated these days and there's no point to using it instead of sha1 for this. I'm not sure what happens if randint is added to the API. 
If you subclass Random and don't provide a randint method, you inherit from the base class, which can call self.random() to get floats to make the ints from. US export restrictions basically don't exist any more. In principle, if you want to export something, you're supposed to send an email to an address at the commerce department, saying the name of the program and the url where it can be obtained and a few similar things. In practice, email to that address is ignored, they never check anything. I heard the address even stopped working for a while, though they may have fixed it since then. See http://www.bxa.doc.gov/Encryption/ for info. I've emailed notices to the address a few times and never heard back anything. Anyway, I don't think this should count as cryptography; it's simply using a cryptographic hash function as an PRNG to avoid the correlations in other PRNG's; scientific rationale for doing that is given in the Numerical Recipes book mentioned above. The code that I linked uses the standard API but I wonder if the floating point output is optimally uniform, i.e. the N/2**56 calculation may not be exactly the right thing for an ieee float64. Using the time of day is what the Random docs say to do by default. You're correct that any security application needs to supply a higher entropy seed. I would like it very much if the std lib included a module that read some random bytes from the OS for OS's that support it. That means reading /dev/urandom on recent Un*x-ish systems or Cygwin, or calling CryptGenRandom on Windows. Reading /dev/urandom is trivial, and there's a guy on the pycrypt list who wrote a Windows function to call CryptGenRandom and return the output through the Python API. I forwarded the function to Guido with the author's permission but nothing seemed to happen with it. However, this gets away from this sharandom subclass. I'd like to make a few more improvements to the module but after that I think it can be dropped into the lib. Let me know what you think. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 11:52 Message: Logged In: YES user_id=80475 One other thought: if cryptographic strength is a goal, then seeding absolutely should require a long seed (key) as an input and the time should *never* be used. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 11:49 Message: Logged In: YES user_id=80475 It would have been ideal if the Random API had been designed with an integer generator at the core and floating point as a computed value, but that it the way it has been for a long time and efforts to switch it over would likely result in either incompatibility with existing subclasses or introducing new complexity (making it even harder to subclass). I think the API should be left alone until Py3.0. The attached module would make a good recipe on ASPN where improvements and critiques can be posted. Do you have links to research showing that running SHA-1 in a cipher block feedback mode results in a cryptographically strong random number generator -- the result seems likely, but a research link would be great. Is there a link to research showing the RNG properties of the resulting generator (period, equidistribution, passing tests for randomness, etc)? Also, is there research showing the relative merits of this approach vs MD5, AES, or DES? 
If something like this gets added to the library, I prefer it to be added to random.py using the existing API. Adding yet another random module would likely do more harm than good. One other question (I don't know the answer to this): would including a cryptographically strong RNG trigger US export restrictions on the python distribution? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 From noreply at sourceforge.net Sat Jun 12 11:44:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 11:44:31 2004 Subject: [ python-Bugs-971747 ] Wrong indentation of example code in tutorial Message-ID: Bugs item #971747, was opened at 2004-06-12 15:44 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gr?goire Dooms (dooms) Assigned to: Nobody/Anonymous (nobody) Summary: Wrong indentation of example code in tutorial Initial Comment: section 9.9: on line 4286 (tut.tex python 2.3.3): >>> class Reverse: "Iterator for looping over a sequence backwards" def __init__(self, data): self.data = data self.index = len(data) def __iter__(self): return self def next(self): if self.index == 0: raise StopIteration self.index = self.index - 1 return self.data[self.index] >>> for char in Reverse('spam'): print char m a p s This is not conformant with the other exmaples (either script or interactive). It looks like this snippet was a script at first and was wrongly converted into an interactive session by prepending >>> on the first line. Prompt only problem in the next example: line 4319: >>> def reverse(data): for index in range(len(data)-1, -1, -1): yield data[index] >>> for char in reverse('golf'): print char Indentation is good but the sys.ps2 prompt is missing. Now an indentation only problem (section 10.7) line 4505: >>> import urllib2 >>> for line in urllib2.urlopen('http://tycho.usno.navy.mil/cgi-bin/timer.pl'): ... if 'EST' in line: # look for Eastern Standard Time ... print line The if and print lines need an additional indentation level. This last one is already fixed in the http://www.python.org/dev/devel/doc/tut/ version. 
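For comparison, the section 9.9 iterator example reads like this once the indentation is restored (shown here as a plain script rather than an interactive session):

    class Reverse:
        "Iterator for looping over a sequence backwards"
        def __init__(self, data):
            self.data = data
            self.index = len(data)
        def __iter__(self):
            return self
        def next(self):
            if self.index == 0:
                raise StopIteration
            self.index = self.index - 1
            return self.data[self.index]

    for char in Reverse('spam'):
        print char        # prints m, a, p, s on separate lines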
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 From noreply at sourceforge.net Sat Jun 12 12:08:25 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 12:08:31 2004 Subject: [ python-Bugs-971747 ] Wrong indentation of example code in tutorial Message-ID: Bugs item #971747, was opened at 2004-06-12 15:44 Message generated for change (Settings changed) made by dooms You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 >Category: Documentation >Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gr?goire Dooms (dooms) Assigned to: Nobody/Anonymous (nobody) Summary: Wrong indentation of example code in tutorial Initial Comment: section 9.9: on line 4286 (tut.tex python 2.3.3): >>> class Reverse: "Iterator for looping over a sequence backwards" def __init__(self, data): self.data = data self.index = len(data) def __iter__(self): return self def next(self): if self.index == 0: raise StopIteration self.index = self.index - 1 return self.data[self.index] >>> for char in Reverse('spam'): print char m a p s This is not conformant with the other exmaples (either script or interactive). It looks like this snippet was a script at first and was wrongly converted into an interactive session by prepending >>> on the first line. Prompt only problem in the next example: line 4319: >>> def reverse(data): for index in range(len(data)-1, -1, -1): yield data[index] >>> for char in reverse('golf'): print char Indentation is good but the sys.ps2 prompt is missing. Now an indentation only problem (section 10.7) line 4505: >>> import urllib2 >>> for line in urllib2.urlopen('http://tycho.usno.navy.mil/cgi-bin/timer.pl'): ... if 'EST' in line: # look for Eastern Standard Time ... print line The if and print lines need an additional indentation level. This last one is already fixed in the http://www.python.org/dev/devel/doc/tut/ version. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 From noreply at sourceforge.net Sat Jun 12 13:33:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 13:33:35 2004 Subject: [ python-Bugs-971747 ] Wrong indentation of example code in tutorial Message-ID: Bugs item #971747, was opened at 2004-06-12 10:44 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gr?goire Dooms (dooms) >Assigned to: Raymond Hettinger (rhettinger) Summary: Wrong indentation of example code in tutorial Initial Comment: section 9.9: on line 4286 (tut.tex python 2.3.3): >>> class Reverse: "Iterator for looping over a sequence backwards" def __init__(self, data): self.data = data self.index = len(data) def __iter__(self): return self def next(self): if self.index == 0: raise StopIteration self.index = self.index - 1 return self.data[self.index] >>> for char in Reverse('spam'): print char m a p s This is not conformant with the other exmaples (either script or interactive). It looks like this snippet was a script at first and was wrongly converted into an interactive session by prepending >>> on the first line. 
Prompt only problem in the next example: line 4319: >>> def reverse(data): for index in range(len(data)-1, -1, -1): yield data[index] >>> for char in reverse('golf'): print char Indentation is good but the sys.ps2 prompt is missing. Now an indentation only problem (section 10.7) line 4505: >>> import urllib2 >>> for line in urllib2.urlopen('http://tycho.usno.navy.mil/cgi-bin/timer.pl'): ... if 'EST' in line: # look for Eastern Standard Time ... print line The if and print lines need an additional indentation level. This last one is already fixed in the http://www.python.org/dev/devel/doc/tut/ version. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 From noreply at sourceforge.net Sat Jun 12 13:35:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 13:35:14 2004 Subject: [ python-Bugs-971200 ] asyncore sillies Message-ID: Bugs item #971200, was opened at 2004-06-12 01:19 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 7 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: asyncore sillies Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-13 03:35 Message: Logged In: YES user_id=29957 Is it worth making marshal issue a warning when someone tries to store an infinity, in that case? It could (ha!) be suprising that a piece of code that works from a .py fails silently from a .pyc. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-12 03:03 Message: Logged In: YES user_id=6656 actually, i think the summary is that the most recent change to asyncore is just broken. blaming the recent changes around LC_NUMERIC and their effect or non-effect on marshal was a read herring. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-12 02:47 Message: Logged In: YES user_id=31435 Well, what marshal (or pickle) do with an infinity (or NaN, or the sign of a signed zero) is a platform accident. Here with the released 2.3.4 on Windows (which doesn't have any LC_NUMERIC changes): Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1e309 1.#INF >>> import marshal >>> marshal.loads(marshal.dumps(1e309)) 1.0 >>> So simply can't use a literal 1e309 in compiled code. There's no portable way to spell infinity in Python. PEP 754 would introduce a reasonably portable way, were it accepted. Before then, 1e200*1e200 is probably the easiest reasonably portable way -- but since behavior in the presence of an infinity is accidental anyway, much better to avoid using infinity at all in the libraries. 
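A small demonstration of the platform dependence being described (results vary by platform and Python build, so the printed values are deliberately not shown):

    import marshal

    inf = 1e200 * 1e200      # overflows to +infinity on IEEE-754 doubles
    print repr(inf)          # e.g. 'inf' or '1.#INF', depending on platform
    print repr(marshal.loads(marshal.dumps(inf)))   # may come back as 1.0, 0.0, or inf

Because the round-trip result is an accident of the platform's float formatting and parsing, library code of that era is better off avoiding infinity literals altogether, as suggested above.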
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-12 01:58 Message: Logged In: YES user_id=6656 Oh my: >>> 1e309 Inf [40577 refs] >>> marshal.loads(marshal.dumps(1e309)) 0.0 [40577 refs] this must be the new "LC_NUMERIC agnostic" stuff, right? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Sat Jun 12 13:56:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 13:56:47 2004 Subject: [ python-Bugs-917055 ] add a stronger PRNG Message-ID: Bugs item #917055, was opened at 2004-03-15 21:46 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 Category: Python Library Group: Feature Request Status: Closed Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: add a stronger PRNG Initial Comment: The default Mersenne Twister algorithm in the Random module is very fast but makes no serious attempt to generate output that stands up to adversarial analysis. Besides cryptography applications, this can be a serious problem in areas like computer games. Sites like www.partypoker.com routinely run online tournaments with prize funds of 100K USD or more. There's big financial incentives to find ways of guessing your opponent's cards with better than random chance probability. See bug #901285 for some discussion of possible correlations in Mersenne Twister's output. Teukolsky et al discuss PRNG issues at some length in their book "Numerical Recipes". The original edition of Numerical Recipes had a full blown version of the FIPS Data Encryption Standard implemented horrendously in Fortran, as a way of making a PRNG with no easily discoverable output correlations. Later editions replaced the DES routine with a more efficient one based on similar principles. Python already has an SHA module that makes a dandy PRNG. I coded a sample implementation: http://www.nightsong.com/phr/python/sharandom.py I'd like to ask that the Python lib include something like this as an alternative to MT. It would be similar to the existing whrandom module in that it's an alternative subclass to the regular Random class. The existing Random module wouldn't have to be changed. I don't propose directly including the module above, since I think the Random API should also be extended to allow directly requesting pseudo-random integers from the generator subclass, rather than making them from floating-point output. That would allow making the above subclass work more cleanly. I'll make a separate post about this, but first will have to examine the Random module source code. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 12:56 Message: Logged In: YES user_id=80475 Having a separate patch is sufficient. Adding another bug report is wasteful. And, adding a entropy module is a feature request and not a bug. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-06-12 10:21 Message: Logged In: YES user_id=72053 Sorry about not responding earlier. 
I thought the current patch was ok and I needed to get around to writing a doc for it, which if it covers all the proofs involved, starts getting to be a bit scary--I've never written anything like that but have been wanting to. I do think the patch should be preserved and made available, if not in the python lib then maybe in Vaults of Parnassus, or ASPN Cookbook or someplace like that. It's not a "crypto project", it's just an attempt at writing a PRNG that meets modern criteria. However, it's considerably less important for the Python lib than the entropy module is, since applications can generally paste it from somewhere. I didn't realize it was possible to close a bug and leave its patches open. Should we open a separate bug for the entropy module? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:20 Message: Logged In: YES user_id=80475 Closing due to lack of interest/progress, etc. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:29 Message: Logged In: YES user_id=80475 Can we close this one (while leaving open the patch for an entropy module)? Essentially, it provides nothing that couldn't be contributed as a short recipe for those interested in such things. While an alternate RNG would be nice, turning this into a crypto project is probably not a great idea. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-04-14 00:33 Message: Logged In: YES user_id=72053 trevp, the jumpahead operation lets you stir in new entropy. Jeremy, I'll see if I can write some docs for it, and will attempt a concrete security proof. I don't think we should need to say no references were found for using sha1 as a prng. The randomness assumption is based on the Krawczyk-Bellare-Rogaway result that's cited somewhere down the page or in the clpy thread. I'll include a cite in the doc/rationale. I hope that the entropy module is accepted, assuming it works. The entropy module is quite a bit more important than the deterministic PRNG module, since the application can easily supply the DPRNG but can't always easily supply the entropy module. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-13 23:10 Message: Logged In: YES user_id=973611 I submitted a patch for an entropy module, as was discussed below. See patch #934711. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-11 22:35 Message: Logged In: YES user_id=31392 The current patch doesn't address any of my concerns about documentation or rationale. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-09 23:22 Message: Logged In: YES user_id=973611 We should probably clarify the requirements. If we just want to use SHA1 to produce an RNG suitable for Monte Carlo etc., then we could do something simpler and faster than what we're doing. In particular, there's no need for state update, we could just generate outputs by SHA1(seed + counter). This is mentioned in "Applied Cryptography", 17.14. 
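For readers who have not seen it, the "hash a seed plus a counter" construction Trevor refers to can be sketched in a few lines; the class and method names here are invented for illustration and this is not the patch under discussion:

    # Illustration of SHA1(seed + counter) as a simple generator
    # (cf. Applied Cryptography, 17.14).  Names are made up for the sketch.
    import sha

    class CounterSHA1:
        def __init__(self, seed):
            self.seed = str(seed)
            self.counter = 0

        def getbytes(self, n):
            out = ''
            while len(out) < n:
                out += sha.new(self.seed + str(self.counter)).digest()
                self.counter += 1
            return out[:n]

    g = CounterSHA1('some seed material')
    print g.getbytes(16).encode('hex')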
If we want it to "stand up to adversarial analysis" and be usable for cryptography, then I think we need to put a little more into it - in particular, the ability to mix new randomness into the generator state becomes important, and it becomes preferable to use a block cipher construction, not because the SHA1 construction is insecure, but so we can point to something like Yarrow. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-04-09 23:01 Message: Logged In: YES user_id=31392 Much earlier in the discussion Raymond wrote: "The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be tranformed into a sequence generator that is cryptographicly strong to both the left and right with the latter being the one that is not obvious)." I don't see any documentation of this sort in the current patch. I also think it would be helpful if the documentation made some mention of why this generator would be useful. In particular, I suspect some users may by confused by the mention of SHA and be lead to believe that this is CSPRNG, when it is not; perhaps a mention of Yarrow and other approaches for cryptographic applications would be enough to clarify. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-09 22:24 Message: Logged In: YES user_id=80475 Thanks for the detailed comments. 1) Will add the references we have but need to make it clear that this exact implementation is not studied and does not guarantee cryptographic security. 2) The API is clear, seed() overwrites and jumpahead() updates. Besides, the goal is to provide a good alternative random number generator. If someone needs real crypto, they should use that. Tossing in ad hoc pieces to "make it harder" is a sure sign of venturing too far from theoretical underpinnings. 3) Good observation. I don't think a change is necessary. The docs do not promise that asking for 160 gives the same as 96 and 64 back to back. The Mersenne Twister has similar behavior. 4) Let's not gum up the API because we have encryption and text API's on the brain. The module is about random numbers not byte strings. ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-09 19:52 Message: Logged In: YES user_id=973611 Hi Raymond, here's some lengthy though not critically important comments (i.e, I'm okay with the latest diff). 1) With regards to this documentation: "No references were found which document the properties of SHA-1 as a random number generator". I can't find anything that documents exactly what we're doing. This type of generator is similar to Bruce Schneier's Yarrow and its offspring (Fortuna and Tiny). However, those use block-ciphers in counter mode, instead of SHA-1. According to the recent Tiny paper: "Cryptographic hash functions can also be a good foundation for a PRNG. Many constructs have used MD5 or SHA1 in this capacity, but the constructions are often ad-hoc. When using a hash function, we would recommend HMAC in CTR mode (i.e., one MACs counter for each successive output block). Ultimately, we prefer the use of block ciphers, as they are generally better-studied constructs." http://www.acsac.org/2003/papers/79.pdf Using HMAC seems like overkill to me, and would slow things down. 
However, if there's any chance Python will add block ciphers in the future, it might be worth waiting for, so we could implement one of the well-documented block cipher PRNGs. 2) Most cryptographic PRNGs allow for mixing new entropy into the generator state. The idea is you harvest entropy in the background, and once you've got a good chunk (say 128+ bits) you add it in. This makes cryptanalysis of the output harder, and allows you to recover even if the generator state is compromised. We could change the seed() method so it updates the state instead of overwriting it: def __init__(self): self.cnt = 0 self.s0 = '\0' * 20 self.gauss_next = None def seed(self, a=None): if a is None: # Initialize from current time import time a = time.time() b = sha.new(repr(a)).digest() self.s0 = sha.new(self.s0 + b).digest() 'b' may not be necessary, I'm not sure, though it's similar to how some other PRNGs handle seed inputs. If we were using a block cipher PRNG, it would be more obvious how to do this. jumpahead() could also be used instead of seed(). 3) The current generators (not just SHA1Random) won't always return the same sequence of bits from the same state. For example, if I call SHA1Random.getrandbits() asking for 160 bits they'll come from the same block, but if I ask for 96 and 64 bits, they'll come from different blocks. I suggest buffering the output, so getting 160 bits or 96+64 gets the same bits. Changing getrandbits() to getrandbytes () would avoid the need for bit-level buffering. 4) I still think a better interface would only require new generators to return a byte string. That would be easier for SHA1Random, and easier for other generators based on cross- platform entropy sources. I.e., in place of random() and getrandbits(), SHA1Random would only have to implement: def getrandbytes(self, n): while len(buffer) < n: self.cnt += 1 self.s0 = sha.new(self.s0 + hex (self.cnt)).digest() self.buffer += sha.new(self.s0).digest() retVal = self.buffer[:n] self.buffer = self.buffer[n:] return retVal The superclass would call this to get the required number of bytes, and convert them as needed (for converting them to numbers it could use the 'long(s, 256)' patch I submitted. Besides making it easier to add new generators, this would provide a useful function to users of these generators. getrandbits() is less useful, and it's harder to go from a long- integer to a byte-string than vice versa, because you may have to zero-pad if the long-integer is small. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-04-08 22:41 Message: Logged In: YES user_id=80475 Attaching a revised patch. If there are no objections, I will add it to the library (after factoring the unittests and adding docs). ---------------------------------------------------------------------- Comment By: Trevor Perrin (trevp) Date: 2004-04-07 01:56 Message: Logged In: YES user_id=973611 Comments on shardh.py - SHA1Random2 seems more complex than it needs to be. Comparing with figure 19 of [1], I believe that s1 does not need to be kept as state, so you could replace this: self.s1 = s1 = sha.new(s0 + self.s1).hexdigest() with this: s1 = sha.new(s0).hexdigest() If there's concern about the low hamming-distance between counter values, you could simply hash the counter before feeding it in (or use M-T instead of the counter). Instead of updating s0 every block, you could update it every 10th block or so. 
This would slightly increase the number of old values an attacker could recover, upon compromising the generator state, but it could be a substantial speedup. SHA1Random1 depends on M-T for some of its security properties. In particular, if I discover the generator state, can I run it backwards and determine previous values? I don't know, it depends on M-T. Unless we know more about the properties of M-T, I think it would be preferable to use M- T only in place of the counter in the SHA1Random2 construction (if at all), *NOT* as the sole repository of PRNG state. [1] http://www.cypherpunks.to/~peter/06_random.pdf ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-31 21:48 Message: Logged In: YES user_id=72053 FYI, this was just announced on the python-crypto list. It's a Python wrapper for EGADS, a cross platform entropy-gathering RNG. I haven't looked at the code for it and have no opinion about it. http://wiki.osafoundation.org/twiki/bin/view/Chandler/PyEgads ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-25 19:36 Message: Logged In: YES user_id=72053 1. OK, though the docs should mention this. 2. One-way just means collision resistant, and we're trying to make something without a distinguisher, which is a stronger criterion. I'm not saying there's any problem with a single hash; I just don't see an obvious proof that there's not a problem. Also it makes me cringe slightly to keep the seed around as plaintext even temporarily. The PC that I'm using (midrange Athlon) can do about a million sha1 hashes per second, so an extra hash per reseed "multiple" times per second shouldn't be noticible, for reasonable values of "multiple". 3. Since most of the real computation of this function is in the sha1 compression, the timing indicates that evaluation is dominated by interpreter overhead rather than by hashing. I presume you're using CPython. The results may be different in Jython or with Psyco and different again under PyPy once that becomes real. I think we should take the view that we're designing a mathematical function that exists independently of any specific implementation, then figure out what characteristics it should have and implement those, rather than tailoring it to the peculiarities of CPython. If people are really going to be using this function in 2010, CPython will hopefully be dead and gone (replaced by PyPy) by then, so that's all the more reason to not worry about small CPython-specific effects since the function will outlast CPython. Maybe also sometime between now and then, these libraries can be compiled with psyco. 4. OK 5. OK. Would be good to also change %s for cnt in setstate to %x. 6. Synchronization can be avoided by hashing different fixed strings into s0 and s1 at each rehash (I did that in my version). I think it's worth doing that just to kick the hash function away from standard sha. I actually don't see much need for the counter in either hash, but you were concerned about hitting possible short cycles in sha. 7. OK. WHrandom is already non-threadsafe, so there's precedent. I do have to wonder if the 160 bit arithmetic is slowing things down. If we don't care about non-IEEE doubles, we're ok with 53 bits. Hmm, I also wonder whether the 160 bit int to float conversion is precisely specified even for IEEE and isn't an artifact of Python's long int implementation. But I guess it doesn't matter, since we're never hashing those floats. 
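On the 53-bit point, the conversion Paul has in mind can be sketched as follows; the digest used here is an arbitrary stand-in, and the shift keeps only the top 53 bits so the resulting value is exactly representable as an IEEE double:

    # Sketch: map a 160-bit digest to a float in [0.0, 1.0) using the top
    # 53 bits, which is all an IEEE double can hold exactly.
    import sha

    digest = long(sha.new('arbitrary state').hexdigest(), 16)   # 160-bit long
    x = (digest >> (160 - 53)) * 2.0 ** -53
    print 0.0 <= x < 1.0, x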
Re bugs til 2010: oh let's have more confidence than that :). I think if we're careful and get the details correct before deployment, we shouldn't have any problems. This is just one screenful of code or so, not complex by most reasonable standards. However, we might want to post the algorithm on sci.crypt for comments, since there are some knowledgeable people there.
----------------------------------------------------------------------
Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 04:08 Message: Logged In: YES user_id=80475
Took another look at #5 and will change str() to hex().
----------------------------------------------------------------------
Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-25 03:25 Message: Logged In: YES user_id=80475
1. We don't care about non IEEE.
2. One-way is one-way, so double hashing is unnecessary. Also, I've fielded bug reports from existing user apps that re-seed very frequently (multiple times per second).
3. The implementations reflect the results of timing experiments which showed that the fastest intermediate representation was hex.
4. ['0x0'] is necessary to handle the case where n==0. int('', 16) raises a ValueError while int('0x0', 16) does not.
5. random() and getrandbits() do not have to go through the same intermediate steps (it's okay for one to use hex and the other to use str) -- speed and space issues dominate. 0x comes up enough in Python, there is little use in tossing it away for an obscure, hypothetical micro-controller implementation.
6. Leaving cnt out of the s1 computation guarantees that it will never track the updates of s0 -- any synchronization would be a disaster. Adding a count or some variant smacks of desperation rather than reliance on proven hash properties.
7. The function is already 100 times slower than MT. Adding locks will make the situation worse. It is better to simply document it as being non-threadsafe. Look back at the mt/sha version. Its code is much cleaner, faster, and threadsafe. It goes a long way towards meeting your request and serving as an alternate generator to validate simulation results. If we use the sha/sha version, I'm certain that we will be fielding bug reports on this through 2010. It is also sufficiently complex that it will spawn lengthy, wasteful discussions and it will create a mine-field for future maintainers.
----------------------------------------------------------------------
Comment By: paul rubin (phr) Date: 2004-03-25 00:15 Message: Logged In: YES user_id=72053
I'm not sure why the larger state space of mt/sha1 is supposed to be an advantage, but the other points are reasonable. I like the new sha1/sha1 implementation except for a few minor points (below). I made the old version hopefully threadsafe by adding a threading.Lock() object to the instance and locking it on update. I generally like your version better so maybe the lock can be added. Of course that slows it down even further, but in the context of an interpreted Python program I feel that the generator is still fast enough compared to the other stuff the program is likely to be doing. Re the new attachment, some minor issues:
1) The number 2.0**-160 is < 10**-50. This is a valid IEEE double but on some non-IEEE machines it may be a floating underflow or even equal to zero. I don't know if this matters.
2) Paranoia led me to hash the seed twice in the seed operation in my version, to defeat unlikely message-extension attacks against the hash function.
I figured reseeding is infrequent enough that an extra hash operation doesn't matter. 3) Storing s1 as a string of 40 hex digits in SHARandom2 means that s1+s2 is 60 characters, which means hashing it will need two sha1 compression operations, slowing it down some. 4) the intiialization of ciphertxt to ["0x0"] instead of [] doesn't seem to do anything useful. int('123abc', 16) is valid without the 0x prefix. 5) random() uses hex(cnt) while getrandbits uses str(cnt) (converting to decimal instead of hex). I think it's better to use hex and remove the 0x prefix from the output, which is cleanest, and simpler to implement on some targets (embedded microcontroller). The same goes for the %s conversion in jumpahead (my version also uses %s there). 6) It may be worthwhile to include cnt in both the s0 and s1 hash updates. That guarantees the s1 hash never gets the same input twice. 7) The variable "s1" in getrandbits (instead of self.s1) is set but never used. Note in my version of sharandom.py, I didn't put a thread lock around the tuple assignment in setstate(). I'm not sure if that's safe or not. But it looks to me like random.py in the CVS does the same thing, so maybe both are unsafe. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-24 06:31 Message: Logged In: YES user_id=80475 I attached a new version of the mt/sha1 combination. Here are the relative merits as compared sha1/sha1 approach: * simpiler to implement/maintain since state tracking is builtin * larger state space (2**19937 vs 2**160) * faster * threadsafe Favoring sha1/sha1: * uses only one primitive * easier to replace in situations where MT is not available ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-21 19:56 Message: Logged In: YES user_id=72053 Two small corrections to below: 1) "in favor of an entropy" is an editing error--the intended meaning should be obvious. 2) I meant Bryan Mongeau, not Eric Mongeau. Bryan's lib is at . Sorry for any confusion. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-21 19:49 Message: Logged In: YES user_id=72053 I'm very much in favor of an entropy and Bram's suggested interface of entropy(x) supplying x bytes of randomness is fine. Perhaps it should really live in a cryptography API rather than in the Random API, but either way is ok. Mark Moraes wrote a Windows module that gets random numbers from the Windows CAPI; I put up a copy at http://www.nightsong.com/phr/python/winrand.module For Linux, Cygwin, and *BSD (including MacOS 10, I think), just read /dev/urandom for random bytes. However, various other systems (I think including Solaris) don't include anything like this. OpenSSL has an entropy gathering daemon that might be of some use in that situation. There's also the Yarrow generator (http://www.schneier.com/yarrow.html) and Eric Mongeau(?) wrote a pure-Python generator a while back that tried to gather entropy from thread racing, similar to Java's default SecureRandom class (I consider that method to be a bit bogus in both Python and Java). I think, though, simply supporting /dev/*random and the Windows CAPI is a pretty good start, even if other OS's aren't supported. Providing that function in the Python lib will make quite a few people happy. A single module integrating both methods would be great. 
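A sketch of the kind of entropy(n) wrapper being discussed, covering only the /dev/urandom case mentioned above; the function name follows Bram's suggestion, and the Windows (CryptGenRandom) branch is left out because it needs the wrapper referred to elsewhere in this thread:

    # Placeholder sketch of entropy(n): OS-supplied random bytes on systems
    # that expose /dev/urandom; other platforms would need their own branch.
    import os

    def entropy(nbytes):
        if not os.path.exists('/dev/urandom'):
            raise NotImplementedError('no entropy source wrapped on this platform')
        f = open('/dev/urandom', 'rb')
        try:
            return f.read(nbytes)
        finally:
            f.close()

    print entropy(16).encode('hex')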
I don't have any Windows dev tools so can't test any wrappers for Mark Moraes's function but maybe one of you guys can do it. I'm not too keen on the md5random.py patch for reasons discussed in the c.l.py thread. It depends too deeply on the precise characteristics of both md5 and Mersenne Twister. I think for a cryptography-based generator, it's better to stick to one cryptography-based primitive, and to use sha instead of md5. That also helps portability since it means other environments (maybe including Jython) can reproduce the PRNG stream without having to re-implement MT, as long as they have SHA (which is a US federal standard). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 14:50 Message: Logged In: YES user_id=80475 Bram, if you have a patch, I would be happy to look at it. Please make it as platform independent as possible (its okay to have it conditionally compile differently so long as the API stays the same). Submit it as a separate patch and assign to me -- it doesn't have to be orginal, you can google around to determine prior art. ---------------------------------------------------------------------- Comment By: Bram Cohen (bram_cohen) Date: 2004-03-21 14:34 Message: Logged In: YES user_id=52561 The lack of a 'real' entropy source is the gap which can't be fixed with an application-level bit of code. I think there are simple hooks for this on all systems, such as /dev/random on linux, but they aren't cross platform. A unified API which always calls the native entropy hook would be a very good thing. An example of a reasonable API would be to have a module named entropy, with a single function entropy(x) which returns a random string of length x. This is a problem which almost anyone writing a security-related application runs into, and lots of time is wasted writing dubious hacks to harvest entropy when a single simple library could magically solve it the right way. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-21 12:33 Message: Logged In: YES user_id=80475 Attaching my alternative. If it fulfills your use case, let me know and I'll apply it. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 03:51 Message: Logged In: YES user_id=72053 Updated version of sharandom.py is at same url. It folds a counter into the hash and also includes a getrandbits method. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-03-18 01:27 Message: Logged In: YES user_id=72053 I don't mind dropping the time() auto-seed but I think that means eliminating any auto-seed, and requiring a user-supplied seed. There is no demonstrable minimum period for the SHA-OFB and it would be bad if there was, since it would then no longer act like a PRF. Note that the generator in the sample code actually comes from two different SHA instances and thus its expected period is about 2**160. Anyway, adding a simple counter (incrementing by 1 on every SHA call) to the SHA input removes any lingering chance of a repeating sequence. I'll update the code to do that. It's much less ugly than stirring in Mersenne Twister output. I don't have Python 2.4 yet but will look at it when I get a chance. 
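The output-feedback arrangement Paul describes (state chained through the hash, a counter folded into every call, output drawn from a second hash so the internal state is never exposed) can be sketched roughly as below; this is an illustration of the idea, not the sharandom.py code itself:

    # Rough sketch of SHA1 output feedback with a counter stirred in, so the
    # sequence cannot settle into a short cycle.
    import sha

    class SHA1OFB:
        def __init__(self, seed):
            self.state = sha.new('state:' + str(seed)).digest()
            self.cnt = 0

        def next_block(self):
            self.cnt += 1
            self.state = sha.new(self.state + str(self.cnt)).digest()
            return sha.new('output:' + self.state).digest()

    prng = SHA1OFB('key material')
    print prng.next_block().encode('hex')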
----------------------------------------------------------------------
Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-17 06:27 Message: Logged In: YES user_id=80475
It has some potential for being added. That being said, try to avoid a religious fervor. I would like to see considerably more discussion and evolution first.
The auto-seeding with time *must* be dropped -- it is not harmonious with the goal of creating a secure random sequence. It is okay for a subclass to deviate in this way.
Also, I was soliciting references stronger than (I don't know of any results ... It is generally considered ... ). If we put this in, people are going to rely on it. The docs *must* include references indicating the strengths and weaknesses of the approach. It should also concisely say why it works (a summary proof that makes it clear how a one-way digest function can be transformed into a sequence generator that is cryptographically strong to both the left and right with the latter being the one that is not obvious).
Not having a demonstrable minimum period is also bad. Nothing in the discussion so far precludes the existence of a bad seed that has a period of only 1 or 2. See my suggestion on comp.lang.python for a means of mitigating this issue.
With respect to the randint question, be sure to look at the current Py2.4 source for random.py. The API is expanded to include an optional method, getrandbits(). That in turn feeds the other integer methods without the int to float to int dance.
Upon further consideration, I think the export control question is moot since we're using an existing library function and not really expressing new art.
----------------------------------------------------------------------
Comment By: paul rubin (phr) Date: 2004-03-17 04:05 Message: Logged In: YES user_id=72053
There are no research results that I know of that can distinguish the output of sha1-ofb from random output in any practical way. To find such a distinguisher would be a significant result. It's a safe bet that cryptographers have searched for distinguishers, though I don't promise to have heard of every result that's been published. I'll ask on sci.crypt if anyone else has heard of such a thing, if you want. SHA1-HMAC is generally considered to be indistinguishable from a PRF (pseudorandom function, i.e. a function selected at random from the space of all functions from arbitrary strings to 160-bit outputs; this term is from concrete security theory). MD5 is deprecated these days and there's no point to using it instead of sha1 for this.
I'm not sure what happens if randint is added to the API. If you subclass Random and don't provide a randint method, you inherit from the base class, which can call self.random() to get floats to make the ints from.
US export restrictions basically don't exist any more. In principle, if you want to export something, you're supposed to send an email to an address at the commerce department, saying the name of the program and the url where it can be obtained and a few similar things. In practice, email to that address is ignored, they never check anything. I heard the address even stopped working for a while, though they may have fixed it since then. See http://www.bxa.doc.gov/Encryption/ for info. I've emailed notices to the address a few times and never heard back anything.
Anyway, I don't think this should count as cryptography; it's simply using a cryptographic hash function as an PRNG to avoid the correlations in other PRNG's; scientific rationale for doing that is given in the Numerical Recipes book mentioned above. The code that I linked uses the standard API but I wonder if the floating point output is optimally uniform, i.e. the N/2**56 calculation may not be exactly the right thing for an ieee float64. Using the time of day is what the Random docs say to do by default. You're correct that any security application needs to supply a higher entropy seed. I would like it very much if the std lib included a module that read some random bytes from the OS for OS's that support it. That means reading /dev/urandom on recent Un*x-ish systems or Cygwin, or calling CryptGenRandom on Windows. Reading /dev/urandom is trivial, and there's a guy on the pycrypt list who wrote a Windows function to call CryptGenRandom and return the output through the Python API. I forwarded the function to Guido with the author's permission but nothing seemed to happen with it. However, this gets away from this sharandom subclass. I'd like to make a few more improvements to the module but after that I think it can be dropped into the lib. Let me know what you think. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 06:52 Message: Logged In: YES user_id=80475 One other thought: if cryptographic strength is a goal, then seeding absolutely should require a long seed (key) as an input and the time should *never* be used. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-03-16 06:49 Message: Logged In: YES user_id=80475 It would have been ideal if the Random API had been designed with an integer generator at the core and floating point as a computed value, but that it the way it has been for a long time and efforts to switch it over would likely result in either incompatibility with existing subclasses or introducing new complexity (making it even harder to subclass). I think the API should be left alone until Py3.0. The attached module would make a good recipe on ASPN where improvements and critiques can be posted. Do you have links to research showing that running SHA-1 in a cipher block feedback mode results in a cryptographically strong random number generator -- the result seems likely, but a research link would be great. Is there a link to research showing the RNG properties of the resulting generator (period, equidistribution, passing tests for randomness, etc)? Also, is there research showing the relative merits of this approach vs MD5, AES, or DES? If something like this gets added to the library, I prefer it to be added to random.py using the existing API. Adding yet another random module would likely do more harm than good. One other question (I don't know the answer to this): would including a cryptographically strong RNG trigger US export restrictions on the python distribution? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=917055&group_id=5470 From noreply at sourceforge.net Sat Jun 12 15:43:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 15:44:04 2004 Subject: [ python-Bugs-971747 ] Wrong indentation of example code in tutorial Message-ID: Bugs item #971747, was opened at 2004-06-12 10:44 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Gr?goire Dooms (dooms) Assigned to: Raymond Hettinger (rhettinger) Summary: Wrong indentation of example code in tutorial Initial Comment: section 9.9: on line 4286 (tut.tex python 2.3.3): >>> class Reverse: "Iterator for looping over a sequence backwards" def __init__(self, data): self.data = data self.index = len(data) def __iter__(self): return self def next(self): if self.index == 0: raise StopIteration self.index = self.index - 1 return self.data[self.index] >>> for char in Reverse('spam'): print char m a p s This is not conformant with the other exmaples (either script or interactive). It looks like this snippet was a script at first and was wrongly converted into an interactive session by prepending >>> on the first line. Prompt only problem in the next example: line 4319: >>> def reverse(data): for index in range(len(data)-1, -1, -1): yield data[index] >>> for char in reverse('golf'): print char Indentation is good but the sys.ps2 prompt is missing. Now an indentation only problem (section 10.7) line 4505: >>> import urllib2 >>> for line in urllib2.urlopen('http://tycho.usno.navy.mil/cgi-bin/timer.pl'): ... if 'EST' in line: # look for Eastern Standard Time ... print line The if and print lines need an additional indentation level. This last one is already fixed in the http://www.python.org/dev/devel/doc/tut/ version. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 14:43 Message: Logged In: YES user_id=80475 Backport the fix for the urllib example. The Reverse('spam') example was already fixed and backported. The decision to omit the '...' PS2 prompt was intentional. Using the IDLE style ' ' PS2 prompt makes these examples easier to cut and paste so the reader can try them out interactively. Also, I find them to be more readable. Practicality trumps smaller concerns about putting every example in exactly the same format. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 From noreply at sourceforge.net Sat Jun 12 17:54:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 17:54:51 2004 Subject: [ python-Bugs-971872 ] An apparent bug in help Message-ID: Bugs item #971872, was opened at 2004-06-12 17:54 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971872&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Colin J. 
Williams (cjwhrh) Assigned to: Nobody/Anonymous (nobody) Summary: An apparent bug in help Initial Comment: No response on the newsgroup. ------------------------------------- -------- Original Message -------- Subject: A possible bug with help Date: Sun, 30 May 2004 09:46:16 -0400 From: Colin J. Williams Organization: Bell Sympatico Newsgroups: comp.lang.python The interactive function help(object) generally delivers the doc string of the object and, in the case of a class, with the doc strings of the methods, properties and attributes. If the object is a property, then help doesn't return the doc defined for that property. For example: >>> help(m.M.A) Help on property: In this case m is a module, m.M is a class and m.M.A is a property of M, defined as: A= property(fget= toArray, doc= 'Deliver the data as an array.') Colin W. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971872&group_id=5470 From noreply at sourceforge.net Sat Jun 12 20:48:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 20:48:19 2004 Subject: [ python-Bugs-956408 ] Simplifiy coding in cmd.py Message-ID: Bugs item #956408, was opened at 2004-05-18 21:54 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956408&group_id=5470 Category: None Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Raymond Hettinger (rhettinger) >Assigned to: Raymond Hettinger (rhettinger) Summary: Simplifiy coding in cmd.py Initial Comment: In the cmd.py 1.35 checkin on 2/6/2003, there are many lines like: self.stdout.write("%s\n"%str(header)) I believe the str() call in unnecessary. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-12 20:48 Message: Logged In: YES user_id=33168 The str call is necessary if header is a tuple. AFAIK, if header is not a tuple, using str() is redundant. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 01:01 Message: Logged In: YES user_id=80475 Neal, is this a simplification you would like to make? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956408&group_id=5470 From noreply at sourceforge.net Sat Jun 12 22:40:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 22:40:07 2004 Subject: [ python-Bugs-971200 ] asyncore sillies Message-ID: Bugs item #971200, was opened at 2004-06-11 11:19 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 7 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: asyncore sillies Initial Comment: current cvs head, mac os x 10.2, debug build of python. test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. 
----------------------------------------------------------------------
>Comment By: Tim Peters (tim_one) Date: 2004-06-12 22:40 Message: Logged In: YES user_id=31435
marshal doesn't know whether the input is an infinity; that's why the result is a platform-dependent accident; "infinity" isn't a C89 concept, and Python inherits its ignorance from C.
----------------------------------------------------------------------
Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 13:35 Message: Logged In: YES user_id=29957
Is it worth making marshal issue a warning when someone tries to store an infinity, in that case? It could (ha!) be surprising that a piece of code that works from a .py fails silently from a .pyc.
----------------------------------------------------------------------
Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:03 Message: Logged In: YES user_id=6656
actually, i think the summary is that the most recent change to asyncore is just broken. blaming the recent changes around LC_NUMERIC and their effect or non-effect on marshal was a red herring.
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one) Date: 2004-06-11 12:47 Message: Logged In: YES user_id=31435
Well, what marshal (or pickle) do with an infinity (or NaN, or the sign of a signed zero) is a platform accident. Here with the released 2.3.4 on Windows (which doesn't have any LC_NUMERIC changes):
Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1e309
1.#INF
>>> import marshal
>>> marshal.loads(marshal.dumps(1e309))
1.0
>>>
So you simply can't use a literal 1e309 in compiled code. There's no portable way to spell infinity in Python. PEP 754 would introduce a reasonably portable way, were it accepted. Before then, 1e200*1e200 is probably the easiest reasonably portable way -- but since behavior in the presence of an infinity is accidental anyway, much better to avoid using infinity at all in the libraries.
----------------------------------------------------------------------
Comment By: Michael Hudson (mwh) Date: 2004-06-11 11:58 Message: Logged In: YES user_id=6656
Oh my:
>>> 1e309
Inf
[40577 refs]
>>> marshal.loads(marshal.dumps(1e309))
0.0
[40577 refs]
this must be the new "LC_NUMERIC agnostic" stuff, right?
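Pending PEP 754, the test that a warning of the sort Anthony suggests would have to perform can be approximated in pure Python using the 1e200*1e200 trick Tim mentions; the helper below is hypothetical and assumes the multiplication overflows to an infinity rather than raising:

    # Hypothetical guard before marshalling a float: spot an IEEE infinity
    # without PEP 754.
    _INF = 1e200 * 1e200

    def is_infinite(x):
        return x == _INF or x == -_INF

    print is_infinite(1e200 * 1e200), is_infinite(1.0)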
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Sat Jun 12 22:58:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 22:59:19 2004 Subject: [ python-Bugs-971395 ] thread.name crashes interpreter Message-ID: Bugs item #971395, was opened at 2004-06-11 16:15 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 Category: Threads Group: Python 2.3 Status: Open >Resolution: Works For Me Priority: 5 Submitted By: Jonathan Ellis (ellisj) Assigned to: Nobody/Anonymous (nobody) Summary: thread.name crashes interpreter Initial Comment: I changed the __repr__ method of the cookbook Future class -- http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/84317 -- as follows: def __repr__(self): return '<%s at %s:%s>' % (self.__T.name, hex(id(self)), self.__status) this caused obscure crashes with the uninformative message Fatal Python error: PyEval_SaveThread: NULL tstate changing to __T.getName() fixed the crashing. It seems to me that thread.name should be __name or _name to help novices not shoot themselves in the foot. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-12 22:58 Message: Logged In: YES user_id=31435 I've been unable to reproduce any problems with Future modified to use your __repr__, under Python 2.3.4 on Windows. Of course any attempt to invoke that method dies with AttributeError: 'Thread' object has no attribute 'name' but that's not a bug. Python won't change to add other ways of getting the thread name -- the Python philosophy is to supply one clear way to do a thing, and is opposed to Tower of Babel approaches that try to cater to everything someone might type off the top of their head. There's no end to that (why not .Name and .thread_name and ... too?). Would you kindly add specific code to this report, sufficient to reproduce the fatal Python error you mentioned? If that occurs, it's a bug, but I've been unable to provoke it. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 From noreply at sourceforge.net Sat Jun 12 23:56:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 23:56:40 2004 Subject: [ python-Bugs-971962 ] Generator mangles returned lists. Message-ID: Bugs item #971962, was opened at 2004-06-13 03:56 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ed Watkeys (edw) Assigned to: Nobody/Anonymous (nobody) Summary: Generator mangles returned lists. Initial Comment: I have run into what seems like a bug. Check this out... 
def gen(): l = [] l.append('eggs') l = l[-1:] yield l l.append('ham') l = l[-1:] yield l >>> [i for i in gen()] [['eggs', 'ham'], ['ham']] >>> g = gen(); [g.next(), g.next()] [['eggs', 'ham'], ['ham']] >>> g = gen(); g.next(); g.next() ['eggs'] ['ham'] >>> g = gen(); i = g.next(); j = g.next(); [i,j] [['eggs', 'ham'], ['ham']] >>> g = gen(); [g.next()[:], g.next()[:]] [['eggs'], ['ham']] Anyone have any insight into this? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470 From noreply at sourceforge.net Sat Jun 12 23:57:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 12 23:57:08 2004 Subject: [ python-Bugs-971395 ] thread.name crashes interpreter Message-ID: Bugs item #971395, was opened at 2004-06-11 20:15 Message generated for change (Comment added) made by ellisj You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 Category: Threads Group: Python 2.3 Status: Open Resolution: Works For Me Priority: 5 Submitted By: Jonathan Ellis (ellisj) Assigned to: Nobody/Anonymous (nobody) Summary: thread.name crashes interpreter Initial Comment: I changed the __repr__ method of the cookbook Future class -- http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/84317 -- as follows: def __repr__(self): return '<%s at %s:%s>' % (self.__T.name, hex(id(self)), self.__status) this caused obscure crashes with the uninformative message Fatal Python error: PyEval_SaveThread: NULL tstate changing to __T.getName() fixed the crashing. It seems to me that thread.name should be __name or _name to help novices not shoot themselves in the foot. ---------------------------------------------------------------------- >Comment By: Jonathan Ellis (ellisj) Date: 2004-06-13 03:57 Message: Logged In: YES user_id=657828 sorry, if I had been able to fine a simple crash case I would have given it. :-| ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-13 02:58 Message: Logged In: YES user_id=31435 I've been unable to reproduce any problems with Future modified to use your __repr__, under Python 2.3.4 on Windows. Of course any attempt to invoke that method dies with AttributeError: 'Thread' object has no attribute 'name' but that's not a bug. Python won't change to add other ways of getting the thread name -- the Python philosophy is to supply one clear way to do a thing, and is opposed to Tower of Babel approaches that try to cater to everything someone might type off the top of their head. There's no end to that (why not .Name and .thread_name and ... too?). Would you kindly add specific code to this report, sufficient to reproduce the fatal Python error you mentioned? If that occurs, it's a bug, but I've been unable to provoke it. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 From noreply at sourceforge.net Sun Jun 13 00:00:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 00:00:59 2004 Subject: [ python-Bugs-971965 ] urllib2 raises exception with non-200 success codes. 
Message-ID: Bugs item #971965, was opened at 2004-06-13 04:00 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971965&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ed Watkeys (edw) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 raises exception with non-200 success codes. Initial Comment: If a server returns a code other than 200, specifically 201 (Created), urllib2.urlopen will raise an HTTPError exception. I ran into this while implementing an Atom API client, which solicits 201 responses from the server when submitting a blog post. File "/usr/local/lib/python2.3/urllib2.py", line 129, in urlopen return _opener.open(url, data) File "/usr/local/lib/python2.3/urllib2.py", line 326, in open '_open', req) File "/usr/local/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/local/lib/python2.3/urllib2.py", line 901, in http_open return self.do_open(httplib.HTTP, req) File "/usr/local/lib/python2.3/urllib2.py", line 895, in do_open return self.parent.error('http', req, fp, code, msg, hdrs) File "/usr/local/lib/python2.3/urllib2.py", line 352, in error return self._call_chain(*args) File "/usr/local/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/local/lib/python2.3/urllib2.py", line 412, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 201: Created ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971965&group_id=5470 From noreply at sourceforge.net Sun Jun 13 00:04:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 00:04:21 2004 Subject: [ python-Bugs-971962 ] Generator mangles returned lists. Message-ID: Bugs item #971962, was opened at 2004-06-13 03:56 Message generated for change (Settings changed) made by edw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470 Category: Python Interpreter Core >Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Ed Watkeys (edw) Assigned to: Nobody/Anonymous (nobody) Summary: Generator mangles returned lists. Initial Comment: I have run into what seems like a bug. Check this out... def gen(): l = [] l.append('eggs') l = l[-1:] yield l l.append('ham') l = l[-1:] yield l >>> [i for i in gen()] [['eggs', 'ham'], ['ham']] >>> g = gen(); [g.next(), g.next()] [['eggs', 'ham'], ['ham']] >>> g = gen(); g.next(); g.next() ['eggs'] ['ham'] >>> g = gen(); i = g.next(); j = g.next(); [i,j] [['eggs', 'ham'], ['ham']] >>> g = gen(); [g.next()[:], g.next()[:]] [['eggs'], ['ham']] Anyone have any insight into this? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470 From noreply at sourceforge.net Sun Jun 13 00:04:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 00:05:03 2004 Subject: [ python-Bugs-971965 ] urllib2 raises exception with non-200 success codes. 
Message-ID: Bugs item #971965, was opened at 2004-06-13 04:00 Message generated for change (Settings changed) made by edw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971965&group_id=5470 Category: Python Library >Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Ed Watkeys (edw) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 raises exception with non-200 success codes. Initial Comment: If a server returns a code other than 200, specifically 201 (Created), urllib2.urlopen will raise an HTTPError exception. I ran into this while implementing an Atom API client, which solicits 201 responses from the server when submitting a blog post. File "/usr/local/lib/python2.3/urllib2.py", line 129, in urlopen return _opener.open(url, data) File "/usr/local/lib/python2.3/urllib2.py", line 326, in open '_open', req) File "/usr/local/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/local/lib/python2.3/urllib2.py", line 901, in http_open return self.do_open(httplib.HTTP, req) File "/usr/local/lib/python2.3/urllib2.py", line 895, in do_open return self.parent.error('http', req, fp, code, msg, hdrs) File "/usr/local/lib/python2.3/urllib2.py", line 352, in error return self._call_chain(*args) File "/usr/local/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/local/lib/python2.3/urllib2.py", line 412, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 201: Created ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971965&group_id=5470 From noreply at sourceforge.net Sun Jun 13 00:20:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 00:20:18 2004 Subject: [ python-Bugs-971962 ] Generator mangles returned lists. Message-ID: Bugs item #971962, was opened at 2004-06-12 22:56 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Ed Watkeys (edw) Assigned to: Nobody/Anonymous (nobody) Summary: Generator mangles returned lists. Initial Comment: I have run into what seems like a bug. Check this out... def gen(): l = [] l.append('eggs') l = l[-1:] yield l l.append('ham') l = l[-1:] yield l >>> [i for i in gen()] [['eggs', 'ham'], ['ham']] >>> g = gen(); [g.next(), g.next()] [['eggs', 'ham'], ['ham']] >>> g = gen(); g.next(); g.next() ['eggs'] ['ham'] >>> g = gen(); i = g.next(); j = g.next(); [i,j] [['eggs', 'ham'], ['ham']] >>> g = gen(); [g.next()[:], g.next()[:]] [['eggs'], ['ham']] Anyone have any insight into this? ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 23:20 Message: Logged In: YES user_id=80475 Sorry, this isn't a bug. You've created a cute example of the joys of mutability. For some fun, post this on comp.lang.python and expect some lively results. Essentially the issue is that that the first yield is returns a mutable list. If printed right away, it will show its then current value of ['eggs']. 
Upon restarting the generator, the list is updated to ['eggs', 'ham'] which is what prints for the *first* list return value (it is still the same list with changed contents). When 'l' is re-assigned with " l = l[-1:]", the original list is still intact while the value assigned to "l" changes to be a new list (the slice). So you have the original list modified and the new list with a different value. Very cute.
If none of this is clear to you, try wrapping the output with the id() function to see which object is being displayed:
[id(i) for i in gen()]
g=gen(); [id(g.next()), id(g.next())]
----------------------------------------------------------------------
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470

From noreply at sourceforge.net Sun Jun 13 00:58:13 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun Jun 13 00:58:19 2004
Subject: [ python-Bugs-971962 ] Generator mangles returned lists.
Message-ID: Bugs item #971962, was opened at 2004-06-13 03:56 Message generated for change (Comment added) made by edw
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470
Category: Python Interpreter Core Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Submitted By: Ed Watkeys (edw) Assigned to: Nobody/Anonymous (nobody) Summary: Generator mangles returned lists.
Initial Comment: I have run into what seems like a bug. Check this out...
def gen():
    l = []
    l.append('eggs')
    l = l[-1:]
    yield l
    l.append('ham')
    l = l[-1:]
    yield l
>>> [i for i in gen()]
[['eggs', 'ham'], ['ham']]
>>> g = gen(); [g.next(), g.next()]
[['eggs', 'ham'], ['ham']]
>>> g = gen(); g.next(); g.next()
['eggs']
['ham']
>>> g = gen(); i = g.next(); j = g.next(); [i,j]
[['eggs', 'ham'], ['ham']]
>>> g = gen(); [g.next()[:], g.next()[:]]
[['eggs'], ['ham']]
Anyone have any insight into this?
----------------------------------------------------------------------
>Comment By: Ed Watkeys (edw) Date: 2004-06-13 04:58 Message: Logged In: YES user_id=44209
Ah. I get it. I guess it's time to pull out copy.copy().
----------------------------------------------------------------------
Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-13 04:20 Message: Logged In: YES user_id=80475
Sorry, this isn't a bug. You've created a cute example of the joys of mutability. For some fun, post this on comp.lang.python and expect some lively results.
Essentially, the issue is that the first yield returns a mutable list. If printed right away, it will show its then current value of ['eggs']. Upon restarting the generator, the list is updated to ['eggs', 'ham'] which is what prints for the *first* list return value (it is still the same list with changed contents). When 'l' is re-assigned with " l = l[-1:]", the original list is still intact while the value assigned to "l" changes to be a new list (the slice). So you have the original list modified and the new list with a different value. Very cute.
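The copy.copy() remedy edw mentions amounts to yielding a snapshot rather than the live list; a sketch (with list() standing in for copy.copy()) gives [['eggs'], ['ham']] for every access pattern in the report:

    # Sketch of the fix: yield a copy, so later mutation inside the
    # generator cannot change values the caller already holds.
    def gen():
        l = []
        l.append('eggs')
        l = l[-1:]
        yield list(l)      # copy.copy(l) or l[:] would do equally well
        l.append('ham')
        l = l[-1:]
        yield list(l)

    print [i for i in gen()]   # [['eggs'], ['ham']]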
If none of this is clear to you, try wrapping the output with the id() function to see which object is being displayed: [id(i) for i in gen()] g=gen(); [id(g.next()), id(g.next())] ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971962&group_id=5470 From noreply at sourceforge.net Sun Jun 13 03:27:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 03:27:18 2004 Subject: [ python-Bugs-971747 ] Wrong indentation of example code in tutorial Message-ID: Bugs item #971747, was opened at 2004-06-12 15:44 Message generated for change (Comment added) made by dooms You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 Category: Documentation Group: Python 2.3 Status: Closed Resolution: Fixed Priority: 5 Submitted By: Grégoire Dooms (dooms) Assigned to: Raymond Hettinger (rhettinger) Summary: Wrong indentation of example code in tutorial Initial Comment: section 9.9: on line 4286 (tut.tex python 2.3.3): >>> class Reverse: "Iterator for looping over a sequence backwards" def __init__(self, data): self.data = data self.index = len(data) def __iter__(self): return self def next(self): if self.index == 0: raise StopIteration self.index = self.index - 1 return self.data[self.index] >>> for char in Reverse('spam'): print char m a p s This is not conformant with the other examples (either script or interactive). It looks like this snippet was a script at first and was wrongly converted into an interactive session by prepending >>> on the first line. Prompt-only problem in the next example: line 4319: >>> def reverse(data): for index in range(len(data)-1, -1, -1): yield data[index] >>> for char in reverse('golf'): print char Indentation is good but the sys.ps2 prompt is missing. Now an indentation-only problem (section 10.7) line 4505: >>> import urllib2 >>> for line in urllib2.urlopen('http://tycho.usno.navy.mil/cgi-bin/timer.pl'): ... if 'EST' in line: # look for Eastern Standard Time ... print line The if and print lines need an additional indentation level. This last one is already fixed in the http://www.python.org/dev/devel/doc/tut/ version. ---------------------------------------------------------------------- >Comment By: Grégoire Dooms (dooms) Date: 2004-06-13 07:27 Message: Logged In: YES user_id=846867 The Reverse('spam') example is still not fixed according to CVS tut.tex 1.196.8.20: either the IDLE PS2 prompt or the indentation is missing: the docstring and the method defs are aligned with the c in class. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 19:43 Message: Logged In: YES user_id=80475 Backported the fix for the urllib example. The Reverse('spam') example was already fixed and backported. The decision to omit the '...' PS2 prompt was intentional. Using the IDLE style ' ' PS2 prompt makes these examples easier to cut and paste so the reader can try them out interactively. Also, I find them to be more readable. Practicality trumps smaller concerns about putting every example in exactly the same format. 
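For reference, here is the section 9.9 example quoted above with the indentation restored (a readability reconstruction laid out as a plain script, matching the report's observation that the snippet started life as one):

    class Reverse:
        "Iterator for looping over a sequence backwards"
        def __init__(self, data):
            self.data = data
            self.index = len(data)
        def __iter__(self):
            return self
        def next(self):
            if self.index == 0:
                raise StopIteration
            self.index = self.index - 1
            return self.data[self.index]

    >>> for char in Reverse('spam'):
    ...     print char
    ...
    m
    a
    p
    s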
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971747&group_id=5470 From noreply at sourceforge.net Sun Jun 13 17:11:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 17:11:15 2004 Subject: [ python-Bugs-970042 ] fcntl.lockf() signature uses len, doc refers to length Message-ID: Bugs item #970042, was opened at 2004-06-09 20:35 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970042&group_id=5470 Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Accepted Priority: 5 Submitted By: Clinton Roy (clintonroy) >Assigned to: Neal Norwitz (nnorwitz) >Summary: fcntl.lockf() signature uses len, doc refers to length Initial Comment: The documentation has the signature: lockf(fd, operation, [len, [start, [whence]]]) but the description refers to the length parameter. Obviously very minor. Personally, I'd be happier to see the signature changed, rather than the documentation. cheers, ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-13 17:11 Message: Logged In: YES user_id=33168 The docstring says length (since 2001), so length it is. :-) Checked in as Doc/lib/libfcntl.tex 1.34 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970042&group_id=5470 From noreply at sourceforge.net Sun Jun 13 17:15:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 17:15:59 2004 Subject: [ python-Bugs-967334 ] Cmd in thread segfaults after Ctrl-C Message-ID: Bugs item #967334, was opened at 2004-06-05 19:11 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967334&group_id=5470 Category: Threads Group: Python 2.3 Status: Open >Resolution: Works For Me Priority: 5 Submitted By: Kevin M. Turner (acapnotic) Assigned to: Nobody/Anonymous (nobody) Summary: Cmd in thread segfaults after Ctrl-C Initial Comment: With Cmd.cmdloop running in a thread, saying Ctrl-C will make Python segfault. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-13 17:15 Message: Logged In: YES user_id=33168 This works for me on Linux. What OS are you using? Are you using any non-standard extension modules? What version of Python? 
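The report attached no test case; purely as a hypothetical sketch (the class and names here are invented for illustration), the kind of arrangement it describes looks roughly like this:

    import cmd
    import threading

    class Shell(cmd.Cmd):
        prompt = '> '
        def do_EOF(self, arg):
            return True          # any true value stops cmdloop()

    t = threading.Thread(target=Shell().cmdloop)
    t.start()
    # The report says that pressing Ctrl-C while cmdloop() is blocked
    # reading input in the worker thread crashed the interpreter.
    t.join()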
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967334&group_id=5470 From noreply at sourceforge.net Sun Jun 13 17:33:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 17:33:33 2004 Subject: [ python-Bugs-900977 ] cygwinccompiler.get_versions fails on `ld -v` output Message-ID: Bugs item #900977, was opened at 2004-02-20 04:52 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=900977&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Pearu Peterson (pearu) >Assigned to: Jason Tishler (jlt63) Summary: cygwinccompiler.get_versions fails on `ld -v` output Initial Comment: Under linux `ld -v` returns GNU ld version 2.14.90.0.7 20031029 Debian GNU/Linux for instance, and get_versions() function uses StrictVersion on '2.14.90.0.7'. This situation triggers an error: ValueError: invalid version number '2.14.90.0.7' As a fix, either use LooseVersion or the following re pattern result = re.search('(\d+\.\d+(\.\d+)?)',out_string) in `if ld_exe` block. Pearu ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-13 17:33 Message: Logged In: YES user_id=33168 Is this still a problem? Jason, do you have any comments? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=900977&group_id=5470 From noreply at sourceforge.net Sun Jun 13 22:13:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 13 22:13:33 2004 Subject: [ python-Bugs-971395 ] thread.name crashes interpreter Message-ID: Bugs item #971395, was opened at 2004-06-11 16:15 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 Category: Threads Group: Python 2.3 >Status: Closed Resolution: Works For Me Priority: 5 Submitted By: Jonathan Ellis (ellisj) Assigned to: Nobody/Anonymous (nobody) Summary: thread.name crashes interpreter Initial Comment: I changed the __repr__ method of the cookbook Future class -- http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/84317 -- as follows: def __repr__(self): return '<%s at %s:%s>' % (self.__T.name, hex(id(self)), self.__status) this caused obscure crashes with the uninformative message Fatal Python error: PyEval_SaveThread: NULL tstate changing to __T.getName() fixed the crashing. It seems to me that thread.name should be __name or _name to help novices not shoot themselves in the foot. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-13 22:13 Message: Logged In: YES user_id=31435 Well, in the absence of a test case, we can't even be sure this specific change was the true cause, so I'm closing this for now. Feel encouraged to reopen it if more evidence appears. FWIW, "NULL tstate" messages are historically almost all due to flawed logic in 3rd-party C extension modules. ---------------------------------------------------------------------- Comment By: Jonathan Ellis (ellisj) Date: 2004-06-12 23:57 Message: Logged In: YES user_id=657828 sorry, if I had been able to fine a simple crash case I would have given it. 
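A sketch of the working form of the __repr__ from the report, using the documented Thread.getName() accessor; the Future class here is a stripped-down stand-in for the cookbook recipe, kept only to make the snippet self-contained:

    import threading

    class Future:
        def __init__(self, func, *args):
            self.__status = 'pending'
            self.__T = threading.Thread(target=func, args=args)
            self.__T.start()

        def __repr__(self):
            # Thread objects in Python 2.3 expose getName(), not .name
            return '<%s at %s:%s>' % (self.__T.getName(),
                                      hex(id(self)), self.__status)

    print Future(lambda: None)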
:-| ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-12 22:58 Message: Logged In: YES user_id=31435 I've been unable to reproduce any problems with Future modified to use your __repr__, under Python 2.3.4 on Windows. Of course any attempt to invoke that method dies with AttributeError: 'Thread' object has no attribute 'name' but that's not a bug. Python won't change to add other ways of getting the thread name -- the Python philosophy is to supply one clear way to do a thing, and is opposed to Tower of Babel approaches that try to cater to everything someone might type off the top of their head. There's no end to that (why not .Name and .thread_name and ... too?). Would you kindly add specific code to this report, sufficient to reproduce the fatal Python error you mentioned? If that occurs, it's a bug, but I've been unable to provoke it. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971395&group_id=5470 From noreply at sourceforge.net Mon Jun 14 01:58:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 01:58:48 2004 Subject: [ python-Bugs-956408 ] Simplifiy coding in cmd.py Message-ID: Bugs item #956408, was opened at 2004-05-18 20:54 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956408&group_id=5470 Category: None Group: Python 2.4 Status: Open Resolution: None >Priority: 1 Submitted By: Raymond Hettinger (rhettinger) >Assigned to: Anthony Baxter (anthonybaxter) Summary: Simplifiy coding in cmd.py Initial Comment: In the cmd.py 1.35 checkin on 2/6/2003, there are many lines like: self.stdout.write("%s\n"%str(header)) I believe the str() call in unnecessary. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-12 19:48 Message: Logged In: YES user_id=33168 The str call is necessary if header is a tuple. AFAIK, if header is not a tuple, using str() is redundant. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-07 00:01 Message: Logged In: YES user_id=80475 Neal, is this a simplification you would like to make? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956408&group_id=5470 From noreply at sourceforge.net Mon Jun 14 02:19:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 02:19:41 2004 Subject: [ python-Bugs-862600 ] Assignment to __builtins__.__debug__ doesn't do anything. Message-ID: Bugs item #862600, was opened at 2003-12-18 18:55 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=862600&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Jeremy Fincher (jemfinch) Assigned to: Raymond Hettinger (rhettinger) Summary: Assignment to __builtins__.__debug__ doesn't do anything. Initial Comment: In 2.2, it would dynamically turn off asserts: Python 2.2.3+ (#1, Sep 30 2003, 01:19:08) [GCC 3.3.2 20030908 (Debian prerelease)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> __debug__ 1 >>> __builtins__.__debug__ = 0 >>> assert 0, "There should be nothing raised." >>> But in 2.3, this changed: Python 2.3.2 (#2, Nov 11 2003, 00:22:57) [GCC 3.3.2 (Debian)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> __debug__ True >>> __builtins__.__debug__ = False >>> assert 0, "There should be nothing raised." Traceback (most recent call last): File "<stdin>", line 1, in ? AssertionError: There should be nothing raised. >>> If this is in fact the intended behavior (I hope it's not) then what's an application author to do when he wants to offer users a -O option *to his application* that turns off asserts? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-15 23:08 Message: Logged In: YES user_id=31435 The NEWS file in your Python distribution, under the notes for 2.3 alpha 1, says: """ - The assert statement no longer tests __debug__ at runtime. This means that assert statements cannot be disabled by assigning a false value to __debug__. """ Do a Google search on site:mail.python.org python-dev assert __debug__ if you want to read a lot more about it. The decision on Guido's part was certainly intentional. If people had griped during the extremely long alpha/beta/release_candidate 2.3 cycle, he might have backed off -- but nobody did, so you're best off to consider this one a dead issue. ---------------------------------------------------------------------- Comment By: Jeremy Fincher (jemfinch) Date: 2004-01-15 22:38 Message: Logged In: YES user_id=99508 Is this something that's likely to get fixed, or is it something that I should start developing a workaround for? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=862600&group_id=5470 From noreply at sourceforge.net Mon Jun 14 02:20:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 02:20:19 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 01:54 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7.html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022-jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso-2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- Comment By: M.-A. 
Lemburg (lemburg) Date: 2004-06-12 07:04 Message: Logged In: YES user_id=38388 I think it might be a good idea to document how the standard search function of the encodings package works at the top of that page, namely that it normalizes encoding names before doing the lookup: """ Normalization works as follows: all non-alphanumeric characters except the dot used for Python package names are collapsed and replaced with a single underscore, e.g. ' -;#' becomes '_'. Leading and trailing underscores are removed. Note that encoding names should be ASCII only; if they do use non-ASCII characters, these must be Latin-1 compatible. """ The table should then only list normalized encoding names (which I think is already the case). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-06-12 06:59 Message: Logged In: YES user_id=21627 Actually, the top of the page does already say: "Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases." ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-06-12 06:54 Message: Logged In: YES user_id=21627 It is just not feasible to list all recognized aliases. For example, for ISO-8859-1, there are 31 trivial aliases, including Iso_8859-1 and iSO-8859_1. For shift_jisx0213, there are 1023 trivial aliases. The aliases column in the documentation should only list non-trivial aliases, and for these, it should list a form that people are most likely to encounter. So if "s-jis" would be more common than "s_jis", this is what should be listed. If s-JIS is even more common, this should be listed. The top of the page should say that case in encoding names does not matter, and that _ and - can be freely substituted. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one? ---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 03:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all real names and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 02:10 Message: Logged In: YES user_id=55188 Reopened to consider consistency with the non-cjk codecs. All the non-cjk codecs are listed with a hyphen even though their real name uses an underscore (e.g. iso8859-1 and iso8859_1.py). Would changing the cjk codecs' codec/alias names to use hyphens instead of underscores make the docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 02:01 Message: Logged In: YES user_id=55188 All hyphens are translated to underscores in encoding lookups. So we may not need to additionally list the hyphenated forms. 
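As an aside, the normalization rule quoted at the top of this comment thread can be sketched in a few lines; this is only an illustration of the rule as stated, not the actual code in the encodings package, and it deliberately leaves case alone since case-insensitivity is handled separately:

    import re

    def normalize_encoding(name):
        # Collapse each run of non-alphanumeric characters (other than
        # the dot used in package names) into a single underscore, then
        # strip leading and trailing underscores.
        return re.sub(r'[^A-Za-z0-9.]+', '_', name).strip('_')

    >>> normalize_encoding('euc-jp')
    'euc_jp'
    >>> normalize_encoding(' Shift-JIS ')
    'Shift_JIS'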
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Mon Jun 14 03:43:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 03:43:36 2004 Subject: [ python-Bugs-823209 ] cmath.log doesn't have the same interface as math.log. Message-ID: Bugs item #823209, was opened at 2003-10-13 23:23 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=823209&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jeremy Fincher (jemfinch) Assigned to: Raymond Hettinger (rhettinger) Summary: cmath.log doesn't have the same interface as math.log. Initial Comment: Somewhere along the way, math.log gained an optional "base" argument. cmath.log is still missing it. >>> print math.log.__doc__ log(x[, base]) -> the logarithm of x to the given base. If the base not specified, returns the natural logarithm (base e) of x. >>> print cmath.log.__doc__ log(x) Return the natural logarithm of x. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-14 02:43 Message: Logged In: YES user_id=80475 Applied Andrew Gaul's patch (with minor modifications) as Modules/cmathmodule.c 2.33 ---------------------------------------------------------------------- Comment By: Jeremy Fincher (jemfinch) Date: 2003-10-27 17:03 Message: Logged In: YES user_id=99508 In my particular usecase, I define an environment in which people can execute mathematical statements like so: _mathEnv = {'__builtins__': new.module('__builtins__'), 'i': 1j} _mathEnv.update(math.__dict__) _mathEnv.update(cmath.__dict__) As you can see, the cmath definitions shadow the math definitions, and thus I lose the useful ability to offer users a log with a base (which those that know Python expect to work). That's at least my particular use case. In this particular instance, since I don't want to allow the users to cause the application to consume arbitrary amounts of memory, I can't allow integer arithmetic (because of the crazy int/long unification that left people who wanted computationall bounded arithmetic with no choice but to implement a fixed-size integer type or use float/complex instead ;)) so I use complex objects everywhere, and math.log can't operate on complex objects (even those that have no imaginary component). ---------------------------------------------------------------------- Comment By: Andrew Gaul (gaul) Date: 2003-10-27 15:20 Message: Logged In: YES user_id=139865 Base 2 logarithms are somewhat common. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2003-10-27 14:44 Message: Logged In: YES user_id=80475 I can fix this if necessary. My question is whether it should be done. On the one hand, it is nice to have the two interfaces as symmetrical as possible. OTOH, I'm not aware of any use cases for log(z, b). ---------------------------------------------------------------------- Comment By: Andrew Gaul (gaul) Date: 2003-10-18 13:42 Message: Logged In: YES user_id=139865 See patch #826074 for a fix. 
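For anyone needing this on an older Python, the two-argument form added by the patch is equivalent to the usual change-of-base identity, which works on any version; a small sketch:

    import cmath

    z = 1 + 1j

    # log of z to base 2, portable across versions:
    portable = cmath.log(z) / cmath.log(2)

    # the two-argument form from the patch applied above (2.4 and later):
    # with_base = cmath.log(z, 2)

    print portable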
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=823209&group_id=5470 From noreply at sourceforge.net Mon Jun 14 04:20:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 04:20:47 2004 Subject: [ python-Bugs-972467 ] local variables problem Message-ID: Bugs item #972467, was opened at 2004-06-14 10:20 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972467&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Toon Verstraelen (tovrstra) Assigned to: Nobody/Anonymous (nobody) Summary: local variables problem Initial Comment: def test1(): print a a += 1 a = 1 test1() This results in a UnboundLocalError, while a is already assigned Traceback (most recent call last): File "bug.py", line 6, in ? test1() File "bug.py", line 2, in test1 print a UnboundLocalError: local variable 'a' referenced before assignment ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972467&group_id=5470 From noreply at sourceforge.net Mon Jun 14 04:36:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 04:36:40 2004 Subject: [ python-Bugs-972467 ] local variables problem Message-ID: Bugs item #972467, was opened at 2004-06-14 04:20 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972467&group_id=5470 Category: None >Group: Not a Bug >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Toon Verstraelen (tovrstra) Assigned to: Nobody/Anonymous (nobody) Summary: local variables problem Initial Comment: def test1(): print a a += 1 a = 1 test1() This results in a UnboundLocalError, while a is already assigned Traceback (most recent call last): File "bug.py", line 6, in ? test1() File "bug.py", line 2, in test1 print a UnboundLocalError: local variable 'a' referenced before assignment ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-14 04:36 Message: Logged In: YES user_id=31435 Not a bug. Ask on comp.lang.python for an explanation, or read the docs more carefully. Short course: 'a' is local in test1 because 'a' is assigned to in test1. It has no relation to the global named 'a'. If you want test1 to use the global named 'a', put global a as the first line of test1. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972467&group_id=5470 From noreply at sourceforge.net Mon Jun 14 06:36:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 06:36:23 2004 Subject: [ python-Bugs-971200 ] asyncore sillies Message-ID: Bugs item #971200, was opened at 2004-06-11 16:19 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 Category: None Group: None >Status: Closed Resolution: None Priority: 7 Submitted By: Michael Hudson (mwh) Assigned to: Nobody/Anonymous (nobody) Summary: asyncore sillies Initial Comment: current cvs head, mac os x 10.2, debug build of python. 
test_asynchat fails if and only if the compiled asyncore.pyc file is present. this makes no sense to me, but it's consistent. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-14 11:36 Message: Logged In: YES user_id=6656 I think this can be closed now. A half-assed test could be added to marshal (if (x!=x) PyErr_SetString(...)) but I'm not sure it's worth it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-13 03:40 Message: Logged In: YES user_id=31435 marshal doesn't know whether the input is an infinity; that's why the result is a platform-dependent accident; "infinity" isn't a C89 concept, and Python inherits its ignorance from C ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-12 18:35 Message: Logged In: YES user_id=29957 Is it worth making marshal issue a warning when someone tries to store an infinity, in that case? It could (ha!) be surprising that a piece of code that works from a .py fails silently from a .pyc. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 18:03 Message: Logged In: YES user_id=6656 actually, i think the summary is that the most recent change to asyncore is just broken. blaming the recent changes around LC_NUMERIC and their effect or non-effect on marshal was a red herring. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-11 17:47 Message: Logged In: YES user_id=31435 Well, what marshal (or pickle) do with an infinity (or NaN, or the sign of a signed zero) is a platform accident. Here with the released 2.3.4 on Windows (which doesn't have any LC_NUMERIC changes): Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 1e309 1.#INF >>> import marshal >>> marshal.loads(marshal.dumps(1e309)) 1.0 >>> So you simply can't use a literal 1e309 in compiled code. There's no portable way to spell infinity in Python. PEP 754 would introduce a reasonably portable way, were it accepted. Before then, 1e200*1e200 is probably the easiest reasonably portable way -- but since behavior in the presence of an infinity is accidental anyway, much better to avoid using infinity at all in the libraries. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 16:58 Message: Logged In: YES user_id=6656 Oh my: >>> 1e309 Inf [40577 refs] >>> marshal.loads(marshal.dumps(1e309)) 0.0 [40577 refs] this must be the new "LC_NUMERIC agnostic" stuff, right? 
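A small illustration of the portability point above: 1e200*1e200 is a way to spell an infinity that survives compilation, while whether marshal round-trips it remains a platform accident:

    import marshal

    inf = 1e200 * 1e200     # overflows to +infinity on IEEE-754 platforms
    roundtrip = marshal.loads(marshal.dumps(inf))

    # On some platforms roundtrip equals inf; on others, as shown in the
    # sessions above, it comes back as 1.0 or 0.0.
    print inf, roundtrip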
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971200&group_id=5470 From noreply at sourceforge.net Mon Jun 14 07:33:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 07:33:43 2004 Subject: [ python-Bugs-900977 ] cygwinccompiler.get_versions fails on `ld -v` output Message-ID: Bugs item #900977, was opened at 2004-02-20 00:52 Message generated for change (Comment added) made by jlt63 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=900977&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Pearu Peterson (pearu) Assigned to: Jason Tishler (jlt63) Summary: cygwinccompiler.get_versions fails on `ld -v` output Initial Comment: Under linux `ld -v` returns GNU ld version 2.14.90.0.7 20031029 Debian GNU/Linux for instance, and get_versions() function uses StrictVersion on '2.14.90.0.7'. This situation triggers an error: ValueError: invalid version number '2.14.90.0.7' As a fix, either use LooseVersion or the following re pattern result = re.search('(\d+\.\d+(\.\d+)?)',out_string) in `if ld_exe` block. Pearu ---------------------------------------------------------------------- >Comment By: Jason Tishler (jlt63) Date: 2004-06-14 03:33 Message: Logged In: YES user_id=86216 Hmm... I botch a Cygwin Python release and I get assigned a 4 month old Distutils bug. Coincidence or punishment? :,) > Is this still a problem? I don't know. > Jason, do you have any comments? This problem seems more like a Distutils issue than a Cygwin one. Please assign to a Distutils developer (e.g., Rene). Since I'm not familiar with the issues, I'm afraid that if I try to fix this problem I may cause another one... Additionally, I cannot reproduce the problem on my Linux box unless I write a shell script to simulate the behavior of Pearu's ld -v... ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-13 13:33 Message: Logged In: YES user_id=33168 Is this still a problem? Jason, do you have any comments? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=900977&group_id=5470 From noreply at sourceforge.net Mon Jun 14 12:06:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 12:07:02 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. 
Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 12:21:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 12:21:27 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 18:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 12:48:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 12:48:37 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 09:07 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-14 12:48 Message: Logged In: YES user_id=31435 Christian, you and/or Jim would need to make the case on Python-Dev to change this behavior, and get Guido to agree. The current behavior isn't an accident -- it's functioning as documented and designed. Changing existing behavior isn't easy (and shouldn't be easy ...). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 12:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 12:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 12:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 13:05:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 13:05:34 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 19:05 Message: Logged In: YES user_id=38388 You won't get my approval on this one, that's for sure... then again I'm not sure how much say I have on these things on python-dev anymore four years after having led it's design. As data point, there have been discussions about raising exceptions in cases where types don't match and there is no support for the given type combination like in your e1 == 5 example, so maybe stirring up noise isn't the best strategy in this case :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 18:48 Message: Logged In: YES user_id=31435 Christian, you and/or Jim would need to make the case on Python-Dev to change this behavior, and get Guido to agree. The current behavior isn't an accident -- it's functioning as documented and designed. Changing existing behavior isn't easy (and shouldn't be easy ...). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 18:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. 
Lemburg (lemburg) Date: 2004-06-14 18:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 13:17:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 13:17:47 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 09:07 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-14 13:17 Message: Logged In: YES user_id=31435 I think some noise would be a good thing . Really! Comparisons in Python are a mess now, and it would be good to hammer out a (more) consistent story. Note that the new-in-2.3 Python sets.* and datetime.* types do some of each for comparisons against incompatible types: for <=, <, >, and >=, they raise an exception. For == they return True, and for != they return False. Guido gave a lot of thought to those at the time, and it's no coincidence that these new types act the same way. But older types don't. Maybe the new idea was a mistake. Maybe the older types should change. Maybe there are good reasons to keep them acting differently. Some noise about all this is really needed before Python 3. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 13:05 Message: Logged In: YES user_id=38388 You won't get my approval on this one, that's for sure... then again I'm not sure how much say I have on these things on python-dev anymore four years after having led it's design. 
As data point, there have been discussions about raising exceptions in cases where types don't match and there is no support for the given type combination like in your e1 == 5 example, so maybe stirring up noise isn't the best strategy in this case :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 12:48 Message: Logged In: YES user_id=31435 Christian, you and/or Jim would need to make the case on Python-Dev to change this behavior, and get Guido to agree. The current behavior isn't an accident -- it's functioning as documented and designed. Changing existing behavior isn't easy (and shouldn't be easy ...). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 12:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 12:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 12:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 13:32:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 13:32:49 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: M.-A. 
Lemburg (lemburg) Date: 2004-06-14 19:32 Message: Logged In: YES user_id=38388 Here's an example to make you think this over: >>> u"???" == "???" False Try to explain that to a novice :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 19:17 Message: Logged In: YES user_id=31435 I think some noise would be a good thing . Really! Comparisons in Python are a mess now, and it would be good to hammer out a (more) consistent story. Note that the new-in-2.3 Python sets.* and datetime.* types do some of each for comparisons against incompatible types: for <=, <, >, and >=, they raise an exception. For == they return True, and for != they return False. Guido gave a lot of thought to those at the time, and it's no coincidence that these new types act the same way. But older types don't. Maybe the new idea was a mistake. Maybe the older types should change. Maybe there are good reasons to keep them acting differently. Some noise about all this is really needed before Python 3. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 19:05 Message: Logged In: YES user_id=38388 You won't get my approval on this one, that's for sure... then again I'm not sure how much say I have on these things on python-dev anymore four years after having led it's design. As data point, there have been discussions about raising exceptions in cases where types don't match and there is no support for the given type combination like in your e1 == 5 example, so maybe stirring up noise isn't the best strategy in this case :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 18:48 Message: Logged In: YES user_id=31435 Christian, you and/or Jim would need to make the case on Python-Dev to change this behavior, and get Guido to agree. The current behavior isn't an accident -- it's functioning as documented and designed. Changing existing behavior isn't easy (and shouldn't be easy ...). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 18:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? 
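One way around the implicit ASCII decode in mixed str/unicode comparisons is to decode the byte string explicitly with a known encoding before comparing; a sketch (the latin-1 choice and the sample strings here are only illustrative):

    data = '\xf6l'     # byte string known to be latin-1 encoded
    text = u'\xf6l'    # unicode string

    # "data == text" would trigger the implicit ASCII decode and raise
    # UnicodeDecodeError, as in the tracebacks above; an explicit decode
    # with the right codec compares cleanly instead.
    print data.decode('latin-1') == text    # prints True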
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 14:16:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 14:17:11 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 09:07 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-14 14:16 Message: Logged In: YES user_id=31435 In current Python, both string literals generate a DeprecationWarning: Non-ASCII character ... msg, with a pointer to the docs (your PEP 263). When the implementation of that PEP is complete, the example line won't be legal Python without an explicit encoding declaration (in which case it will presumably return True). So I don't know that the example matters going forward; 2.3 isn't going to change. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 13:32 Message: Logged In: YES user_id=38388 Here's an example to make you think this over: >>> u"???" == "???" False Try to explain that to a novice :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 13:17 Message: Logged In: YES user_id=31435 I think some noise would be a good thing . Really! Comparisons in Python are a mess now, and it would be good to hammer out a (more) consistent story. Note that the new-in-2.3 Python sets.* and datetime.* types do some of each for comparisons against incompatible types: for <=, <, >, and >=, they raise an exception. For == they return True, and for != they return False. Guido gave a lot of thought to those at the time, and it's no coincidence that these new types act the same way. But older types don't. Maybe the new idea was a mistake. Maybe the older types should change. Maybe there are good reasons to keep them acting differently. Some noise about all this is really needed before Python 3. 
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 13:05 Message: Logged In: YES user_id=38388 You won't get my approval on this one, that's for sure... then again I'm not sure how much say I have on these things on python-dev anymore four years after having led it's design. As data point, there have been discussions about raising exceptions in cases where types don't match and there is no support for the given type combination like in your e1 == 5 example, so maybe stirring up noise isn't the best strategy in this case :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 12:48 Message: Logged In: YES user_id=31435 Christian, you and/or Jim would need to make the case on Python-Dev to change this behavior, and get Guido to agree. The current behavior isn't an accident -- it's functioning as documented and designed. Changing existing behavior isn't easy (and shouldn't be easy ...). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 12:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 12:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 12:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 14:27:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 14:27:32 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" 
Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 20:27 Message: Logged In: YES user_id=38388 Type it into an interactive session: >>> u"???" == "???" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128) This would return False if the change were to happen. In source code the above is not valid without encoding declaration (which is good). But even with encoding declaration, the string literal will still be interpreted using the default encoding (the encoding only applies to the Unicode literal), so the result is the same: unicompare.py: # -*- coding:latin-1 -* print u"???".encode('latin-1') print "???" print u"???" == "???" $ python2.3 unicompare.py ??? ??? Traceback (most recent call last): File "unicompare.py", line 5, in ? print u"???" == "???" UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 20:16 Message: Logged In: YES user_id=31435 In current Python, both string literals generate a DeprecationWarning: Non-ASCII character ... msg, with a pointer to the docs (your PEP 263). When the implementation of that PEP is complete, the example line won't be legal Python without an explicit encoding declaration (in which case it will presumably return True). So I don't know that the example matters going forward; 2.3 isn't going to change. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 19:32 Message: Logged In: YES user_id=38388 Here's an example to make you think this over: >>> u"???" == "???" False Try to explain that to a novice :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 19:17 Message: Logged In: YES user_id=31435 I think some noise would be a good thing . Really! Comparisons in Python are a mess now, and it would be good to hammer out a (more) consistent story. Note that the new-in-2.3 Python sets.* and datetime.* types do some of each for comparisons against incompatible types: for <=, <, >, and >=, they raise an exception. For == they return True, and for != they return False. Guido gave a lot of thought to those at the time, and it's no coincidence that these new types act the same way. But older types don't. Maybe the new idea was a mistake. Maybe the older types should change. Maybe there are good reasons to keep them acting differently. Some noise about all this is really needed before Python 3. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 19:05 Message: Logged In: YES user_id=38388 You won't get my approval on this one, that's for sure... then again I'm not sure how much say I have on these things on python-dev anymore four years after having led it's design. 
As data point, there have been discussions about raising exceptions in cases where types don't match and there is no support for the given type combination like in your e1 == 5 example, so maybe stirring up noise isn't the best strategy in this case :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 18:48 Message: Logged In: YES user_id=31435 Christian, you and/or Jim would need to make the case on Python-Dev to change this behavior, and get Guido to agree. The current behavior isn't an accident -- it's functioning as documented and designed. Changing existing behavior isn't easy (and shouldn't be easy ...). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:21 Message: Logged In: YES user_id=38388 In that case I'll have to close the request as "Won't fix": comparisons can raise exceptions and if you're comparing apples and eggs you should get an exception instead of a misleading result which only hides programming errors. Sorry. ---------------------------------------------------------------------- Comment By: Christian Theune (ctheune) Date: 2004-06-14 18:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 14:36:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 14:37:06 2004 Subject: [ python-Bugs-972724 ] Python 2.3.4, Solaris 7, socketmodule.c does not compile Message-ID: Bugs item #972724, was opened at 2004-06-14 13:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972724&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Bruce D. 
Ray (brucedray) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.3.4, Solaris 7, socketmodule.c does not compile Initial Comment: Build of Python 2.3.4 on Solaris 7 with SUN WorkshopPro compilers fails to compiile socketmodule.c with the following error messages on the build: "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 2979: undefined symbol: AF_INET6 "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3023: undefined symbol: INET_ADDRSTRLEN "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3023: integral constant expression expected "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3053: warning: improper pointer/integer combination: op "=" "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3062: warning: statement not reached cc: acomp failed for /export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c As a consequence of the above, make test gave errors for every test that attempted to import _socket Other error messages in the build not related to the socketmodule.c issue were: "Objects/frameobject.c", line 301: warning: non-constant initializer: op "--" "Objects/frameobject.c", line 303: warning: non-constant initializer: op "--" "Objects/stringobject.c", line 1765: warning: statement not reached "Python/ceval.c", line 3427: warning: non-constant initializer: op "--" "Python/ceval.c", line 3550: warning: non-constant initializer: op "--" "Python/ceval.c", line 3551: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/pypcre.c", line 995: warning: non-constant initializer: op "++" "/export/home/bruce/python/Python-2.3.4/Modules/pypcre.c", line 2574: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/unicodedata.c", line 313: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/parsermodule.c", line 2493: warning: non-constant initializer: op "++" "/export/home/bruce/python/Python-2.3.4/Modules/expat/xmlparse.c", line 3942: warning: non-constant initializer: op "<<=" Configuration output was: checking MACHDEP... sunos5 checking EXTRAPLATDIR... checking for --without-gcc... no checking for --with-cxx=... no checking for c++... no checking for g++... no checking for gcc... no checking for CC... CC checking for C++ compiler default output... a.out checking whether the C++ compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for gcc... no checking for cc... cc checking for C compiler default output... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for egrep... egrep checking for AIX... no checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... no checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking for --with-suffix... 
checking for case-insensitive build directory... no checking LIBRARY... libpython$(VERSION).a checking LINKCC... $(PURIFY) $(CC) checking for --enable-shared... no checking LDLIBRARY... libpython$(VERSION).a checking for ranlib... ranlib checking for ar... ar checking for a BSD-compatible install... ./install-sh -c checking for --with-pydebug... no checking whether cc accepts -OPT:Olimit=0... no checking whether cc accepts -Olimit 1500... no checking whether pthreads are available without options... no checking whether cc accepts -Kpthread... no checking whether cc accepts -Kthread... no checking whether cc accepts -pthread... no checking whether CC also accepts flags for thread support... no checking for ANSI C header files... (cached) yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking grp.h usability... yes checking grp.h presence... yes checking for grp.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking ncurses.h usability... no checking ncurses.h presence... no checking for ncurses.h... no checking poll.h usability... yes checking poll.h presence... yes checking for poll.h... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking signal.h usability... yes checking signal.h presence... yes checking for signal.h... yes checking stdarg.h usability... yes checking stdarg.h presence... yes checking for stdarg.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking stropts.h usability... yes checking stropts.h presence... yes checking for stropts.h... yes checking termios.h usability... yes checking termios.h presence... yes checking for termios.h... yes checking thread.h usability... yes checking thread.h presence... yes checking for thread.h... yes checking for unistd.h... (cached) yes checking utime.h usability... yes checking utime.h presence... yes checking for utime.h... yes checking sys/audioio.h usability... yes checking sys/audioio.h presence... yes checking for sys/audioio.h... yes checking sys/bsdtty.h usability... no checking sys/bsdtty.h presence... no checking for sys/bsdtty.h... no checking sys/file.h usability... yes checking sys/file.h presence... yes checking for sys/file.h... yes checking sys/lock.h usability... yes checking sys/lock.h presence... yes checking for sys/lock.h... yes checking sys/mkdev.h usability... yes checking sys/mkdev.h presence... yes checking for sys/mkdev.h... yes checking sys/modem.h usability... no checking sys/modem.h presence... no checking for sys/modem.h... no checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... 
yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking sys/un.h usability... yes checking sys/un.h presence... yes checking for sys/un.h... yes checking sys/utsname.h usability... yes checking sys/utsname.h presence... yes checking for sys/utsname.h... yes checking sys/wait.h usability... yes checking sys/wait.h presence... yes checking for sys/wait.h... yes checking pty.h usability... no checking pty.h presence... no checking for pty.h... no checking term.h usability... no checking term.h presence... yes configure: WARNING: term.h: present but cannot be compiled configure: WARNING: term.h: check for missing prerequisite headers? configure: WARNING: term.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## checking for term.h... yes checking libutil.h usability... no checking libutil.h presence... no checking for libutil.h... no checking sys/resource.h usability... yes checking sys/resource.h presence... yes checking for sys/resource.h... yes checking netpacket/packet.h usability... no checking netpacket/packet.h presence... no checking for netpacket/packet.h... no checking sysexits.h usability... yes checking sysexits.h presence... yes checking for sysexits.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking whether sys/types.h defines makedev... no checking for sys/mkdev.h... (cached) yes checking for clock_t in time.h... yes checking for makedev... no checking Solaris LFS bug... no checking for mode_t... yes checking for off_t... yes checking for pid_t... yes checking return type of signal handlers... void checking for size_t... yes checking for uid_t in sys/types.h... yes checking for int... yes checking size of int... 4 checking for long... yes checking size of long... 4 checking for void *... yes checking size of void *... 4 checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for float... yes checking size of float... 4 checking for double... yes checking size of double... 8 checking for fpos_t... yes checking size of fpos_t... 8 checking for long long support... yes checking for long long... yes checking size of long long... 8 checking for uintptr_t support... no checking size of off_t... 8 checking whether to enable large file support... yes checking size of time_t... 4 checking for pthread_t... yes checking size of pthread_t... 4 checking for --enable-toolbox-glue... no checking for --enable-framework... no checking for dyld... no checking SO... .so checking LDSHARED... $(CC) -G checking CCSHARED... checking LINKFORSHARED... checking CFLAGSFORSHARED... checking SHLIBS... $(LIBS) checking for dlopen in -ldl... yes checking for shl_load in -ldld... no checking for library containing sem_init... -lrt checking for textdomain in -lintl... yes checking for t_open in -lnsl... yes checking for socket in -lsocket... yes checking for --with-libs... no checking for --with-signal-module... yes checking for --with-dec-threads... no checking for --with-threads... yes checking for _POSIX_THREADS in unistd.h... yes checking cthreads.h usability... no checking cthreads.h presence... no checking for cthreads.h... 
no checking mach/cthreads.h usability... no checking mach/cthreads.h presence... no checking for mach/cthreads.h... no checking for --with-pth... no checking for pthread_create in -lpthread... yes checking for usconfig in -lmpc... no checking if PTHREAD_SCOPE_SYSTEM is supported... yes checking for pthread_sigmask... yes checking if --enable-ipv6 is specified... no checking for --with-universal-newlines... yes checking for --with-doc-strings... yes checking for --with-pymalloc... yes checking for --with-wctype-functions... no checking for --with-sgi-dl... no checking for --with-dl-dld... no checking for dlopen... yes checking DYNLOADFILE... dynload_shlib.o checking MACHDEP_OBJS... MACHDEP_OBJS checking for alarm... yes checking for chown... yes checking for clock... yes checking for confstr... yes checking for ctermid... yes checking for execv... yes checking for fork... yes checking for fpathconf... yes checking for ftime... yes checking for ftruncate... yes checking for gai_strerror... no checking for getgroups... yes checking for getlogin... yes checking for getloadavg... yes checking for getpeername... yes checking for getpgid... yes checking for getpid... yes checking for getpriority... yes checking for getpwent... yes checking for getwd... yes checking for kill... yes checking for killpg... yes checking for lchown... yes checking for lstat... yes checking for mkfifo... yes checking for mknod... yes checking for mktime... yes checking for mremap... no checking for nice... yes checking for pathconf... yes checking for pause... yes checking for plock... yes checking for poll... yes checking for pthread_init... no checking for putenv... yes checking for readlink... yes checking for realpath... yes checking for select... yes checking for setegid... yes checking for seteuid... yes checking for setgid... yes checking for setlocale... yes checking for setregid... yes checking for setreuid... yes checking for setsid... yes checking for setpgid... yes checking for setpgrp... yes checking for setuid... yes checking for setvbuf... yes checking for snprintf... yes checking for sigaction... yes checking for siginterrupt... yes checking for sigrelse... yes checking for strftime... yes checking for strptime... yes checking for sysconf... yes checking for tcgetpgrp... yes checking for tcsetpgrp... yes checking for tempnam... yes checking for timegm... no checking for times... yes checking for tmpfile... yes checking for tmpnam... yes checking for tmpnam_r... yes checking for truncate... yes checking for uname... yes checking for unsetenv... no checking for utimes... yes checking for waitpid... yes checking for wcscoll... yes checking for _getpty... no checking for chroot... yes checking for link... yes checking for symlink... yes checking for fchdir... yes checking for fsync... yes checking for fdatasync... yes checking for ctermid_r... no checking for flock... no checking for getpagesize... yes checking for true... true checking for inet_aton in -lc... no checking for inet_aton in -lresolv... yes checking for hstrerror... yes checking for inet_aton... yes checking for inet_pton... yes checking for setgroups... yes checking for openpty... no checking for openpty in -lutil... no checking for forkpty... no checking for forkpty in -lutil... no checking for fseek64... no checking for fseeko... yes checking for fstatvfs... yes checking for ftell64... no checking for ftello... yes checking for statvfs... yes checking for dup2... yes checking for getcwd... yes checking for strdup... 
yes checking for strerror... yes checking for memmove... yes checking for getpgrp... yes checking for setpgrp... (cached) yes checking for gettimeofday... yes checking for major... yes checking for getaddrinfo... no checking for getnameinfo... no checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for struct tm.tm_zone... no checking for tzname... yes checking for struct stat.st_rdev... yes checking for struct stat.st_blksize... yes checking for struct stat.st_blocks... yes checking for time.h that defines altzone... no checking whether sys/select.h and sys/time.h may both be included... yes checking for addrinfo... no checking for sockaddr_storage... no checking whether char is unsigned... no checking for an ANSI C-conforming const... yes checking for working volatile... yes checking for working signed char... yes checking for prototypes... yes checking for variable length prototypes and stdarg.h... yes checking for bad exec* prototypes... no checking if sockaddr has sa_len member... no checking whether va_list is an array... no checking for gethostbyname_r... yes checking gethostbyname_r with 6 args... no checking gethostbyname_r with 5 args... yes checking for __fpu_control... no checking for __fpu_control in -lieee... no checking for --with-fpectl... no checking for --with-libm=STRING... default LIBM="-lm" checking for --with-libc=STRING... default LIBC="" checking for hypot... yes checking wchar.h usability... yes checking wchar.h presence... yes checking for wchar.h... yes checking for wchar_t... yes checking size of wchar_t... 4 checking for UCS-4 tcl... no checking what type to use for unicode... unsigned short checking whether byte ordering is bigendian... yes checking whether right shift extends the sign bit... yes checking for getc_unlocked() and friends... yes checking for rl_pre_input_hook in -lreadline... no checking for rl_completion_matches in -lreadline... no checking for broken nice()... no checking for broken poll()... no checking for working tzset()... no checking for tv_nsec in struct stat... yes checking whether mvwdelch is an expression... yes checking whether WINDOW has _flags... yes checking for /dev/ptmx... yes checking for /dev/ptc... no checking for socklen_t... yes checking for build directories... done configure: creating ./config.status config.status: creating Makefile.pre config.status: creating Modules/Setup.config config.status: creating pyconfig.h creating Setup creating Setup.local creating Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972724&group_id=5470 From noreply at sourceforge.net Mon Jun 14 12:16:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 14:37:41 2004 Subject: [ python-Bugs-971106 ] Comparisons of unicode and strings raises UnicodeErrors Message-ID: Bugs item #971106, was opened at 2004-06-11 15:07 Message generated for change (Comment added) made by ctheune You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 Category: Unicode Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Christian Theune (ctheune) Assigned to: M.-A. 
Lemburg (lemburg) Summary: Comparisons of unicode and strings raises UnicodeErrors Initial Comment: When comparing unicode and strings the implicit conversion will raise an exception instead of returning false. See the example later on. We (Christian Theune and Jim Fulton) suggest that if the ordinary string can't be decoded, that False should be returned. This seems to be the only sane approach given's Python's policy of implicitly converting strings to unicode using a default encoding. Python 2.3.4 (#1, Jun 10 2004, 11:08:42) [GCC 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Elephant: ... pass ... >>> e1 = Elephant() >>> e1 == 5 False >>> u"asdf" == "asdf?" Traceback (most recent call last): File "", line 1, in ? UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 4: ordinal not in range(128) >>> e1 == "asdf?" False >>> e1 == u"asdf?" False >>> ---------------------------------------------------------------------- >Comment By: Christian Theune (ctheune) Date: 2004-06-14 18:16 Message: Logged In: YES user_id=100622 That's what Jim and I found to be the best solution (for now) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-14 18:06 Message: Logged In: YES user_id=38388 I'm not sure what you are suggesting here... do you want u"asdf" == "asdf?" to return False ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971106&group_id=5470 From noreply at sourceforge.net Mon Jun 14 16:28:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 16:28:28 2004 Subject: [ python-Bugs-969415 ] CJK codecs list incomplete Message-ID: Bugs item #969415, was opened at 2004-06-09 08:54 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Mike Brown (mike_j_brown) >Assigned to: Nobody/Anonymous (nobody) Summary: CJK codecs list incomplete Initial Comment: http://www.python.org/dev/doc/devel/whatsnew/node7. html states that various CJK encodings have been added, but the list given there does not match the list on http://www.python.org/dev/doc/devel/lib/node128.html. In particular, missing from the latter list are all of the aliases with hyphens: shift-jis, shift-jisx0213, euc-jp, euc-jisx0213, iso-2022- jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso- 2022-jp-ext, euc-kr, iso-2022-kr Since I successfully ran codecs.lookup() tests on a few of the hyphenated aliases, I assume that the omission of the hyphenated versions in the docs is merely an oversight. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-14 22:28 Message: Logged In: YES user_id=21627 Assigning to somebody else without asking for permission is impolite, IMO; unassigning the report from anybody. ---------------------------------------------------------------------- Comment By: M.-A. 
Lemburg (lemburg) Date: 2004-06-12 14:04 Message: Logged In: YES user_id=38388 I think that it might be a good idea to document of how the standard search function of the encodings package work at the top of that page, namely to normalize encoding names before doing the lookup: """ Normalization works as follows: all non-alphanumeric characters except the dot used for Python package names are collapsed and replaced with a single underscore, e.g. ' -;#' becomes '_'. Leading and trailing underscores are removed. Note that encoding names should be ASCII only; if they do use non-ASCII characters, these must be Latin-1 compatible. """ The table should then only list normalized encoding names (which I think is already the case). ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:59 Message: Logged In: YES user_id=21627 Actually, the top of the page does already say Notice that spelling alternatives that only differ in case or use a hyphen instead of an underscore are also valid aliases. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-12 13:54 Message: Logged In: YES user_id=21627 It is just not feasible to list all recognized aliases. For example, for ISO-8859-1, there are trivial 31 aliases, including Iso_8859-1 and iSO-8859_1. For shift_jisx0213, there are 1023 trivial aliases. The aliases column in the documentation should only list non-trivial aliases, and for these, it should list a form that people are most likely to encounter. So if "s-jis" would be more common than "s_jis", this is what should be listed. If s-JIS is even more common, this should be listed. The top of the page should say that case in encoding names does not matter, and that _ and - can be freely substituted. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:05 Message: Logged In: YES user_id=80475 Mark, would you pronounce on this one. ---------------------------------------------------------------------- Comment By: Mike Brown (mike_j_brown) Date: 2004-06-09 10:25 Message: Logged In: YES user_id=371366 I see no reason to omit any aliases that are recognized, especially when the aliases in question are, more often than not, the IANA's preferred MIME name as shown at http://www.iana.org/assignments/character-sets. I was looking in the docs to see if Python 2.4 was going to support 'euc-jp', and was dismayed to see 'euc_jp' and variants but no 'euc-jp'. I had to obtain and install 2.4a0 to test to find out that it was just a documentation problem. Please consider listing all realnames and aliases. ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:10 Message: Logged In: YES user_id=55188 Reopened to consider the consistence with non-cjk codecs. All the non-cjk codecs are written with hyphen even if their realname is with underscore. (eg. iso8859-1 and iso8859_1.py) Will changing cjk codecs's codec/alias names to use not underscores but hyphens make docs more friendly? ---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-09 09:01 Message: Logged In: YES user_id=55188 All hyphens are translated as underscores in encoding lookups. So we may not need to provide encoding list with hyphens additionally. 
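To make the lookup behaviour perky and MAL describe concrete, here is a rough sketch of the name normalization, written from the description quoted above rather than from the actual code in the encodings package:

import re

def normalize_encoding_name(name):
    # Runs of characters other than letters, digits and '.' collapse to a
    # single underscore; leading and trailing underscores are dropped.
    # codecs.lookup() is also case-insensitive, so lowercase as well.
    return re.sub(r'[^A-Za-z0-9.]+', '_', name).strip('_').lower()

# normalize_encoding_name('EUC-JP') and normalize_encoding_name('euc_jp')
# both give 'euc_jp', which is why either spelling finds the same codec.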
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969415&group_id=5470 From noreply at sourceforge.net Mon Jun 14 17:07:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 17:07:13 2004 Subject: [ python-Bugs-890872 ] BSDDB set_location bug Message-ID: Bugs item #890872, was opened at 2004-02-05 09:08 Message generated for change (Comment added) made by chomo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=890872&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Mike C. Fletcher (mcfletch) Assigned to: Nobody/Anonymous (nobody) Summary: BSDDB set_location bug Initial Comment: In module \lib\bsddb\__init__.py line 147, the method set_location() uses self.dbc.set(key) , which raises an error if the key is not found. It should be using self.dbc.set_range(key) , which matches the documented behaviour for btree databases. Test for correct behaviour is attached. Behaviour is seen under Python 2.3.3 win32 binary build. ---------------------------------------------------------------------- Comment By: alan johnson (chomo) Date: 2004-06-14 21:07 Message: Logged In: YES user_id=943591 looks like need testcase ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=890872&group_id=5470 From noreply at sourceforge.net Mon Jun 14 21:41:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 21:41:31 2004 Subject: [ python-Bugs-972917 ] attached script causes python segfault Message-ID: Bugs item #972917, was opened at 2004-06-15 01:41 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Faheem Mitha (fmitha) Assigned to: Nobody/Anonymous (nobody) Summary: attached script causes python segfault Initial Comment: I have filed a detailed report at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=252517 This corresponds to Debian bug #252517 The python maintainer, Matthias Klose, can reproduce this without any Debian patches. On the other hand, I cannot reproduce it on either of the two RedHat machines I tried. In any case, it seems possible this is an upstream problem. The operating system is Debian sarge on i386 (Athlon XP 1700). The version of python I am using 2.3.3, the current Debian defaults. The relevant Debian packages are ii python2.3 2.3.4-1 ii python2.3-numarray 0.9-3 For convenience, I attach a tar.gz file (python_debug.tar.gz) which contains enough information to reproduce the problem. See the included README. Please refer to the Debian bug report for more information and comments. Please also copy (if possible) the Debian bug report with relevant info. You can do so by emailing/ccing 252517@bugs.debian.org. Thanks. Faheem. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 From noreply at sourceforge.net Mon Jun 14 21:48:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 21:48:46 2004 Subject: [ python-Bugs-972917 ] attached script causes python segfault Message-ID: Bugs item #972917, was opened at 2004-06-14 21:41 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Faheem Mitha (fmitha) Assigned to: Nobody/Anonymous (nobody) Summary: attached script causes python segfault Initial Comment: I have filed a detailed report at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=252517 This corresponds to Debian bug #252517 The python maintainer, Matthias Klose, can reproduce this without any Debian patches. On the other hand, I cannot reproduce it on either of the two RedHat machines I tried. In any case, it seems possible this is an upstream problem. The operating system is Debian sarge on i386 (Athlon XP 1700). The version of python I am using 2.3.3, the current Debian defaults. The relevant Debian packages are ii python2.3 2.3.4-1 ii python2.3-numarray 0.9-3 For convenience, I attach a tar.gz file (python_debug.tar.gz) which contains enough information to reproduce the problem. See the included README. Please refer to the Debian bug report for more information and comments. Please also copy (if possible) the Debian bug report with relevant info. You can do so by emailing/ccing 252517@bugs.debian.org. Thanks. Faheem. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-14 21:48 Message: Logged In: YES user_id=31435 There's no file attached. Note that SourceForge requires that you check the "Check to Upload and Attach a File" box, it's not enough just to fill in the filepath name. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 From noreply at sourceforge.net Mon Jun 14 22:13:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 22:14:00 2004 Subject: [ python-Bugs-972917 ] attached script causes python segfault Message-ID: Bugs item #972917, was opened at 2004-06-15 10:41 Message generated for change (Comment added) made by perky You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Faheem Mitha (fmitha) Assigned to: Nobody/Anonymous (nobody) Summary: attached script causes python segfault Initial Comment: I have filed a detailed report at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=252517 This corresponds to Debian bug #252517 The python maintainer, Matthias Klose, can reproduce this without any Debian patches. On the other hand, I cannot reproduce it on either of the two RedHat machines I tried. In any case, it seems possible this is an upstream problem. The operating system is Debian sarge on i386 (Athlon XP 1700). The version of python I am using 2.3.3, the current Debian defaults. 
The relevant Debian packages are ii python2.3 2.3.4-1 ii python2.3-numarray 0.9-3 For convenience, I attach a tar.gz file (python_debug.tar.gz) which contains enough information to reproduce the problem. See the included README. Please refer to the Debian bug report for more information and comments. Please also copy (if possible) the Debian bug report with relevant info. You can do so by emailing/ccing 252517@bugs.debian.org. Thanks. Faheem. ---------------------------------------------------------------------- >Comment By: Hye-Shik Chang (perky) Date: 2004-06-15 11:13 Message: Logged In: YES user_id=55188 It's just a stack overflow due to excessive recursion calls of factorial(). Your code: sys.setrecursionlimit(1000000) This may very dangerous in CPython. Consider spanning factorial() function to for-loops or using Stackless Python. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-15 10:48 Message: Logged In: YES user_id=31435 There's no file attached. Note that SourceForge requires that you check the "Check to Upload and Attach a File" box, it's not enough just to fill in the filepath name. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 From noreply at sourceforge.net Mon Jun 14 22:34:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 14 22:34:31 2004 Subject: [ python-Bugs-972917 ] attached script causes python segfault Message-ID: Bugs item #972917, was opened at 2004-06-14 21:41 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Faheem Mitha (fmitha) Assigned to: Nobody/Anonymous (nobody) Summary: attached script causes python segfault Initial Comment: I have filed a detailed report at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=252517 This corresponds to Debian bug #252517 The python maintainer, Matthias Klose, can reproduce this without any Debian patches. On the other hand, I cannot reproduce it on either of the two RedHat machines I tried. In any case, it seems possible this is an upstream problem. The operating system is Debian sarge on i386 (Athlon XP 1700). The version of python I am using 2.3.3, the current Debian defaults. The relevant Debian packages are ii python2.3 2.3.4-1 ii python2.3-numarray 0.9-3 For convenience, I attach a tar.gz file (python_debug.tar.gz) which contains enough information to reproduce the problem. See the included README. Please refer to the Debian bug report for more information and comments. Please also copy (if possible) the Debian bug report with relevant info. You can do so by emailing/ccing 252517@bugs.debian.org. Thanks. Faheem. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-14 22:34 Message: Logged In: YES user_id=31435 Looking at the original Debian report, I agree with perky that the recursive factorial is most likely to blame. Python sets the default recursion limit to a much smaller value, because it can't reliably detect overflow of the C stack. If you boost it above its default, you indeed risk segfaults. The docs for setrecursion limit warn about this, too. 
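The offending script is only available through the Debian report, so as a generic illustration of perky's advice: rewrite the deep recursion as a loop, and neither the Python recursion limit nor the C stack comes into play.

def factorial(n):
    # Iterative equivalent of a recursive factorial(): constant stack
    # depth, so no sys.setrecursionlimit() bump (and no segfault risk).
    result = 1
    for i in xrange(2, n + 1):
        result *= i
    return result

# factorial(100000) runs fine here; a recursive version run with
# sys.setrecursionlimit(1000000) can overflow the C stack and crash.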
---------------------------------------------------------------------- Comment By: Hye-Shik Chang (perky) Date: 2004-06-14 22:13 Message: Logged In: YES user_id=55188 It's just a stack overflow due to excessive recursion calls of factorial(). Your code: sys.setrecursionlimit(1000000) This may very dangerous in CPython. Consider spanning factorial() function to for-loops or using Stackless Python. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-14 21:48 Message: Logged In: YES user_id=31435 There's no file attached. Note that SourceForge requires that you check the "Check to Upload and Attach a File" box, it's not enough just to fill in the filepath name. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972917&group_id=5470 From noreply at sourceforge.net Tue Jun 15 03:55:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 03:55:35 2004 Subject: [ python-Bugs-973054 ] bsddb in Python 2.3 incompatible with SourceNav output Message-ID: Bugs item #973054, was opened at 2004-06-15 09:55 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973054&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: gdy (gdy999) Assigned to: Nobody/Anonymous (nobody) Summary: bsddb in Python 2.3 incompatible with SourceNav output Initial Comment: bsddb module in Python 2.3 cannot read SourceNavigator output (old btree format): Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import bsddb >>> a = bsddb.btopen('jEdit_14-12-2004.cl') Traceback (most recent call last): File "", line 1, in ? File "C:\Program Files\Python23\lib\bsddb\__init__.py", line 209, in btopen d.open(file, db.DB_BTREE, flags, mode) bsddb._db.DBInvalidArgError: (22, 'Invalid argument -- jEdit_14-12-2004.cl: unexpected file type or format') With Python 2.2 the file is opened without problem. SourceNavigator is an open source source code analyser (http://sourcenav.sourceforge.net/) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973054&group_id=5470 From noreply at sourceforge.net Tue Jun 15 05:09:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 05:10:06 2004 Subject: [ python-Bugs-973092 ] inspect.getframeinfo bug if 'context' is to big Message-ID: Bugs item #973092, was opened at 2004-06-15 11:09 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973092&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Hans Georg Krauthaeuser (krauthae) Assigned to: Nobody/Anonymous (nobody) Summary: inspect.getframeinfo bug if 'context' is to big Initial Comment: In inspect.getframeinfo index gets wrong if context is larger than len(lines). the lines are: ... 
if context > 0: start = lineno - 1 - context//2 try: lines, lnum = findsource(frame) except IOError: lines = index = None else: start = max(start, 1) start = min(start, len(lines) - context) ^^^^^^^^^^^^^^^^^^^^^^^^ lines = lines[start:start+context] index = lineno - 1 - start else: lines = index = None ... The 'min' statement gives a negative start if context is to large. As a result index gets to large. The solution is just to put context = min(context,len(lines)) after the first 'else:' Regards ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973092&group_id=5470 From noreply at sourceforge.net Tue Jun 15 05:36:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 05:50:09 2004 Subject: [ python-Bugs-973103 ] empty raise after handled exception Message-ID: Bugs item #973103, was opened at 2004-06-15 09:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973103&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Niki Spahiev (nikis) Assigned to: Nobody/Anonymous (nobody) Summary: empty raise after handled exception Initial Comment: executing empty raise after handled exception produces wrong traceback. no exception: Traceback (most recent call last): File "bug.py", line 19, in ? test(i) File "bug.py", line 15, in test badfn() File "bug.py", line 5, in badfn raise TypeError: exceptions must be classes, instances, or strings (deprecated), not NoneType handled exception: no Traceback (most recent call last): File "bug.py", line 19, in ? test(i) File "bug.py", line 15, in test badfn() File "bug.py", line 11, in test print d[12345] KeyError: 12345 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973103&group_id=5470 From noreply at sourceforge.net Tue Jun 15 07:23:25 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 07:38:35 2004 Subject: [ python-Bugs-973092 ] inspect.getframeinfo bug if 'context' is to big Message-ID: Bugs item #973092, was opened at 2004-06-15 04:09 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973092&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Hans Georg Krauthaeuser (krauthae) Assigned to: Nobody/Anonymous (nobody) Summary: inspect.getframeinfo bug if 'context' is to big Initial Comment: In inspect.getframeinfo index gets wrong if context is larger than len(lines). the lines are: ... if context > 0: start = lineno - 1 - context//2 try: lines, lnum = findsource(frame) except IOError: lines = index = None else: start = max(start, 1) start = min(start, len(lines) - context) ^^^^^^^^^^^^^^^^^^^^^^^^ lines = lines[start:start+context] index = lineno - 1 - start else: lines = index = None ... The 'min' statement gives a negative start if context is to large. As a result index gets to large. 
The solution is just to put context = min(context,len(lines)) after the first 'else:' Regards ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-15 06:23 Message: Logged In: YES user_id=80475 Fixed. See Lib/inspect.py 1.51 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973092&group_id=5470 From noreply at sourceforge.net Tue Jun 15 14:50:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 15:18:25 2004 Subject: [ python-Bugs-924703 ] test_unicode_file fails on Win98SE Message-ID: Bugs item #924703, was opened at 2004-03-28 03:48 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 Category: Unicode Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 7 Submitted By: Tim Peters (tim_one) Assigned to: Martin v. L?wis (loewis) Summary: test_unicode_file fails on Win98SE Initial Comment: In current CVS, test_unicode_file fails on Win98SE. This has been going on for some time, actually. ERROR: test_single_files (__main__.TestUnicodeFiles) Traceback (most recent call last): File ".../lib/test/test_unicode_file.py", line 162, in test_single_files self._test_single(TESTFN_UNICODE) File ".../lib/test/test_unicode_file.py", line 136, in _test_single self._do_single(filename) File ".../lib/test/test_unicode_file.py", line 49, in _do_single new_base = unicodedata.normalize("NFD", new_base) TypeError: normalized() argument 2 must be unicode, not str At this point, filename is TESTFN_UNICODE is u'@test-\xe0\xf2' os.path.abspath(filename) is 'C:\Code\python\PC\VC6\@test-\xe0\xf2' new_base is '@test-\xe0\xf2 So abspath() removed the "Unicodeness" of filename, and new_base is indeed not a Unicode string at this point. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-15 20:50 Message: Logged In: YES user_id=21627 This should be fixed with posixmodule.c 2.321. Unfortunately, I cannot test it, because I don't have W9X. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 09:11 Message: Logged In: YES user_id=80475 This is still failing. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-30 07:44 Message: Logged In: YES user_id=31435 Just a guess: the os.path functions are generally just string manipulation, and on Windows I sometimes import posixpath.py directly to do Unixish path manipulations. So it's conceivable that someone (not me) on a non-Windows box imports ntpath directly to manipulate Windows paths. In fact, I see that Fredrik's "Python Standard Library" book explicitly mentions this use case for importing ntpath directly. So maybe he actually did it -- once . ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-30 07:25 Message: Logged In: YES user_id=21627 I see. I'll look into changing _getfullpathname to return Unicode output for Unicode input even if unicode_file_names() is false. However, I do wonder what the purpose of _abspath then is: On what system would it be used??? 
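Purely as an illustration of what a test-side workaround for this failure could look like (this is not the committed fix, which went into posixmodule.c as noted above, and the helper name is invented):

import os, sys, unicodedata

def normalized_base(filename):
    # On Win9x, os.path.abspath() can hand back a byte string even for a
    # unicode argument, and unicodedata.normalize() then rejects it.
    # Coerce back to unicode using the filesystem encoding before
    # normalizing; where abspath() preserves unicode this is a no-op.
    base = os.path.basename(os.path.abspath(filename))
    if not isinstance(base, unicode):
        base = unicode(base, sys.getfilesystemencoding())
    return unicodedata.normalize("NFD", base)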
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-30 01:11 Message: Logged In: YES user_id=31435 Nope, that can't help -- ntpath.py's _abspath doesn't exist on Win98SE (the "from nt import _getfullpathname" succeeds, so _abspath is never defined). It's _getfullpathname() that's taking a Unicode input and returning a str output here. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-30 00:17 Message: Logged In: YES user_id=21627 abspath(unicode) should return a Unicode path. Does it help if _abspath (in ntpath.py) is changed to contain if not isabs(path): if isinstance(path, unicode): cwd = os.getcwdu() else: cwd = os.getcwd() path = join(cwd, path) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 From noreply at sourceforge.net Tue Jun 15 16:09:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 16:09:33 2004 Subject: [ python-Bugs-964870 ] sys.getfilesystemencoding() Message-ID: Bugs item #964870, was opened at 2004-06-02 09:15 Message generated for change (Comment added) made by manlioperillo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.getfilesystemencoding() Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. In the documentation it is reported that: 'On Windows NT+, file names are Unicode natively, so no conversion is performed'. But: import sys >>> sys.getfilesystemencoding() 'mbcs' Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-15 20:09 Message: Logged In: YES user_id=1054957 I'm not an expert but... mbcs is the encoding used by Windows 9x. This is not clear, if the encoding for win NT+ is 'mbcs' this should be written in the documentation explicitly. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 12:40 Message: Logged In: YES user_id=21627 The documentation is correct. Filenames are not converted. 
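What the 'mbcs' answer means in practice (sketched from the behaviour discussed in this report; exact output depends on the system):

import sys, os

# sys.getfilesystemencoding() names the codec used only when a byte-string
# file name has to be converted; a unicode file name is handed to the wide
# Win32 APIs as-is on NT+.
print sys.getfilesystemencoding()   # 'mbcs' on Windows

# Passing unicode in gets unicode names back; passing a str path keeps
# everything as byte strings (and hence goes through 'mbcs' when needed).
print os.listdir(u'.')[:3]
print os.listdir('.')[:3]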
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 From noreply at sourceforge.net Tue Jun 15 18:43:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 18:43:32 2004 Subject: [ python-Bugs-973579 ] Doc error on super(cls,self) Message-ID: Bugs item #973579, was opened at 2004-06-15 15:43 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973579&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: David MacQuigg (macquigg) Assigned to: Nobody/Anonymous (nobody) Summary: Doc error on super(cls,self) Initial Comment: In both the Library Reference, section 2.1, and in the Python 2.2 Quick Reference, page 19, the explanation for this function is: super( type[, object-or-type]) Returns the superclass of type. ... This is misleading. I could not get this function to work right until I realized that it is searching the entire MRO, not just the superclasses of 'type'. See comp.lang.python 6/11/04, same subject as above, for further discussion and an example. I think a better explanation would be: super(cls,self).m(arg) Calls method 'm' from a class in the MRO (Method Resolution Order) of 'self'. The selected class is the first one which is above 'cls' in the MRO and which contains 'm'. The 'super' built-in function actually returns not a class, but a 'super' object. This object can be saved, like a bound method, and later used to do a new search of the MRO for a different method to be applied to the saved instance. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973579&group_id=5470 From noreply at sourceforge.net Tue Jun 15 16:34:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 19:00:36 2004 Subject: [ python-Bugs-973507 ] sys.stdout problems with pythonw.exe Message-ID: Bugs item #973507, was opened at 2004-06-15 20:34 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I have written this script for reproducing the bug: import sys class teeIO: def __init__(self, *files): self.__files = files def write(self, str): for i in self.__files: print >> trace, 'writing on %s: %s' % (i, str) i.write(str) print >> trace, '-' * 70 def tee(*files): return teeIO(*files) log = file('log.txt', 'w') err = file('err.txt', 'w') trace = file('trace.txt', 'w') sys.stdout = tee(log, sys.__stdout__) sys.stderr = tee(err, sys.__stderr__) def write(n, width): sys.stdout.write('x' * width) if n == 1: return write(n - 1, width) try: 1/0 except: write(1, 4096) [output from err.log] Traceback (most recent call last): File "sys.py", line 36, in ? 
write(1, 4096) File "sys.py", line 28, in write sys.stdout.write('x' * width) File "sys.py", line 10, in write i.write(str) IOError: [Errno 9] Bad file descriptor TeeIO is needed for actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, ad example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 From noreply at sourceforge.net Tue Jun 15 08:09:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 19:06:01 2004 Subject: [ python-Feature Requests-776100 ] new function 'islastline' in fileinput Message-ID: Feature Requests item #776100, was opened at 2003-07-23 02:49 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=776100&group_id=5470 >Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Edouard Hinard (edhi) Assigned to: Nobody/Anonymous (nobody) Summary: new function 'islastline' in fileinput Initial Comment: a function 'islastline' is missing to fileinput module. when that function exists, it is possible to perform an action at the end of a file like in: for line in fileinput.input(): processline(line) if fileinput.lastline(): processfile() Without this function, the same script looks like: for line in fileinput.input(): if fileinput.isfirstline() and fileinput.lineno() != 1: processfile() processline(line) processfile() ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-15 07:09 Message: Logged In: YES user_id=80475 Reclassifying as a feature request. I'm -0 on complicating this venerable interface. Also, I find it unnatural for the inside of a loop to be able to ascertain whether it is in its final iteration. The closest approximation is the for-else clause which is vary rarely used in normal python programming. While the OP was able to sketch a use case, it involved only saving a couple of lines over what can be done with the existing API. P.S. 
There is a less buggy version of the patch at www.python.org/sf/968063 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=776100&group_id=5470 From noreply at sourceforge.net Tue Jun 15 19:09:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 19:09:56 2004 Subject: [ python-Bugs-973507 ] sys.stdout problems with pythonw.exe Message-ID: Bugs item #973507, was opened at 2004-06-15 16:34 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I have written this script for reproducing the bug: import sys class teeIO: def __init__(self, *files): self.__files = files def write(self, str): for i in self.__files: print >> trace, 'writing on %s: %s' % (i, str) i.write(str) print >> trace, '-' * 70 def tee(*files): return teeIO(*files) log = file('log.txt', 'w') err = file('err.txt', 'w') trace = file('trace.txt', 'w') sys.stdout = tee(log, sys.__stdout__) sys.stderr = tee(err, sys.__stderr__) def write(n, width): sys.stdout.write('x' * width) if n == 1: return write(n - 1, width) try: 1/0 except: write(1, 4096) [output from err.log] Traceback (most recent call last): File "sys.py", line 36, in ? write(1, 4096) File "sys.py", line 28, in write sys.stdout.write('x' * width) File "sys.py", line 10, in write i.write(str) IOError: [Errno 9] Bad file descriptor TeeIO is needed for actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, ad example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-15 19:09 Message: Logged In: YES user_id=31435 Ya, this is well known, although it may not be documented. pythonw's purpose in life is *not* to create (or inherit) a console window (a "DOS box"). Therefore stdin, stdout, and stderr aren't attached to anything usable. Microsoft's C runtime seems to attach them to buffers that aren't connected to anything, so they complain if you ever exceed the buffer size. The short course is that stdin, stdout and stderr are useless in programs without a console window, so you shouldn't use them. Or you should you install your own file-like objects, and make them do something useful to you. I think it would be helpful if pythonw did something fancier (e.g., pop up a window containing attempted output), but that's in new-feature terrority, and nobody has contributed code for it anyway. 
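One common workaround, sketched here only as an illustration (the console check below is a heuristic, not something from this report): install harmless file-like objects early in the script so that stray writes to stdout/stderr cannot overflow the dead buffers under pythonw.exe.

import sys, os

class NullWriter:
    # accepts and silently discards anything written to it
    def write(self, data):
        pass
    def flush(self):
        pass

# crude heuristic: pythonw.exe is the no-console interpreter on Windows
if os.path.basename(sys.executable).lower() == 'pythonw.exe':
    sys.stdout = NullWriter()
    sys.stderr = open('stderr.log', 'w')   # or NullWriter(), if errors may be dropped too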
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470

From noreply at sourceforge.net  Tue Jun 15 23:10:16 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Tue Jun 15 23:10:24 2004
Subject: [ python-Bugs-781883 ] Listbox itemconfigure leaks memory
Message-ID: 

Bugs item #781883, was opened at 2003-08-02 04:01
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=781883&group_id=5470

Category: Tkinter
>Group: 3rd Party
>Status: Closed
>Resolution: Rejected
Priority: 5
Submitted By: Angelo Bonet (abonet)
>Assigned to: Neal Norwitz (nnorwitz)
Summary: Listbox itemconfigure leaks memory

Initial Comment:
Calling itemconfigure on Tkinter.Listbox to change item colors seems to leak memory without bounds. I've seen this in Python 2.2 and 2.3 on SunOS, Tru64, and Linux. Here's a small script that demonstrates it:

import Tkinter as Tk

Lb = None

def update_lb():
    global Lb
    Lb.delete(0, Tk.END)
    for ii in range(100):
        Lb.insert(Tk.END, 'Item %d' % ii)
        Lb.itemconfigure(ii, bg='red')
    Lb.after(10, update_lb)

root = Tk.Tk()
Lb = Tk.Listbox(root)
Lb.pack()
Lb.after(1000, update_lb)
root.mainloop()

----------------------------------------------------------------------

>Comment By: Neal Norwitz (nnorwitz)
Date: 2004-06-15 23:10

Message:
Logged In: YES
user_id=33168

Closing this since it seems to be related to Tcl/Tk, not Python.

----------------------------------------------------------------------

Comment By: Jeff Epler (jepler)
Date: 2003-08-03 11:24

Message:
Logged In: YES
user_id=2772

The problem can be seen in an equivalent tcl script. This may indicate that the problem is not in Python itself, but in the third-party Tcl/Tk library. The script follows. I hope indentation isn't lost, but at least Tk has curly braces for just such an emergency.

proc update_lb {} {
    .lb delete 0 end
    for {set i 0} {$i < 100} {incr i} {
        .lb insert end "Item $i"
        .lb itemco $i -bg red
    }
    after 10 update_lb
}
listbox .lb
pack .lb
after 10 update_lb

----------------------------------------------------------------------

Comment By: Angelo Bonet (abonet)
Date: 2003-08-02 16:40

Message:
Logged In: YES
user_id=716439

Calling gc.collect() seems to reduce the rate at which memory is leaking, but it does not eliminate the leak.

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger)
Date: 2003-08-02 05:01

Message:
Logged In: YES
user_id=80475

Try a gc.collect() to see if the problem persists.

----------------------------------------------------------------------

Comment By: Angelo Bonet (abonet)
Date: 2003-08-02 04:06

Message:
Logged In: YES
user_id=716439

Sorry, but indentation got lost on the script sample. Look at the attached file instead.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=781883&group_id=5470 From noreply at sourceforge.net Tue Jun 15 23:11:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 23:11:54 2004 Subject: [ python-Bugs-782689 ] PyObject_Free unallocated memory read warning Message-ID: Bugs item #782689, was opened at 2003-08-04 04:38 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=782689&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Grzegorz Makarewicz (makaron) >Assigned to: Neal Norwitz (nnorwitz) Summary: PyObject_Free unallocated memory read warning Initial Comment: win2k, vc6sp5 ADDRESS_IN_RANGE(p, pool->arenaindex)) is too simple and may read unmanaged memory when "p" does not belong to selected pool and there is something allocated by python allocator and pool->arenaindex is smaller than narenas (random case). valgrind messages for PyObject_Free line 711: Conditional jump or move depends on uninitialised value(s) Use of uninitialised value of size 4 Invalid read of size 4 simple test: #include extern void *PyObject_Malloc(int size); extern void PyObject_Free(void *mem); void main() { void *p0; void *p; int i; p0 = PyObject_Malloc(100); for(i = 1; i < 512; i++ ){ p = PyObject_Malloc(i); PyObject_Free(p); } PyObject_Free(p0); } ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-15 23:11 Message: Logged In: YES user_id=33168 Misc/README.valgrind was added as well as a default valgrind suppression file that can be used. ---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2003-08-10 15:40 Message: Logged In: YES user_id=2772 Please see: http://mail.python.org/pipermail/python-dev/2002-October/029712.html http://mail.python.org/pipermail/python-dev/2003-July/036740.html .. and possibly other past python-dev threads. It is believed that this UMR is safe (non-segfaulting) on systems with a few assumptions about memory allocation. When the first test erroneously passes for a block not from a pool, the next check still gets the right result. If the valgrind suppressions file isn't included in the Python distribution, perhaps it should be added. (how portable have these files been across valgrind versions?) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=782689&group_id=5470 From noreply at sourceforge.net Tue Jun 15 23:14:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 15 23:14:10 2004 Subject: [ python-Bugs-788526 ] Closing dbenv first bsddb doesn't release locks & segfau Message-ID: Bugs item #788526, was opened at 2003-08-14 01:13 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jane Austine (janeaustine50) >Assigned to: Gregory P. 
Smith (greg) >Summary: Closing dbenv first bsddb doesn't release locks & segfau Initial Comment: There is a test code named test_env_close in bsddb/test, but it doesn't test the case thoroughly. There seems to be a bug in closing the db environment first -- the lock is not released, and sometimes it seg-faults. Following is the code that shows this bug. import os from bsddb import db dir,dbname='test_dbenv','test_db' def getDbEnv(dir): try: os.mkdir(dir) except: pass dbenv = db.DBEnv() dbenv.open(dir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_INIT_MPOOL) return dbenv def getDbHandler(db_env,db_name): d = db.DB(dbenv) d.open(db_name, db.DB_BTREE, db.DB_CREATE) return d dbenv=getDbEnv(dir) assert dbenv.lock_stat()['nlocks']==0 d=getDbHandler(dbenv,dbname) assert dbenv.lock_stat()['nlocks']==1 try: dbenv.close() except db.DBError: pass else: assert 0 del d import gc gc.collect() dbenv=getDbEnv(dir) assert dbenv.lock_stat()['nlocks']==0,'number of current locks should be 0' #this fails If you close dbenv before db handler, the lock is not released. Moreover, try this with dbshelve and it segfaults. >>> from bsddb import dbshelve >>> dbenv2=getDbEnv('test_dbenv2') >>> d2=dbshelve.open(dbname,dbenv=dbenv2) >>> try: ... dbenv2.close() ... except db.DBError: ... pass ... else: ... assert 0 ... >>> >>> Exception bsddb._db.DBError: (0, 'DBEnv object has been closed') in Segmentation fault Tested on: 1. linux with Python 2.3 final, Berkeley DB 4.1.25 2. windows xp with Python 2.3 final (with _bsddb that comes along) ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-15 23:14 Message: Logged In: YES user_id=33168 Greg do you know anything about this? Is it still a problem? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470 From noreply at sourceforge.net Wed Jun 16 00:54:19 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 16 00:54:26 2004 Subject: [ python-Bugs-964870 ] sys.getfilesystemencoding() Message-ID: Bugs item #964870, was opened at 2004-06-02 11:15 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.getfilesystemencoding() Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. In the documentation it is reported that: 'On Windows NT+, file names are Unicode natively, so no conversion is performed'. But: import sys >>> sys.getfilesystemencoding() 'mbcs' Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-16 06:54 Message: Logged In: YES user_id=21627 When filenames need to be converted between byte strings and unicode strings, "mbcs" is the right encoding even on Windows NT. Python rarely needs to perform this conversion, as it passes Unicode strings directly to the operating system. It might be that applications have the need to perform the conversion themselves. I have added a comment in this direction in libsys.tex 1.72. 
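A short illustration of the conversion described above; the file name is made up, and the snippet assumes a Windows box where sys.getfilesystemencoding() returns 'mbcs'.

import sys

name_u = u'pr\xe9cis.txt'                # unicode file name (hypothetical)
fs_enc = sys.getfilesystemencoding()     # 'mbcs' on Windows
name_b = name_u.encode(fs_enc)           # byte-string form for 8-bit file APIs
print name_b.decode(fs_enc) == name_u    # round-trips back to the unicode name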
---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-15 22:09 Message: Logged In: YES user_id=1054957 I'm not an expert but... mbcs is the encoding used by Windows 9x. This is not clear, if the encoding for win NT+ is 'mbcs' this should be written in the documentation explicitly. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-02 14:40 Message: Logged In: YES user_id=21627 The documentation is correct. Filenames are not converted. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=964870&group_id=5470 From noreply at sourceforge.net Wed Jun 16 15:19:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 16 15:31:38 2004 Subject: [ python-Bugs-974159 ] Starting a script in OSX within a specific folder Message-ID: Bugs item #974159, was opened at 2004-06-16 21:19 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974159&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Floris van Manen (klankschap) Assigned to: Nobody/Anonymous (nobody) Summary: Starting a script in OSX within a specific folder Initial Comment: How do I start a Python script in OSX in such a way that the cwd folder remains de same as the one in which the script is located. And not the root of the current user, or the Content folder of an application package. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974159&group_id=5470 From noreply at sourceforge.net Wed Jun 16 18:18:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 16 18:23:46 2004 Subject: [ python-Bugs-788526 ] Closing dbenv first bsddb doesn't release locks & segfau Message-ID: Bugs item #788526, was opened at 2003-08-13 22:13 Message generated for change (Comment added) made by greg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jane Austine (janeaustine50) Assigned to: Gregory P. Smith (greg) Summary: Closing dbenv first bsddb doesn't release locks & segfau Initial Comment: There is a test code named test_env_close in bsddb/test, but it doesn't test the case thoroughly. There seems to be a bug in closing the db environment first -- the lock is not released, and sometimes it seg-faults. Following is the code that shows this bug. import os from bsddb import db dir,dbname='test_dbenv','test_db' def getDbEnv(dir): try: os.mkdir(dir) except: pass dbenv = db.DBEnv() dbenv.open(dir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_INIT_MPOOL) return dbenv def getDbHandler(db_env,db_name): d = db.DB(dbenv) d.open(db_name, db.DB_BTREE, db.DB_CREATE) return d dbenv=getDbEnv(dir) assert dbenv.lock_stat()['nlocks']==0 d=getDbHandler(dbenv,dbname) assert dbenv.lock_stat()['nlocks']==1 try: dbenv.close() except db.DBError: pass else: assert 0 del d import gc gc.collect() dbenv=getDbEnv(dir) assert dbenv.lock_stat()['nlocks']==0,'number of current locks should be 0' #this fails If you close dbenv before db handler, the lock is not released. 
Moreover, try this with dbshelve and it segfaults. >>> from bsddb import dbshelve >>> dbenv2=getDbEnv('test_dbenv2') >>> d2=dbshelve.open(dbname,dbenv=dbenv2) >>> try: ... dbenv2.close() ... except db.DBError: ... pass ... else: ... assert 0 ... >>> >>> Exception bsddb._db.DBError: (0, 'DBEnv object has been closed') in Segmentation fault Tested on: 1. linux with Python 2.3 final, Berkeley DB 4.1.25 2. windows xp with Python 2.3 final (with _bsddb that comes along) ---------------------------------------------------------------------- >Comment By: Gregory P. Smith (greg) Date: 2004-06-16 15:18 Message: Logged In: YES user_id=413 Yes this bug is still there. A "workaround" is just a "don't do that" when it comes to closing sleepycat DBEnv objects while there are things using them still open. I believe we can prevent this... One proposal: internally in _bsddb.c DBEnv could be made to keep a weak reference to all objects created using it (DB and DBLock objects) and refuse to call the sleepycat close() method if any still exist (overridable using a force=1 flag). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-15 20:14 Message: Logged In: YES user_id=33168 Greg do you know anything about this? Is it still a problem? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470 From noreply at sourceforge.net Wed Jun 16 18:23:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 06:15:03 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Guido van Rossum (gvanrossum) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Armin Rigo (arigo) Date: 2004-06-16 22:23 Message: Logged In: YES user_id=4771 Let's be careful here. I can imagine that some __int__() implementations in user code would return a long instead, as it is the behavior of int(something_too_big) already. As far as I know, the original bug this tracker is for is the only place where it was blindly assumed that a specific conversion method returned an object of a specific type. I'm fine with just fixing that case. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-08 02:54 Message: Logged In: YES user_id=35752 Attaching patch. One outstanding issue is that it may make sense to search for and remove unnecessary type checks (e.g. PyNumber_Int followed by PyInt_Check). Also, this change only broke one test case but I have no idea how much user code this might break. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 15:37 Message: Logged In: YES user_id=31435 Assigned to Neil, as a reminder to attach his patch. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 21:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). 
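Until the interpreter-level fix lands, a user-level guard is possible; the following is only a sketch (SafeFloat is a made-up name, not part of any patch in this thread) showing the kind of check the thread wants float_subtype_new() itself to perform.

class SafeFloat(float):
    def __new__(cls, value):
        result = float(value)          # may invoke value.__float__()
        if not isinstance(result, float):
            # pre-fix interpreters can hand back whatever __float__ returned
            raise TypeError("__float__ returned non-float (type %s)"
                            % type(result).__name__)
        return float.__new__(cls, result)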
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Thu Jun 17 06:22:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 07:05:42 2004 Subject: [ python-Bugs-974635 ] Slice indexes passed to __getitem__ are wrapped Message-ID: Bugs item #974635, was opened at 2004-06-17 10:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Connelly (connelly) Assigned to: Nobody/Anonymous (nobody) Summary: Slice indexes passed to __getitem__ are wrapped Initial Comment: Hi, When a slice is passed to __getitem__, the indices for the slice are wrapped by the length of the object (adding len(self) once to both start index if < 0 and the end index if < 0). class C: def __getitem__(self, item): print item def __len__(self): return 10 >>> x = C() >>> x[-3] -3 >>> x[-3:-2] slice(7, 8, None) This seems to be incorrect (at least inconsistent). However, it truly becomes a BUG when one tries to emulate a list type: class C: # Emulated list type def __init__(self, d): self.data = d def __len__(self): return len(self.data) def __getitem__(self, item): if isinstance(item, slice): indices = item.indices(len(self)) return [self[i] for i in range(*indices)] else: return self.data[item] x = [1, 2, 3] y = C([1, 2, 3]) >>> x[-7:-5] [] >>> print y[-7:-5] [1] (incorrect behavior) Here the item.indices method does the exact same wrapping process AGAIN. So indices are wrapped once as the slice index is constructed and passed to __getitem__, and again when item.indices is called. This makes it impossible to implement a correctly behaving list data type without using hacks to suppress this Python bug. I would advise that you make the slices object passed to __getitem__ not have its start/end indexes wrapped. Thanks, Connelly Barnes E-mail: '636f6e6e656c6c796261726e6573407961686f6f2e636f6d'.decode('hex') ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 From noreply at sourceforge.net Thu Jun 17 07:23:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 07:23:53 2004 Subject: [ python-Bugs-974635 ] Slice indexes passed to __getitem__ are wrapped Message-ID: Bugs item #974635, was opened at 2004-06-17 11:22 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Connelly (connelly) Assigned to: Nobody/Anonymous (nobody) Summary: Slice indexes passed to __getitem__ are wrapped Initial Comment: Hi, When a slice is passed to __getitem__, the indices for the slice are wrapped by the length of the object (adding len(self) once to both start index if < 0 and the end index if < 0). class C: def __getitem__(self, item): print item def __len__(self): return 10 >>> x = C() >>> x[-3] -3 >>> x[-3:-2] slice(7, 8, None) This seems to be incorrect (at least inconsistent). 
However, it truly becomes a BUG when one tries to emulate a list type: class C: # Emulated list type def __init__(self, d): self.data = d def __len__(self): return len(self.data) def __getitem__(self, item): if isinstance(item, slice): indices = item.indices(len(self)) return [self[i] for i in range(*indices)] else: return self.data[item] x = [1, 2, 3] y = C([1, 2, 3]) >>> x[-7:-5] [] >>> print y[-7:-5] [1] (incorrect behavior) Here the item.indices method does the exact same wrapping process AGAIN. So indices are wrapped once as the slice index is constructed and passed to __getitem__, and again when item.indices is called. This makes it impossible to implement a correctly behaving list data type without using hacks to suppress this Python bug. I would advise that you make the slices object passed to __getitem__ not have its start/end indexes wrapped. Thanks, Connelly Barnes E-mail: '636f6e6e656c6c796261726e6573407961686f6f2e636f6d'.decode('hex') ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-17 12:23 Message: Logged In: YES user_id=6656 You'll be happier if you make your classes new-style. I don't know if it's worth changing old-style classes at this point. Personally, I'm trying to forget about them just as quickly as possible :-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 From noreply at sourceforge.net Wed Jun 16 12:36:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 07:29:14 2004 Subject: [ python-Bugs-974019 ] ConfigParser defaults not handled correctly Message-ID: Bugs item #974019, was opened at 2004-06-16 10:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974019&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Charles (melicertes) Assigned to: Nobody/Anonymous (nobody) Summary: ConfigParser defaults not handled correctly Initial Comment: ConfigParser.getboolean() fails if it falls back to a default value, and the value passed in was a boolean object (instead of a string) because it unconditionally does v.lower(). This should be fixed; there's something un-Pythonic about expecting a boolean value but not being able to actually provide a boolean as the default. I've attached a patch (against 2.3.4c1; should apply to 2.3.4, I think) which makes the v.lower() conditional on v being a string, and adds bool(True), bool(False), int(1), and int(0) to _boolean_states. Alternative resolution: change the documentation to specify that /only/ strings should be passed in the defaults dictionary. Less Pythonic. 
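A user-side workaround is straightforward in the meantime; this sketch (the class name is invented here) subclasses RawConfigParser and simply passes boolean defaults through, in the spirit of the isinstance() check suggested in the follow-up.

import ConfigParser

class BoolFriendlyParser(ConfigParser.RawConfigParser):
    def getboolean(self, section, option):
        v = self.get(section, option)
        if isinstance(v, bool):        # default was supplied as a bool object
            return v
        return ConfigParser.RawConfigParser.getboolean(self, section, option)

cp = BoolFriendlyParser(defaults={'verbose': True})
cp.add_section('main')
print cp.getboolean('main', 'verbose')   # True, instead of failing on v.lower()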
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974019&group_id=5470 From noreply at sourceforge.net Thu Jun 17 08:37:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 08:38:06 2004 Subject: [ python-Feature Requests-974019 ] ConfigParser non-string defaults broken with .getboolean() Message-ID: Feature Requests item #974019, was opened at 2004-06-16 11:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=974019&group_id=5470 >Category: None >Group: None Status: Open Resolution: None Priority: 5 Submitted By: Charles (melicertes) Assigned to: Nobody/Anonymous (nobody) Summary: ConfigParser non-string defaults broken with .getboolean() Initial Comment: ConfigParser.getboolean() fails if it falls back to a default value, and the value passed in was a boolean object (instead of a string) because it unconditionally does v.lower(). This should be fixed; there's something un-Pythonic about expecting a boolean value but not being able to actually provide a boolean as the default. I've attached a patch (against 2.3.4c1; should apply to 2.3.4, I think) which makes the v.lower() conditional on v being a string, and adds bool(True), bool(False), int(1), and int(0) to _boolean_states. Alternative resolution: change the documentation to specify that /only/ strings should be passed in the defaults dictionary. Less Pythonic. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-17 07:37 Message: Logged In: YES user_id=80475 The method is functioning exactly as documented. Changing this to a feature request. FWIW, it seems reasonable to allow a boolean to be passed in a default; however, calling it unpythonic is a bit extreme since the whole API is designed to work with text arguments. My solution would be to just add two lines after the self.get: if isinstance(v, bool): return v ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=974019&group_id=5470 From noreply at sourceforge.net Wed Jun 16 18:40:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 08:41:25 2004 Subject: [ python-Bugs-897820 ] db4 4.2 == :-( (test_anydbm and test_bsddb3) Message-ID: Bugs item #897820, was opened at 2004-02-15 22:03 Message generated for change (Comment added) made by greg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897820&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) >Assigned to: Gregory P. Smith (greg) Summary: db4 4.2 == :-( (test_anydbm and test_bsddb3) Initial Comment: This machine, running fedora core 2 (test) has db4 4.2.52 installed. test_anydbm fails utterly with this combination. 
3 of the 4 tests fail, the failing part is the same in each case: File "Lib/anydbm.py", line 83, in open return mod.open(file, flag, mode) File "Lib/dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "Lib/bsddb/__init__.py", line 293, in hashopen d.open(file, db.DB_HASH, flags, mode) DBInvalidArgError: (22, 'Invalid argument -- DB_TRUNCATE illegal with locking specified') test_bsddb passes, but test_bsddb3 fails with a similar error: test test_bsddb3 failed -- Traceback (most recent call last): File "Lib/bsddb/test/test_compat.py", line 82, in test04_n_flag f = hashopen(self.filename, 'n') File "Lib/bsddb/__init__.py", line 293, in hashopen d.open(file, db.DB_HASH, flags, mode) DBInvalidArgError: (22, 'Invalid argument -- DB_TRUNCATE illegal with locking specified') ---------------------------------------------------------------------- >Comment By: Gregory P. Smith (greg) Date: 2004-06-16 15:40 Message: Logged In: YES user_id=413 As Tim Peters pointed out on python-dev in march: > I suspect Sleepycat screwed us there, changing the rules in midstream. > Someone on c.l.py appeared to track down the same thing here, but in an app > instead of in our test suite: > > http://mail.python.org/pipermail/python-list/2004-May/220168.html > > The change log of Berkeley DB 4.2.52 says "9. Fix a bug to now > disallow DB_TRUNCATE on opens in locking environments, since we > cannot prevent race conditions ..." leaving the bug open until i look to see if there is a workaround. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897820&group_id=5470 From noreply at sourceforge.net Thu Jun 17 08:44:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 08:44:24 2004 Subject: [ python-Feature Requests-964437 ] idle help is modal Message-ID: Feature Requests item #964437, was opened at 2004-06-01 13:05 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=964437&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 4 Submitted By: Matthias Klose (doko) >Assigned to: Kurt B. Kaiser (kbk) Summary: idle help is modal Initial Comment: [forwarded from http://bugs.debian.org/252130] the idle online help is unfortunately modal so that one cannot have the help window open and read it, and at the same time work in idle. One must close the help window before continuing in idle which is a nuisance. ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2004-06-04 00:19 Message: Logged In: YES user_id=149084 Making this an RFE. If you have time to work up a patch, that would be a big help. 
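For anyone picking this up, the difference between modal and non-modal comes down to whether the dialog grabs events and waits; a minimal sketch (plain Tkinter, not IDLE's actual code) of a help window that leaves the main window usable:

import Tkinter

def show_help(parent, text):
    win = Tkinter.Toplevel(parent)
    win.title('Help')
    body = Tkinter.Text(win, wrap='word')
    body.insert('1.0', text)
    body.config(state='disabled')
    body.pack(fill='both', expand=1)
    # deliberately no win.grab_set() and no parent.wait_window(win);
    # those two calls are what make a dialog modal
    return win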
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=964437&group_id=5470 From noreply at sourceforge.net Wed Jun 16 16:58:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 09:16:35 2004 Subject: [ python-Bugs-974019 ] ConfigParser non-string defaults broken with .getboolean() Message-ID: Bugs item #974019, was opened at 2004-06-16 10:36 Message generated for change (Settings changed) made by melicertes You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974019&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Charles (melicertes) Assigned to: Nobody/Anonymous (nobody) >Summary: ConfigParser non-string defaults broken with .getboolean() Initial Comment: ConfigParser.getboolean() fails if it falls back to a default value, and the value passed in was a boolean object (instead of a string) because it unconditionally does v.lower(). This should be fixed; there's something un-Pythonic about expecting a boolean value but not being able to actually provide a boolean as the default. I've attached a patch (against 2.3.4c1; should apply to 2.3.4, I think) which makes the v.lower() conditional on v being a string, and adds bool(True), bool(False), int(1), and int(0) to _boolean_states. Alternative resolution: change the documentation to specify that /only/ strings should be passed in the defaults dictionary. Less Pythonic. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974019&group_id=5470 From noreply at sourceforge.net Thu Jun 17 10:09:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 10:10:08 2004 Subject: [ python-Bugs-974757 ] urllib2's HTTPPasswordMgr and uri's with port numbers Message-ID: Bugs item #974757, was opened at 2004-06-17 14:09 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974757&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Chris Withers (fresh) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2's HTTPPasswordMgr and uri's with port numbers Initial Comment: Python 2.3.3 The title just about sums it up. If you add a password with a uri containing a port number using add_password, then it probably won't be returned by find_user_password. That's not right ;-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974757&group_id=5470 From noreply at sourceforge.net Wed Jun 16 18:33:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 12:21:31 2004 Subject: [ python-Bugs-973054 ] bsddb in Python 2.3 incompatible with SourceNav output Message-ID: Bugs item #973054, was opened at 2004-06-15 00:55 Message generated for change (Comment added) made by greg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973054&group_id=5470 Category: Extension Modules Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: gdy (gdy999) >Assigned to: Gregory P. 
Smith (greg) Summary: bsddb in Python 2.3 incompatible with SourceNav output Initial Comment: bsddb module in Python 2.3 cannot read SourceNavigator output (old btree format): Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import bsddb >>> a = bsddb.btopen('jEdit_14-12-2004.cl') Traceback (most recent call last): File "", line 1, in ? File "C:\Program Files\Python23\lib\bsddb\__init__.py", line 209, in btopen d.open(file, db.DB_BTREE, flags, mode) bsddb._db.DBInvalidArgError: (22, 'Invalid argument -- jEdit_14-12-2004.cl: unexpected file type or format') With Python 2.2 the file is opened without problem. SourceNavigator is an open source source code analyser (http://sourcenav.sourceforge.net/) ---------------------------------------------------------------------- >Comment By: Gregory P. Smith (greg) Date: 2004-06-16 15:33 Message: Logged In: YES user_id=413 Yes python 2.3 ships with bsddb using BerkeleyDB 4.1; all previous versions of python were used the old "BerkeleyDB" 1.85. I highly recommend using the new database interface rather than the old compatibility interface (pybsddb.sf.net as well as sleepycat.com for docs). Does the windows build of python include the BerkeleyDB db_dump.exe and db_load.exe utilities? i'd guess not, but if so you can hopefully use those to dump your old format database and load it into a new one. If you need binaries for those and don't want to build them yourself using BerkeleyDB source available from sleepycat.com they are included in the win32 binary release of pybsddb (http://sourceforge.net/projects/pybsddb/) for python 2.1 and 2.2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973054&group_id=5470 From noreply at sourceforge.net Thu Jun 17 12:29:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 12:29:25 2004 Subject: [ python-Bugs-973054 ] bsddb in Python 2.3 incompatible with SourceNav output Message-ID: Bugs item #973054, was opened at 2004-06-15 03:55 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973054&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: gdy (gdy999) Assigned to: Gregory P. Smith (greg) Summary: bsddb in Python 2.3 incompatible with SourceNav output Initial Comment: bsddb module in Python 2.3 cannot read SourceNavigator output (old btree format): Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import bsddb >>> a = bsddb.btopen('jEdit_14-12-2004.cl') Traceback (most recent call last): File "", line 1, in ? File "C:\Program Files\Python23\lib\bsddb\__init__.py", line 209, in btopen d.open(file, db.DB_BTREE, flags, mode) bsddb._db.DBInvalidArgError: (22, 'Invalid argument -- jEdit_14-12-2004.cl: unexpected file type or format') With Python 2.2 the file is opened without problem. 
SourceNavigator is an open source source code analyser (http://sourcenav.sourceforge.net/) ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-17 12:29 Message: Logged In: YES user_id=31435 Unfortunately, no: the PSF Windows distro doesn't include any of the Sleepycat utilities. That isn't a matter of policy, it's a matter of nobody volunteering to do it. ---------------------------------------------------------------------- Comment By: Gregory P. Smith (greg) Date: 2004-06-16 18:33 Message: Logged In: YES user_id=413 Yes python 2.3 ships with bsddb using BerkeleyDB 4.1; all previous versions of python were used the old "BerkeleyDB" 1.85. I highly recommend using the new database interface rather than the old compatibility interface (pybsddb.sf.net as well as sleepycat.com for docs). Does the windows build of python include the BerkeleyDB db_dump.exe and db_load.exe utilities? i'd guess not, but if so you can hopefully use those to dump your old format database and load it into a new one. If you need binaries for those and don't want to build them yourself using BerkeleyDB source available from sleepycat.com they are included in the win32 binary release of pybsddb (http://sourceforge.net/projects/pybsddb/) for python 2.1 and 2.2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973054&group_id=5470 From noreply at sourceforge.net Wed Jun 16 10:26:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 12:44:18 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Settings changed) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: Guido van Rossum (gvanrossum) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-08 02:54 Message: Logged In: YES user_id=35752 Attaching patch. One outstanding issue is that it may make sense to search for and remove unnecessary type checks (e.g. PyNumber_Int followed by PyInt_Check). Also, this change only broke one test case but I have no idea how much user code this might break. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 15:37 Message: Logged In: YES user_id=31435 Assigned to Neil, as a reminder to attach his patch. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 21:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. 
---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Wed Jun 16 18:50:14 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 12:58:07 2004 Subject: [ python-Bugs-857909 ] bsddb craps out sporadically Message-ID: Bugs item #857909, was opened at 2003-12-10 14:41 Message generated for change (Comment added) made by greg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=857909&group_id=5470 Category: None Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Predrag Miocinovic (predragm) >Assigned to: Gregory P. Smith (greg) Summary: bsddb craps out sporadically Initial Comment: I get following from Python2.3.2 with BerkeleyDB 3.3.11 running on linux RH7.3; ------------------------ Traceback (most recent call last): File "/raid/ANITA-lite/gse/unpackd.py", line 702, in ? 
PacketObject.shelve() File "/raid/ANITA-lite/gse/unpackd.py", line 78, in shelve wvShelf[shelfKey] = self File "/usr/local/lib/python2.3/shelve.py", line 130, in __setitem__ self.dict[key] = f.getvalue() File "/usr/local/lib/python2.3/bsddb/__init__.py", line 120, in __setitem__ self.db[key] = value bsddb._db.DBRunRecoveryError: (-30987, 'DB_RUNRECOVERY: Fatal error, run database recovery -- PANIC: Invalid argument') Exception bsddb._db.DBRunRecoveryError: (-30987, 'DB_RUNRECOVERY: Fatal error, run database recovery') in ignored Exception bsddb._db.DBRunRecoveryError: (-30987, 'DB_RUNRECOVERY: Fatal error, run database recovery') in ignored ---------------------------------- The server reporting this is running at relatively heavy load and the error occurs several times per day (this call occurs roughly 100,000 per day, but only 42 times per any given shelve instance). It reminds be of bug report #775414, but this is a non-threaded application. That said, another process is accessing the same shelve, but I've implemented a lockout system which should make sure they don't have simultaneous access. The lockout seems to work fine. The same application is running on different machine using Python2.3.2 with BerkeleyDB 4.0.14 on linux RH9 and the same error occured once (to my knowledge), but with "30987" replaced by "30981" in the traceback above, if it makes any difference. Finally, a third system, python2.3.2 with BerkeleyDB 4.0.14 on linux RH9 (but quite a bit faster, and thus lighter load) runs w/o reporting this problem so far. I don't have a convenient code snipet to exemplify the problem, but I don't do anything more than open (or re-open) a shelve and write a single python object instance to it per opening. If necessary, I can provide the code in question. ---------------------------------------------------------------------- >Comment By: Gregory P. Smith (greg) Date: 2004-06-16 15:50 Message: Logged In: YES user_id=413 DB_RUNRECOVERY errors are a sleepycat BerkeleyDB internal error and don't have anything to do with the python library wrapper. For help in tracking them down I suggest using the latest BerkeleyDB version and ask with example code on the berkeleydb newsgroups or ask sleepycat themselves (they don't bite, they're friendly). closing this bug as its not a python or extension module bug. If you're looking for a multiprocess aware BerkeleyDB shelve support, that should be a feature request (ideally with an example implementation :). ---------------------------------------------------------------------- Comment By: Andrew I MacIntyre (aimacintyre) Date: 2003-12-22 16:04 Message: Logged In: YES user_id=250749 I can sympathise with your POV, but shelve has a documented restriction that it is not supported for multiuser user use without specific external support - that is multiple readers are Ok, but writing requires exclusive access to the shelve database. As you are using it in such an environment, it is up to you to guarantee the required safety. The error being reported is highly likely to be a consequence of your locking scheme being inadequate for use with the BerkeleyDB environment, at least on that system, and my suggestion that you take this up in a BerkeleyDB forum was directed at you getting sufficient information to improve your locking scheme to avoid the problem you see. 
I think you are a little optimistic expecting the shelve module (let alone the anydbm module) to cope with exceptions arising from use outside its documented restrictions - and BerkeleyDB supports lots of capability beyond the scope of the functionality used by shelve and anydbm, and the exceptions to go with that. If you care about the shelve storage format, you can force the type of storage by creating an empty database of the format of your choice, provided that that format is supported by anydbm. With a bit of care, you should be able to convert a shelve from one format to another, within the anydbm format support restriction. While it might be nice to have some format control, anydbm's purpose is to hide the database format/interface. If you care about the format, you're expected to use the desired interface module directly.

----------------------------------------------------------------------

Comment By: Predrag Miocinovic (predragm)
Date: 2003-12-21 20:48

Message:
Logged In: YES
user_id=860222

I find the last comment somewhat unsatisfactory. While this appears to be a BerkeleyDB issue (and w/o going into details of why the exception gets thrown), it's strange that the shelve module doesn't handle this more gracefully. Since the concept of shelve is hiding implementation details from the application, having to catch BerkeleyDB exceptions for simple shelf operations is a bit over the top. If I move to another system, using a different underlying DB (as given by anydbm), will I have to catch a new set of exceptions all over again? Shelve (or anydbm) should either provide the ability to select the underlying DB implementation to use, or it should handle all DB implementation aspects so that it is truly transparent to the end user. Just my $0.02.

----------------------------------------------------------------------

Comment By: Andrew I MacIntyre (aimacintyre)
Date: 2003-12-21 03:50

Message:
Logged In: YES
user_id=250749

As far as I can make out, what you're seeing is a BerkeleyDB issue, and bsddb is just reporting what BDB is telling it. DB_RUNRECOVERY (-30987 on DB 3.3, -30981 on DB 4.0) is documented as (quoted from the DB 4.0 HTML docs): "There exists a class of errors that Berkeley DB considers fatal to an entire Berkeley DB environment. An example of this type of error is a corrupted database or a log write failure because the disk is out of free space. The only way to recover from these failures is to have all threads of control exit the Berkeley DB environment, run recovery of the environment, and re-enter Berkeley DB." Therefore I think you should follow this up in a BerkeleyDB forum.
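For reference, the kind of external locking the documented restriction asks for can be as simple as an exclusive lock file around each write. A POSIX-only sketch (the lock-file name and helper are assumptions, not the original poster's actual lockout code):

import fcntl, shelve

def locked_store(path, key, value):
    lock = open(path + '.lock', 'w')
    fcntl.flock(lock.fileno(), fcntl.LOCK_EX)   # blocks until we have exclusive access
    try:
        db = shelve.open(path)
        try:
            db[key] = value
        finally:
            db.close()
    finally:
        fcntl.flock(lock.fileno(), fcntl.LOCK_UN)
        lock.close()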
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=857909&group_id=5470 From noreply at sourceforge.net Wed Jun 16 13:05:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 13:24:45 2004 Subject: [ python-Bugs-973507 ] sys.stdout problems with pythonw.exe Message-ID: Bugs item #973507, was opened at 2004-06-15 20:34 Message generated for change (Comment added) made by manlioperillo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I have written this script for reproducing the bug: import sys class teeIO: def __init__(self, *files): self.__files = files def write(self, str): for i in self.__files: print >> trace, 'writing on %s: %s' % (i, str) i.write(str) print >> trace, '-' * 70 def tee(*files): return teeIO(*files) log = file('log.txt', 'w') err = file('err.txt', 'w') trace = file('trace.txt', 'w') sys.stdout = tee(log, sys.__stdout__) sys.stderr = tee(err, sys.__stderr__) def write(n, width): sys.stdout.write('x' * width) if n == 1: return write(n - 1, width) try: 1/0 except: write(1, 4096) [output from err.log] Traceback (most recent call last): File "sys.py", line 36, in ? write(1, 4096) File "sys.py", line 28, in write sys.stdout.write('x' * width) File "sys.py", line 10, in write i.write(str) IOError: [Errno 9] Bad file descriptor TeeIO is needed for actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, ad example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-16 17:05 Message: Logged In: YES user_id=1054957 The problem with this bug is that I have a script that can be executed both with python.exe that with pythonw.exe! How can I know if stdout is connected to a console? I think a 'patch' would be to replace sys.stdout/err with a null stream instead of using windows stdout/err implementation. If fileno can't be implemented, it should not be a problem. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-15 23:09 Message: Logged In: YES user_id=31435 Ya, this is well known, although it may not be documented. pythonw's purpose in life is *not* to create (or inherit) a console window (a "DOS box"). Therefore stdin, stdout, and stderr aren't attached to anything usable. Microsoft's C runtime seems to attach them to buffers that aren't connected to anything, so they complain if you ever exceed the buffer size. The short course is that stdin, stdout and stderr are useless in programs without a console window, so you shouldn't use them. 
Or you should install your own file-like objects, and make them do something useful to you. I think it would be helpful if pythonw did something fancier (e.g., pop up a window containing attempted output), but that's in new-feature territory, and nobody has contributed code for it anyway. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 From noreply at sourceforge.net Thu Jun 17 12:11:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 17 15:58:15 2004 Subject: [ python-Bugs-924703 ] test_unicode_file fails on Win98SE Message-ID: Bugs item #924703, was opened at 2004-03-27 20:48 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 Category: Unicode Group: Python 2.4 >Status: Open >Resolution: None Priority: 7 Submitted By: Tim Peters (tim_one) Assigned to: Martin v. Löwis (loewis) Summary: test_unicode_file fails on Win98SE Initial Comment: In current CVS, test_unicode_file fails on Win98SE. This has been going on for some time, actually. ERROR: test_single_files (__main__.TestUnicodeFiles) Traceback (most recent call last): File ".../lib/test/test_unicode_file.py", line 162, in test_single_files self._test_single(TESTFN_UNICODE) File ".../lib/test/test_unicode_file.py", line 136, in _test_single self._do_single(filename) File ".../lib/test/test_unicode_file.py", line 49, in _do_single new_base = unicodedata.normalize("NFD", new_base) TypeError: normalized() argument 2 must be unicode, not str At this point, filename is TESTFN_UNICODE is u'@test-\xe0\xf2' os.path.abspath(filename) is 'C:\Code\python\PC\VC6\@test-\xe0\xf2' new_base is '@test-\xe0\xf2 So abspath() removed the "Unicodeness" of filename, and new_base is indeed not a Unicode string at this point. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-17 12:11 Message: Logged In: YES user_id=31435 Reopened, because the same test is still failing on Win98SE, but for a different reason. The traceback is identical, except that it's now failing in the listcomp on the line following the line it used to fail on: File ".../lib/test/test_unicode_file.py", line 50, in _do_single file_list = [unicodedata.normalize("NFD", f) for f in file_list] TypeError: normalized() argument 2 must be unicode, not str filename is u'@test-\xe0\xf2' os.path.abspath(filename) is u'C:\Code\python\PC\VC6\@test-\xe0\xf2' new_base is u'@test-a\u0300o\u0300' The problem now is that the first name in file_list is 'CVS' so [unicodedata.normalize("NFD", f) for f in file_list] is passing an 8-bit string to normalize(). Earlier code in the test *appears* to assume that if filename is Unicode, then os.listdir() will return a list of Unicode strings. But file_list is a list of 153 8-bit strings on this box. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2004-06-15 14:50 Message: Logged In: YES user_id=21627 This should be fixed with posixmodule.c 2.321. Unfortunately, I cannot test it, because I don't have W9X. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 03:11 Message: Logged In: YES user_id=80475 This is still failing.
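For reference only, a sketch of the kind of defensive decoding the failing listcomp would need if os.listdir() can hand back 8-bit names for a Unicode argument, as Tim describes above. The actual fix went into posixmodule.c; normalize_listing is a made-up helper, and using sys.getfilesystemencoding() as the right decoding is an assumption.

import sys
import unicodedata

def normalize_listing(file_list):
    # NFD normalization needs unicode input, so decode any 8-bit names first.
    encoding = sys.getfilesystemencoding() or 'ascii'
    normalized = []
    for name in file_list:
        if not isinstance(name, unicode):
            name = unicode(name, encoding)
        normalized.append(unicodedata.normalize('NFD', name))
    return normalized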
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-30 00:44 Message: Logged In: YES user_id=31435 Just a guess: the os.path functions are generally just string manipulation, and on Windows I sometimes import posixpath.py directly to do Unixish path manipulations. So it's conceivable that someone (not me) on a non-Windows box imports ntpath directly to manipulate Windows paths. In fact, I see that Fredrik's "Python Standard Library" book explicitly mentions this use case for importing ntpath directly. So maybe he actually did it -- once . ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-30 00:25 Message: Logged In: YES user_id=21627 I see. I'll look into changing _getfullpathname to return Unicode output for Unicode input even if unicode_file_names() is false. However, I do wonder what the purpose of _abspath then is: On what system would it be used??? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-03-29 18:11 Message: Logged In: YES user_id=31435 Nope, that can't help -- ntpath.py's _abspath doesn't exist on Win98SE (the "from nt import _getfullpathname" succeeds, so _abspath is never defined). It's _getfullpathname() that's taking a Unicode input and returning a str output here. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-03-29 17:17 Message: Logged In: YES user_id=21627 abspath(unicode) should return a Unicode path. Does it help if _abspath (in ntpath.py) is changed to contain if not isabs(path): if isinstance(path, unicode): cwd = os.getcwdu() else: cwd = os.getcwd() path = join(cwd, path) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924703&group_id=5470 From noreply at sourceforge.net Fri Jun 18 06:46:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 18 14:09:57 2004 Subject: [ python-Bugs-974635 ] Slice indexes passed to __getitem__ are wrapped Message-ID: Bugs item #974635, was opened at 2004-06-17 11:22 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Connelly (connelly) Assigned to: Nobody/Anonymous (nobody) Summary: Slice indexes passed to __getitem__ are wrapped Initial Comment: Hi, When a slice is passed to __getitem__, the indices for the slice are wrapped by the length of the object (adding len(self) once to both start index if < 0 and the end index if < 0). class C: def __getitem__(self, item): print item def __len__(self): return 10 >>> x = C() >>> x[-3] -3 >>> x[-3:-2] slice(7, 8, None) This seems to be incorrect (at least inconsistent). However, it truly becomes a BUG when one tries to emulate a list type: class C: # Emulated list type def __init__(self, d): self.data = d def __len__(self): return len(self.data) def __getitem__(self, item): if isinstance(item, slice): indices = item.indices(len(self)) return [self[i] for i in range(*indices)] else: return self.data[item] x = [1, 2, 3] y = C([1, 2, 3]) >>> x[-7:-5] [] >>> print y[-7:-5] [1] (incorrect behavior) Here the item.indices method does the exact same wrapping process AGAIN. 
So indices are wrapped once as the slice index is constructed and passed to __getitem__, and again when item.indices is called. This makes it impossible to implement a correctly behaving list data type without using hacks to suppress this Python bug. I would advise that you make the slices object passed to __getitem__ not have its start/end indexes wrapped. Thanks, Connelly Barnes E-mail: '636f6e6e656c6c796261726e6573407961686f6f2e636f6d'.decode('hex') ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-18 11:46 Message: Logged In: YES user_id=6656 'make your classes new-style' == subclass object. >>> class C(object): ... def __getitem__(self, item): print item ... def __len__(self): return 10 ... >>> C()[-3:-2] slice(-3, -2, None) ---------------------------------------------------------------------- Comment By: Connelly (connelly) Date: 2004-06-17 22:50 Message: Logged In: YES user_id=1039782 I'm not sure what you mean by 'make your classes new-style'. According to http://www.python.org/doc/2.3.4/ref/specialnames.html, the __getitem__ method should be used, and the __getslice__ method is deprecated. If you subclass the built-in list type, then the __getitem__ method is *not* called when subscripting with a slice. Instead, the __getslice__ method is called. Try it out. So I can't see any reasonable way to get around this bug. You can try and modify the class C shown above, so that it behaves correctly. I don't think you will be able to do it without putting in special "workaround" code to avoid this Python bug. y = C([1, 2, 3]) >>> print y[-7:-5] # should be []. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-17 12:23 Message: Logged In: YES user_id=6656 You'll be happier if you make your classes new-style. I don't know if it's worth changing old-style classes at this point. 
Personally, I'm trying to forget about them just as quickly as possible :-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 From noreply at sourceforge.net Fri Jun 18 15:34:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:26:03 2004 Subject: [ python-Bugs-944082 ] urllib2 authentication mishandles empty password Message-ID: Bugs item #944082, was opened at 2004-04-28 19:02 Message generated for change (Comment added) made by mkc You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944082&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 5 Submitted By: Jacek Trzmiel (yangir) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 authentication mishandles empty password Initial Comment: If example.org requires authentication, then following code: host = 'example.org' user = 'testuser' password = '' url = 'http://%s/' % host authInfo = urllib2.HTTPPasswordMgrWithDefaultRealm() authInfo.add_password( None, host, user, password ) authHandler = urllib2.HTTPBasicAuthHandler( authInfo ) opener = urllib2.build_opener( authHandler ) urlFile = opener.open( url ) print urlFile.read() will die by throwing HTTPError 401: File "/usr/lib/python2.3/urllib2.py", line 419, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 401: Authorization Required even if authenticating with 'testuser' and empty password is valid. Empty password is mishandled (i.e. authentication with empty password string is ignored) in AbstractBasicAuthHandler.retry_http_basic_auth def retry_http_basic_auth(self, host, req, realm): user,pw = self.passwd.find_user_password(realm, host) if pw: [...] It can be fixed by changing: if pw: to if pw is not None: Python 2.3.2 (#1, Oct 9 2003, 12:03:29) [GCC 3.3.1 (cygming special)] on cygwin Type "help", "copyright", "credits" or "license" for more information. ---------------------------------------------------------------------- Comment By: Mike Coleman (mkc) Date: 2004-06-18 14:34 Message: Logged In: YES user_id=555 The change that was made here probably fixes the bug, but it looks like it would be better to make the test "user is not None" rather than "pw is not None", since there are two other places in the code that check the output of this function by checking the None-ness of user and no code that checks the None-ness of pw. (A comment that 'user' is what is to be checked would also be useful.) ---------------------------------------------------------------------- Comment By: Mike Coleman (mkc) Date: 2004-06-18 10:54 Message: Logged In: YES user_id=555 The change that was made here probably fixes the bug, but it looks like it would be better to make the test "user is not None" rather than "pw is not None", since there are two other places in the code that check the output of this function by checking the None-ness of user and no code that checks the None-ness of pw. (A comment that 'user' is what is to be checked would also be useful.) ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-05-05 20:41 Message: Logged In: YES user_id=21627 This is fixed with patch #944110. 
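A tiny, self-contained illustration of the guards discussed above: the original truthiness test drops a legitimate empty password, while the None-tests keep it. The values are made up for the demonstration.

user, pw = 'testuser', ''          # a valid account whose password is empty

if pw:                             # the original test: '' is false, auth is skipped
    print 'old check would send credentials'
if pw is not None:                 # the fix suggested in the report
    print 'None-test sends credentials even for an empty password'
if user is not None:               # the variant mkc recommends above
    print 'user-test matches how other callers check find_user_password()'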
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944082&group_id=5470 From noreply at sourceforge.net Fri Jun 18 18:04:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:28:37 2004 Subject: [ python-Bugs-975646 ] tp_(get|set)attro? inheritance bug Message-ID: Bugs item #975646, was opened at 2004-06-18 23:04 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gustavo J. A. M. Carneiro (gustavo) Assigned to: Nobody/Anonymous (nobody) Summary: tp_(get|set)attro? inheritance bug Initial Comment: Documentation says, regarding tp_getattr: ? This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL. ? Implementation disagrees, at least in cvs head, but the effect of the bug (non-inheritance of tp_getattr) happens in 2.3.3. Follow with me: In function type_new (typeobject.c) line 1927: /* Special case some slots */ if (type->tp_dictoffset != 0 || nslots > 0) { if (base->tp_getattr == NULL && base->tp_getattro == NULL) type->tp_getattro = PyObject_GenericGetAttr; if (base->tp_setattr == NULL && base->tp_setattro == NULL) type->tp_setattro = PyObject_GenericSetAttr; } ...later in the same function... /* Initialize the rest */ if (PyType_Ready(type) < 0) { Py_DECREF(type); return NULL; } Inside PyType_Ready(), line 3208: for (i = 1; i < n; i++) { PyObject *b = PyTuple_GET_ITEM(bases, i); if (PyType_Check(b)) inherit_slots(type, (PyTypeObject *)b); } Inside inherit_slots, line (3056): if (type->tp_getattr == NULL && type->tp_getattro == NULL) { type->tp_getattr = base->tp_getattr; type->tp_getattro = base->tp_getattro; } if (type->tp_setattr == NULL && type->tp_setattro == NULL) { type->tp_setattr = base->tp_setattr; type->tp_setattro = base->tp_setattro; } So, if you have followed through, you'll notice that type_new first sets tp_getattro = GenericGetAttr, in case 'base' has neither tp_getattr nor tp_getattro. So, you are thinking that there is no problem. If base has tp_getattr, that code path won't be execute. The problem is with multiple inheritance. In type_new, 'base' is determined by calling best_base(). But the selected base may not have tp_getattr, while another might have. In this case, setting tp_getattro based on information from the wrong base precludes the slot from being inherited from the right base. This is happening in pygtk, unfortunately. One possible solution would be to move the first code block to after the PyType_Ready() call. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 From noreply at sourceforge.net Thu Jun 17 20:06:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:29:25 2004 Subject: [ python-Bugs-969574 ] BSD restartable signals not correctly disabled Message-ID: Bugs item #969574, was opened at 2004-06-09 22:59 Message generated for change (Comment added) made by lukem You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969574&group_id=5470 Category: Python Interpreter Core Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Luke Mewburn (lukem) Assigned to: Nobody/Anonymous (nobody) Summary: BSD restartable signals not correctly disabled Initial Comment: I noticed a problem with some python scripts not performing correctly when ^C (SIGINT) was pressed. I managed to isolate it to the following test case: import sys foo=sys.stdin.read() Once that's executed, you need to press ^C _twice_ for KeyboardInterrupt to be raised; the first ^C is ignored. If you manually enter that into an interactive python session this behaviour also occurs, although it appears to function correctly if you run the foo=sys.stdin.read() a second time (only one ^C is needed). This occurs on NetBSD 1.6 and 2.0, with python 2.1, 2.2 and 2.3, configured with and without pthreads. It also occurs on FreeBSD 4.8 with python 2.2.2. It does not occur on various Linux systems I asked people to test for me. (I have a NetBSD problem report about this issue: http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=24797 ) I traced the process and noticed that the read() system call was being restarted when the first SIGINT was received. This hint, and the fact that Linux was unaffected indicated that python was probably not expecting restartable signal behaviour, and that behaviour is the default in BSD systems for signal(3) (but not sigaction(2) or the other POSIX signal calls). After doing some research in the python 2.3.4 source it appeared to me that the intent was to disable restartable signals, but that was not what was happening in practice. I noticed the following code issues: * not all code had been converted from using signal(3) to PyOS_getsig() and PyOS_setsig(). This is contrary to the HISTORY for python 2.0beta2. * siginterrupt(2) (an older BSD function) was being used in places in an attempt to ensure that signals were not restartable. However, in some cases this function was called _before_ signal(3)/PyOS_setsig(), which would mean that the call may be ineffective if PyOS_setsig() was implemented using signal(3) and not sigaction(2) * PyOS_setsig() was using sigaction(2) suboptimally, iand inheriting the sa_flags from the existing handler. If SA_RESTART happened to be already set for the signal, it would be copied. I provide the following patch, which does: * converts a few remaining signal(3) calls to PyOS_setsig(). There should be none left in a build on a UNIX system, although there may be on other systems. Converting any remaining calls to signal(3) is left as an exercise :) * moves siginterrupt(2) to PyOS_setsig() when the latter is implemented using signal(3) instead of sigaction(2). 
* when implementing PyOS_setsig() in terms of sigaction(2), use sigaction(2) in a more portable and "common" manner that explicitly clears the flags for the new handler, thus preventing SA_RESTART from being implicitly copied. With this patch applied, python passes all the same regression tests as without it, and my test case now exits on the first ^C as expected. Also, it is possible that this patch may also fix other outstanding signal issues on systems with BSD restartable signal semantics. Cheers, Luke. ---------------------------------------------------------------------- >Comment By: Luke Mewburn (lukem) Date: 2004-06-18 10:06 Message: Logged In: YES user_id=135998 I've submitted patch 975056 which has a more complete version of the patch to fix this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969574&group_id=5470 From noreply at sourceforge.net Fri Jun 18 07:36:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:34:45 2004 Subject: [ python-Bugs-972724 ] Python 2.3.4, Solaris 7, socketmodule.c does not compile Message-ID: Bugs item #972724, was opened at 2004-06-15 04:36 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972724&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Bruce D. Ray (brucedray) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.3.4, Solaris 7, socketmodule.c does not compile Initial Comment: Build of Python 2.3.4 on Solaris 7 with SUN WorkshopPro compilers fails to compiile socketmodule.c with the following error messages on the build: "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 2979: undefined symbol: AF_INET6 "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3023: undefined symbol: INET_ADDRSTRLEN "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3023: integral constant expression expected "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3053: warning: improper pointer/integer combination: op "=" "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3062: warning: statement not reached cc: acomp failed for /export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c As a consequence of the above, make test gave errors for every test that attempted to import _socket Other error messages in the build not related to the socketmodule.c issue were: "Objects/frameobject.c", line 301: warning: non-constant initializer: op "--" "Objects/frameobject.c", line 303: warning: non-constant initializer: op "--" "Objects/stringobject.c", line 1765: warning: statement not reached "Python/ceval.c", line 3427: warning: non-constant initializer: op "--" "Python/ceval.c", line 3550: warning: non-constant initializer: op "--" "Python/ceval.c", line 3551: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/pypcre.c", line 995: warning: non-constant initializer: op "++" "/export/home/bruce/python/Python-2.3.4/Modules/pypcre.c", line 2574: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/unicodedata.c", line 313: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/parsermodule.c", line 2493: warning: non-constant initializer: op "++" 
"/export/home/bruce/python/Python-2.3.4/Modules/expat/xmlparse.c", line 3942: warning: non-constant initializer: op "<<=" Configuration output was: checking MACHDEP... sunos5 checking EXTRAPLATDIR... checking for --without-gcc... no checking for --with-cxx=... no checking for c++... no checking for g++... no checking for gcc... no checking for CC... CC checking for C++ compiler default output... a.out checking whether the C++ compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for gcc... no checking for cc... cc checking for C compiler default output... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for egrep... egrep checking for AIX... no checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... no checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking for --with-suffix... checking for case-insensitive build directory... no checking LIBRARY... libpython$(VERSION).a checking LINKCC... $(PURIFY) $(CC) checking for --enable-shared... no checking LDLIBRARY... libpython$(VERSION).a checking for ranlib... ranlib checking for ar... ar checking for a BSD-compatible install... ./install-sh -c checking for --with-pydebug... no checking whether cc accepts -OPT:Olimit=0... no checking whether cc accepts -Olimit 1500... no checking whether pthreads are available without options... no checking whether cc accepts -Kpthread... no checking whether cc accepts -Kthread... no checking whether cc accepts -pthread... no checking whether CC also accepts flags for thread support... no checking for ANSI C header files... (cached) yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking grp.h usability... yes checking grp.h presence... yes checking for grp.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking ncurses.h usability... no checking ncurses.h presence... no checking for ncurses.h... no checking poll.h usability... yes checking poll.h presence... yes checking for poll.h... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking signal.h usability... yes checking signal.h presence... yes checking for signal.h... yes checking stdarg.h usability... yes checking stdarg.h presence... yes checking for stdarg.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... 
(cached) yes checking stropts.h usability... yes checking stropts.h presence... yes checking for stropts.h... yes checking termios.h usability... yes checking termios.h presence... yes checking for termios.h... yes checking thread.h usability... yes checking thread.h presence... yes checking for thread.h... yes checking for unistd.h... (cached) yes checking utime.h usability... yes checking utime.h presence... yes checking for utime.h... yes checking sys/audioio.h usability... yes checking sys/audioio.h presence... yes checking for sys/audioio.h... yes checking sys/bsdtty.h usability... no checking sys/bsdtty.h presence... no checking for sys/bsdtty.h... no checking sys/file.h usability... yes checking sys/file.h presence... yes checking for sys/file.h... yes checking sys/lock.h usability... yes checking sys/lock.h presence... yes checking for sys/lock.h... yes checking sys/mkdev.h usability... yes checking sys/mkdev.h presence... yes checking for sys/mkdev.h... yes checking sys/modem.h usability... no checking sys/modem.h presence... no checking for sys/modem.h... no checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking sys/un.h usability... yes checking sys/un.h presence... yes checking for sys/un.h... yes checking sys/utsname.h usability... yes checking sys/utsname.h presence... yes checking for sys/utsname.h... yes checking sys/wait.h usability... yes checking sys/wait.h presence... yes checking for sys/wait.h... yes checking pty.h usability... no checking pty.h presence... no checking for pty.h... no checking term.h usability... no checking term.h presence... yes configure: WARNING: term.h: present but cannot be compiled configure: WARNING: term.h: check for missing prerequisite headers? configure: WARNING: term.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## checking for term.h... yes checking libutil.h usability... no checking libutil.h presence... no checking for libutil.h... no checking sys/resource.h usability... yes checking sys/resource.h presence... yes checking for sys/resource.h... yes checking netpacket/packet.h usability... no checking netpacket/packet.h presence... no checking for netpacket/packet.h... no checking sysexits.h usability... yes checking sysexits.h presence... yes checking for sysexits.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking whether sys/types.h defines makedev... no checking for sys/mkdev.h... (cached) yes checking for clock_t in time.h... yes checking for makedev... no checking Solaris LFS bug... no checking for mode_t... yes checking for off_t... yes checking for pid_t... yes checking return type of signal handlers... void checking for size_t... yes checking for uid_t in sys/types.h... yes checking for int... 
yes checking size of int... 4 checking for long... yes checking size of long... 4 checking for void *... yes checking size of void *... 4 checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for float... yes checking size of float... 4 checking for double... yes checking size of double... 8 checking for fpos_t... yes checking size of fpos_t... 8 checking for long long support... yes checking for long long... yes checking size of long long... 8 checking for uintptr_t support... no checking size of off_t... 8 checking whether to enable large file support... yes checking size of time_t... 4 checking for pthread_t... yes checking size of pthread_t... 4 checking for --enable-toolbox-glue... no checking for --enable-framework... no checking for dyld... no checking SO... .so checking LDSHARED... $(CC) -G checking CCSHARED... checking LINKFORSHARED... checking CFLAGSFORSHARED... checking SHLIBS... $(LIBS) checking for dlopen in -ldl... yes checking for shl_load in -ldld... no checking for library containing sem_init... -lrt checking for textdomain in -lintl... yes checking for t_open in -lnsl... yes checking for socket in -lsocket... yes checking for --with-libs... no checking for --with-signal-module... yes checking for --with-dec-threads... no checking for --with-threads... yes checking for _POSIX_THREADS in unistd.h... yes checking cthreads.h usability... no checking cthreads.h presence... no checking for cthreads.h... no checking mach/cthreads.h usability... no checking mach/cthreads.h presence... no checking for mach/cthreads.h... no checking for --with-pth... no checking for pthread_create in -lpthread... yes checking for usconfig in -lmpc... no checking if PTHREAD_SCOPE_SYSTEM is supported... yes checking for pthread_sigmask... yes checking if --enable-ipv6 is specified... no checking for --with-universal-newlines... yes checking for --with-doc-strings... yes checking for --with-pymalloc... yes checking for --with-wctype-functions... no checking for --with-sgi-dl... no checking for --with-dl-dld... no checking for dlopen... yes checking DYNLOADFILE... dynload_shlib.o checking MACHDEP_OBJS... MACHDEP_OBJS checking for alarm... yes checking for chown... yes checking for clock... yes checking for confstr... yes checking for ctermid... yes checking for execv... yes checking for fork... yes checking for fpathconf... yes checking for ftime... yes checking for ftruncate... yes checking for gai_strerror... no checking for getgroups... yes checking for getlogin... yes checking for getloadavg... yes checking for getpeername... yes checking for getpgid... yes checking for getpid... yes checking for getpriority... yes checking for getpwent... yes checking for getwd... yes checking for kill... yes checking for killpg... yes checking for lchown... yes checking for lstat... yes checking for mkfifo... yes checking for mknod... yes checking for mktime... yes checking for mremap... no checking for nice... yes checking for pathconf... yes checking for pause... yes checking for plock... yes checking for poll... yes checking for pthread_init... no checking for putenv... yes checking for readlink... yes checking for realpath... yes checking for select... yes checking for setegid... yes checking for seteuid... yes checking for setgid... yes checking for setlocale... yes checking for setregid... yes checking for setreuid... yes checking for setsid... yes checking for setpgid... yes checking for setpgrp... yes checking for setuid... 
yes checking for setvbuf... yes checking for snprintf... yes checking for sigaction... yes checking for siginterrupt... yes checking for sigrelse... yes checking for strftime... yes checking for strptime... yes checking for sysconf... yes checking for tcgetpgrp... yes checking for tcsetpgrp... yes checking for tempnam... yes checking for timegm... no checking for times... yes checking for tmpfile... yes checking for tmpnam... yes checking for tmpnam_r... yes checking for truncate... yes checking for uname... yes checking for unsetenv... no checking for utimes... yes checking for waitpid... yes checking for wcscoll... yes checking for _getpty... no checking for chroot... yes checking for link... yes checking for symlink... yes checking for fchdir... yes checking for fsync... yes checking for fdatasync... yes checking for ctermid_r... no checking for flock... no checking for getpagesize... yes checking for true... true checking for inet_aton in -lc... no checking for inet_aton in -lresolv... yes checking for hstrerror... yes checking for inet_aton... yes checking for inet_pton... yes checking for setgroups... yes checking for openpty... no checking for openpty in -lutil... no checking for forkpty... no checking for forkpty in -lutil... no checking for fseek64... no checking for fseeko... yes checking for fstatvfs... yes checking for ftell64... no checking for ftello... yes checking for statvfs... yes checking for dup2... yes checking for getcwd... yes checking for strdup... yes checking for strerror... yes checking for memmove... yes checking for getpgrp... yes checking for setpgrp... (cached) yes checking for gettimeofday... yes checking for major... yes checking for getaddrinfo... no checking for getnameinfo... no checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for struct tm.tm_zone... no checking for tzname... yes checking for struct stat.st_rdev... yes checking for struct stat.st_blksize... yes checking for struct stat.st_blocks... yes checking for time.h that defines altzone... no checking whether sys/select.h and sys/time.h may both be included... yes checking for addrinfo... no checking for sockaddr_storage... no checking whether char is unsigned... no checking for an ANSI C-conforming const... yes checking for working volatile... yes checking for working signed char... yes checking for prototypes... yes checking for variable length prototypes and stdarg.h... yes checking for bad exec* prototypes... no checking if sockaddr has sa_len member... no checking whether va_list is an array... no checking for gethostbyname_r... yes checking gethostbyname_r with 6 args... no checking gethostbyname_r with 5 args... yes checking for __fpu_control... no checking for __fpu_control in -lieee... no checking for --with-fpectl... no checking for --with-libm=STRING... default LIBM="-lm" checking for --with-libc=STRING... default LIBC="" checking for hypot... yes checking wchar.h usability... yes checking wchar.h presence... yes checking for wchar.h... yes checking for wchar_t... yes checking size of wchar_t... 4 checking for UCS-4 tcl... no checking what type to use for unicode... unsigned short checking whether byte ordering is bigendian... yes checking whether right shift extends the sign bit... yes checking for getc_unlocked() and friends... yes checking for rl_pre_input_hook in -lreadline... no checking for rl_completion_matches in -lreadline... no checking for broken nice()... 
no checking for broken poll()... no checking for working tzset()... no checking for tv_nsec in struct stat... yes checking whether mvwdelch is an expression... yes checking whether WINDOW has _flags... yes checking for /dev/ptmx... yes checking for /dev/ptc... no checking for socklen_t... yes checking for build directories... done configure: creating ./config.status config.status: creating Makefile.pre config.status: creating Modules/Setup.config config.status: creating pyconfig.h creating Setup creating Setup.local creating Makefile ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-18 21:36 Message: Logged In: YES user_id=29957 Hm. I don't get this on Solaris 7 using GCC. Can you try with gcc, and see if the problem is in Sun's headers on your system, or with the Sun compiler? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972724&group_id=5470 From noreply at sourceforge.net Fri Jun 18 12:42:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:34:49 2004 Subject: [ python-Bugs-973507 ] sys.stdout problems with pythonw.exe Message-ID: Bugs item #973507, was opened at 2004-06-15 20:34 Message generated for change (Comment added) made by manlioperillo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I have written this script for reproducing the bug: import sys class teeIO: def __init__(self, *files): self.__files = files def write(self, str): for i in self.__files: print >> trace, 'writing on %s: %s' % (i, str) i.write(str) print >> trace, '-' * 70 def tee(*files): return teeIO(*files) log = file('log.txt', 'w') err = file('err.txt', 'w') trace = file('trace.txt', 'w') sys.stdout = tee(log, sys.__stdout__) sys.stderr = tee(err, sys.__stderr__) def write(n, width): sys.stdout.write('x' * width) if n == 1: return write(n - 1, width) try: 1/0 except: write(1, 4096) [output from err.log] Traceback (most recent call last): File "sys.py", line 36, in ? write(1, 4096) File "sys.py", line 28, in write sys.stdout.write('x' * width) File "sys.py", line 10, in write i.write(str) IOError: [Errno 9] Bad file descriptor TeeIO is needed for actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, ad example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-18 16:42 Message: Logged In: YES user_id=1054957 I have found a very simple patch. 
First I have implemented this function: import os def isrealfile(file): """ Test if file is on the os filesystem """ if not hasattr(file, 'fileno'): return False try: tmp = os.dup(file.fileno()) except: return False else: os.close(tmp); return True Microsoft's implementations of stdout/err/in when no console is created (and when no pipes are used) are actually not 'real' files. Then I have added the following code in sitecustomize.py: import sys class NullStream: """ A file like class that writes nothing """ def close(self): pass def flush(self): pass def write(self, str): pass def writelines(self, sequence): pass if not isrealfile(sys.__stdout__): sys.stdout = NullStream() if not isrealfile(sys.__stderr__): sys.stderr = NullStream() I have tested the code only on Windows XP Pro. P.S. isrealfile could be added to the os module. Regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-16 17:05 Message: Logged In: YES user_id=1054957 The problem with this bug is that I have a script that can be executed both with python.exe and with pythonw.exe! How can I know if stdout is connected to a console? I think a 'patch' would be to replace sys.stdout/err with a null stream instead of using the Windows stdout/err implementation. If fileno can't be implemented, it should not be a problem. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-15 23:09 Message: Logged In: YES user_id=31435 Ya, this is well known, although it may not be documented. pythonw's purpose in life is *not* to create (or inherit) a console window (a "DOS box"). Therefore stdin, stdout, and stderr aren't attached to anything usable. Microsoft's C runtime seems to attach them to buffers that aren't connected to anything, so they complain if you ever exceed the buffer size. The short course is that stdin, stdout and stderr are useless in programs without a console window, so you shouldn't use them. Or you should install your own file-like objects, and make them do something useful to you. I think it would be helpful if pythonw did something fancier (e.g., pop up a window containing attempted output), but that's in new-feature territory, and nobody has contributed code for it anyway. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 From noreply at sourceforge.net Fri Jun 18 10:32:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:39:31 2004 Subject: [ python-Bugs-975387 ] Python and segfaulting extension modules Message-ID: Bugs item #975387, was opened at 2004-06-18 16:32 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975387&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Folke Lemaitre (zypher) Assigned to: Nobody/Anonymous (nobody) Summary: Python and segfaulting extension modules Initial Comment: Normally when a segfault occurs in a python thread (mainly in extension modules), two things can happen: * Python segfaults * Python uses 99% CPU while Garbage Collecting the same INVALID object over and over again The second result is reported as a bug somewhere else.
In a python program with lots of threads and lots of loaded extension modules it is almost impossible to find the cause of a segfault. Wouldn't it be possible to have some traceback printed when a SIGSEGV occurs? Would be really very handy. There even exists an extension module that does just that, but unfortunately only intercepts problems from the main thread. (http://systems.cs.uchicago.edu/wad/) I think something similar should be standard behaviour of python. Even nicer would be if python just raises an exception encapsulating the c stacktrace or even converting a c trace to a python traceback Example WAD output: WAD can either be imported as a Python extension module or linked to an extension module. To illustrate, consider the earlier example: % python foo.py Segmentation Fault (core dumped) % To identify the problem, a programmer can run Python interactively and import WAD as follows: % python Python 2.0 (#1, Oct 27 2000, 14:34:45) [GCC 2.95.2 19991024 (release)] on sunos5 Type "copyright", "credits" or "license" for more information. >>> import libwadpy WAD Enabled >>> execfile("foo.py") Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 16, in ? foo() File "foo.py", line 13, in foo bar() File "foo.py", line 10, in bar spam() File "foo.py", line 7, in spam doh.doh(a,b,c) SegFault: [ C stack trace ] #2 0x00027774 in call_builtin (func=0x1c74f0,arg=0x1a1ccc,kw=0x0) #1 0xff022f7c in _wrap_doh (0x0,0x1a1ccc,0x160ef4,0x9c,0x56b44,0x1aa3d8) #0 0xfe7e0568 in doh(a=0x3,b=0x4,c=0x0) in 'foo.c', line 28 /u0/beazley/Projects/WAD/Python/foo.c, line 28 int doh(int a, int b, int *c) { => *c = a + b; return *c; } >>> ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975387&group_id=5470 From noreply at sourceforge.net Fri Jun 18 19:57:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:40:24 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-04-30 16:19 Message generated for change (Settings changed) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) >Assigned to: M.-A. Lemburg (lemburg) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 09:30 Message: Logged In: YES user_id=113328 Looks OK to me. Not sure where you found docs which specify the behaviour, but I'm OK with "Windows". There's a very small risk of compatibility issues, but as the module was new in 2.3, and the behaviour was inconsistent before, I see no reason why this should be an issue. I'd recommend committing this. 
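Until the library itself is made consistent, a caller-side sketch of the obvious workaround: collapse whatever platform.system() returns into one spelling before comparing. The system_name helper is made up for illustration.

import platform

def system_name():
    # Collapse the two spellings this report describes into a single value.
    name = platform.system()
    if name == 'Microsoft Windows':
        name = 'Windows'
    return name

print system_name()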
---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 07:38 Message: Logged In: YES user_id=469548 New patch, the docs say we should use 'Windows' instead of 'Microsoft Windows', so we do: Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:39:10 -0000 @@ -957,6 +957,8 @@ # platforms if use_syscmd_ver: system,release,version = _syscmd_ver(system) + if string.find(system, 'Microsoft Windows') != -1: + system = 'Windows' # In case we still don't know anything useful, we'll try to # help ourselves ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 07:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Fri Jun 18 20:03:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:40:55 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 11:29 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-18 17:03 Message: Logged In: YES user_id=357491 I just checked the link to the diff, Johannes, and the diff says it was generated on June 5. Can you check to see if you did upload the newest version to the location? ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-07 04:32 Message: Logged In: YES user_id=469548 Sorry, in my haste made a silly mistake. Removed 'self' from is_destination_in_source definition and uploaded new patch to previous location. 
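The uploaded patch itself is not reproduced in this item; the following is only an assumed sketch of what a destination-inside-source guard for shutil.move() could look like. The name echoes the is_destination_in_source helper that appears in the traceback below, but its body here is a guess.

import os.path

def destination_inside_source(src, dst):
    # True when dst is src itself or a path underneath src.
    src = os.path.abspath(src)
    dst = os.path.abspath(dst)
    return dst == src or dst.startswith(src + os.sep)

# e.g. destination_inside_source('a', 'a/c') is True, so move('a', 'a/c')
# should refuse rather than destroy 'a'.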
---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-05 10:46 Message: Logged In: YES user_id=2772 I applied the attached patch, and got this exception: >>> shutil.move("a", "a/c") Traceback (most recent call last): File "", line 1, in ? File "/usr/src/cvs-src/python/dist/src/Lib/shutil.py", line 168, in move if is_destination_in_source(src, dst): TypeError: is_destination_in_source() takes exactly 3 arguments (2 given) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 10:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 08:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Thu Jun 17 17:50:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:42:51 2004 Subject: [ python-Bugs-974635 ] Slice indexes passed to __getitem__ are wrapped Message-ID: Bugs item #974635, was opened at 2004-06-17 10:22 Message generated for change (Comment added) made by connelly You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Connelly (connelly) Assigned to: Nobody/Anonymous (nobody) Summary: Slice indexes passed to __getitem__ are wrapped Initial Comment: Hi, When a slice is passed to __getitem__, the indices for the slice are wrapped by the length of the object (adding len(self) once to both start index if < 0 and the end index if < 0). class C: def __getitem__(self, item): print item def __len__(self): return 10 >>> x = C() >>> x[-3] -3 >>> x[-3:-2] slice(7, 8, None) This seems to be incorrect (at least inconsistent). However, it truly becomes a BUG when one tries to emulate a list type: class C: # Emulated list type def __init__(self, d): self.data = d def __len__(self): return len(self.data) def __getitem__(self, item): if isinstance(item, slice): indices = item.indices(len(self)) return [self[i] for i in range(*indices)] else: return self.data[item] x = [1, 2, 3] y = C([1, 2, 3]) >>> x[-7:-5] [] >>> print y[-7:-5] [1] (incorrect behavior) Here the item.indices method does the exact same wrapping process AGAIN. So indices are wrapped once as the slice index is constructed and passed to __getitem__, and again when item.indices is called. This makes it impossible to implement a correctly behaving list data type without using hacks to suppress this Python bug. I would advise that you make the slices object passed to __getitem__ not have its start/end indexes wrapped. Thanks, Connelly Barnes E-mail: '636f6e6e656c6c796261726e6573407961686f6f2e636f6d'.decode('hex') ---------------------------------------------------------------------- >Comment By: Connelly (connelly) Date: 2004-06-17 21:50 Message: Logged In: YES user_id=1039782 I'm not sure what you mean by 'make your classes new-style'. 
According to http://www.python.org/doc/2.3.4/ref/specialnames.html, the __getitem__ method should be used, and the __getslice__ method is deprecated. If you subclass the built-in list type, then the __getitem__ method is *not* called when subscripting with a slice. Instead, the __getslice__ method is called. Try it out. So I can't see any reasonable way to get around this bug. You can try and modify the class C shown above, so that it behaves correctly. I don't think you will be able to do it without putting in special "workaround" code to avoid this Python bug. y = C([1, 2, 3]) >>> print y[-7:-5] # should be []. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-17 11:23 Message: Logged In: YES user_id=6656 You'll be happier if you make your classes new-style. I don't know if it's worth changing old-style classes at this point. Personally, I'm trying to forget about them just as quickly as possible :-) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=974635&group_id=5470 From noreply at sourceforge.net Fri Jun 18 08:50:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:45:43 2004 Subject: [ python-Bugs-975330 ] Inconsistent newline handling in email module Message-ID: Bugs item #975330, was opened at 2004-06-18 14:50 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975330&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anders Hammarquist (iko) Assigned to: Nobody/Anonymous (nobody) Summary: Inconsistent newline handling in email module Initial Comment: text/* parts of email messages must use \r\n as the newline separator. For unencoded messages, smtplib and friends take care of the translation from \n to \r\n in the SMTP processing. Parts which are unencoded (i.e. 7bit character sets) MUST use \n line endings, or smtplib will translate to \r\r\n. Parts that get encoded using quoted-printable can use either, because the qp-encoder assumes input data is text and reencodes with \n. However, parts which get encoded using base64 are NOT translated, and so must use \r\n line endings. This means you have to guess whether your text is going to get encoded or not (admittedly, usually not that hard), and translate the line endings appropriately before generating a Message instance. I think the fix would be for Charset.encode_body() to always force the encoder to text mode (i.e. binary=False), since it seems unlikely to have a Charset for something which is not text. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975330&group_id=5470 From noreply at sourceforge.net Thu Jun 17 17:31:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:46:19 2004 Subject: [ python-Bugs-970799 ] Pyton 2.3.4 Make Test Fails on Mac OS X Message-ID: Bugs item #970799, was opened at 2004-06-10 16:42 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None >Priority: 3 Submitted By: D.
Evan Kiefer (dekiefer) >Assigned to: Brett Cannon (bcannon) Summary: Python 2.3.4 Make Test Fails on Mac OS X Initial Comment: Under Mac OSX 10.3.4 with latest security update. Power Mac G4 733MHz Trying to install Zope 2.7.0 with Python 2.3.4. I first used fink to install 2.3.4 but Zope could not find module 'os' to import. I then followed the instructions at: http://zope.org/Members/jens/docs/Document.2003-12-27.2431/ document_view to install Python with the default configure. Unlike under fink, doing this allowed me to run 'make test'. It too noted import problems for 'os' and 'site'. test_tempfile 'import site' failed; use -v for traceback Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/tf_inherit_check.py", line 6, in ? import os ImportError: No module named os test test_tempfile failed -- Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/test_tempfile.py", line 307, in test_noinherit self.failIf(retval > 0, "child process reports failure") File "/Volumes/Spielen/Python-2.3.4/Lib/unittest.py", line 274, in failIf if expr: raise self.failureException, msg AssertionError: child process reports failure test_atexit 'import site' failed; use -v for traceback Traceback (most recent call last): File "@test.py", line 1, in ? import atexit ImportError: No module named atexit test test_atexit failed -- '' == "handler2 (7,) {'kw': 'abc'}\nhandler2 () {}\nhandler1\n" test_audioop ---------- test_poll skipped -- select.poll not defined -- skipping test_poll test_popen 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback test_popen2 ------------------- 229 tests OK. 2 tests failed: test_atexit test_tempfile 24 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_dl test_email_codecs test_gl test_imgfile test_largefile test_linuxaudiodev test_locale test_nis test_normalization test_ossaudiodev test_pep277 test_poll test_socket_ssl test_socketserver test_sunaudiodev test_timeout test_urllibnet test_winreg test_winsound Those skips are all expected on darwin. make: *** [test] Error 1 deksmacintosh:3-> ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-17 14:31 Message: Logged In: YES user_id=357491 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? I suspect the test_atexit failure is a Fink-specific issue and the test_tempfile failure was just a timing quirk since it has a threaded test and that can make it sensitive to timing. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 From noreply at sourceforge.net Fri Jun 18 20:07:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:48:17 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-04-30 19:19 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: M.-A.
Lemburg (lemburg) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-18 20:07 Message: Logged In: YES user_id=6380 Note that the patch by jlgijsbers contains some ugly code. s.find(...) != -1? Yuck! I think it could just be if system == "Microsoft Windows": system = "Windows" That should work even in Python 1.5.2. And I don't think the string would ever be "Microsoft Windows" plus some additive. And yes, please, check it in. (Or do I have to do it myself? :-) ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 12:30 Message: Logged In: YES user_id=113328 Looks OK to me. Not sure where you found docs which specify the behaviour, but I'm OK with "Windows". There's a very small risk of compatibility issues, but as the module was new in 2.3, and the behaviour was inconsistent before, I see no reason why this should be an issue. I'd recommend committing this. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 10:38 Message: Logged In: YES user_id=469548 New patch, the docs say we should use 'Windows' instead of 'Microsoft Windows', so we do: Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:39:10 -0000 @@ -957,6 +957,8 @@ # platforms if use_syscmd_ver: system,release,version = _syscmd_ver(system) + if string.find(system, 'Microsoft Windows') != -1: + system = 'Windows' # In case we still don't know anything useful, we'll try to # help ourselves ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 10:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. 
Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Fri Jun 18 21:24:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:52:38 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 08:32 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None >Status: Closed Resolution: None Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-18 18:24 Message: Logged In: YES user_id=357491 OK, I took Robin's idea of extracting out the regex, but just made it case- insensitive with re.IGNORECASE. Also ripped out dealing with the case lacking '0x' thanks to Tim's tip. Finally, I changed the match length from 6 to 6-16 to be able to handle 64-bit addresses (only in 2.4 since I could be wrong). Checked in as rev. 1.93 in HEAD and rev. 1.86.8.2 in 2.3 . Thanks, Robin. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 10:05 Message: Logged In: YES user_id=31435 This can be simplifed. The code in PyString_FromFormatV() massages the native %p result to guarantee it begins with "0x". It didn't always do this, and inspect.py was written when Python didn't massage the native %p result at all. Now there's no need to cater to "0X", or to cater to that "0x" might be missing. The case of a-f may still differ across platforms, and that's deliberate (addresses are of most interest to C coders, and they're "used to" whichever case their platform delivers for %p in C code). 
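For reference, the case-insensitive matching bcannon describes checking in can be sketched roughly as follows (an illustration only, not the exact expression committed to pydoc.py):

    import re

    # Assumed form: 6-16 hex digits after ' at 0x', matched without
    # regard to case, keeping any trailing '>' characters.
    _addr_re = re.compile(r' at 0x[0-9a-f]{6,16}(>+)$', re.IGNORECASE)

    def strip_address(text):
        return _addr_re.sub(r'\1', text)

    print strip_address('<__main__.bongo instance at 0x0112FFD0>')
    # prints: <__main__.bongo instance>

A single pattern with re.IGNORECASE covers both the lowercase and uppercase address forms quoted in the original report.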
---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 09:31 Message: Logged In: YES user_id=6946 This is the PROPER pasted in patch =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:33:52 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$') def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 09:23 Message: Logged In: YES user_id=6946 This patch seems to fix variable case problems =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:26:31 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$'] def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 08:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 12:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Fri Jun 18 21:43:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:54:40 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 12:57 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-18 18:43 Message: Logged In: YES user_id=357491 I get the "problem" under 2.4 CVS on OS X. But as Tim said, the ISO C standard just says that it should accept time_t which can be *any* arithmetic type. I say don't bother fixing this since you shouldn't be passing in random values to ctime as it is. Plus ctime is not the best way to do string formatting of dates. What do you think, Tim? Think okay to just close this sucker as "won't fix"? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 11:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 11:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. ### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 13:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. 
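In the meantime, application code can add the range check phr is asking for itself; a minimal sketch, assuming a 32-bit signed time_t (the time module does not expose the real bounds):

    import time

    _MIN_T, _MAX_T = -2**31, 2**31 - 1   # assumption: 32-bit signed time_t

    def checked_ctime(secs):
        # Refuse values that cannot fit in the assumed time_t range
        # instead of silently producing a wrong date.
        if not (_MIN_T <= secs <= _MAX_T):
            raise ValueError("time out of range: %r" % (secs,))
        return time.ctime(secs)

The C patches in this thread take a similar approach inside timemodule.c, where the real fix belongs.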
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 13:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Fri Jun 18 15:33:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 20 23:57:06 2004 Subject: [ python-Bugs-975556 ] HTMLParser lukewarm on bogus bare attribute chars Message-ID: Bugs item #975556, was opened at 2004-06-18 14:33 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975556&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Mike Coleman (mkc) Assigned to: Nobody/Anonymous (nobody) Summary: HTMLParser lukewarm on bogus bare attribute chars Initial Comment: I tripped over the same problem mentioned in bug #921657 (HTMLParser.py), except that my bogus attribute char is '|' instead of '@'. May I suggest that HTMLParser either require strict compliance with the HTML spec, or alternatively that it accept everything reasonable? The latter approach would be much more useful, and it would also be valuable to have this decision documented. In particular, 'attrfind' needs to be changed to accept (following the '=\s*') something like the subpattern given for 'locatestarttagend' (see the "bare value" line). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975556&group_id=5470 From noreply at sourceforge.net Fri Jun 18 07:27:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:01:48 2004 Subject: [ python-Bugs-897820 ] db4 4.2 == :-( (test_anydbm and test_bsddb3) Message-ID: Bugs item #897820, was opened at 2004-02-16 17:03 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897820&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Gregory P. Smith (greg) Summary: db4 4.2 == :-( (test_anydbm and test_bsddb3) Initial Comment: This machine, running fedora core 2 (test) has db4 4.2.52 installed. test_anydbm fails utterly with this combination. 
3 of the 4 tests fail, the failing part is the same in each case: File "Lib/anydbm.py", line 83, in open return mod.open(file, flag, mode) File "Lib/dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "Lib/bsddb/__init__.py", line 293, in hashopen d.open(file, db.DB_HASH, flags, mode) DBInvalidArgError: (22, 'Invalid argument -- DB_TRUNCATE illegal with locking specified') test_bsddb passes, but test_bsddb3 fails with a similar error: test test_bsddb3 failed -- Traceback (most recent call last): File "Lib/bsddb/test/test_compat.py", line 82, in test04_n_flag f = hashopen(self.filename, 'n') File "Lib/bsddb/__init__.py", line 293, in hashopen d.open(file, db.DB_HASH, flags, mode) DBInvalidArgError: (22, 'Invalid argument -- DB_TRUNCATE illegal with locking specified') ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-18 21:27 Message: Logged In: YES user_id=29957 Regardless of whether there's a workaround or not, the test suite should not fail. Fedora Core is a fairly well known distro, and while FC2 is probably the first to ship with db 4.2, I'm sure it won't be the last. ---------------------------------------------------------------------- Comment By: Gregory P. Smith (greg) Date: 2004-06-17 08:40 Message: Logged In: YES user_id=413 As Tim Peters pointed out on python-dev in march: > I suspect Sleepycat screwed us there, changing the rules in midstream. > Someone on c.l.py appeared to track down the same thing here, but in an app > instead of in our test suite: > > http://mail.python.org/pipermail/python-list/2004-May/220168.html > > The change log of Berkeley DB 4.2.52 says "9. Fix a bug to now > disallow DB_TRUNCATE on opens in locking environments, since we > cannot prevent race conditions ..." leaving the bug open until i look to see if there is a workaround. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897820&group_id=5470 From noreply at sourceforge.net Thu Jun 17 19:58:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:03:14 2004 Subject: [ python-Feature Requests-964437 ] idle help is modal Message-ID: Feature Requests item #964437, was opened at 2004-06-01 13:05 Message generated for change (Comment added) made by kbk You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=964437&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 4 Submitted By: Matthias Klose (doko) >Assigned to: Nobody/Anonymous (nobody) Summary: idle help is modal Initial Comment: [forwarded from http://bugs.debian.org/252130] the idle online help is unfortunately modal so that one cannot have the help window open and read it, and at the same time work in idle. One must close the help window before continuing in idle which is a nuisance. ---------------------------------------------------------------------- >Comment By: Kurt B. Kaiser (kbk) Date: 2004-06-17 18:58 Message: Logged In: YES user_id=149084 Raymond, I'm not planning on working on this now. Please don't assign it to me again. ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2004-06-04 00:19 Message: Logged In: YES user_id=149084 Making this an RFE. If you have time to work up a patch, that would be a big help. 
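For reference, the modal/non-modal distinction being requested comes down to whether the help window grabs control; a generic Tkinter sketch (not IDLE's actual help code):

    import Tkinter

    def show_help(parent, text):
        win = Tkinter.Toplevel(parent)
        win.title('Help')
        Tkinter.Label(win, text=text, justify=Tkinter.LEFT).pack()
        # A modal dialog would call win.grab_set() and win.wait_window()
        # here; leaving them out keeps the window non-modal, so the main
        # editor window stays usable while the help text is open.

Making the IDLE help viewer behave like the non-modal case is the substance of this request.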
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=964437&group_id=5470 From noreply at sourceforge.net Wed Jun 16 10:22:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:10:45 2004 Subject: [ python-Bugs-973901 ] missing word in defintion of xmldom InvalidStateErr Message-ID: Bugs item #973901, was opened at 2004-06-16 14:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973901&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Brian Gough (bgough) Assigned to: Nobody/Anonymous (nobody) Summary: missing word in defintion of xmldom InvalidStateErr Initial Comment: there appears to be a missing word after "that is not" in the following text in Doc/lib/xmldom.tex \begin{excdesc}{InvalidStateErr} Raised when an attempt is made to use an object that is not or is no longer usable. \end{excdesc} perhaps it should be "that is not defined" or something like that. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973901&group_id=5470 From noreply at sourceforge.net Fri Jun 18 15:13:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:14:52 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 01:28 Message generated for change (Comment added) made by svenil You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. 
The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 21:13 Message: Logged In: YES user_id=356603 I think it is OK. Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 12:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 09:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 18:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 11:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 03:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? 
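Reduced to a standalone sketch (not the actual cmd.py patch), the idea under review is to pair the completer swap with try/finally so the old completer is restored however the loop exits:

    import readline

    def run_with_completer(completer, loop):
        old_completer = readline.get_completer()
        readline.set_completer(completer)
        try:
            loop()
        finally:
            # Runs even if loop() raises, so the previous completer --
            # and whatever objects it keeps alive -- is always put back.
            readline.set_completer(old_completer)

Applying the same pattern inside Cmd.cmdloop() is the essence of the fix discussed above.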
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Fri Jun 18 10:52:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:15:56 2004 Subject: [ python-Bugs-975404 ] logging module uses deprecate apply() Message-ID: Bugs item #975404, was opened at 2004-06-18 15:52 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975404&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Barry Alan Scott (barry-scott) Assigned to: Nobody/Anonymous (nobody) Summary: logging module uses deprecate apply() Initial Comment: The use of apply in logging causes warning to be issued by python when turning programs into executables with Gordon's McMillians Installer and probably others. Replacing the apply() calls with the modern idium would fix the problem. The work around is the add "import warnings" in the main module of your program. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975404&group_id=5470 From noreply at sourceforge.net Fri Jun 18 18:07:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:16:28 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 15:39 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Neil Schemenauer (nascheme) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-18 22:07 Message: Logged In: YES user_id=35752 My patch allows nb_int and nb_long to return either long or int objects. That means callers of PyNumber_Long need to be careful too. I'm uploading a new version of the patch that adds a test for int/long interchangeability. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-17 17:02 Message: Logged In: YES user_id=6380 (BTW, shouldn't it be "convertible"?) I'm torn. For practical reasons, I'm for the minimal patch originally proposed. Also because I want to allow __int__ returning a long (and vice versa!). But according to my post this morning to python-dev, I would like to ensure that the return type of int(), float(), etc. can be relied upon by type inferencing engines. It would be a shame to have to assume that after x = int(y) the type of x could be anything... Neil, can you rework your patch to be lenient about int/long but otherwise be strict? 
It means callers of PyNumber_Int() should still be careful, but that's the way of the future anyway. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-16 22:23 Message: Logged In: YES user_id=4771 Let's be careful here. I can imagine that some __int__() implementations in user code would return a long instead, as it is the behavior of int(something_too_big) already. As far as I know, the original bug this tracker is for is the only place where it was blindly assumed that a specific conversion method returned an object of a specific type. I'm fine with just fixing that case. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-08 02:54 Message: Logged In: YES user_id=35752 Attaching patch. One outstanding issue is that it may make sense to search for and remove unnecessary type checks (e.g. PyNumber_Int followed by PyInt_Check). Also, this change only broke one test case but I have no idea how much user code this might break. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 15:37 Message: Logged In: YES user_id=31435 Assigned to Neil, as a reminder to attach his patch. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 21:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. 
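In pure Python, the check Neil proposes amounts to something like this (an illustration of the idea, not the C patch):

    def number_float(obj):
        # Mirror of what PyNumber_Float is being asked to do: call the
        # conversion hook, then verify it really produced a float.
        if isinstance(obj, float):
            return obj
        result = obj.__float__()
        if not isinstance(result, float):
            raise TypeError('__float__ returned non-float (type %s)'
                            % type(result).__name__)
        return result

With a check like this in place, both float(A()) and f(A()) from the report would raise TypeError instead of returning garbage.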
---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 13:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Wed Jun 16 18:44:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:20:52 2004 Subject: [ python-Bugs-865120 ] bsddb test_all.py - incorrect Message-ID: Bugs item #865120, was opened at 2003-12-23 11:42 Message generated for change (Comment added) made by greg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=865120&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Fixed >Priority: 4 Submitted By: Shura Zam (debil_urod) >Assigned to: Gregory P. Smith (greg) Summary: bsddb test_all.py - incorrect Initial Comment: bsddb\test\test_all.py raise exception, bat all other tests work correct. ======Command Line======= c:\>python c:\Python\Lib\bsddb\test\test_all.py ======Output============ -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- =-=-=-=-=-=-=-=-=-=-=-=-=-=-= Sleepycat Software: Berkeley DB 4.1.25: (December 19, 2002) bsddb.db.version(): (4, 1, 25) bsddb.db.__version__: 4.2.0 bsddb.db.cvsid: $Id: _bsddb.c,v 1.17.6.2 2003/09/21 23:10:23 greg Exp $ python version: 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] My pid: 2012 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- =-=-=-=-=-=-=-=-=-=-=-=-=-=-= Traceback (most recent call last): File "C:\Lang\ActiveState\Python\Lib\bsddb\test\test_all. py", line 81, in ? 
unittest.main(defaultTest='suite') File "C:\Lang\ACTIVE~1\Python\lib\unittest.py", line 720, in __init__ self.parseArgs(argv) File "C:\Lang\ACTIVE~1\Python\lib\unittest.py", line 747, in parseArgs self.createTests() File "C:\Lang\ACTIVE~1\Python\lib\unittest.py", line 753, in createTests self.module) File "C:\Lang\ACTIVE~1\Python\lib\unittest.py", line 519, in loadTestsFromNames suites.append(self.loadTestsFromName(name, module)) File "C:\Lang\ACTIVE~1\Python\lib\unittest.py", line 504, in loadTestsFromName test = obj() File "C:\Lang\ActiveState\Python\Lib\bsddb\test\test_all. py", line 69, in suite alltests.addTest(module.suite()) AttributeError: 'module' object has no attribute 'suite' ---------------------------------------------------------------------- >Comment By: Gregory P. Smith (greg) Date: 2004-06-16 15:44 Message: Logged In: YES user_id=413 typo in test_all.py, a fix was committed to HEAD in march: revision 1.4 date: 2004/03/16 07:07:06; author: greg; state: Exp; lines: +1 -1 bugfix for people executing test_all to run the test suite. (call the correct function) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=865120&group_id=5470 From noreply at sourceforge.net Thu Jun 17 13:02:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:22:40 2004 Subject: [ python-Bugs-966618 ] float_subtype_new() bug Message-ID: Bugs item #966618, was opened at 2004-06-04 11:39 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: Neil Schemenauer (nascheme) Summary: float_subtype_new() bug Initial Comment: A rather obsure bug in the subclassing code: >>> class A: ... def __float__(self): return 'hello' ... >>> float(A()) 'hello' >>> class f(float): pass ... >>> f(A()) -5.7590155905901735e-56 In debug mode, the assert() in float_subtype_new() fails instead. In non-debug mode, the value we get is the result of typecasting the PyStringObject* to a PyFloatObject*. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-17 13:02 Message: Logged In: YES user_id=6380 (BTW, shouldn't it be "convertible"?) I'm torn. For practical reasons, I'm for the minimal patch originally proposed. Also because I want to allow __int__ returning a long (and vice versa!). But according to my post this morning to python-dev, I would like to ensure that the return type of int(), float(), etc. can be relied upon by type inferencing engines. It would be a shame to have to assume that after x = int(y) the type of x could be anything... Neil, can you rework your patch to be lenient about int/long but otherwise be strict? It means callers of PyNumber_Int() should still be careful, but that's the way of the future anyway. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-16 18:23 Message: Logged In: YES user_id=4771 Let's be careful here. I can imagine that some __int__() implementations in user code would return a long instead, as it is the behavior of int(something_too_big) already. 
As far as I know, the original bug this tracker is for is the only place where it was blindly assumed that a specific conversion method returned an object of a specific type. I'm fine with just fixing that case. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-07 22:54 Message: Logged In: YES user_id=35752 Attaching patch. One outstanding issue is that it may make sense to search for and remove unnecessary type checks (e.g. PyNumber_Int followed by PyInt_Check). Also, this change only broke one test case but I have no idea how much user code this might break. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-07 11:37 Message: Logged In: YES user_id=31435 Assigned to Neil, as a reminder to attach his patch. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 17:00 Message: Logged In: YES user_id=35752 I've got an alternative patch. SF cvs is down at the moment so I'll have to generate a patch later. My change makes CPython match the behavior of Jython. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-05 15:46 Message: Logged In: YES user_id=35752 I think the right fix is to have PyNumber_Int, PyNumber_Float, and PyNumber_Long check the return value of slot function (i.e. nb_int, nb_float). That matches the behavior of PyObject_Str and PyObject_Repr. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:13 Message: Logged In: YES user_id=1057404 (ack, spelling error copied from intobject.c) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must be convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:11 Message: Logged In: YES user_id=1057404 (Inline, I can't seem to attach things) ### --- floatobject.c- Sat Jun 5 13:21:07 2004 +++ floatobject.c Sat Jun 5 13:23:00 2004 @@ -765,7 +765,13 @@ tmp = float_new(&PyFloat_Type, args, kwds); if (tmp == NULL) return NULL; - assert(PyFloat_CheckExact(tmp)); + if(!PyFloat_CheckExact(tmp)) { + PyErr_SetString(PyExc_ValueError, + "value must convertable to a float"); + Py_DECREF(tmp); + return NULL; + } + new = type->tp_alloc(type, 0); if (new == NULL) { Py_DECREF(tmp); ### ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 09:01 Message: Logged In: YES user_id=1057404 floatobject.c contains an assertion that the value can be coerced into a float, but not a runtime if. I've changed it to be in line with what int_subtype_new() does. This may not be 100% correct, however, as they both allow a string to be returned from __int__() and __float__(), respectively. complex() does not allow this, however, and it throws TypeError (while int_subtype_new() and float_subtype_new() throw ValueError). 
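The int side of the "lenient about int/long, strict otherwise" rule Guido asks for can be pictured the same way (again an illustration, not the patch):

    def number_int(obj):
        # Accept either integer type from __int__ -- int() of a huge
        # value already returns a long -- but reject everything else.
        result = obj.__int__()
        if not isinstance(result, (int, long)):
            raise TypeError('__int__ returned non-int (type %s)'
                            % type(result).__name__)
        return result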
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=966618&group_id=5470 From noreply at sourceforge.net Sat Jun 19 02:32:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:23:55 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 15:57 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 02:32 Message: Logged In: YES user_id=31435 Sorry, I can't make more time for this. Attached is a patch that's the best I can do without #ifdef'ing the snot out of every platform on Earth. As the comments explain, it's not bulletproof (and probably can't be). It introduces a _PyTime_DoubleToTimet() function that attempts to detect "unreasonable" information loss, and fiddles ctime(), localtime() and gmtime() to use it. It should really be made extern (added to Python's internal C API & declared in a header file), and little bits of datetimemodule.c fiddled to use it too. insomnike, while C89 was clear as mud on this point, C99 is clear that there's nothing wrong with a negative time_t value, and *most* platforms accept them (back to about 1900). Nothing says a time_t can't be bigger than INT_MAX either, and, indeed, platforms that use ints to store time_t will be forced to switch to fatter types before Unix hits its own version of "the Y2K bug" in a few decades (the # of seconds since 1970 is already over a billion; signed 32-bit ints will be too small in another 30+ years). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-06-19 00:42 Message: Logged In: YES user_id=72053 I think this needs to be fixed and not just closed. Ctime in a C library might be able to accept any numeric type as a time_t, but no matter what type that turns out to be, I don't think ctime is allowed to give a totally wrong answer. The issue here is Python numeric types don't necessarily map onto C numeric types. I think it's ok to raise an exception if a Python integer doesn't correctly map onto a valid time_t, but the current behavior is to map incorrectly. That can cause all kinds of silent bugs in a program. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 21:43 Message: Logged In: YES user_id=357491 I get the "problem" under 2.4 CVS on OS X. But as Tim said, the ISO C standard just says that it should accept time_t which can be *any* arithmetic type. I say don't bother fixing this since you shouldn't be passing in random values to ctime as it is. 
Plus ctime is not the best way to do string formatting of dates. What do you think, Tim? Think okay to just close this sucker as "won't fix"? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 14:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 14:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. ### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 16:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 16:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Thu Jun 17 17:35:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:24:13 2004 Subject: [ python-Bugs-973901 ] missing word in defintion of xmldom InvalidStateErr Message-ID: Bugs item #973901, was opened at 2004-06-16 07:22 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973901&group_id=5470 Category: Documentation Group: None >Status: Closed Resolution: None Priority: 5 Submitted By: Brian Gough (bgough) Assigned to: Nobody/Anonymous (nobody) Summary: missing word in defintion of xmldom InvalidStateErr Initial Comment: there appears to be a missing word after "that is not" in the following text in Doc/lib/xmldom.tex \begin{excdesc}{InvalidStateErr} Raised when an attempt is made to use an object that is not or is no longer usable. \end{excdesc} perhaps it should be "that is not defined" or something like that. 
---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-17 14:35 Message: Logged In: YES user_id=357491 Fixed in rev. 1.24 in 2.4 and rev. 1.23.12.1 in 2.3 . Thanks, Brian. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973901&group_id=5470 From noreply at sourceforge.net Sat Jun 19 02:35:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:24:23 2004 Subject: [ python-Bugs-975763 ] shutil.copytree uses os.mkdir instead of os.mkdirs Message-ID: Bugs item #975763, was opened at 2004-06-18 23:35 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975763&group_id=5470 Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.copytree uses os.mkdir instead of os.mkdirs Initial Comment: shutil.copytree uses os.mkdir instead of os.mkdirs for creating the new destination directory. Any reason why it doesn't use os.mkdirs? If there is the docs should be made more specific in stating that it will not create intermediate directories for the destination. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975763&group_id=5470 From noreply at sourceforge.net Fri Jun 18 11:54:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:25:20 2004 Subject: [ python-Bugs-944082 ] urllib2 authentication mishandles empty password Message-ID: Bugs item #944082, was opened at 2004-04-28 19:02 Message generated for change (Comment added) made by mkc You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944082&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 5 Submitted By: Jacek Trzmiel (yangir) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 authentication mishandles empty password Initial Comment: If example.org requires authentication, then following code: host = 'example.org' user = 'testuser' password = '' url = 'http://%s/' % host authInfo = urllib2.HTTPPasswordMgrWithDefaultRealm() authInfo.add_password( None, host, user, password ) authHandler = urllib2.HTTPBasicAuthHandler( authInfo ) opener = urllib2.build_opener( authHandler ) urlFile = opener.open( url ) print urlFile.read() will die by throwing HTTPError 401: File "/usr/lib/python2.3/urllib2.py", line 419, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 401: Authorization Required even if authenticating with 'testuser' and empty password is valid. Empty password is mishandled (i.e. authentication with empty password string is ignored) in AbstractBasicAuthHandler.retry_http_basic_auth def retry_http_basic_auth(self, host, req, realm): user,pw = self.passwd.find_user_password(realm, host) if pw: [...] It can be fixed by changing: if pw: to if pw is not None: Python 2.3.2 (#1, Oct 9 2003, 12:03:29) [GCC 3.3.1 (cygming special)] on cygwin Type "help", "copyright", "credits" or "license" for more information. 
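The distinction the report hinges on is between a password that is false and one that is missing, which is easy to see in isolation:

    password = ''             # a real, deliberately empty password

    if password:              # the buggy test: '' is false, auth is skipped
        print 'would send credentials'

    if password is not None:  # the fix: only skip when nothing is stored
        print 'would send credentials'

Only the second test treats the empty string as usable credentials, which is what retry_http_basic_auth needs to do.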
---------------------------------------------------------------------- Comment By: Mike Coleman (mkc) Date: 2004-06-18 10:54 Message: Logged In: YES user_id=555 The change that was made here probably fixes the bug, but it looks like it would be better to make the test "user is not None" rather than "pw is not None", since there are two other places in the code that check the output of this function by checking the None-ness of user and no code that checks the None-ness of pw. (A comment that 'user' is what is to be checked would also be useful.) ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-05-05 20:41 Message: Logged In: YES user_id=21627 This is fixed with patch #944110. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=944082&group_id=5470 From noreply at sourceforge.net Sat Jun 19 00:42:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:32:07 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 20:57 Message generated for change (Comment added) made by phr You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- >Comment By: paul rubin (phr) Date: 2004-06-19 04:42 Message: Logged In: YES user_id=72053 I think this needs to be fixed and not just closed. Ctime in a C library might be able to accept any numeric type as a time_t, but no matter what type that turns out to be, I don't think ctime is allowed to give a totally wrong answer. The issue here is Python numeric types don't necessarily map onto C numeric types. I think it's ok to raise an exception if a Python integer doesn't correctly map onto a valid time_t, but the current behavior is to map incorrectly. That can cause all kinds of silent bugs in a program. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-19 01:43 Message: Logged In: YES user_id=357491 I get the "problem" under 2.4 CVS on OS X. But as Tim said, the ISO C standard just says that it should accept time_t which can be *any* arithmetic type. I say don't bother fixing this since you shouldn't be passing in random values to ctime as it is. Plus ctime is not the best way to do string formatting of dates. What do you think, Tim? Think okay to just close this sucker as "won't fix"? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 18:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. 
---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 18:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. ### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 21:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 21:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Sat Jun 19 09:44:51 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:35:53 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 20:29 Message generated for change (Comment added) made by jlgijsbers You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Nobody/Anonymous (nobody) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-19 15:44 Message: Logged In: YES user_id=469548 Yes, I did upload the newest version. The diff was generated on June 7, I just used the right copy of shutil.py this time (which I had already made on June 5). ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-19 02:03 Message: Logged In: YES user_id=357491 I just checked the link to the diff, Johannes, and the diff says it was generated on June 5. 
Can you check to see if you did upload the newest version to the location? ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-07 13:32 Message: Logged In: YES user_id=469548 Sorry, in my haste made a silly mistake. Removed 'self' from is_destination_in_source definition and uploaded new patch to previous location. ---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-05 19:46 Message: Logged In: YES user_id=2772 I applied the attached patch, and got this exception: >>> shutil.move("a", "a/c") Traceback (most recent call last): File "", line 1, in ? File "/usr/src/cvs-src/python/dist/src/Lib/shutil.py", line 168, in move if is_destination_in_source(src, dst): TypeError: is_destination_in_source() takes exactly 3 arguments (2 given) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 19:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 17:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Sat Jun 19 13:20:49 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:44:23 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-05-01 01:19 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: M.-A. Lemburg (lemburg) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-19 19:20 Message: Logged In: YES user_id=38388 Checked into CVS HEAD. I don't have a Python 2.3 branch checkout, so please check it in there as well if you have a need. Thanks. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-19 02:07 Message: Logged In: YES user_id=6380 Note that the patch by jlgijsbers contains some ugly code. s.find(...) != -1? Yuck! I think it could just be if system == "Microsoft Windows": system = "Windows" That should work even in Python 1.5.2. And I don't think the string would ever be "Microsoft Windows" plus some additive. And yes, please, check it in. 
(Or do I have to do it myself? :-) ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 18:30 Message: Logged In: YES user_id=113328 Looks OK to me. Not sure where you found docs which specify the behaviour, but I'm OK with "Windows". There's a very small risk of compatibility issues, but as the module was new in 2.3, and the behaviour was inconsistent before, I see no reason why this should be an issue. I'd recommend committing this. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 16:38 Message: Logged In: YES user_id=469548 New patch, the docs say we should use 'Windows' instead of 'Microsoft Windows', so we do: Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:39:10 -0000 @@ -957,6 +957,8 @@ # platforms if use_syscmd_ver: system,release,version = _syscmd_ver(system) + if string.find(system, 'Microsoft Windows') != -1: + system = 'Windows' # In case we still don't know anything useful, we'll try to # help ourselves ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 16:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Sat Jun 19 16:11:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:45:05 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 11:29 Message generated for change (Settings changed) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Epler (jepler) >Assigned to: Brett Cannon (bcannon) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-19 06:44 Message: Logged In: YES user_id=469548 Yes, I did upload the newest version. The diff was generated on June 7, I just used the right copy of shutil.py this time (which I had already made on June 5). 
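Since the patch itself is only available through the external link, here is a rough sketch of the kind of guard it needs to add: refuse the move when the normalized destination lies inside the source directory. The function name and details here are illustrative, not necessarily what the actual patch does:

import os.path

def destination_inside_source(src, dst):
    # Compare absolute paths; appending a separator to src avoids matching
    # siblings such as 'a' vs. 'ab'.
    src = os.path.abspath(src)
    dst = os.path.abspath(dst)
    if not src.endswith(os.path.sep):
        src += os.path.sep
    return dst.startswith(src)

With a check like this in shutil.move(), move('a', 'a/c') can fail up front instead of ending with the source tree removed.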
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 17:03 Message: Logged In: YES user_id=357491 I just checked the link to the diff, Johannes, and the diff says it was generated on June 5. Can you check to see if you did upload the newest version to the location? ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-07 04:32 Message: Logged In: YES user_id=469548 Sorry, in my haste made a silly mistake. Removed 'self' from is_destination_in_source definition and uploaded new patch to previous location. ---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-05 10:46 Message: Logged In: YES user_id=2772 I applied the attached patch, and got this exception: >>> shutil.move("a", "a/c") Traceback (most recent call last): File "", line 1, in ? File "/usr/src/cvs-src/python/dist/src/Lib/shutil.py", line 168, in move if is_destination_in_source(src, dst): TypeError: is_destination_in_source() takes exactly 3 arguments (2 given) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 10:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 08:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Sat Jun 19 16:18:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:45:42 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 15:57 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Open Resolution: None Priority: 4 Submitted By: paul rubin (phr) Assigned to: Brett Cannon (bcannon) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 16:18 Message: Logged In: YES user_id=31435 I should note that datetime.ctime() works reliably and portably for all years in 1 thru 9999 (it doesn't use the platform C library, so is sane). OTOH, the assorted datetime constructors that build from a POSIX timestamp have the same kinds of endcase portability woes as the time module functions working from timestamps. 
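The difference Tim describes is easy to see side by side; the exact behaviour of the time-module call still varies by platform (ValueError on some, the wrong date reported above on others):

import datetime, time

# datetime.ctime() does its own calendar arithmetic, so any supported year
# (1 through 9999) behaves the same everywhere:
print datetime.datetime(1901, 12, 13, 12, 45, 52).ctime()

# time.ctime() goes through the platform C library:
try:
    print time.ctime(-2 ** 34)
except ValueError:
    print 'platform ctime() rejected the timestamp'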
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-19 02:32 Message: Logged In: YES user_id=31435 Sorry, I can't make more time for this. Attached is a patch that's the best I can do without #ifdef'ing the snot out of every platform on Earth. As the comments explain, it's not bulletproof (and probably can't be). It introduces a _PyTime_DoubleToTimet() function that attempts to detect "unreasonable" information loss, and fiddles ctime(), localtime() and gmtime() to use it. It should really be made extern (added to Python's internal C API & declared in a header file), and little bits of datetimemodule.c fiddled to use it too. insomnike, while C89 was clear as mud on this point, C99 is clear that there's nothing wrong with a negative time_t value, and *most* platforms accept them (back to about 1900). Nothing says a time_t can't be bigger than INT_MAX either, and, indeed, platforms that use ints to store time_t will be forced to switch to fatter types before Unix hits its own version of "the Y2K bug" in a few decades (the # of seconds since 1970 is already over a billion; signed 32-bit ints will be too small in another 30+ years). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-06-19 00:42 Message: Logged In: YES user_id=72053 I think this needs to be fixed and not just closed. Ctime in a C library might be able to accept any numeric type as a time_t, but no matter what type that turns out to be, I don't think ctime is allowed to give a totally wrong answer. The issue here is Python numeric types don't necessarily map onto C numeric types. I think it's ok to raise an exception if a Python integer doesn't correctly map onto a valid time_t, but the current behavior is to map incorrectly. That can cause all kinds of silent bugs in a program. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 21:43 Message: Logged In: YES user_id=357491 I get the "problem" under 2.4 CVS on OS X. But as Tim said, the ISO C standard just says that it should accept time_t which can be *any* arithmetic type. I say don't bother fixing this since you shouldn't be passing in random values to ctime as it is. Plus ctime is not the best way to do string formatting of dates. What do you think, Tim? Think okay to just close this sucker as "won't fix"? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 14:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 14:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. 
### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 16:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 16:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Sat Jun 19 16:54:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:46:15 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 12:57 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 >Status: Closed >Resolution: Accepted Priority: 4 Submitted By: paul rubin (phr) Assigned to: Brett Cannon (bcannon) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-19 13:54 Message: Logged In: YES user_id=357491 OK, code looked good to me, so I checked it in as rev. 2.141 of timemodule.c on HEAD (not patching to 2.3 since not critical bugfix). Didn't add it to the C API (and thus did not change datetime) for lack of time. Filed bug #975996 to make sure it eventually gets done. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-19 13:18 Message: Logged In: YES user_id=31435 I should note that datetime.ctime() works reliably and portably for all years in 1 thru 9999 (it doesn't use the platform C library, so is sane). 
OTOH, the assorted datetime constructors that build from a POSIX timestamp have the same kinds of endcase portability woes as the time module functions working from timestamps. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-18 23:32 Message: Logged In: YES user_id=31435 Sorry, I can't make more time for this. Attached is a patch that's the best I can do without #ifdef'ing the snot out of every platform on Earth. As the comments explain, it's not bulletproof (and probably can't be). It introduces a _PyTime_DoubleToTimet() function that attempts to detect "unreasonable" information loss, and fiddles ctime(), localtime() and gmtime() to use it. It should really be made extern (added to Python's internal C API & declared in a header file), and little bits of datetimemodule.c fiddled to use it too. insomnike, while C89 was clear as mud on this point, C99 is clear that there's nothing wrong with a negative time_t value, and *most* platforms accept them (back to about 1900). Nothing says a time_t can't be bigger than INT_MAX either, and, indeed, platforms that use ints to store time_t will be forced to switch to fatter types before Unix hits its own version of "the Y2K bug" in a few decades (the # of seconds since 1970 is already over a billion; signed 32-bit ints will be too small in another 30+ years). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-06-18 21:42 Message: Logged In: YES user_id=72053 I think this needs to be fixed and not just closed. Ctime in a C library might be able to accept any numeric type as a time_t, but no matter what type that turns out to be, I don't think ctime is allowed to give a totally wrong answer. The issue here is Python numeric types don't necessarily map onto C numeric types. I think it's ok to raise an exception if a Python integer doesn't correctly map onto a valid time_t, but the current behavior is to map incorrectly. That can cause all kinds of silent bugs in a program. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 18:43 Message: Logged In: YES user_id=357491 I get the "problem" under 2.4 CVS on OS X. But as Tim said, the ISO C standard just says that it should accept time_t which can be *any* arithmetic type. I say don't bother fixing this since you shouldn't be passing in random values to ctime as it is. Plus ctime is not the best way to do string formatting of dates. What do you think, Tim? Think okay to just close this sucker as "won't fix"? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 11:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 11:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. 
### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 13:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 13:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Sat Jun 19 16:53:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:46:20 2004 Subject: [ python-Bugs-975996 ] Add _PyTime_DoubletoTimet to C API Message-ID: Bugs item #975996, was opened at 2004-06-19 13:53 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975996&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 3 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Add _PyTime_DoubletoTimet to C API Initial Comment: Need to add the function to the C API and then patch datetime to use it where it accepts a time_t timestamp. 
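For context, the reason datetime needs the same treatment is that its timestamp constructors funnel through the platform conversion as well, so their out-of-range behaviour is currently just as platform-dependent. A small illustration (what actually happens before the fix depends on the platform):

import datetime

try:
    # may raise, or may quietly produce a nonsense date, depending on how
    # the platform handles the truncated time_t
    print datetime.datetime.fromtimestamp(-1e30)
except (ValueError, OverflowError):
    print 'timestamp rejected'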
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975996&group_id=5470 From noreply at sourceforge.net Sat Jun 19 16:57:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:46:54 2004 Subject: [ python-Bugs-975404 ] logging module uses deprecate apply() Message-ID: Bugs item #975404, was opened at 2004-06-18 07:52 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975404&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Barry Alan Scott (barry-scott) Assigned to: Nobody/Anonymous (nobody) Summary: logging module uses deprecate apply() Initial Comment: The use of apply in logging causes warning to be issued by python when turning programs into executables with Gordon's McMillians Installer and probably others. Replacing the apply() calls with the modern idium would fix the problem. The work around is the add "import warnings" in the main module of your program. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-19 13:57 Message: Logged In: YES user_id=357491 If you read PEP 291 (http://www.python.org/peps/pep-0291.html) you will notice that the logging module is to be kept backwards-compatible to 1.5.2 . This requires using apply() instead of ``*args, **kwargs``. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975404&group_id=5470 From noreply at sourceforge.net Sat Jun 19 17:26:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:47:17 2004 Subject: [ python-Bugs-935117 ] pkgutil doesn't understand case-senseless filesystems Message-ID: Bugs item #935117, was opened at 2004-04-14 14:35 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=935117&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) Assigned to: Nobody/Anonymous (nobody) Summary: pkgutil doesn't understand case-senseless filesystems Initial Comment: The pkgutil.extend_path() function doesn't understand case-senseless filesystems the way Python's import does, but it should. On a case-insensitive filesystem, a directory that matches the package name in spelling but not case can be mistakenly added to a package by extend_path(); this can cause differently-named packages to be mistakenly merged and allow unrelated code to shadow code in a secondary component of a package (where secondary means something other than the first directory found that matches the package). Consider this tree in a filesystem: d1/ foo/ __init__.py # this calls pkgutil.extend_path() module.py # imports module "foo.something" d2/ Foo/ __init__.py # an unrelated package something.py d3/ foo/ __init__.py something.py sys.path contains d1/, d2/, d3/, in that order. After the call to extend_path() in d1/foo/__init__.py, foo.__path__ will contain d1/foo/, d2/Foo/, d3/foo/ (in that order), and foo/module.py will get d2/Foo/something.py when it imports foo.something. 
What's intended is that it get d3/foo/something.py; on a case-sensitive filesystem, that's what would have happened. pkgutil.extend_path() should exercise the same care and check the same constraints as Python's import. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 17:26 Message: Logged In: YES user_id=31435 Attaching a pure-Python case_ok() "building block" that should be adequate for doing the hard part of this. It's trickier than first appears because Python's case-sensitive imports are not sensitive to the case of extensions (.py, .pyo, ...) on case-insensitive filesystems. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-04-14 14:38 Message: Logged In: YES user_id=3066 Argh! Here's the tree again, since comments don't get screwed up the same way initial reports are: d1/ foo/ __init__.py # this calls pkgutil.extend_path() module.py # imports module "foo.something" d2/ Foo/ __init__.py # an unrelated package something.py d3/ foo/ __init__.py something.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=935117&group_id=5470 From noreply at sourceforge.net Sat Jun 19 17:40:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:47:30 2004 Subject: [ python-Bugs-919012 ] shutil.move can destroy files Message-ID: Bugs item #919012, was opened at 2004-03-18 11:29 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Accepted Priority: 5 Submitted By: Jeff Epler (jepler) Assigned to: Brett Cannon (bcannon) Summary: shutil.move can destroy files Initial Comment: $ mkdir a; touch a/b; python2.3 -c 'import shutil; shutil.move("a", "a/c") $ ls -l a ls: a: no such file or directory The same problem exists on Windows, as reported by one "shagshag". ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-19 14:40 Message: Logged In: YES user_id=357491 Tweaked the function name and made the test clean up the temp directory; checked in as rev. 1.29 for Lib/shutil.py and rev. 1.3 for Lib/ test/test_shutil.py on HEAD. Checked in on 2.3 as rev. 1.28.10.1 and rev. 1.2.8.1. Thanks, Johannes. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-19 06:44 Message: Logged In: YES user_id=469548 Yes, I did upload the newest version. The diff was generated on June 7, I just used the right copy of shutil.py this time (which I had already made on June 5). ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 17:03 Message: Logged In: YES user_id=357491 I just checked the link to the diff, Johannes, and the diff says it was generated on June 5. Can you check to see if you did upload the newest version to the location? ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-07 04:32 Message: Logged In: YES user_id=469548 Sorry, in my haste made a silly mistake. Removed 'self' from is_destination_in_source definition and uploaded new patch to previous location. 
---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-05 10:46 Message: Logged In: YES user_id=2772 I applied the attached patch, and got this exception: >>> shutil.move("a", "a/c") Traceback (most recent call last): File "", line 1, in ? File "/usr/src/cvs-src/python/dist/src/Lib/shutil.py", line 168, in move if is_destination_in_source(src, dst): TypeError: is_destination_in_source() takes exactly 3 arguments (2 given) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 10:25 Message: Logged In: YES user_id=31435 Attached Johannes's patch. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 08:08 Message: Logged In: YES user_id=469548 Here's a patch (with tests) that disallows moving a directory inside itself altogether. I can't upload patches to SF, so here's a link to it on my homepage: http://home.student.uva.nl/johannes.gijsbers/shutil.diff. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=919012&group_id=5470 From noreply at sourceforge.net Sat Jun 19 22:30:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:52:01 2004 Subject: [ python-Bugs-975996 ] Add _PyTime_DoubletoTimet to C API Message-ID: Bugs item #975996, was opened at 2004-06-19 16:53 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975996&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None >Priority: 5 Submitted By: Brett Cannon (bcannon) >Assigned to: Tim Peters (tim_one) Summary: Add _PyTime_DoubletoTimet to C API Initial Comment: Need to add the function to the C API and then patch datetime to use it where it accepts a time_t timestamp. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 22:30 Message: Logged In: YES user_id=31435 I already have code for this locally, so assigned to me and gave it normal priority. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975996&group_id=5470 From noreply at sourceforge.net Sat Jun 19 22:52:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:52:41 2004 Subject: [ python-Bugs-975996 ] Add _PyTime_DoubletoTimet to C API Message-ID: Bugs item #975996, was opened at 2004-06-19 16:53 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975996&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Tim Peters (tim_one) Summary: Add _PyTime_DoubletoTimet to C API Initial Comment: Need to add the function to the C API and then patch datetime to use it where it accepts a time_t timestamp. 
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 22:52 Message: Logged In: YES user_id=31435 Checked in: Include/timefuncs.h; initial revision: 1.1 Lib/test/test_datetime.py; new revision: 1.48 Lib/test/test_time.py; new revision: 1.17 Misc/NEWS; new revision: 1.1007 Modules/datetimemodule.c; new revision: 1.73 Modules/timemodule.c; new revision: 2.142 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-19 22:30 Message: Logged In: YES user_id=31435 I already have code for this locally, so assigned to me and gave it normal priority. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975996&group_id=5470 From noreply at sourceforge.net Sat Jun 19 23:13:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:53:14 2004 Subject: [ python-Bugs-973507 ] sys.stdout problems with pythonw.exe Message-ID: Bugs item #973507, was opened at 2004-06-15 16:34 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I have written this script for reproducing the bug: import sys class teeIO: def __init__(self, *files): self.__files = files def write(self, str): for i in self.__files: print >> trace, 'writing on %s: %s' % (i, str) i.write(str) print >> trace, '-' * 70 def tee(*files): return teeIO(*files) log = file('log.txt', 'w') err = file('err.txt', 'w') trace = file('trace.txt', 'w') sys.stdout = tee(log, sys.__stdout__) sys.stderr = tee(err, sys.__stderr__) def write(n, width): sys.stdout.write('x' * width) if n == 1: return write(n - 1, width) try: 1/0 except: write(1, 4096) [output from err.log] Traceback (most recent call last): File "sys.py", line 36, in ? write(1, 4096) File "sys.py", line 28, in write sys.stdout.write('x' * width) File "sys.py", line 10, in write i.write(str) IOError: [Errno 9] Bad file descriptor TeeIO is needed for actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, ad example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 23:13 Message: Logged In: YES user_id=31435 Just noting that "the usual" way to determine whether you're running under pythonw is to see whether sys.executable.endswith("pythonw.exe") The usual way to get a do-nothing file object on Windows is to open the special (to Windows) file named "nul" (that's akin to opening the special file /dev/null on Unixish boxes). 
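Putting those two hints together, a sitecustomize-style guard might look like the sketch below. It is only an illustration, and it deliberately ignores the case, raised later in the thread, where pythonw's output has been redirected to a real file:

import sys

if sys.platform == 'win32' and sys.executable.lower().endswith('pythonw.exe'):
    # 'nul' is the Windows null device (the analogue of /dev/null), so writes
    # of any size are discarded instead of overflowing the unattached buffers
    # the C runtime gives pythonw for stdout/stderr.
    sys.stdout = open('nul', 'w')
    sys.stderr = open('nul', 'w')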
Note that file('nul').fileno() does return a handle on Windows, despite that it's not a file in the filesystem. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-18 12:42 Message: Logged In: YES user_id=1054957 I have found a very simple patch. First I have implemented this function: import os def isrealfile(file): """ Test if file is on the os filesystem """ if not hasattr(file, 'fileno'): return False try: tmp = os.dup(file.fileno()) except: return False else: os.close(tmp); return True Microsoft implementation of stdout/err/in when no console is created (and when no pipes are used) actually are not 'real' files. Then I have added the following code in sitecustomize.py: import sys class NullStream: """ A file like class that writes nothing """ def close(self): pass def flush(self): pass def write(self, str): pass def writelines(self, sequence): pass if not isrealfile(sys.__stdout__): sys.stdout = NullStream() if not isrealfile(sys.__stderr__): sys.stderr = NullStream() I have tested the code only on Windows XP Pro. P.S. isrealfile could be added in os module. Regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-16 13:05 Message: Logged In: YES user_id=1054957 The problem with this bug is that I have a script that can be executed both with python.exe that with pythonw.exe! How can I know if stdout is connected to a console? I think a 'patch' would be to replace sys.stdout/err with a null stream instead of using windows stdout/err implementation. If fileno can't be implemented, it should not be a problem. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-15 19:09 Message: Logged In: YES user_id=31435 Ya, this is well known, although it may not be documented. pythonw's purpose in life is *not* to create (or inherit) a console window (a "DOS box"). Therefore stdin, stdout, and stderr aren't attached to anything usable. Microsoft's C runtime seems to attach them to buffers that aren't connected to anything, so they complain if you ever exceed the buffer size. The short course is that stdin, stdout and stderr are useless in programs without a console window, so you shouldn't use them. Or you should you install your own file-like objects, and make them do something useful to you. I think it would be helpful if pythonw did something fancier (e.g., pop up a window containing attempted output), but that's in new-feature terrority, and nobody has contributed code for it anyway. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 From noreply at sourceforge.net Sat Jun 19 22:32:49 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 00:53:19 2004 Subject: [ python-Bugs-874042 ] wrong answers from ctime Message-ID: Bugs item #874042, was opened at 2004-01-09 15:57 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 Category: Python Library Group: Python 2.2.2 Status: Closed >Resolution: Fixed Priority: 4 Submitted By: paul rubin (phr) Assigned to: Brett Cannon (bcannon) Summary: wrong answers from ctime Initial Comment: For any time value less than -2**31, ctime returns the same result, 'Fri Dec 13 12:45:52 1901'. It should either compute a correct value (preferable) or raise ValueError. It should not return the wrong answer. >>> from time import * >>> ctime(-2**31) 'Fri Dec 13 12:45:52 1901' >>> ctime(-2**34) 'Fri Dec 13 12:45:52 1901' >>> ctime(-1e30) 'Fri Dec 13 12:45:52 1901' ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-19 22:32 Message: Logged In: YES user_id=31435 Changed resolution to Fixed. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-19 16:54 Message: Logged In: YES user_id=357491 OK, code looked good to me, so I checked it in as rev. 2.141 of timemodule.c on HEAD (not patching to 2.3 since not critical bugfix). Didn't add it to the C API (and thus did not change datetime) for lack of time. Filed bug #975996 to make sure it eventually gets done. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-19 16:18 Message: Logged In: YES user_id=31435 I should note that datetime.ctime() works reliably and portably for all years in 1 thru 9999 (it doesn't use the platform C library, so is sane). OTOH, the assorted datetime constructors that build from a POSIX timestamp have the same kinds of endcase portability woes as the time module functions working from timestamps. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-19 02:32 Message: Logged In: YES user_id=31435 Sorry, I can't make more time for this. Attached is a patch that's the best I can do without #ifdef'ing the snot out of every platform on Earth. As the comments explain, it's not bulletproof (and probably can't be). It introduces a _PyTime_DoubleToTimet() function that attempts to detect "unreasonable" information loss, and fiddles ctime(), localtime() and gmtime() to use it. It should really be made extern (added to Python's internal C API & declared in a header file), and little bits of datetimemodule.c fiddled to use it too. insomnike, while C89 was clear as mud on this point, C99 is clear that there's nothing wrong with a negative time_t value, and *most* platforms accept them (back to about 1900). Nothing says a time_t can't be bigger than INT_MAX either, and, indeed, platforms that use ints to store time_t will be forced to switch to fatter types before Unix hits its own version of "the Y2K bug" in a few decades (the # of seconds since 1970 is already over a billion; signed 32-bit ints will be too small in another 30+ years). 
---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-06-19 00:42 Message: Logged In: YES user_id=72053 I think this needs to be fixed and not just closed. Ctime in a C library might be able to accept any numeric type as a time_t, but no matter what type that turns out to be, I don't think ctime is allowed to give a totally wrong answer. The issue here is Python numeric types don't necessarily map onto C numeric types. I think it's ok to raise an exception if a Python integer doesn't correctly map onto a valid time_t, but the current behavior is to map incorrectly. That can cause all kinds of silent bugs in a program. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 21:43 Message: Logged In: YES user_id=357491 I get the "problem" under 2.4 CVS on OS X. But as Tim said, the ISO C standard just says that it should accept time_t which can be *any* arithmetic type. I say don't bother fixing this since you shouldn't be passing in random values to ctime as it is. Plus ctime is not the best way to do string formatting of dates. What do you think, Tim? Think okay to just close this sucker as "won't fix"? ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 14:11 Message: Logged In: YES user_id=1057404 The below is from me (insomnike) if there's any query. Like SF less and less. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2004-06-05 14:08 Message: Logged In: NO I wish SF would let me upload patches. The below throws a ValueError when ctime is supplied with a negative value or a value over sys.maxint. ### diff -u -r2.140 timemodule.c --- timemodule.c 2 Mar 2004 04:38:10 -0000 2.140 +++ timemodule.c 5 Jun 2004 17:11:20 -0000 @@ -482,6 +482,10 @@ return NULL; tt = (time_t)dt; } + if (tt > INT_MAX || tt < 0) { + PyErr_SetString(PyExc_ValueError, "unconvertible time"); + return NULL; + } p = ctime(&tt); if (p == NULL) { PyErr_SetString(PyExc_ValueError, "unconvertible time"); ### ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2004-01-09 16:49 Message: Logged In: YES user_id=72053 Python 2.2.2, Red Hat GNU/Linux version 9, not sure what C runtime, whatever comes with Red Hat 9. If the value is coming from the C library's ctime function, then at minimum Python should check that the arg converts to a valid int32. It sounds like it's converting large negative values (like -1e30) to -sys.maxint. I see that ctime(sys.maxint+1) is also being converted to a large negative date. Since python's ctime (and presumably related functions) accept long and float arguments, they need to be range checked. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-01-09 16:22 Message: Logged In: YES user_id=31435 Please identify the Python version, OS and C runtime you're using. Here on Windows 2.3.3, >>> import time >>> time.ctime(-2**31) Traceback (most recent call last): File "", line 1, in ? ValueError: unconvertible time >>> The C standard doesn't define the range of convertible values for ctime(). Python raises ValueError if and only if the platform ctime() returns a NULL pointer. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874042&group_id=5470 From noreply at sourceforge.net Sun Jun 20 09:39:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 01:05:12 2004 Subject: [ python-Bugs-973507 ] sys.stdout problems with pythonw.exe Message-ID: Bugs item #973507, was opened at 2004-06-15 20:34 Message generated for change (Comment added) made by manlioperillo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment: >>> sys.version '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]' >>> sys.platform 'win32' >>> sys.getwindowsversion() (5, 1, 2600, 2, '') Hi. I have written this script for reproducing the bug: import sys class teeIO: def __init__(self, *files): self.__files = files def write(self, str): for i in self.__files: print >> trace, 'writing on %s: %s' % (i, str) i.write(str) print >> trace, '-' * 70 def tee(*files): return teeIO(*files) log = file('log.txt', 'w') err = file('err.txt', 'w') trace = file('trace.txt', 'w') sys.stdout = tee(log, sys.__stdout__) sys.stderr = tee(err, sys.__stderr__) def write(n, width): sys.stdout.write('x' * width) if n == 1: return write(n - 1, width) try: 1/0 except: write(1, 4096) [output from err.log] Traceback (most recent call last): File "sys.py", line 36, in ? write(1, 4096) File "sys.py", line 28, in write sys.stdout.write('x' * width) File "sys.py", line 10, in write i.write(str) IOError: [Errno 9] Bad file descriptor TeeIO is needed for actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, ad example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes. Thanks and regards Manlio Perillo ---------------------------------------------------------------------- >Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-20 13:39 Message: Logged In: YES user_id=1054957 Thanks for sys.executable and 'nul' hints! I only want to add two notes: 1) isrealfile(file('nul')) -> True So 'nul' has a 'real' implementation 2) sys.executables isn't very useful for me, since I can do: pythonw ascript.py > afile In this case sys.stdout is a 'real file', so I don't want to redirect it to a null device. In all cases, isrealfile work as I want. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-20 03:13 Message: Logged In: YES user_id=31435 Just noting that "the usual" way to determine whether you're running under pythonw is to see whether sys.executable.endswith("pythonw.exe") The usual way to get a do-nothing file object on Windows is to open the special (to Windows) file named "nul" (that's akin to opening the special file /dev/null on Unixish boxes). Note that file('nul').fileno() does return a handle on Windows, despite that it's not a file in the filesystem. 
---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-18 16:42 Message: Logged In: YES user_id=1054957 I have found a very simple patch. First I have implemented this function: import os def isrealfile(file): """ Test if file is on the os filesystem """ if not hasattr(file, 'fileno'): return False try: tmp = os.dup(file.fileno()) except: return False else: os.close(tmp); return True Microsoft implementation of stdout/err/in when no console is created (and when no pipes are used) actually are not 'real' files. Then I have added the following code in sitecustomize.py: import sys class NullStream: """ A file like class that writes nothing """ def close(self): pass def flush(self): pass def write(self, str): pass def writelines(self, sequence): pass if not isrealfile(sys.__stdout__): sys.stdout = NullStream() if not isrealfile(sys.__stderr__): sys.stderr = NullStream() I have tested the code only on Windows XP Pro. P.S. isrealfile could be added in os module. Regards Manlio Perillo ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-16 17:05 Message: Logged In: YES user_id=1054957 The problem with this bug is that I have a script that can be executed both with python.exe that with pythonw.exe! How can I know if stdout is connected to a console? I think a 'patch' would be to replace sys.stdout/err with a null stream instead of using windows stdout/err implementation. If fileno can't be implemented, it should not be a problem. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-15 23:09 Message: Logged In: YES user_id=31435 Ya, this is well known, although it may not be documented. pythonw's purpose in life is *not* to create (or inherit) a console window (a "DOS box"). Therefore stdin, stdout, and stderr aren't attached to anything usable. Microsoft's C runtime seems to attach them to buffers that aren't connected to anything, so they complain if you ever exceed the buffer size. The short course is that stdin, stdout and stderr are useless in programs without a console window, so you shouldn't use them. Or you should you install your own file-like objects, and make them do something useful to you. I think it would be helpful if pythonw did something fancier (e.g., pop up a window containing attempted output), but that's in new-feature terrority, and nobody has contributed code for it anyway. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 From noreply at sourceforge.net Sun Jun 20 17:10:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 01:12:33 2004 Subject: [ python-Bugs-975387 ] Python and segfaulting extension modules Message-ID: Bugs item #975387, was opened at 2004-06-18 16:32 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975387&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Folke Lemaitre (zypher) Assigned to: Nobody/Anonymous (nobody) Summary: Python and segfaulting extension modules Initial Comment: Normally when a segfault occurs in a python thread (mainly in extension modules), two things can happen: * Python segfaults * Python uses 99% CPU while Garbage Collecting the same INVALID object over and over again The second result is reported as a bug somewhere else. In a python program with lots of threads and lots of loaded extension modules it is almost impossible to find the cause of a segfault. Wouldn't it be possible to have some traceback printed when a SIGSEGV occurs? Would be really very handy. There even exists an extension module that does just that, but unfortunately only intercepts problems from the main thread. (http://systems.cs.uchicago.edu/wad/) I think something similar should be standard behaviour of python. Even nicer would be if python just raises an exception encapsulating the c stacktrace or even converting a c trace to a python traceback Example WAD output: WAD can either be imported as a Python extension module or linked to an extension module. To illustrate, consider the earlier example: % python foo.py Segmentation Fault (core dumped) % To identify the problem, a programmer can run Python interactively and import WAD as follows: % python Python 2.0 (#1, Oct 27 2000, 14:34:45) [GCC 2.95.2 19991024 (release)] on sunos5 Type "copyright", "credits" or "license" for more information. >>> import libwadpy WAD Enabled >>> execfile("foo.py") Traceback (most recent call last): File "", line 1, in ? File "foo.py", line 16, in ? foo() File "foo.py", line 13, in foo bar() File "foo.py", line 10, in bar spam() File "foo.py", line 7, in spam doh.doh(a,b,c) SegFault: [ C stack trace ] #2 0x00027774 in call_builtin (func=0x1c74f0,arg=0x1a1ccc,kw=0x0) #1 0xff022f7c in _wrap_doh (0x0,0x1a1ccc,0x160ef4,0x9c,0x56b44,0x1aa3d8) #0 0xfe7e0568 in doh(a=0x3,b=0x4,c=0x0) in 'foo.c', line 28 /u0/beazley/Projects/WAD/Python/foo.c, line 28 int doh(int a, int b, int *c) { => *c = a + b; return *c; } >>> ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-20 23:10 Message: Logged In: YES user_id=21627 This is not a bug report: there is no specific bug being reported. Instead, it is a feature request, but I'd like to reject it, as it is unimplementable, in a generic, platform-independent way. The fact that WAD only works for the main thread should be reported as a bug report to the WAD maintainers. If you think something should be done about this, please contribute patches. I personally prefer to use a fully-featured debugger to analyse problems in extension modules. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975387&group_id=5470 From noreply at sourceforge.net Mon Jun 21 01:05:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 01:22:45 2004 Subject: [ python-Bugs-975646 ] tp_(get|set)attro? inheritance bug Message-ID: Bugs item #975646, was opened at 2004-06-19 00:04 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gustavo J. A. M. Carneiro (gustavo) >Assigned to: Guido van Rossum (gvanrossum) Summary: tp_(get|set)attro? inheritance bug Initial Comment: Documentation says, regarding tp_getattr: "This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL." Implementation disagrees, at least in cvs head, but the effect of the bug (non-inheritance of tp_getattr) happens in 2.3.3. Follow with me: In function type_new (typeobject.c) line 1927: /* Special case some slots */ if (type->tp_dictoffset != 0 || nslots > 0) { if (base->tp_getattr == NULL && base->tp_getattro == NULL) type->tp_getattro = PyObject_GenericGetAttr; if (base->tp_setattr == NULL && base->tp_setattro == NULL) type->tp_setattro = PyObject_GenericSetAttr; } ...later in the same function... /* Initialize the rest */ if (PyType_Ready(type) < 0) { Py_DECREF(type); return NULL; } Inside PyType_Ready(), line 3208: for (i = 1; i < n; i++) { PyObject *b = PyTuple_GET_ITEM(bases, i); if (PyType_Check(b)) inherit_slots(type, (PyTypeObject *)b); } Inside inherit_slots, line (3056): if (type->tp_getattr == NULL && type->tp_getattro == NULL) { type->tp_getattr = base->tp_getattr; type->tp_getattro = base->tp_getattro; } if (type->tp_setattr == NULL && type->tp_setattro == NULL) { type->tp_setattr = base->tp_setattr; type->tp_setattro = base->tp_setattro; } So, if you have followed through, you'll notice that type_new first sets tp_getattro = GenericGetAttr, in case 'base' has neither tp_getattr nor tp_getattro. So, you are thinking that there is no problem: if base has tp_getattr, that code path won't be executed. The problem is with multiple inheritance. In type_new, 'base' is determined by calling best_base(). But the selected base may not have tp_getattr, while another base might. In this case, setting tp_getattro based on information from the wrong base precludes the slot from being inherited from the right base. This is happening in pygtk, unfortunately. One possible solution would be to move the first code block to after the PyType_Ready() call. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2004-06-21 07:05 Message: Logged In: YES user_id=21627 Guido, is this a bug? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 From noreply at sourceforge.net Mon Jun 21 01:49:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 01:52:48 2004 Subject: [ python-Feature Requests-960325 ] "require " configure option Message-ID: Feature Requests item #960325, was opened at 2004-05-25 21:07 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=960325&group_id=5470 >Category: None >Group: None Status: Open Resolution: None Priority: 5 Submitted By: Hallvard B Furuseth (hfuru) Assigned to: Nobody/Anonymous (nobody) Summary: "require " configure option Initial Comment: I'd like to be able to configure Python so that Configure or Make will fail if a particular feature is unavailable. Currently I'm concerned with SSL, which just gets a warning from Make: building '_ssl' extension *** WARNING: renaming "_ssl" since importing it failed: ld.so.1: ./python: fatal: libssl.so.0.9.8: open failed: No such file or directory Since that's buried in a lot of Make output, it's easy to miss. Besides, for semi-automatic builds it's in any case good to get a non-success exit status from the build process. Looking at the Make output, I see the bz2 extension is another example where this might be useful. Maybe the option would simply be '--enable-ssl', unless you want that to merely try to build with ssl. Or '--require=ssl,bz2,...'. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-21 07:49 Message: Logged In: YES user_id=21627 Moved to RFE. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-04 23:35 Message: Logged In: YES user_id=593130 See item 964703 for further information and then decide. ---------------------------------------------------------------------- Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-02 13:56 Message: Logged In: YES user_id=726647 Ah, so that's what RFE means. You could rename that to 'Enhancement Requests'. Anyway, QoI issues tend to resemble bug issues more than enhancement issues, so '"bug" of type feature request' looks good to me. Though I'll resubmit as RFE if you ask. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-02 04:07 Message: Logged In: YES user_id=593130 Yes, this is not a PEP item. I didn't notice Feature Reqest since it is redundant vis a vis the separate RFE list. ---------------------------------------------------------------------- Comment By: Hallvard B Furuseth (hfuru) Date: 2004-06-01 20:13 Message: Logged In: YES user_id=726647 I marked it with Group: Feature Request. Not a bug, but a quality of implementation issue. It seemed more proper here than as a PEP. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2004-06-01 19:58 Message: Logged In: YES user_id=593130 Are you claiming that there is an actual bug, or is this merely an RFE (Request For Enhancement) item? 
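Until something like a --require option exists, a small post-build check can at least give semi-automatic builds the non-success exit status asked for here; run it with the freshly built interpreter. The module names are just the examples from this request.

import sys

REQUIRED = ['_ssl', 'bz2']

missing = []
for name in REQUIRED:
    try:
        __import__(name)
    except ImportError, exc:
        missing.append('%s (%s)' % (name, exc))

if missing:
    sys.stderr.write('missing required extension modules: %s\n'
                     % ', '.join(missing))
    sys.exit(1)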
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=960325&group_id=5470 From noreply at sourceforge.net Mon Jun 21 05:26:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 05:28:53 2004 Subject: [ python-Bugs-976608 ] Unhelpful error message when getmtime.c fails Message-ID: Bugs item #976608, was opened at 2004-06-21 09:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976608&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Peter Maxwell (pm67nz) Assigned to: Nobody/Anonymous (nobody) Summary: Unhelpful error message when getmtime.c fails Initial Comment: This fragment of import.c: mtime = PyOS_GetLastModificationTime(pathname, fp); if (mtime == (time_t)(-1)) return NULL; is missing a PyErr_SetString(), so in at least one circumstance (an __init__.py file with an apparent mtime of 1 Jan 1970 created by a bug in darcs on debian linux) it produces "SystemError: NULL result without error in PyObject_Call". ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976608&group_id=5470 From noreply at sourceforge.net Mon Jun 21 05:36:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 06:19:44 2004 Subject: [ python-Bugs-976613 ] socket timeouts problems on Solaris Message-ID: Bugs item #976613, was opened at 2004-06-21 11:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976613&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Peter ?strand (astrand) Assigned to: Nobody/Anonymous (nobody) Summary: socket timeouts problems on Solaris Initial Comment: The timeout stuff in the socket module does not work correctly on Solaris. Here's a typical example: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(("localhost", 9048)) s.listen(1) s.settimeout(10) conn, addr = s.accept() print 'Connected by', addr while 1: data = conn.recv(1024) if not data: break conn.send(data) conn.close() When connecting, I get this traceback: Connected by ('127.0.0.1', 32866) Traceback (most recent call last): File "foo.py", line 10, in ? data = conn.recv(1024) socket.error: (11, 'Resource temporarily unavailable') This is because Python treats the new socket object as blocking (the timeout value is -1). However, in Solaris, sockets returned from accept() inherits the blocking property. So, because the listenting socket was in non-blocking mode, the new connected socket will be non-blocking as well. Since the timeout is -1, internal_select will not call select. The solution to this problem is to explicitly set the blocking mode on new socket objects. The attached patch implements this. 
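Pending the attached patch, the problem can be worked around in user code by putting the accepted socket back into blocking mode (or giving it its own timeout) before reading. Applied to the example above:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("localhost", 9048))
s.listen(1)
s.settimeout(10)
conn, addr = s.accept()
conn.setblocking(1)   # undo the non-blocking mode inherited on Solaris
# or: conn.settimeout(10), if a per-connection timeout is wanted
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()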
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976613&group_id=5470 From noreply at sourceforge.net Mon Jun 21 06:35:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 06:38:09 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 00:28 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-21 11:35 Message: Logged In: YES user_id=6656 Well, that would seem to be easy enough to fix (see attached). If you're using cmd.Cmd instances from different threads at the same time, mind, I think you're screwed anyway. You're certainly walking on thin ice... 
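The shape of the fix under discussion, spelled out as code (a sketch of the change to Cmd.cmdloop in 2.3's cmd.py, not the exact attached patch; the loop body is elided):

def cmdloop(self, intro=None):
    self.preloop()          # may install a new readline completer
    try:
        # ... existing banner printing and command-dispatch loop ...
        pass
    finally:
        # Restore the old completer even if the loop body raised, so the
        # Cmd instance is not kept alive ('leaked') via readline.
        self.postloop()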
---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 20:13 Message: Logged In: YES user_id=356603 I think it is OK. Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 11:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 08:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 17:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 10:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 02:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Mon Jun 21 06:45:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 06:48:34 2004 Subject: [ python-Bugs-971213 ] another threads+readline+signals nasty Message-ID: Bugs item #971213, was opened at 2004-06-11 16:30 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: another threads+readline+signals nasty Initial Comment: python -c "import time, readline, thread; thread.start_new_thread(raw_input, ()); time.sleep(2)" Segfaults on ^C Fails on Linux, freebsd. 
On linux (FC - using kernel 2.6.1, glibc 2.3.3, gcc-3.3.3) (gdb) where #0 0x002627a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x008172b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0011280b in time_sleep (self=0x0, args=0xb7fe17ac) at Modules/timemodule.c:815 on FreeBSD 5.2.1-RC, a different error. Fatal error 'longjmp()ing between thread contexts is undefined by POSIX 1003.1' at line 72 in file /usr/src/lib/libc_r/uthread/uthread_jmp.c (errno = 2) Abort (core dumped) ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-21 11:45 Message: Logged In: YES user_id=6656 Can you try the patch that's *now* in 960406? It seems to help for me (but I really would rather not think too hard about this!). ---------------------------------------------------------------------- Comment By: Michal Pasternak (mpasternak) Date: 2004-06-11 16:43 Message: Logged In: YES user_id=799039 readline used on FreeBSD was readline-4.3pl5; everything else: gcc 3.3.3, ncurses, libc were standard from 5.2.1. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-11 16:39 Message: Logged In: YES user_id=29957 The patch in #960406 doesn't help here. The FC test system also has readline-4.3, if it helps, as does the FreeBSD box. It apparently doesn't crash on OSX. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 16:38 Message: Logged In: YES user_id=6656 Hmm. Doesn't crash on OS X. Messes the terminal up good and proper, though. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=971213&group_id=5470 From noreply at sourceforge.net Mon Jun 21 08:20:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 08:21:06 2004 Subject: [ python-Bugs-976613 ] socket timeout problems on Solaris Message-ID: Bugs item #976613, was opened at 2004-06-21 11:36 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976613&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Peter ?strand (astrand) Assigned to: Nobody/Anonymous (nobody) >Summary: socket timeout problems on Solaris Initial Comment: The timeout stuff in the socket module does not work correctly on Solaris. Here's a typical example: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(("localhost", 9048)) s.listen(1) s.settimeout(10) conn, addr = s.accept() print 'Connected by', addr while 1: data = conn.recv(1024) if not data: break conn.send(data) conn.close() When connecting, I get this traceback: Connected by ('127.0.0.1', 32866) Traceback (most recent call last): File "foo.py", line 10, in ? data = conn.recv(1024) socket.error: (11, 'Resource temporarily unavailable') This is because Python treats the new socket object as blocking (the timeout value is -1). However, in Solaris, sockets returned from accept() inherits the blocking property. So, because the listenting socket was in non-blocking mode, the new connected socket will be non-blocking as well. Since the timeout is -1, internal_select will not call select. The solution to this problem is to explicitly set the blocking mode on new socket objects. The attached patch implements this. 
---------------------------------------------------------------------- >Comment By: Peter ?strand (astrand) Date: 2004-06-21 14:20 Message: Logged In: YES user_id=344921 One workaround for this problem is: socket.setdefaulttimeout(somethinglarge) Or, if you have the possibility, you can do conn.setblocking(1) right after accept(). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976613&group_id=5470 From noreply at sourceforge.net Mon Jun 21 10:11:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 10:11:51 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 11:32 Message generated for change (Comment added) made by jimjjewett You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None Status: Closed Resolution: None Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- >Comment By: Jim Jewett (jimjjewett) Date: 2004-06-21 10:11 Message: Logged In: YES user_id=764593 Using ignorecase means it will also select mixed-case, such as 0xDead86. Given that 0x is now required, that might actually be desirable, but it is a change. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 21:24 Message: Logged In: YES user_id=357491 OK, I took Robin's idea of extracting out the regex, but just made it case- insensitive with re.IGNORECASE. Also ripped out dealing with the case lacking '0x' thanks to Tim's tip. Finally, I changed the match length from 6 to 6-16 to be able to handle 64-bit addresses (only in 2.4 since I could be wrong). Checked in as rev. 1.93 in HEAD and rev. 1.86.8.2 in 2.3 . Thanks, Robin. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 13:05 Message: Logged In: YES user_id=31435 This can be simplifed. The code in PyString_FromFormatV() massages the native %p result to guarantee it begins with "0x". It didn't always do this, and inspect.py was written when Python didn't massage the native %p result at all. Now there's no need to cater to "0X", or to cater to that "0x" might be missing. The case of a-f may still differ across platforms, and that's deliberate (addresses are of most interest to C coders, and they're "used to" whichever case their platform delivers for %p in C code). 
---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 12:31 Message: Logged In: YES user_id=6946 This is the PROPER pasted in patch =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:33:52 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$') def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 12:23 Message: Logged In: YES user_id=6946 This patch seems to fix variable case problems =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:26:31 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$'] def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 11:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 15:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. 
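For reference, the shape of the fix Brett describes above: one precompiled, case-insensitive pattern that requires the "0x" prefix and accepts 6 to 16 hex digits. The exact pattern checked into CVS may differ slightly.

import re

_re_stripid = re.compile(r' at 0x[0-9a-f]{6,16}(>+)$', re.IGNORECASE)

def stripid(text):
    """Remove the hexadecimal id from a Python object representation."""
    return _re_stripid.sub(r'\1', text)

# stripid('<__main__.bongo instance at 0x0112FFD0>')
#   -> '<__main__.bongo instance>'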
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Mon Jun 21 10:50:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 10:51:03 2004 Subject: [ python-Bugs-973579 ] Doc error on super(cls,self) Message-ID: Bugs item #973579, was opened at 2004-06-15 18:43 Message generated for change (Comment added) made by jimjjewett You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973579&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: David MacQuigg (macquigg) Assigned to: Nobody/Anonymous (nobody) Summary: Doc error on super(cls,self) Initial Comment: In both the Library Reference, section 2.1, and in the Python 2.2 Quick Reference, page 19, the explanation for this function is: super( type[, object-or-type]) Returns the superclass of type. ... This is misleading. I could not get this function to work right until I realized that it is searching the entire MRO, not just the superclasses of 'type'. See comp.lang.python 6/11/04, same subject as above, for further discussion and an example. I think a better explanation would be: super(cls,self).m(arg) Calls method 'm' from a class in the MRO (Method Resolution Order) of 'self'. The selected class is the first one which is above 'cls' in the MRO and which contains 'm'. The 'super' built-in function actually returns not a class, but a 'super' object. This object can be saved, like a bound method, and later used to do a new search of the MRO for a different method to be applied to the saved instance. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-06-21 10:50 Message: Logged In: YES user_id=764593 Would an expanded example also help? I'm not sure I like my own wording yet, but ... I propose it as a straw man. """super returns the next parent class[1] class A(object): pass class B(A): def meth(self, arg): super(B, self).meth(arg) class C(A): pass class D(B, C): pass d=D() d.meth() In this case, the super(B, self) call will actually return a reference to class C. Class C is not a parent of class B, but it is the next parent for this particular instance d of class B. [1] Actually, a super class mimicing the parent class. """ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973579&group_id=5470 From noreply at sourceforge.net Mon Jun 21 12:23:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 12:27:57 2004 Subject: [ python-Bugs-973579 ] Doc error on super(cls,self) Message-ID: Bugs item #973579, was opened at 2004-06-15 15:43 Message generated for change (Comment added) made by macquigg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973579&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: David MacQuigg (macquigg) Assigned to: Nobody/Anonymous (nobody) Summary: Doc error on super(cls,self) Initial Comment: In both the Library Reference, section 2.1, and in the Python 2.2 Quick Reference, page 19, the explanation for this function is: super( type[, object-or-type]) Returns the superclass of type. ... This is misleading. 
I could not get this function to work right until I realized that it is searching the entire MRO, not just the superclasses of 'type'. See comp.lang.python 6/11/04, same subject as above, for further discussion and an example. I think a better explanation would be: super(cls,self).m(arg) Calls method 'm' from a class in the MRO (Method Resolution Order) of 'self'. The selected class is the first one which is above 'cls' in the MRO and which contains 'm'. The 'super' built-in function actually returns not a class, but a 'super' object. This object can be saved, like a bound method, and later used to do a new search of the MRO for a different method to be applied to the saved instance. ---------------------------------------------------------------------- >Comment By: David MacQuigg (macquigg) Date: 2004-06-21 09:23 Message: Logged In: YES user_id=676422 I like the example, but the new explanation still leaves the impression that super() returns a class ( or something that acts like a class). This is what made super() so difficult to figure out the first time I tried it. The 'super' object returned by the function appears to be a collection of references, one to the 'self' instance, and one to each of the classes in the MRO of self above 'cls'. The reason it can't be just a class is that a given super object needs to retrieve a different class each time it is used, depending on what method is provided. The only thing lacking in the example is motivation for why we need super(B,self).meth(arg) instead of just calling C.meth (self,arg). I have a longer example and some motivation on page 16 in my OOP chapter at http://ece.arizona.edu/~edatools/Python/PythonOOP.doc but that may be too long if what we need here is a "man page" explanation. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-06-21 07:50 Message: Logged In: YES user_id=764593 Would an expanded example also help? I'm not sure I like my own wording yet, but ... I propose it as a straw man. """super returns the next parent class[1] class A(object): pass class B(A): def meth(self, arg): super(B, self).meth(arg) class C(A): pass class D(B, C): pass d=D() d.meth() In this case, the super(B, self) call will actually return a reference to class C. Class C is not a parent of class B, but it is the next parent for this particular instance d of class B. [1] Actually, a super class mimicing the parent class. """ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973579&group_id=5470 From noreply at sourceforge.net Mon Jun 21 12:56:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 12:57:01 2004 Subject: [ python-Bugs-976878 ] PDB: unreliable breakpoints on functions Message-ID: Bugs item #976878, was opened at 2004-06-21 16:56 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976878&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dieter Maurer (dmaurer) Assigned to: Nobody/Anonymous (nobody) Summary: PDB: unreliable breakpoints on functions Initial Comment: Breakpoints set on functions are unreliable because "pdb.Pdb.checkline" thinks lines inside a multi-column docstring were adequate lines for breakpoints. Of course, such breakpoints are ignored during execution. 
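A small reproduction of the situation described in this report, assuming (as the report suggests) that the problem is breakpoints resolving to a line inside a long docstring at the top of the function; whether it actually misbehaves will depend on the pdb/checkline version in use:

# break_demo.py
def f(x):
    """One-line summary.

    Several more lines of documentation, so that the first lines
    after the 'def' are docstring lines rather than executable code.
    """
    return x + 1

if __name__ == '__main__':
    import pdb
    pdb.run('f(3)')      # at the (Pdb) prompt try: break f / continue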
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976878&group_id=5470 From noreply at sourceforge.net Mon Jun 21 12:58:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 12:59:31 2004 Subject: [ python-Bugs-976878 ] PDB: unreliable breakpoints on functions Message-ID: Bugs item #976878, was opened at 2004-06-21 16:56 Message generated for change (Comment added) made by dmaurer You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976878&group_id=5470 Category: Python Library Group: None Status: Open >Resolution: Postponed Priority: 5 Submitted By: Dieter Maurer (dmaurer) Assigned to: Nobody/Anonymous (nobody) Summary: PDB: unreliable breakpoints on functions Initial Comment: Breakpoints set on functions are unreliable because "pdb.Pdb.checkline" thinks lines inside a multi-column docstring were adequate lines for breakpoints. Of course, such breakpoints are ignored during execution. ---------------------------------------------------------------------- >Comment By: Dieter Maurer (dmaurer) Date: 2004-06-21 16:58 Message: Logged In: YES user_id=265829 Patch attached ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976878&group_id=5470 From noreply at sourceforge.net Mon Jun 21 12:59:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 12:59:58 2004 Subject: [ python-Bugs-976880 ] mmap needs a rfind method Message-ID: Bugs item #976880, was opened at 2004-06-21 11:59 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976880&group_id=5470 Category: Extension Modules Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Nicholas Riley (nriley) Assigned to: Nobody/Anonymous (nobody) Summary: mmap needs a rfind method Initial Comment: It would be convenient to have an 'rfind' method equivalent to the string one; the only (slow, wasteful) alternative I can find is taking slices of the mmap, or using successive seek/read, followed by rindex. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976880&group_id=5470 From noreply at sourceforge.net Mon Jun 21 18:02:46 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 19:18:59 2004 Subject: [ python-Bugs-970783 ] PyObject_GenericGetAttr is undocumented Message-ID: Bugs item #970783, was opened at 2004-06-10 17:51 Message generated for change (Comment added) made by jepler You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970783&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Eric Huss (ehuss) Assigned to: Nobody/Anonymous (nobody) Summary: PyObject_GenericGetAttr is undocumented Initial Comment: The Python/C API documentation references the PyObject_GenericGetAttr function in a few places, but doesn't actually document what it does. Same with PyObject_GenericSetAttr. 
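While the C API docs are silent, the behaviour of PyObject_GenericGetAttr can be described via its Python-level counterpart, the default object.__getattribute__. Very roughly (simplified; error handling and __slots__ details omitted):

def generic_getattr(obj, name):
    tp = type(obj)
    meta_attr = None
    for klass in tp.__mro__:              # search the type and its bases
        if name in klass.__dict__:
            meta_attr = klass.__dict__[name]
            break
    # data descriptors found on the type take priority over the instance dict
    if meta_attr is not None and hasattr(meta_attr, '__set__'):
        return meta_attr.__get__(obj, tp)
    inst_dict = getattr(obj, '__dict__', {})
    if name in inst_dict:
        return inst_dict[name]
    # then non-data descriptors and plain class attributes
    if meta_attr is not None:
        if hasattr(meta_attr, '__get__'):
            return meta_attr.__get__(obj, tp)
        return meta_attr
    raise AttributeError(name)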
---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-21 17:02 Message: Logged In: YES user_id=2772 Perhaps the wording from the PEP can be adapted for the documentation. http://python.org/peps/pep-0252.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970783&group_id=5470 From noreply at sourceforge.net Mon Jun 21 17:52:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 19:38:52 2004 Subject: [ python-Bugs-969492 ] Python hangs up on I/O operations on the latest FreeBSD 4.10 Message-ID: Bugs item #969492, was opened at 2004-06-09 05:03 Message generated for change (Comment added) made by jepler You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969492&group_id=5470 Category: None Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: _Iww_ (iww) Assigned to: Nobody/Anonymous (nobody) Summary: Python hangs up on I/O operations on the latest FreeBSD 4.10 Initial Comment: Hello, friends! Here is my sample code, which works perfectly on other systems, but not the FreeBSD 4.10-STABLE I got today by cvsupping. #!/usr/local/bin/python from threading import Thread class Reading(Thread): def __init__(self): Thread.__init__(self) def run(self): print "Start!" z = 1 while 1: print z z += 1 fl = open('blah.txt') fl.read() fl.close() for i in range(10): print "i:", i zu = open('bzzz.txt') print "|->", zu.read() bzz = Reading() bzz.start() #--- I have tested this on Python 2.3.3, 2.3.4 and 2.4a0 from CVS. The interpreter falls into an infinite loop and stays in the poll state. You can see it in the top: 34446 goga 2 0 3328K 2576K poll 0:00 0.00% 0.00% python I think it has some connection to the latest bug, found in the select() function (http://www.securityfocus.com/bid/10455) and its fix on BSD. Best regards, _Iww_ ---------------------------------------------------------------------- Comment By: Jeff Epler (jepler) Date: 2004-06-21 16:52 Message: Logged In: YES user_id=2772 Indentation was lost on your example. Please attach it to the bug report as a file instead. In my opinion, the problem you're having is unlikely to be related to the securityfocus URL you mentioned. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969492&group_id=5470 From noreply at sourceforge.net Mon Jun 21 20:35:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 20:38:20 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 01:28 Message generated for change (Comment added) made by svenil You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop.
In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Sverker Nilsson (svenil) Date: 2004-06-22 02:34 Message: Logged In: YES user_id=356603 Your comment about threads worries me, I am not sure I understand it. Would it be unsafe to use a cmd.Cmd instance in a separate thread, talking eg via a socket file? The instance is used only by that thread and not by others, but there may be other threads using other instances. I understand that it could be unsafe to have two threads share the same instance, but how about different instances? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-21 12:35 Message: Logged In: YES user_id=6656 Well, that would seem to be easy enough to fix (see attached). If you're using cmd.Cmd instances from different threads at the same time, mind, I think you're screwed anyway. You're certainly walking on thin ice... ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 21:13 Message: Logged In: YES user_id=356603 I think it is OK. Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. 
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 12:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 09:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 18:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 11:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 03:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Mon Jun 21 22:46:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 21 23:53:25 2004 Subject: [ python-Bugs-945665 ] platform.system() Windows inconsistency Message-ID: Bugs item #945665, was opened at 2004-04-30 19:19 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 Category: Python Library Group: Python 2.3 Status: Closed Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: M.-A. Lemburg (lemburg) Summary: platform.system() Windows inconsistency Initial Comment: On Windows, platform.system() (and platform.uname() [0]) return 'Windows' or 'Microsoft Windows' depending on whether win32api is available or not. This is confusing and can lead to hard-to-find bugs where testing in one environment doesn't reveal a bug that only occurs in another environment. I believe this hasn't been fixed in Python 2.4 yet (only the XP recognition has been fixed, it is also broken in 2.3 when win32api was available). ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-21 22:46 Message: Logged In: YES user_id=6380 Thanks, I'll leave 2.3 alone, it's just enough of an incompatibility not to risk it. 
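Until a fixed platform.py is in every version one has to support, application code can normalize the two spellings itself; a small defensive sketch:

import platform

def system_name():
    # This bug: platform.system() may report 'Windows' or
    # 'Microsoft Windows' depending on whether win32api is available.
    name = platform.system()
    if name.startswith('Microsoft'):
        return 'Windows'
    return name

if system_name() == 'Windows':
    pass    # Windows-specific setup goes here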
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-06-19 13:20 Message: Logged In: YES user_id=38388 Checked into CVS HEAD. I don't have a Python 2.3 branch checkout, so please check it in there as well if you have a need. Thanks. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-18 20:07 Message: Logged In: YES user_id=6380 Note that the patch by jlgijsbers contains some ugly code. s.find(...) != -1? Yuck! I think it could just be if system == "Microsoft Windows": system = "Windows" That should work even in Python 1.5.2. And I don't think the string would ever be "Microsoft Windows" plus some additive. And yes, please, check it in. (Or do I have to do it myself? :-) ---------------------------------------------------------------------- Comment By: Paul Moore (pmoore) Date: 2004-06-05 12:30 Message: Logged In: YES user_id=113328 Looks OK to me. Not sure where you found docs which specify the behaviour, but I'm OK with "Windows". There's a very small risk of compatibility issues, but as the module was new in 2.3, and the behaviour was inconsistent before, I see no reason why this should be an issue. I'd recommend committing this. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 10:38 Message: Logged In: YES user_id=469548 New patch, the docs say we should use 'Windows' instead of 'Microsoft Windows', so we do: Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:39:10 -0000 @@ -957,6 +957,8 @@ # platforms if use_syscmd_ver: system,release,version = _syscmd_ver(system) + if string.find(system, 'Microsoft Windows') != -1: + system = 'Windows' # In case we still don't know anything useful, we'll try to # help ourselves ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2004-06-05 10:07 Message: Logged In: YES user_id=469548 Yep, _syscmd_ver() returns 'Microsoft Windows' while the default is 'Windows'. 
Setting the default to Microsoft Windows seems the easiest way here (putting patch inline because I can't attach it): Index: platform.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/platform.py,v retrieving revision 1.13 diff -u -r1.13 platform.py --- platform.py 4 May 2004 18:18:59 -0000 1.13 +++ platform.py 5 Jun 2004 13:07:45 -0000 @@ -966,7 +966,7 @@ version = '32bit' else: version = '16bit' - system = 'Windows' + system = 'Microsoft Windows' elif system[:4] == 'java': release,vendor,vminfo,osinfo = java_ver() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=945665&group_id=5470 From noreply at sourceforge.net Tue Jun 22 00:58:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 00:58:23 2004 Subject: [ python-Bugs-934282 ] pydoc.stripid doesn't strip ID Message-ID: Bugs item #934282, was opened at 2004-04-13 08:32 Message generated for change (Settings changed) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 Category: Python Library Group: None Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc.stripid doesn't strip ID Initial Comment: pydoc function stripid should strip the ID from an object's repr. It assumes that ID will be represented as one of two patterns -- but this is not the case with (at least) the 2.3.3 distributed binary, because of case-sensitivity. ' at 0x[0-9a-f]{6,}(>+)$' fails because the address is capitalized -- A-F. (Note that hex(15) is not capitalized -- this seems to be unique to addresses.) ' at [0-9A-F]{8,}(>+)$' fails because the address does contain a 0x. stripid checks both as a guard against false alarms, but I'm not sure how to guarantee that an address would contain a letter, so matching on either all-upper or all-lower may be the tightest possible bound. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2004-06-21 07:11 Message: Logged In: YES user_id=764593 Using ignorecase means it will also select mixed-case, such as 0xDead86. Given that 0x is now required, that might actually be desirable, but it is a change. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-18 18:24 Message: Logged In: YES user_id=357491 OK, I took Robin's idea of extracting out the regex, but just made it case- insensitive with re.IGNORECASE. Also ripped out dealing with the case lacking '0x' thanks to Tim's tip. Finally, I changed the match length from 6 to 6-16 to be able to handle 64-bit addresses (only in 2.4 since I could be wrong). Checked in as rev. 1.93 in HEAD and rev. 1.86.8.2 in 2.3 . Thanks, Robin. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-05 10:05 Message: Logged In: YES user_id=31435 This can be simplifed. The code in PyString_FromFormatV() massages the native %p result to guarantee it begins with "0x". It didn't always do this, and inspect.py was written when Python didn't massage the native %p result at all. Now there's no need to cater to "0X", or to cater to that "0x" might be missing. 
The case of a-f may still differ across platforms, and that's deliberate (addresses are of most interest to C coders, and they're "used to" whichever case their platform delivers for %p in C code). ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 09:31 Message: Logged In: YES user_id=6946 This is the PROPER pasted in patch =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:33:52 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$') def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 09:23 Message: Logged In: YES user_id=6946 This patch seems to fix variable case problems =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/pydoc.py,v retrieving revision 1.90 diff -c -r1.90 pydoc.py *** pydoc.py 29 Jan 2004 06:37:49 -0000 1.90 --- pydoc.py 5 Jun 2004 15:26:31 -0000 *************** *** 113,124 **** return text[:pre] + '...' + text[len(text)-post:] return text def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! for pattern in [' at 0x[0-9a-f]{6,}(>+)$', ' at [0-9A-F]{8,}(>+)$']: ! if re.search(pattern, repr(Exception)): ! return re.sub(pattern, '\1', text) return text def _is_some_method(object): --- 113,124 ---- return text[:pre] + '...' + text[len(text)-post:] return text + _re_stripid =re.compile(' at (?:0[xX][0-9a-fA-F]{6,}|[0-9a-fA-F]{8,})(>+)$'] def stripid(text): """Remove the hexadecimal id from a Python object representation.""" # The behaviour of %p is implementation-dependent; we check two cases. ! if _re_stripid.search(repr(Exception)): ! return _re_stripid.sub('\1', text) return text def _is_some_method(object): ---------------------------------------------------------------------- Comment By: Robin Becker (rgbecker) Date: 2004-06-05 08:36 Message: Logged In: YES user_id=6946 Definitely a problem in 2.3.3. using class bongo: pass print bongo() On freebsd with 2.3.3 I get <__main__.bongo instance at 0x81a05ac> with win2k I see <__main__.bongo instance at 0x0112FFD0> both are 8 characters, but the case differs. 
---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-04-14 12:34 Message: Logged In: YES user_id=11105 It seems this depends on the operating system, more exactly on how the C compiler interprets the %p printf format. According to what I see, on windows it fails, on linux it works. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=934282&group_id=5470 From noreply at sourceforge.net Tue Jun 22 02:55:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 02:58:30 2004 Subject: [ python-Bugs-977250 ] Double __init__.py executing Message-ID: Bugs item #977250, was opened at 2004-06-22 10:55 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977250&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alexandre (kuskakus) Assigned to: Nobody/Anonymous (nobody) Summary: Double __init__.py executing Initial Comment: There is some strange feature, looking like a bug. I have 'pkg' dir with 2 files: ./pkg/__init__.py print '__init__.py' ./pkg/test.py print 'test.py' import __init__ Python 2.3.4 (#53, May 25 2004, 21:17:02) >>> import pkg.test __init__.py test.py __init__.py With '-v' option: >>> import pkg.test import pkg # directory pkg # pkg\__init__.pyc matches pkg\__init__.py import pkg # precompiled from pkg\__init__.pyc __init__.py # pkg\test.pyc matches pkg\test.py import pkg.test # precompiled from pkg\test.pyc test.py # pkg\__init__.pyc matches pkg\__init__.py import pkg.__init__ # precompiled from pkg\__init__.pyc __init__.py Why __init__.py executed two times? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977250&group_id=5470 From noreply at sourceforge.net Tue Jun 22 05:07:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 05:42:59 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 00:28 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. 
When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-22 10:07 Message: Logged In: YES user_id=6656 Um. Unless I'm *hopelessly* misreading things, cmd.Cmd writes to sys.stdout unconditionally and either calls raw_input() or sys.stdin.readline(). So I'm not sure how one would "use a cmd.Cmd instance in a separate thread, talking eg via a socket file" without rewriting such huge amounts of the class that thread- safety becomes your own problem. Apologies if I'm being dumb... also, please note: I didn't write this module :-) ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-22 01:34 Message: Logged In: YES user_id=356603 Your comment about threads worries me, I am not sure I understand it. Would it be unsafe to use a cmd.Cmd instance in a separate thread, talking eg via a socket file? The instance is used only by that thread and not by others, but there may be other threads using other instances. I understand that it could be unsafe to have two threads share the same instance, but how about different instances? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-21 11:35 Message: Logged In: YES user_id=6656 Well, that would seem to be easy enough to fix (see attached). If you're using cmd.Cmd instances from different threads at the same time, mind, I think you're screwed anyway. You're certainly walking on thin ice... ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 20:13 Message: Logged In: YES user_id=356603 I think it is OK. Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. 
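To make the try/finally structure proposed in the report concrete, here is a small, runnable toy that mimics the completer bookkeeping without depending on readline (the names are invented for the example; it is not the patched cmd.py itself):

_completer = None                      # stands in for readline's global completer

def set_completer(func):
    global _completer
    _completer = func

class ToyCmd:
    def preloop(self):
        self.old_completer = _completer
        set_completer(self.complete)   # install our completer, as Cmd.preloop does

    def postloop(self):
        set_completer(self.old_completer)   # restore whatever was there before

    def complete(self, text):
        return []

    def onecmd(self):
        raise KeyboardInterrupt        # simulate an exception escaping the loop

    def cmdloop(self):
        self.preloop()
        try:
            self.onecmd()
        finally:
            self.postloop()            # runs even when onecmd() raises

try:
    ToyCmd().cmdloop()
except KeyboardInterrupt:
    pass
assert _completer is None              # the previous completer was restored: no leak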
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 11:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 08:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 17:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 10:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 02:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Tue Jun 22 05:17:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 05:50:08 2004 Subject: [ python-Bugs-975646 ] tp_(get|set)attro? inheritance bug Message-ID: Bugs item #975646, was opened at 2004-06-18 23:04 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gustavo J. A. M. Carneiro (gustavo) Assigned to: Guido van Rossum (gvanrossum) Summary: tp_(get|set)attro? inheritance bug Initial Comment: Documentation says, regarding tp_getattr: ? This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL. ? Implementation disagrees, at least in cvs head, but the effect of the bug (non-inheritance of tp_getattr) happens in 2.3.3. 
Follow with me: In function type_new (typeobject.c) line 1927: /* Special case some slots */ if (type->tp_dictoffset != 0 || nslots > 0) { if (base->tp_getattr == NULL && base->tp_getattro == NULL) type->tp_getattro = PyObject_GenericGetAttr; if (base->tp_setattr == NULL && base->tp_setattro == NULL) type->tp_setattro = PyObject_GenericSetAttr; } ...later in the same function... /* Initialize the rest */ if (PyType_Ready(type) < 0) { Py_DECREF(type); return NULL; } Inside PyType_Ready(), line 3208: for (i = 1; i < n; i++) { PyObject *b = PyTuple_GET_ITEM(bases, i); if (PyType_Check(b)) inherit_slots(type, (PyTypeObject *)b); } Inside inherit_slots, line (3056): if (type->tp_getattr == NULL && type->tp_getattro == NULL) { type->tp_getattr = base->tp_getattr; type->tp_getattro = base->tp_getattro; } if (type->tp_setattr == NULL && type->tp_setattro == NULL) { type->tp_setattr = base->tp_setattr; type->tp_setattro = base->tp_setattro; } So, if you have followed through, you'll notice that type_new first sets tp_getattro = GenericGetAttr, in case 'base' has neither tp_getattr nor tp_getattro. So, you are thinking that there is no problem. If base has tp_getattr, that code path won't be execute. The problem is with multiple inheritance. In type_new, 'base' is determined by calling best_base(). But the selected base may not have tp_getattr, while another might have. In this case, setting tp_getattro based on information from the wrong base precludes the slot from being inherited from the right base. This is happening in pygtk, unfortunately. One possible solution would be to move the first code block to after the PyType_Ready() call. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-22 10:17 Message: Logged In: YES user_id=6656 Without wanting to think about this terribly hard, wouldn't a workaround be for pygtk to implement tp_getattro and not tp_getattr? IMO, this is a good idea anyway... ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-21 06:05 Message: Logged In: YES user_id=21627 Guido, is this a bug? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 From noreply at sourceforge.net Tue Jun 22 06:04:59 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 06:05:55 2004 Subject: [ python-Bugs-976608 ] Unhelpful error message when mtime of a module is -1 Message-ID: Bugs item #976608, was opened at 2004-06-21 09:26 Message generated for change (Settings changed) made by pm67nz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976608&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None >Priority: 3 Submitted By: Peter Maxwell (pm67nz) Assigned to: Nobody/Anonymous (nobody) >Summary: Unhelpful error message when mtime of a module is -1 Initial Comment: This fragment of import.c: mtime = PyOS_GetLastModificationTime(pathname, fp); if (mtime == (time_t)(-1)) return NULL; is missing a PyErr_SetString(), so in at least one circumstance (an __init__.py file with an apparent mtime of 1 Jan 1970 created by a bug in darcs on debian linux) it produces "SystemError: NULL result without error in PyObject_Call". 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976608&group_id=5470 From noreply at sourceforge.net Tue Jun 22 06:19:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 06:19:54 2004 Subject: [ python-Bugs-977343 ] Solaris likes sys/loadavg.h Message-ID: Bugs item #977343, was opened at 2004-06-22 20:19 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977343&group_id=5470 Category: Build Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris likes sys/loadavg.h Initial Comment: Recent Solaris puts the prototype for getloadavg() in <sys/loadavg.h>. The following patch adds a configure check for sys/loadavg.h, and #includes it in Modules/posixmodule.c. If someone can give it a cursory glance, I'll check it in. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977343&group_id=5470 From noreply at sourceforge.net Tue Jun 22 08:55:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 08:55:44 2004 Subject: [ python-Bugs-977461 ] Cannot specify compiler for 'install' on command line Message-ID: Bugs item #977461, was opened at 2004-06-22 14:55 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977461&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anders J. Munch (andersjm) Assigned to: Nobody/Anonymous (nobody) Summary: Cannot specify compiler for 'install' on command line Initial Comment: Python 2.3.3 on W2K. For a project with a C extension, python setup.py install fails if VC6 is not installed, and there seems to be no way to specify an alternate compiler on the command line. python setup.py build --compiler=bcpp works on the same setup, but python setup.py install --compiler=bcpp doesn't work because --compiler is not a legal option for install. install fails even following a successful build. A workaround is to provide the compiler option in one of the configuration files. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 From noreply at sourceforge.net Tue Jun 22 09:07:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 09:44:46 2004 Subject: [ python-Bugs-977470 ] Deleted files are reinstalled Message-ID: Bugs item #977470, was opened at 2004-06-22 15:07 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anders J. Munch (andersjm) Assigned to: Nobody/Anonymous (nobody) Summary: Deleted files are reinstalled Initial Comment: Python 2.3.3 on W2K. Run python setup.py install, then delete a .py-file from setup.py and from the Python installation, then run python setup.py install again.
Now the removed .py-file will be reinstalled - presumably because there's still a copy in build/lib. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 From noreply at sourceforge.net Tue Jun 22 14:12:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 14:12:42 2004 Subject: [ python-Bugs-977470 ] Deleted files are reinstalled Message-ID: Bugs item #977470, was opened at 2004-06-22 15:07 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anders J. Munch (andersjm) Assigned to: Nobody/Anonymous (nobody) Summary: Deleted files are reinstalled Initial Comment: Python 2.3.3 on W2K. Run python setup.py install, then delete a .py-file from setup.py and from the Python installation, then run python setup.py install again. Now the removed .py-file will be reinstalled - presumably because there's still a copy in build/lib. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2004-06-22 20:12 Message: Logged In: YES user_id=21627 Why is that a bug? You have to remove the build directory in that case. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 From noreply at sourceforge.net Tue Jun 22 14:13:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 14:13:46 2004 Subject: [ python-Bugs-977680 ] IMAP4_SSL() class incompatible with socket.timeout Message-ID: Bugs item #977680, was opened at 2004-06-22 12:13 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977680&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Charles (melicertes) Assigned to: Nobody/Anonymous (nobody) Summary: IMAP4_SSL() class incompatible with socket.timeout Initial Comment: If you do socket.setdefaulttimeout(X) before trying to instantiate imaplib.IMAP4_SSL(), the connection hangs partway through the SSL negotiation. The cause is that socket.ssl() is actually incompatible with timeouts (not sure why) and IMAP4_SSL() doesn't do anything to guard against it. It should do .setblocking(1) on the socket.socket object before passing it to socket.ssl(). The documentation should also clarify that timeouts do not work with socket.ssl.
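A sketch of the suggested guard, assuming the 2.3-era imaplib layout in which IMAP4_SSL.open() creates self.sock and then wraps it with socket.ssl() (the subclass name is invented, and the exact open() body may differ between releases):

import socket, imaplib

class BlockingIMAP4_SSL(imaplib.IMAP4_SSL):
    def open(self, host='', port=imaplib.IMAP4_SSL_PORT):
        self.host = host
        self.port = port
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.connect((host, port))
        self.sock.setblocking(1)       # undo the effect of setdefaulttimeout()
        self.sslobj = socket.ssl(self.sock, self.keyfile, self.certfile)

socket.setdefaulttimeout(30)
# server = BlockingIMAP4_SSL('imap.example.com')   # hypothetical host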
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977680&group_id=5470 From noreply at sourceforge.net Tue Jun 22 17:07:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 17:48:49 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 19:36 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Raymond Hettinger (rhettinger) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Armin Rigo (arigo) Date: 2004-06-22 21:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 07:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 16:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 18:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. 
Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. which does which does Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 09:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inhreted a dictionnary. I want to use the eval() function as a simple expression evaluator. I have the follwing dictionnary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionnary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 22:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. 
OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 19:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 15:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-23 04:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 19:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Tue Jun 22 14:14:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 18:27:41 2004 Subject: [ python-Bugs-977343 ] Solaris likes sys/loadavg.h Message-ID: Bugs item #977343, was opened at 2004-06-22 12:19 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977343&group_id=5470 Category: Build Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) >Assigned to: Anthony Baxter (anthonybaxter) Summary: Solaris likes sys/loadavg.h Initial Comment: Recent Solaris puts the prototype for getloadavg() in The following patch adds a configure check for sys/loadavg.h, and #includes it in Modules/posixmodule.c If someone can give it a cursory glance, I'll check it in. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-22 20:14 Message: Logged In: YES user_id=21627 Looks fine, please apply. 
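For reference, the Python-level interface that the configure check ultimately feeds is simply os.getloadavg(), available where the platform supports it:

import os

try:
    one, five, fifteen = os.getloadavg()    # 1-, 5- and 15-minute load averages
    print one, five, fifteen
except (AttributeError, OSError):
    print 'load averages not available on this platform'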
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977343&group_id=5470 From noreply at sourceforge.net Tue Jun 22 22:17:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 22:17:52 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-22 21:17 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 Category: Parser/Compiler Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Tue Jun 22 22:26:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 22 22:26:41 2004 Subject: [ python-Bugs-977937 ] "build" target doesn't check umask Message-ID: Bugs item #977937, was opened at 2004-06-22 20:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977937&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Charles (melicertes) Assigned to: Nobody/Anonymous (nobody) Summary: "build" target doesn't check umask Initial Comment: Normal procedure is to do "python setup.py build" as a non-root user, doing only "python setup.py install" as root. If the non-root user has a restrictive umask (i.e. 077), the built files will be mode 0600 (in directories created 0700), etc, and "setup.py install" will not make them world readable, so you end up with things like doc files installed mode 0600 in a new directory under /usr/share/doc/ that's mode 0700 and no one but root can read/use them. 
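A small sketch of the mechanism being described (Unix-only; the file names are arbitrary): files created while a restrictive umask is in force keep that restrictive mode, and install merely copies them as-is.

import os

old = os.umask(077)                    # restrictive umask, as in the report
open('demo_restrictive.txt', 'w').close()
print oct(os.stat('demo_restrictive.txt').st_mode & 0777)   # 0600

os.umask(022)                          # permissive umask before building
open('demo_permissive.txt', 'w').close()
print oct(os.stat('demo_permissive.txt').st_mode & 0777)    # 0644

os.umask(old)                          # restore the caller's umask
os.remove('demo_restrictive.txt')
os.remove('demo_permissive.txt')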
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977937&group_id=5470 From noreply at sourceforge.net Wed Jun 23 00:38:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 00:38:55 2004 Subject: [ python-Bugs-975646 ] tp_(get|set)attro? inheritance bug Message-ID: Bugs item #975646, was opened at 2004-06-18 18:04 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gustavo J. A. M. Carneiro (gustavo) >Assigned to: Nobody/Anonymous (nobody) Summary: tp_(get|set)attro? inheritance bug Initial Comment: Documentation says, regarding tp_getattr: ? This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL. ? Implementation disagrees, at least in cvs head, but the effect of the bug (non-inheritance of tp_getattr) happens in 2.3.3. Follow with me: In function type_new (typeobject.c) line 1927: /* Special case some slots */ if (type->tp_dictoffset != 0 || nslots > 0) { if (base->tp_getattr == NULL && base->tp_getattro == NULL) type->tp_getattro = PyObject_GenericGetAttr; if (base->tp_setattr == NULL && base->tp_setattro == NULL) type->tp_setattro = PyObject_GenericSetAttr; } ...later in the same function... /* Initialize the rest */ if (PyType_Ready(type) < 0) { Py_DECREF(type); return NULL; } Inside PyType_Ready(), line 3208: for (i = 1; i < n; i++) { PyObject *b = PyTuple_GET_ITEM(bases, i); if (PyType_Check(b)) inherit_slots(type, (PyTypeObject *)b); } Inside inherit_slots, line (3056): if (type->tp_getattr == NULL && type->tp_getattro == NULL) { type->tp_getattr = base->tp_getattr; type->tp_getattro = base->tp_getattro; } if (type->tp_setattr == NULL && type->tp_setattro == NULL) { type->tp_setattr = base->tp_setattr; type->tp_setattro = base->tp_setattro; } So, if you have followed through, you'll notice that type_new first sets tp_getattro = GenericGetAttr, in case 'base' has neither tp_getattr nor tp_getattro. So, you are thinking that there is no problem. If base has tp_getattr, that code path won't be execute. The problem is with multiple inheritance. In type_new, 'base' is determined by calling best_base(). But the selected base may not have tp_getattr, while another might have. In this case, setting tp_getattro based on information from the wrong base precludes the slot from being inherited from the right base. This is happening in pygtk, unfortunately. One possible solution would be to move the first code block to after the PyType_Ready() call. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-23 00:38 Message: Logged In: YES user_id=6380 No time to think about this more, but yes, a type should only implement *one* of tp_getattr and tp_getattro, and only *one* (best the matching one!) of tp_setattr and tp_setattro. If you are trying to get different semantics for getattro and getattr, you're crazy (that's not what the two are for -- tp_getattr is only there for b/w compat). 
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-22 05:17 Message: Logged In: YES user_id=6656 Without wanting to think about this terribly hard, wouldn't a workaround be for pygtk to implement tp_getattro and not tp_getattr? IMO, this is a good idea anyway... ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-21 01:05 Message: Logged In: YES user_id=21627 Guido, is this a bug? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 From noreply at sourceforge.net Wed Jun 23 05:50:10 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 05:50:20 2004 Subject: [ python-Bugs-977470 ] Deleted files are reinstalled Message-ID: Bugs item #977470, was opened at 2004-06-22 15:07 Message generated for change (Comment added) made by andersjm You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anders J. Munch (andersjm) Assigned to: Nobody/Anonymous (nobody) Summary: Deleted files are reinstalled Initial Comment: Python 2.3.3 on W2K Run python setup.py install then delete a .py-file from setup.py and from the Python installation, then run python setup.py install again. Now the removed .py-file will be reinstalled - presumably because there's still a copy in build/lib. ---------------------------------------------------------------------- >Comment By: Anders J. Munch (andersjm) Date: 2004-06-23 11:50 Message: Logged In: YES user_id=384806 It's a bug because it bit me :-) I had a module xml.py in a package and renamed it because the name clash with the top-level module was creating problems. Having the renamed file reappear under the old name seemingly from out of nowhere was very confusing. I guess my mental model is that everything in the build directory is an implementation detail. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-22 20:12 Message: Logged In: YES user_id=21627 Why is that a bug? You have to remove the build directory in that case. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 From noreply at sourceforge.net Wed Jun 23 08:30:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 08:30:37 2004 Subject: [ python-Bugs-975646 ] tp_(get|set)attro? inheritance bug Message-ID: Bugs item #975646, was opened at 2004-06-18 23:04 Message generated for change (Comment added) made by gustavo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Gustavo J. A. M. Carneiro (gustavo) Assigned to: Nobody/Anonymous (nobody) Summary: tp_(get|set)attro? inheritance bug Initial Comment: Documentation says, regarding tp_getattr: ? This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL. ? 
Implementation disagrees, at least in cvs head, but the effect of the bug (non-inheritance of tp_getattr) happens in 2.3.3. Follow with me: In function type_new (typeobject.c) line 1927: /* Special case some slots */ if (type->tp_dictoffset != 0 || nslots > 0) { if (base->tp_getattr == NULL && base->tp_getattro == NULL) type->tp_getattro = PyObject_GenericGetAttr; if (base->tp_setattr == NULL && base->tp_setattro == NULL) type->tp_setattro = PyObject_GenericSetAttr; } ...later in the same function... /* Initialize the rest */ if (PyType_Ready(type) < 0) { Py_DECREF(type); return NULL; } Inside PyType_Ready(), line 3208: for (i = 1; i < n; i++) { PyObject *b = PyTuple_GET_ITEM(bases, i); if (PyType_Check(b)) inherit_slots(type, (PyTypeObject *)b); } Inside inherit_slots, line (3056): if (type->tp_getattr == NULL && type->tp_getattro == NULL) { type->tp_getattr = base->tp_getattr; type->tp_getattro = base->tp_getattro; } if (type->tp_setattr == NULL && type->tp_setattro == NULL) { type->tp_setattr = base->tp_setattr; type->tp_setattro = base->tp_setattro; } So, if you have followed through, you'll notice that type_new first sets tp_getattro = GenericGetAttr, in case 'base' has neither tp_getattr nor tp_getattro. So, you are thinking that there is no problem. If base has tp_getattr, that code path won't be execute. The problem is with multiple inheritance. In type_new, 'base' is determined by calling best_base(). But the selected base may not have tp_getattr, while another might have. In this case, setting tp_getattro based on information from the wrong base precludes the slot from being inherited from the right base. This is happening in pygtk, unfortunately. One possible solution would be to move the first code block to after the PyType_Ready() call. ---------------------------------------------------------------------- >Comment By: Gustavo J. A. M. Carneiro (gustavo) Date: 2004-06-23 13:30 Message: Logged In: YES user_id=908 PyGTK implements only tp_getattr, none of tp_getattro, tp_setattr, or tp_setattro. We already have a workaround in pygtk, but it was my civic duty to report this bug O:-) ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-23 05:38 Message: Logged In: YES user_id=6380 No time to think about this more, but yes, a type should only implement *one* of tp_getattr and tp_getattro, and only *one* (best the matching one!) of tp_setattr and tp_setattro. If you are trying to get different semantics for getattro and getattr, you're crazy (that's not what the two are for -- tp_getattr is only there for b/w compat). ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-22 10:17 Message: Logged In: YES user_id=6656 Without wanting to think about this terribly hard, wouldn't a workaround be for pygtk to implement tp_getattro and not tp_getattr? IMO, this is a good idea anyway... ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-21 06:05 Message: Logged In: YES user_id=21627 Guido, is this a bug? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975646&group_id=5470 From noreply at sourceforge.net Wed Jun 23 12:59:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 12:59:35 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) >Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. 
Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. which does which does Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inhreted a dictionnary. I want to use the eval() function as a simple expression evaluator. I have the follwing dictionnary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionnary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! 
Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. 
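Under the relaxed semantics being discussed (eval() accepting an arbitrary mapping for its locals argument, as the attached patch aims for), the original spreadsheet example can be written as below; the sketch assumes name lookups in the eval'd expression fall back to the mapping's __getitem__:

class SpreadSheet:
    def __init__(self):
        self._cells = {}
    def __setitem__(self, key, formula):
        self._cells[key] = formula
    def __getitem__(self, key):
        # pass the sheet itself as the *locals* mapping: eval(expr, {}, mapping)
        return eval(self._cells[key], {}, self)

ss = SpreadSheet()
ss['a1'] = '5'
ss['a2'] = 'a1*5'
print ss['a2']        # -> 25, 'a1' resolved recursively through __getitem__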
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Wed Jun 23 11:48:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 12:59:39 2004 Subject: [ python-Bugs-978308 ] Spurious errors taking bool of dead proxy Message-ID: Bugs item #978308, was opened at 2004-06-23 11:48 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Chris King (colander_man) Assigned to: Nobody/Anonymous (nobody) Summary: Spurious errors taking bool of dead proxy Initial Comment: The following code will cause interesting errors: from weakref import proxy class a: pass bool(proxy(a())) Entered at a prompt, this will not cause a ReferenceError until another statement is entered (and will instead return True or 1) in both 2.3 and 2.2. Run as a script, this will return True and cause a fatal error in 2.3, but will return 1 and otherwise exhibit no strange behaviour in 2.2 (even with additional code accessing the return value). The equivalent code written using ref rather than proxy works as expected: bool(ref(a())()) returns False and creates no errors. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 From noreply at sourceforge.net Wed Jun 23 13:58:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 13:59:37 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 01:28 Message generated for change (Comment added) made by svenil You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. 
The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Sverker Nilsson (svenil) Date: 2004-06-23 19:58 Message: Logged In: YES user_id=356603 The constructor takes parameters stdin and stdout, and sets self.stdin and self.stdout from them or from sys. sys.std[in,out] are then not used directly except implicitly by raw_input. This seems to have changed somewhere between Python 2.2 and 2.3. Also, I remember an old version had the cmdqueue list as a class variable which was not at all thread safe - now it is an instance variable. I am hoping/thinking it is thread safe now... It seems superfluos to have both the use_rawinput flag and stdin parameter. At least raw_input should not be used if stdin is some other file than the raw input. But I don't have a simple suggestion to fix this, for one thing, it wouldn't be sufficient to compare the stdin parameter to sys.stdin since that file could have been changed so it wasn't the raw stdin anymore. Perhaps the module could store away the original sys.stdin as it was imported... but that wouldn't quite work since there is no guarantee sys.stdin hadn't already been changed. I guess if it is worth the trouble, if someone has an idea, could be a good thing to fix, anyway... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-22 11:07 Message: Logged In: YES user_id=6656 Um. Unless I'm *hopelessly* misreading things, cmd.Cmd writes to sys.stdout unconditionally and either calls raw_input() or sys.stdin.readline(). So I'm not sure how one would "use a cmd.Cmd instance in a separate thread, talking eg via a socket file" without rewriting such huge amounts of the class that thread- safety becomes your own problem. Apologies if I'm being dumb... also, please note: I didn't write this module :-) ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-22 02:34 Message: Logged In: YES user_id=356603 Your comment about threads worries me, I am not sure I understand it. Would it be unsafe to use a cmd.Cmd instance in a separate thread, talking eg via a socket file? The instance is used only by that thread and not by others, but there may be other threads using other instances. I understand that it could be unsafe to have two threads share the same instance, but how about different instances? 
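As a minimal sketch of the shape of the fix described in the initial comment of this item (bug #924301): the loop body is elided here just as it is in the report, and this is not the actual patch attached to the tracker item.

    import cmd

    class PatchedCmd(cmd.Cmd):
        def cmdloop(self, intro=None):
            self.preloop()        # installs the new readline completer
            try:
                pass              # ... read/parse/dispatch loop elided ...
            finally:
                # Always restore the old completer, even if the loop body
                # raised, so the instance is not kept alive via readline.
                self.postloop()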
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-21 12:35 Message: Logged In: YES user_id=6656 Well, that would seem to be easy enough to fix (see attached). If you're using cmd.Cmd instances from different threads at the same time, mind, I think you're screwed anyway. You're certainly walking on thin ice... ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 21:13 Message: Logged In: YES user_id=356603 I think it is OK. Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 12:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 09:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 18:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 11:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 03:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? 
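For context on the thread-safety discussion above, here is a hedged sketch of the kind of use Sverker describes: a cmd.Cmd subclass driven from a socket's file object rather than the terminal, in its own thread. Passing completekey=None and setting use_rawinput to 0 keeps the instance away from readline and raw_input(), which is the point made about avoiding the shared completer. The class, command names, and the 'conn' socket are invented for the example.

    import cmd

    class RemoteShell(cmd.Cmd):
        use_rawinput = 0                  # read from self.stdin, not raw_input()

        def do_ping(self, arg):
            self.stdout.write('pong\n')

        def do_quit(self, arg):
            return True                   # a true result stops cmdloop()

    # In the real application 'conn' would be a connected socket (assumption):
    # f = conn.makefile('rw')
    # RemoteShell(completekey=None, stdin=f, stdout=f).cmdloop()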
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Wed Jun 23 16:13:00 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 16:13:24 2004 Subject: [ python-Bugs-978308 ] Spurious errors taking bool of dead proxy Message-ID: Bugs item #978308, was opened at 2004-06-23 10:48 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None >Priority: 6 Submitted By: Chris King (colander_man) Assigned to: Nobody/Anonymous (nobody) Summary: Spurious errors taking bool of dead proxy Initial Comment: The following code will cause interesting errors: from weakref import proxy class a: pass bool(proxy(a())) Entered at a prompt, this will not cause a ReferenceError until another statement is entered (and will instead return True or 1) in both 2.3 and 2.2. Run as a script, this will return True and cause a fatal error in 2.3, but will return 1 and otherwise exhibit no strange behaviour in 2.2 (even with additional code accessing the return value). The equivalent code written using ref rather than proxy works as expected: bool(ref(a())()) returns False and creates no errors. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 15:13 Message: Logged In: YES user_id=80475 Confirmed the script behavior in Py2.4. The interactive prompt results are not consistent It returned True the first time I ran it and raised a ReferenceError in subsequent attempts. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 From noreply at sourceforge.net Wed Jun 23 17:47:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 17:48:02 2004 Subject: [ python-Bugs-978556 ] Broken URLs in sha module docs Message-ID: Bugs item #978556, was opened at 2004-06-23 21:47 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978556&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Troels Therkelsen (troelst) Assigned to: Nobody/Anonymous (nobody) Summary: Broken URLs in sha module docs Initial Comment: The following URLs are broken: http://csrc.nist.gov/publications/fips/fips180-1/fip180- 1.txt http://csrc.nist.gov/publications/fips/fips180-1/fip180- 1.pdf Only thing I can suggest to fix this is to change the links to point to the pdf document describing the FIPS 180-2 version of the algorithm(s) as this document describes the SHA-1 algorithm in addition to the higher bit count SHA algorithms. AFAIK there's no text version, but then again, I didn't look too hard for one. 
FIPS 180-2 URL: http://csrc.nist.gov/publications/fips/fips180-2/fips180- 2withchangenotice.pdf ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978556&group_id=5470 From noreply at sourceforge.net Wed Jun 23 19:07:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 19:07:21 2004 Subject: [ python-Bugs-978604 ] wait_variable hangs at exit Message-ID: Bugs item #978604, was opened at 2004-06-23 16:07 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978604&group_id=5470 Category: Tkinter Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Martin v. L?wis (loewis) Summary: wait_variable hangs at exit Initial Comment: If code is waiting on a wait_variable at program exit then the program never fully exits. The command prompt never returns and the process doesn't seem to be doing much of anything (i.e. no heavy CPU usage). ctrl-C has no effect. I have found that binding to does work around the problem. I saw this using a unix/X11 build of Python 2.3.4 on MacOS X 10.3.4 and also the standard Python 2.3 framework build included with MacOS X 10.3. The attached file gives a good example. To see the problem, execute the script and then close the root window before the script finishes (you have 10 seconds). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978604&group_id=5470 From noreply at sourceforge.net Wed Jun 23 19:47:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 19:47:40 2004 Subject: [ python-Bugs-975404 ] logging module uses deprecate apply() Message-ID: Bugs item #975404, was opened at 2004-06-18 09:52 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975404&group_id=5470 Category: Python Library Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Submitted By: Barry Alan Scott (barry-scott) Assigned to: Nobody/Anonymous (nobody) Summary: logging module uses deprecate apply() Initial Comment: The use of apply in logging causes warning to be issued by python when turning programs into executables with Gordon's McMillians Installer and probably others. Replacing the apply() calls with the modern idium would fix the problem. The work around is the add "import warnings" in the main module of your program. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 18:47 Message: Logged In: YES user_id=80475 FWIW, the warning was silenced in 2.3.4 and 2.4. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-19 15:57 Message: Logged In: YES user_id=357491 If you read PEP 291 (http://www.python.org/peps/pep-0291.html) you will notice that the logging module is to be kept backwards-compatible to 1.5.2 . This requires using apply() instead of ``*args, **kwargs``. 
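To make the idiom being discussed concrete: the two calls below are equivalent, apply() being the 1.5.2-compatible spelling that PEP 291 obliges logging to keep, and the extended call syntax being the modern replacement. The final lines show one way to apply the "import warnings" workaround mentioned in the report; the handler function is a stand-in, and the message pattern in the filter is an assumption to be adjusted to the warning text actually seen.

    def handler(*args, **kwargs):
        return args, kwargs

    args, kwargs = (1, 2), {'level': 'INFO'}

    old_style = apply(handler, args, kwargs)   # pre-2.0 idiom kept for 1.5.2
    new_style = handler(*args, **kwargs)       # modern equivalent
    assert old_style == new_style

    # Application-side workaround: filter the warning instead of changing logging.
    import warnings
    warnings.filterwarnings('ignore', '.*apply.*')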
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975404&group_id=5470 From noreply at sourceforge.net Wed Jun 23 20:16:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 20:16:30 2004 Subject: [ python-Bugs-978632 ] configure and gmake fail in openbsd 3.5 i386 Message-ID: Bugs item #978632, was opened at 2004-06-24 00:16 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978632&group_id=5470 Category: Installation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: - (panterrap) Assigned to: Nobody/Anonymous (nobody) Summary: configure and gmake fail in openbsd 3.5 i386 Initial Comment: Problem with compiler of python-2.3.4 in OpenBSD 3.5 i386 # ./configure --prefix=/usr/local/python-2.3.4 --with-cxx=/usr/bin/gcc 4 warnings sections in configure ------------ configure: WARNING: ncurses.h: present but cannot be compiled configure: WARNING: ncurses.h: check for missing prerequisite headers? configure: WARNING: ncurses.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## ------------- configure: WARNING: sys/audioio.h: present but cannot be compiled configure: WARNING: sys/audioio.h: check for missing prerequisite headers? configure: WARNING: sys/audioio.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## -------------- configure: WARNING: sys/lock.h: present but cannot be compiled configure: WARNING: sys/lock.h: check for missing prerequisite headers? configure: WARNING: sys/lock.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## -------------- configure: WARNING: sys/select.h: present but cannot be compiled configure: WARNING: sys/select.h: check for missing prerequisite headers? configure: WARNING: sys/select.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## --------------- my compilation in this platform # gmake /usr/bin/gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. 
-I./Include -DPy_BUILD_CORE -o Modules/ccpython.o ./Modules/ccpython.cc In file included from /usr/include/sys/select.h:38, from Include/pyport.h:118, from Include/Python.h:48, from ./Modules/ccpython.cc:3: /usr/include/sys/event.h:53: syntax error before `;' /usr/include/sys/event.h:55: syntax error before `;' gmake: *** [Modules/ccpython.o] Error 1 ------------- P.D.: Python-2.2.3 in this platform ok ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978632&group_id=5470 From noreply at sourceforge.net Wed Jun 23 20:49:32 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 20:49:39 2004 Subject: [ python-Bugs-978645 ] compiler warning for _NSGetExecutablePath() in getpath.c Message-ID: Bugs item #978645, was opened at 2004-06-23 17:49 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978645&group_id=5470 Category: Macintosh Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Brett Cannon (bcannon) Assigned to: Jack Jansen (jackjansen) Summary: compiler warning for _NSGetExecutablePath() in getpath.c Initial Comment: On OS X 10.3.4 (gcc 3.3, compiled with --disable-framework -- disable-toolbox-glue), a warning about no function prototype for _NSGetExecutablePath() is thrown for Modules/getpath.c:399 . It looks like this has to do with the include file for the function (mach-o/dyld.h) being #ifdef'ed with WITH_NEXT_FRAMEWORK while the code using _NSGetExecutablePath() is #ifdef'ed with __APPLE__. Should probably use the same #ifdef statement, but I don't know which one. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978645&group_id=5470 From noreply at sourceforge.net Wed Jun 23 21:56:08 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 23 22:03:37 2004 Subject: [ python-Bugs-978662 ] can't compile _localemodule.c w/o --enable-toolbox-glue Message-ID: Bugs item #978662, was opened at 2004-06-23 18:56 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978662&group_id=5470 Category: Macintosh Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Jack Jansen (jackjansen) Summary: can't compile _localemodule.c w/o --enable-toolbox-glue Initial Comment: Line 412 of Modules/_localemodule.c calls PyMac_getscript() which is within a ``#if defined(__APPLE__)`` block. Trouble is that the code is in Python/mactoolboxglue.c which is not compiled if -- disable-toolbox-glue is a compile option (which it was on my OS X 10.3.4 box). Probably shouldn't have a compile failure thanks to ld not finding the symbol; should probably either just not compile the module, change the #if block, or change the function. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978662&group_id=5470 From noreply at sourceforge.net Thu Jun 24 00:13:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 00:13:16 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 17:51 Message generated for change (Comment added) made by fdrake You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) >Assigned to: Vinay Sajip (vsajip) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 00:13 Message: Logged In: YES user_id=3066 Looking at this again, after adjusting the application I have that used the connection-specific loggers, I decided that a different approach better solves the problem. What you've shown requires exactly what I wanted to avoid: having to make a gesture at each logging call (to transform the message). Instead of doing this, I ended up writing a wrapper for the logger objects that implement the methods log(), debug(), info(), warn(), warning(), error(), exception(), critical(), and fatal(). These methods each transform the message before calling the underlying logger. It would be really nice to have something like this that isolates the final call to Logger._log() so specific implementations can simply override _log() (or some other helper routine that gets all the info) and maybe the __init__(). I don't think that's completely necessary, but would probably make it a lot easier to implement this pattern. There's probably some useful documentation improvements that could be made to help people avoid the issue of leaking loggers. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-10 12:50 Message: Logged In: YES user_id=3066 Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-09 12:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 05:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? 
Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 15:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Thu Jun 24 02:09:01 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 02:09:28 2004 Subject: [ python-Bugs-731501 ] Importing anydbm generates exception if _bsddb unavailable Message-ID: Bugs item #731501, was opened at 2003-05-02 13:56 Message generated for change (Comment added) made by fdrake You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=731501&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: Accepted Priority: 5 Submitted By: Nick Vargish (vargish) >Assigned to: Skip Montanaro (montanaro) Summary: Importing anydbm generates exception if _bsddb unavailable Initial Comment: The anydbm module attempts to import the dbhash module, which fails if there is no BSD DB module available. Relevant portion of backtrace: File "/diska/netsite-docs/cgi-bin/waisdb2.py", line 124, in _getsystemsdbm dbsrc = anydbm.open(self.dbfile) File "/usr/local/python-2.3b1/lib/python2.3/anydbm.py", line 82, in open mod = __import__(result) File "/usr/local/python-2.3b1/lib/python2.3/dbhash.py", line 5, in ? import bsddb File "/usr/local/python-2.3b1/lib/python2.3/bsddb/__init__.py", line 40, in ? import _bsddb ImportError: No module named _bsddb Tests that explicitly use "import dbm" rather than anydbm are successful on this system. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 02:09 Message: Logged In: YES user_id=3066 The doc changes look mostly fine to me (and I've changed what didn't; a small cosmetic nit). I'm just amazed we're still spending time tweaking BSD DB; I don't think that's ever "just worked" for me without digging around for a version of the underlying library that worked with Python. 
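For readers hitting the failure mode described in bug #731501 above, a small defensive sketch: whichdb identifies the file format, anydbm.open() then imports the matching module, and it is that import of dbhash/bsddb/_bsddb which raises when the extension is missing. The open_db wrapper and its error message are invented for illustration; whichdb and anydbm are the standard modules discussed in the report.

    import anydbm
    import whichdb

    def open_db(path, flag='r'):
        kind = whichdb.whichdb(path)   # e.g. 'dbhash', 'dbm', 'gdbm', '' or None
        try:
            return anydbm.open(path, flag)
        except ImportError, exc:
            # The file was recognized, but its backing module is unavailable
            # (the bsddb185 situation described above).
            raise RuntimeError('%r looks like a %s file, but the module for '
                               'it is not available: %s' % (path, kind, exc))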
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2003-05-06 16:54 Message: Logged In: YES user_id=44345 Assigned to Fred for doc review - I added a couple notes to libbsddb.tex and libundoc.tex in lieu of actually creating a separate bsddb185 section which I felt would have given people the mistaken idea that the module is available for general use. Still, I thought there should be some mention in the docs. Library detection probably needs a little tweakage as well. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2003-05-05 17:55 Message: Logged In: YES user_id=21627 Actually, you probably need to check whether /usr/lib/libdb.* is present, and link with that as well if it is. If you are uncertain whether this is the right library, I see no way except to run a test program, at configure time, that creates a database and determines whether this really is a DB 1.85 database. Alternatively, the test program might try to invoke db_version. If the function is available, it is DB x, x>=2 (DB1 apparently has no version indication function). ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2003-05-05 16:34 Message: Logged In: YES user_id=21627 I can't actually test the patch, but it looks good to me. Please apply! ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2003-05-05 10:27 Message: Logged In: YES user_id=44345 I believe the attached patch does what's necessary to get this to work again. It does a few things: * setup.py builds the bsddb185 under the right (restrictive) circumstances. /usr/include/db.h must exist and HASHVERSION must be 2. In this case the bsddb185 module will be built without any extra includes, libraries or #defines, forcing whatever is present in /usr/include/db.h and libc to be used to build the module. * whichdb.py detects the older hash file format and returns "bsddb185". * bsddbmodule.c grows an extra undocumented attribute, "open". The last two changes avoid having to change dbhash.py in complicated ways to distinguish between new and old file versions. The format- detecting mess remains isolated to whichdb.py. Using this setup I was successfully able to open /etc/pwd.db on my system using anydbm.open(), which I was unable to do previously. I can also still open a more recent hash file created with anydbm.open. Finally, new files created with anydbm.open are in the later format. Please give it a try. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2003-05-03 05:02 Message: Logged In: YES user_id=21627 I think this is not a bug. open() has determined that this is a bsddb file, but bsddb is not supported on the system. Or did you mean to suggest that opening the very same file with dbm would be successful? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=731501&group_id=5470 From noreply at sourceforge.net Thu Jun 24 05:57:53 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 05:58:01 2004 Subject: [ python-Bugs-978833 ] SSL-ed sockets don't close correct? 
Message-ID: Bugs item #978833, was opened at 2004-06-24 11:57 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978833&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Robert Kiendl (kxroberto) Assigned to: Nobody/Anonymous (nobody) Summary: SSL-ed sockets don't close correct?

Initial Comment: When testing FTP over SSL I strongly suspect that SSL-ed sockets are not closed correctly. (This doesn't show with https because nobody cares about what's going on "after the party".) See the following:

I need to run FTP over SSL from Windows (not shitty sftp via ssh etc.!) as explained on http://www.ford-hutchinson.com/~fh-1-pfh/ftps-ext.html (good variant 3: FTP_TLS). I tried to learn from M2Crypto's ftpslib.py (uses OpenSSL, not Python's SSL) and made a wrapper for ftplib.FTP using Python's SSL. I wrap the cmd socket like:

    self.voidcmd('AUTH TLS')
    ssl = socket.ssl(self.sock, self.key_file, self.cert_file)
    import httplib
    self.sock = httplib.FakeSocket(self.sock, ssl)
    self.file = self.sock.makefile('rb')

Everything works OK if I don't SSL the data port connection but only the cmd socket. If I SSL the data port connection too, it almost works, but ...

    self.voidcmd('PBSZ 0')
    self.voidcmd('PROT P')

and wrap the data connection with SSL:

    ssl = socket.ssl(conn, self.key_file, self.cert_file)
    import httplib
    conn = httplib.FakeSocket(conn, ssl)

then in retrbinary it hangs endlessly in the last 'return self.voidresp()'. All the data of the retrieved file is already correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the cmd socket. Why?

    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        self.voidcmd('TYPE I')
        conn = self.transfercmd(cmd, rest)
        fp = conn.makefile('rb')
        while 1:
            #data = conn.recv(blocksize)
            data = fp.read() #blocksize)
            if not data:
                break
            callback(data)
        fp.close()
        conn.close()
        return self.voidresp()

What could be the reason? The server is ProFTPD 1.2.9. I checked in the debugger that the underlying (Shared)socket of the conn object is really closed. (If I simply omit the self.voidresp(), I get the one file, but subsequent FTP communication on that connection is no longer correct.)

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978833&group_id=5470

From noreply at sourceforge.net Thu Jun 24 06:06:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 06:06:43 2004 Subject: [ python-Bugs-973103 ] empty raise after handled exception Message-ID: Bugs item #973103, was opened at 2004-06-15 09:36 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973103&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Niki Spahiev (nikis) Assigned to: Nobody/Anonymous (nobody) Summary: empty raise after handled exception

Initial Comment: Executing an empty raise after a handled exception produces the wrong traceback.

no exception:

Traceback (most recent call last):
  File "bug.py", line 19, in ?
    test(i)
  File "bug.py", line 15, in test
    badfn()
  File "bug.py", line 5, in badfn
    raise
TypeError: exceptions must be classes, instances, or strings (deprecated), not NoneType

handled exception: no
Traceback (most recent call last):
  File "bug.py", line 19, in ?
    test(i)
  File "bug.py", line 15, in test
    badfn()
  File "bug.py", line 11, in test
    print d[12345]
KeyError: 12345

----------------------------------------------------------------------

>Comment By: Armin Rigo (arigo) Date: 2004-06-24 10:06 Message: Logged In: YES user_id=4771

This is the intended behavior, although the docs that explain it are not too clear: "raise" is equivalent to re-raising what "sys.exc_info()" returns; the docs about sys.exc_info() explain in detail why you get this behavior. The language reference 6.9 should mention this. (Btw, the language reference 7.4 still says that continue is illegal within try:finally:, which is no longer the case.) The reason sys.exc_info() has access to the exception handled in a parent frame is to be able to implement things like traceback.print_exc(). But I don't know exactly why it should be the case that an exception remains active after its handler has finished.

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973103&group_id=5470

From noreply at sourceforge.net Thu Jun 24 06:58:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 06:58:22 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 21:51 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach).

----------------------------------------------------------------------

>Comment By: Vinay Sajip (vsajip) Date: 2004-06-24 10:58 Message: Logged In: YES user_id=308438

How about if I add a LoggerAdapter class which takes a logger in the __init__ and has logging methods debug(), info() etc. [and including _log()] which delegate to the underlying logger? Then you could subclass the Adapter and just override the methods you need. Would that fit the bill? Of course the package can use a Logger-derived class, but that would apply to all loggers, whereas the LoggerAdapter could be used for just some of the loggers in a system.

----------------------------------------------------------------------

Comment By: Fred L. Drake, Jr.
(fdrake) Date: 2004-06-24 04:13 Message: Logged In: YES user_id=3066 Looking at this again, after adjusting the application I have that used the connection-specific loggers, I decided that a different approach better solves the problem. What you've shown requires exactly what I wanted to avoid: having to make a gesture at each logging call (to transform the message). Instead of doing this, I ended up writing a wrapper for the logger objects that implement the methods log(), debug(), info(), warn(), warning(), error(), exception(), critical(), and fatal(). These methods each transform the message before calling the underlying logger. It would be really nice to have something like this that isolates the final call to Logger._log() so specific implementations can simply override _log() (or some other helper routine that gets all the info) and maybe the __init__(). I don't think that's completely necessary, but would probably make it a lot easier to implement this pattern. There's probably some useful documentation improvements that could be made to help people avoid the issue of leaking loggers. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-10 16:50 Message: Logged In: YES user_id=3066 Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-09 16:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 09:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 19:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. 
logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Thu Jun 24 09:51:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 09:51:49 2004 Subject: [ python-Bugs-978952 ] Remove all email from the archives Message-ID: Bugs item #978952, was opened at 2004-06-24 15:51 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978952&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Encolpe DEGOUTE (encolpe) Assigned to: Nobody/Anonymous (nobody) Summary: Remove all email from the archives Initial Comment: Hi, here come a patch for the old 2.0.11 to remove any email stuff from archives (header and body). It's a very usefull for antispam policy. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978952&group_id=5470 From noreply at sourceforge.net Thu Jun 24 10:06:09 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 10:06:14 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 17:51 Message generated for change (Comment added) made by fdrake You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) >Assigned to: Vinay Sajip (vsajip) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 10:06 Message: Logged In: YES user_id=3066 I've attached a file showing the class I came up with. I don't consider this to be a good wrapper, just what worked. I think one of the problems is that what I really want to override is the makeRecord() method, not the logging methods themselves. There's too much logic in those dealing with the disabling and level filtering that I don't want to duplicate, but as soon as I pass the calls on to the underlying logger, I can no longer change the makeRecord(). It would be possible to inject a new makeRecord() while my methods are active (in my definition for log() in the sample), and restore the original in a finally clause, but that feels... icky. 
The advantage of overriding makeRecord() is that formatter can deal with with how the additional information is added to the log because more information is made available on the record. ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-24 06:58 Message: Logged In: YES user_id=308438 How about if I add a LoggerAdapter class which takes a logger in the __init__ and has logging methods debug(), info() etc. [and including _log()] which delegate to the underlying logger? Then you could subclass the Adapter and just override the methods you needed. Would that fit the bill? Of course the package can use a Logger-derived class, but this would apply to all loggers where the LoggerAdapter could be used for just some of the loggers in a system. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 00:13 Message: Logged In: YES user_id=3066 Looking at this again, after adjusting the application I have that used the connection-specific loggers, I decided that a different approach better solves the problem. What you've shown requires exactly what I wanted to avoid: having to make a gesture at each logging call (to transform the message). Instead of doing this, I ended up writing a wrapper for the logger objects that implement the methods log(), debug(), info(), warn(), warning(), error(), exception(), critical(), and fatal(). These methods each transform the message before calling the underlying logger. It would be really nice to have something like this that isolates the final call to Logger._log() so specific implementations can simply override _log() (or some other helper routine that gets all the info) and maybe the __init__(). I don't think that's completely necessary, but would probably make it a lot easier to implement this pattern. There's probably some useful documentation improvements that could be made to help people avoid the issue of leaking loggers. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-10 12:50 Message: Logged In: YES user_id=3066 Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-09 12:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 05:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 15:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... 
class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Thu Jun 24 11:28:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 11:28:42 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 21:51 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2004-06-24 15:28 Message: Logged In: YES user_id=308438 Suppose I add a callable "recordMaker" to logging, and modify makeRecord() to call it with logger + the args passed to makeRecord(). If it's necessary to add extra attrs to the LogRecord, this can be done by replacing recordMaker with your own callable. Seems less icky - what do you think? If you think it'll fly, are there any other args you think I need to pass into the callable? ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 14:06 Message: Logged In: YES user_id=3066 I've attached a file showing the class I came up with. I don't consider this to be a good wrapper, just what worked. I think one of the problems is that what I really want to override is the makeRecord() method, not the logging methods themselves. There's too much logic in those dealing with the disabling and level filtering that I don't want to duplicate, but as soon as I pass the calls on to the underlying logger, I can no longer change the makeRecord(). It would be possible to inject a new makeRecord() while my methods are active (in my definition for log() in the sample), and restore the original in a finally clause, but that feels... icky. 
The advantage of overriding makeRecord() is that formatter can deal with with how the additional information is added to the log because more information is made available on the record. ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-24 10:58 Message: Logged In: YES user_id=308438 How about if I add a LoggerAdapter class which takes a logger in the __init__ and has logging methods debug(), info() etc. [and including _log()] which delegate to the underlying logger? Then you could subclass the Adapter and just override the methods you needed. Would that fit the bill? Of course the package can use a Logger-derived class, but this would apply to all loggers where the LoggerAdapter could be used for just some of the loggers in a system. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 04:13 Message: Logged In: YES user_id=3066 Looking at this again, after adjusting the application I have that used the connection-specific loggers, I decided that a different approach better solves the problem. What you've shown requires exactly what I wanted to avoid: having to make a gesture at each logging call (to transform the message). Instead of doing this, I ended up writing a wrapper for the logger objects that implement the methods log(), debug(), info(), warn(), warning(), error(), exception(), critical(), and fatal(). These methods each transform the message before calling the underlying logger. It would be really nice to have something like this that isolates the final call to Logger._log() so specific implementations can simply override _log() (or some other helper routine that gets all the info) and maybe the __init__(). I don't think that's completely necessary, but would probably make it a lot easier to implement this pattern. There's probably some useful documentation improvements that could be made to help people avoid the issue of leaking loggers. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-10 16:50 Message: Logged In: YES user_id=3066 Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-09 16:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 09:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 19:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... 
class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Thu Jun 24 13:08:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 13:09:31 2004 Subject: [ python-Feature Requests-950644 ] Allow any lvalue for function definitions Message-ID: Feature Requests item #950644, was opened at 2004-05-08 20:52 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=950644&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Albert Torpey (dtorp) >Assigned to: Guido van Rossum (gvanrossum) Summary: Allow any lvalue for function definitions Initial Comment: A definition like: def M(x): return 2*x is the same as: M = lambda x: 2*x With the latter form, I can use any lvalue: A[0] = lambda x: 2*x B.f = lambda x: 2*x But with the first form, you're locked into just using a plain variable name. If this were fixed, it wouldn't break anything else but would be useful for making method definitons outside of a class definition: This came up when I was experimenting with David MacQuigg's ideas for prototype OO. I want to write something like: Account = Object.clone() Account.balance = 0 def Account.deposit(self, v): self.balance += v Unfortunately, the latter has to be written: def Account.deposit(self, v): self.balance += v Account.deposit = deposit ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-24 12:08 Message: Logged In: YES user_id=80475 Guido, are you open to this? If so, I would be happy to draft a patch. I wouldn't expect it to become mainstream, but it would open the door to working with namespaces more directly. AFAICT, there is no downside to allowing this capability. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-21 00:57 Message: Logged In: YES user_id=80475 I think this should be made possible. It allows for alternate coding styles wihout harming anything else. The Lua programming language has something similar. It is a lightweight, non-OO language that revolves around making the best possible use of namespaces. Direct assignments into a namespace come up in several contexts throughout the language and are key to Lua's flexibility (using one concept to meet all needs). My only concern is that "def" and "class" statements also have the side effect of binding the __name__ attribute. We would have to come up with a lambda style placeholder for the attribute. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-05-19 19:56 Message: Logged In: YES user_id=99874 I'm highly dubious of this. 
I see little advantage to doing the definition and storing the value in a single line, mostly because I rarely want to do such a thing. Your example may be convincing in Prothon or some relative, but in Python the sensible way to do it is a normal method. I'd suggest that if you want this idea to go anywhere, you post it to c.l.py and see if you can drum up interest and support there.

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=950644&group_id=5470

From noreply at sourceforge.net Thu Jun 24 16:43:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 16:43:09 2004 Subject: [ python-Bugs-979252 ] Trap OSError when calling RotatingFileHandler.doRollover Message-ID: Bugs item #979252, was opened at 2004-06-24 15:43 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979252&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Richard T. Hewitt (hewittr) Assigned to: Nobody/Anonymous (nobody) Summary: Trap OSError when calling RotatingFileHandler.doRollover

Initial Comment: I use the RotatingFileHandler in most of my scripts. The script will crash if the RotatingFileHandler encounters a locked log file. I would like to see something like:

    def emit(self, record):
        """
        Emit a record.

        Output the record to the file, catering for rollover as
        described in __init__().
        """
        if self.maxBytes > 0:                  # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                try:
                    self.doRollover()
                except Exception:
                    logging.FileHandler.emit(self, 'Failed to doRollover.')
        logging.FileHandler.emit(self, record)

My version of Python (2.3.2) had the wrong docstring as well, referring to a non-existent setRollover.

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979252&group_id=5470

From noreply at sourceforge.net Thu Jun 24 21:16:37 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 21:16:42 2004 Subject: [ python-Bugs-979407 ] urllib2 digest auth totally broken Message-ID: Bugs item #979407, was opened at 2004-06-24 20:16 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979407&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Aaron Swartz (aaronsw) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 digest auth totally broken

Initial Comment: The urllib2 digest auth handler is totally broken.

1. It looks for an "Authorization" header instead of "WWW-Authenticate" (Authorization is the header you send back).
2. It thinks passwords in the URL are port names.
3. Even if you get around all that, it just doesn't work. It seems to encrypt the thing wrongly and get itself into an infinite loop sending the wrong answer back again and again, being rejected each time.
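For context on what the report above exercises, this is the usual way a digest-auth handler gets wired into urllib2; the URL and credentials are placeholders. With the bugs described, a setup like this ends up re-sending a rejected Authorization header instead of authenticating.

    import urllib2

    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, 'http://www.example.com/protected/',
                              'user', 'secret')
    opener = urllib2.build_opener(urllib2.HTTPDigestAuthHandler(password_mgr))
    # response = opener.open('http://www.example.com/protected/')
    # print response.read()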
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979407&group_id=5470 From noreply at sourceforge.net Thu Jun 24 22:46:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 22:46:45 2004 Subject: [ python-Bugs-978308 ] Spurious errors taking bool of dead proxy Message-ID: Bugs item #978308, was opened at 2004-06-23 11:48 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 6 Submitted By: Chris King (colander_man) >Assigned to: Neal Norwitz (nnorwitz) Summary: Spurious errors taking bool of dead proxy Initial Comment: The following code will cause interesting errors: from weakref import proxy class a: pass bool(proxy(a())) Entered at a prompt, this will not cause a ReferenceError until another statement is entered (and will instead return True or 1) in both 2.3 and 2.2. Run as a script, this will return True and cause a fatal error in 2.3, but will return 1 and otherwise exhibit no strange behaviour in 2.2 (even with additional code accessing the return value). The equivalent code written using ref rather than proxy works as expected: bool(ref(a())()) returns False and creates no errors. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:46 Message: Logged In: YES user_id=33168 I believe the fix is in Objects/weakrefobject.c, line 358. -1 should be returned, not 1 since an error occurred in proxy_checkref(). I'll try to fix this. If anyone wants to steal it, feel free. :-) Chris, if you could test the fix and report your results, that would be great. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 16:13 Message: Logged In: YES user_id=80475 Confirmed the script behavior in Py2.4. The interactive prompt results are not consistent It returned True the first time I ran it and raised a ReferenceError in subsequent attempts. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 From noreply at sourceforge.net Thu Jun 24 22:57:35 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Thu Jun 24 22:57:40 2004 Subject: [ python-Bugs-978308 ] Spurious errors taking bool of dead proxy Message-ID: Bugs item #978308, was opened at 2004-06-23 11:48 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 6 Submitted By: Chris King (colander_man) Assigned to: Neal Norwitz (nnorwitz) Summary: Spurious errors taking bool of dead proxy Initial Comment: The following code will cause interesting errors: from weakref import proxy class a: pass bool(proxy(a())) Entered at a prompt, this will not cause a ReferenceError until another statement is entered (and will instead return True or 1) in both 2.3 and 2.2. 
Run as a script, this will return True and cause a fatal error in 2.3, but will return 1 and otherwise exhibit no strange behaviour in 2.2 (even with additional code accessing the return value). The equivalent code written using ref rather than proxy works as expected: bool(ref(a())()) returns False and creates no errors. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:57 Message: Logged In: YES user_id=33168 attaching patch w/test ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:46 Message: Logged In: YES user_id=33168 I believe the fix is in Objects/weakrefobject.c, line 358. -1 should be returned, not 1 since an error occurred in proxy_checkref(). I'll try to fix this. If anyone wants to steal it, feel free. :-) Chris, if you could test the fix and report your results, that would be great. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 16:13 Message: Logged In: YES user_id=80475 Confirmed the script behavior in Py2.4. The interactive prompt results are not consistent It returned True the first time I ran it and raised a ReferenceError in subsequent attempts. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 From noreply at sourceforge.net Fri Jun 25 04:39:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 04:39:23 2004 Subject: [ python-Bugs-977470 ] Deleted files are reinstalled Message-ID: Bugs item #977470, was opened at 2004-06-22 15:07 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anders J. Munch (andersjm) Assigned to: Nobody/Anonymous (nobody) Summary: Deleted files are reinstalled Initial Comment: Python 2.3.3 on W2K Run python setup.py install then delete a .py-file from setup.py and from the Python installation, then run python setup.py install again. Now the removed .py-file will be reinstalled - presumably because there's still a copy in build/lib. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-25 10:39 Message: Logged In: YES user_id=11105 You can clean before install. Either 'python setup.py clean install' or 'python setup.py clean -a install' should work. ---------------------------------------------------------------------- Comment By: Anders J. Munch (andersjm) Date: 2004-06-23 11:50 Message: Logged In: YES user_id=384806 It's a bug because it bit me :-) I had a module xml.py in a package and renamed it because the name clash with the top-level module was creating problems. Having the renamed file reappear under the old name seemingly from out of nowhere was very confusing. I guess my mental model is that everything in the build directory is an implementation detail. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-22 20:12 Message: Logged In: YES user_id=21627 Why is that a bug? You have to remove the build directory in that case. 
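As a hedged aside, the manual equivalent of the clean step being discussed, run from the directory containing setup.py ("build" is the default distutils build directory):

    import os, shutil

    # Roughly what "python setup.py clean -a" achieves before a reinstall:
    # drop the cached build tree so removed or renamed modules are not
    # copied back into the installation.
    if os.path.isdir("build"):
        shutil.rmtree("build")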
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977470&group_id=5470 From noreply at sourceforge.net Fri Jun 25 06:55:07 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 06:55:20 2004 Subject: [ python-Bugs-978645 ] compiler warning for _NSGetExecutablePath() in getpath.c Message-ID: Bugs item #978645, was opened at 2004-06-24 02:49 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978645&group_id=5470 Category: Macintosh Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Brett Cannon (bcannon) Assigned to: Jack Jansen (jackjansen) Summary: compiler warning for _NSGetExecutablePath() in getpath.c Initial Comment: On OS X 10.3.4 (gcc 3.3, compiled with --disable-framework --disable-toolbox-glue), a warning about no function prototype for _NSGetExecutablePath() is thrown for Modules/getpath.c:399. It looks like this has to do with the include file for the function (mach-o/dyld.h) being #ifdef'ed with WITH_NEXT_FRAMEWORK while the code using _NSGetExecutablePath() is #ifdef'ed with __APPLE__. Should probably use the same #ifdef statement, but I don't know which one. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-25 12:55 Message: Logged In: YES user_id=45365 Brett, you want the _NSGetExecutablePath code for non-framework builds as well as for framework builds. I can't test right now, but I assume that using

    #ifdef __APPLE__
    #include <mach-o/dyld.h>
    #endif

at the top (instead of the #ifdef WITH_NEXT_FRAMEWORK that's there right now) should do the trick. Could you check this, and check in the fix if it works, please?
---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-06-25 12:55 Message: Logged In: YES user_id=45365 Brett, you want the _NSGetExecutablePath code for non-framework builds as well as for framework builds. I can't test right now, but I assume that using

    #ifdef __APPLE__
    #include <mach-o/dyld.h>
    #endif

at the top (instead of the #ifdef WITH_NEXT_FRAMEWORK that's there right now) should do the trick. Could you check this, and check in the fix if it works, please? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978645&group_id=5470 From noreply at sourceforge.net Fri Jun 25 07:56:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 07:56:58 2004 Subject: [ python-Bugs-978662 ] can't compile _localemodule.c w/o --enable-toolbox-glue Message-ID: Bugs item #978662, was opened at 2004-06-24 03:56 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978662&group_id=5470 Category: Macintosh Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Jack Jansen (jackjansen) Summary: can't compile _localemodule.c w/o --enable-toolbox-glue Initial Comment: Line 412 of Modules/_localemodule.c calls PyMac_getscript() which is within a ``#if defined(__APPLE__)`` block. Trouble is that the code is in Python/mactoolboxglue.c which is not compiled if --disable-toolbox-glue is a compile option (which it was on my OS X 10.3.4 box). Probably shouldn't have a compile failure thanks to ld not finding the symbol; should probably either just not compile the module, change the #if block, or change the function. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-25 13:56 Message: Logged In: YES user_id=45365 There are three solutions:
1) Move PyMac_getscript() to _localemodule.
2) Copy PyMac_getscript() to _localemodule.
3) #ifdef PyLocale_getdefaultlocale in _localemodule on something set by enable_toolbox_module_glue.
1) means we can't really put the external declaration in macglue.h any more (_localemodule needn't be configured), but I don't think it's used by anything but localemodule, so it's probably the best solution. Still, this is an incompatible change, so we shouldn't backport it.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978662&group_id=5470 From noreply at sourceforge.net Fri Jun 25 09:45:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 09:45:30 2004 Subject: [ python-Bugs-979739 ] compile of code with incorrect encoding yields MemoryError Message-ID: Bugs item #979739, was opened at 2004-06-25 07:45 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979739&group_id=5470 Category: Parser/Compiler Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Tuininga (atuining) Assigned to: Nobody/Anonymous (nobody) Summary: compile of code with incorrect encoding yields MemoryError Initial Comment: The following code will fail in both Python 2.3 and Python 2.4, raising a MemoryError exception, when run on any platform except Windows: compile("# -*- coding: mbcs -*-", "test.py", "exec") This has been reproduced on the following platforms: Linux x86 HP-UX Solaris Tru64 Unix I realize this is an invalid encoding but it would be nice if something other than MemoryError was raised! ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979739&group_id=5470 From noreply at sourceforge.net Fri Jun 25 09:47:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 09:47:08 2004 Subject: [ python-Bugs-978308 ] Spurious errors taking bool of dead proxy Message-ID: Bugs item #978308, was opened at 2004-06-23 11:48 Message generated for change (Comment added) made by colander_man You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 6 Submitted By: Chris King (colander_man) Assigned to: Neal Norwitz (nnorwitz) Summary: Spurious errors taking bool of dead proxy Initial Comment: The following code will cause interesting errors: from weakref import proxy class a: pass bool(proxy(a())) Entered at a prompt, this will not cause a ReferenceError until another statement is entered (and will instead return True or 1) in both 2.3 and 2.2. Run as a script, this will return True and cause a fatal error in 2.3, but will return 1 and otherwise exhibit no strange behaviour in 2.2 (even with additional code accessing the return value). The equivalent code written using ref rather than proxy works as expected: bool(ref(a())()) returns False and creates no errors. ---------------------------------------------------------------------- >Comment By: Chris King (colander_man) Date: 2004-06-25 09:47 Message: Logged In: YES user_id=573252 The fix works (plus gave me an excuse to download the 2.4 snapshot). Thanks! ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:57 Message: Logged In: YES user_id=33168 attaching patch w/test ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:46 Message: Logged In: YES user_id=33168 I believe the fix is in Objects/weakrefobject.c, line 358. -1 should be returned, not 1 since an error occurred in proxy_checkref(). I'll try to fix this. If anyone wants to steal it, feel free. 
:-) Chris, if you could test the fix and report your results, that would be great. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 16:13 Message: Logged In: YES user_id=80475 Confirmed the script behavior in Py2.4. The interactive prompt results are not consistent It returned True the first time I ran it and raised a ReferenceError in subsequent attempts. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 From noreply at sourceforge.net Fri Jun 25 10:47:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 10:47:47 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-22 22:17 Message generated for change (Comment added) made by mondragon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 >Category: IDLE Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- >Comment By: Nick Bastin (mondragon) Date: 2004-06-25 10:47 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. 
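A small sketch of where the compile-time path surfaces at run time (the module and function names are made up); this recorded value is what tracebacks and the debugger report:

    import mymodule                                   # hypothetical pure-Python module

    print mymodule.__file__                           # where the .pyc was found just now
    print mymodule.some_func.func_code.co_filename    # source path baked in at compile time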
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Fri Jun 25 10:48:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 10:48:46 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-22 22:17 Message generated for change (Comment added) made by mondragon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 Category: IDLE Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- >Comment By: Nick Bastin (mondragon) Date: 2004-06-25 10:48 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 10:47 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Fri Jun 25 11:12:32 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 11:12:36 2004 Subject: [ python-Bugs-979794 ] diffutil errors when coparing 2 0 byte entries Message-ID: Bugs item #979794, was opened at 2004-06-25 11:12 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979794&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robert M. Zigweid (rzigweid) Assigned to: Nobody/Anonymous (nobody) Summary: diffutil errors when coparing 2 0 byte entries Initial Comment: difflib has a problem where if the two things that it is comparing are 0 byte/null that when it comes time to output the results, it errors because a generator appears to not be properly set up. To duplicate easily, use the diff.py utility in Tools/scripts and diff two zero byte files. This error does not occur if either of the objects being compared has content. File "diff.py", line 40, in ? sys.stdout.writelines(diff) File "/usr/local/lib/python2.3/difflib.py", line 1215, in context_diff for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): File "/usr/local/lib/python2.3/difflib.py", line 574, in get_grouped_opcodes if codes[0][0] == 'equal': IndexError: list index out of range ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979794&group_id=5470 From noreply at sourceforge.net Fri Jun 25 11:13:50 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 11:13:53 2004 Subject: [ python-Bugs-979794 ] diffutil errors when comparing 2 0 byte entries Message-ID: Bugs item #979794, was opened at 2004-06-25 11:12 Message generated for change (Settings changed) made by rzigweid You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979794&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robert M. Zigweid (rzigweid) >Assigned to: Barry A. Warsaw (bwarsaw) >Summary: diffutil errors when comparing 2 0 byte entries Initial Comment: difflib has a problem where if the two things that it is comparing are 0 byte/null that when it comes time to output the results, it errors because a generator appears to not be properly set up. To duplicate easily, use the diff.py utility in Tools/scripts and diff two zero byte files. This error does not occur if either of the objects being compared has content. File "diff.py", line 40, in ? 
sys.stdout.writelines(diff) File "/usr/local/lib/python2.3/difflib.py", line 1215, in context_diff for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): File "/usr/local/lib/python2.3/difflib.py", line 574, in get_grouped_opcodes if codes[0][0] == 'equal': IndexError: list index out of range ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979794&group_id=5470 From noreply at sourceforge.net Fri Jun 25 13:23:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 13:23:17 2004 Subject: [ python-Bugs-979872 ] On HPUX 11i universal newlines seems to cause readline(s) to Message-ID: Bugs item #979872, was opened at 2004-06-25 13:23 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979872&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: dmcisaac (dmcisaac) Assigned to: Nobody/Anonymous (nobody) Summary: On HPUX 11i universal newlines seems to cause readline(s) to Initial Comment: I compiled version 2.3.4 on hp-ux 11i, with shared and threads enabled, using Gnu c 3.3.3. 'make test' fails on all tests that use readline() and/or readlines() and test_univnewlines fails with a memory fault and core dump. All other tests pass that I expect to pass. If I hand modify pyconfig.h to comment out with universal newline support and recompile (after a make clean) then the readline(s) failures go away. I have also compiled without thread support and got the same failures as with using universal newlines. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979872&group_id=5470 From noreply at sourceforge.net Fri Jun 25 14:52:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 14:52:47 2004 Subject: [ python-Bugs-979924 ] email.Message.Message.__getitem__ doc string wrong Message-ID: Bugs item #979924, was opened at 2004-06-25 14:52 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979924&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Jp Calderone (kuran) Assigned to: Nobody/Anonymous (nobody) Summary: email.Message.Message.__getitem__ doc string wrong Initial Comment: The doc string for email.Message.Message.__getitem__ references a "getall" method. There is no such method. It should refer to the "get_all" method instead. 
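For context, a quick sketch of the two lookups the docstring should distinguish (the message text is made up):

    import email

    msg = email.message_from_string(
        "To: a@example.com\nTo: b@example.com\nSubject: hi\n\nbody\n")
    print msg["To"]            # __getitem__: returns one matching header value
    print msg.get_all("To")    # get_all: returns every matching value as a list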
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979924&group_id=5470 From noreply at sourceforge.net Fri Jun 25 15:09:27 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 15:09:33 2004 Subject: [ python-Bugs-979739 ] compile of code with incorrect encoding yields MemoryError Message-ID: Bugs item #979739, was opened at 2004-06-25 15:45 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979739&group_id=5470 Category: Parser/Compiler Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Anthony Tuininga (atuining) Assigned to: Nobody/Anonymous (nobody) Summary: compile of code with incorrect encoding yields MemoryError Initial Comment: The following code will fail in both Python 2.3 and Python 2.4, raising a MemoryError exception, when run on any platform except Windows: compile("# -*- coding: mbcs -*-", "test.py", "exec") This has been reproduced on the following platforms: Linux x86 HP-UX Solaris Tru64 Unix I realize this is an invalid encoding but it would be nice if something other than MemoryError was raised! ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-25 21:09 Message: Logged In: YES user_id=11105 I assume the behaviour occurrs when an unknown encoding is specified. It can be reproduced on Windows: compile("# -*- coding: xxx -*-", "test.py", "exec") It should probably raise a SyntaxError, as trying to import a module containing this encoding does. The problem seems that when PyParser_ParseStringFlagsFilename() calls PyTokenizer_FromString(), and when the latter fails the error is set to E_NOMEM. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979739&group_id=5470 From noreply at sourceforge.net Fri Jun 25 15:50:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 15:50:59 2004 Subject: [ python-Bugs-979967 ] gethostbyname is broken on hpux 11.11 Message-ID: Bugs item #979967, was opened at 2004-06-25 14:50 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Ehab Teima (ehab_teima) Assigned to: Nobody/Anonymous (nobody) Summary: gethostbyname is broken on hpux 11.11 Initial Comment: This bug exists in Python 2.3.2, 2.3.3 and 2.3.4. socket.gethostbyname is broken on hpux HP-UX B.11.11 U. This is what I'm getting when I try to call socket.gethostbyname('server'): Traceback (most recent call last): File "../../test.py", line 2, in ? a=socket.gethostbyname('myserver') socket.gaierror: (8, 'host nor service provided, or not known') I got the same error when I tried getaddrinfo(server, port). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 From noreply at sourceforge.net Fri Jun 25 16:05:33 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 16:05:42 2004 Subject: [ python-Bugs-979967 ] gethostbyname is broken on hpux 11.11 Message-ID: Bugs item #979967, was opened at 2004-06-25 14:50 Message generated for change (Settings changed) made by ehab_teima You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None >Priority: 9 Submitted By: Ehab Teima (ehab_teima) Assigned to: Nobody/Anonymous (nobody) Summary: gethostbyname is broken on hpux 11.11 Initial Comment: This bug exists in Python 2.3.2, 2.3.3 and 2.3.4. socket.gethostbyname is broken on hpux HP-UX B.11.11 U. This is what I'm getting when I try to call socket.gethostbyname('server'): Traceback (most recent call last): File "../../test.py", line 2, in ? a=socket.gethostbyname('myserver') socket.gaierror: (8, 'host nor service provided, or not known') I got the same error when I tried getaddrinfo(server, port). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 From noreply at sourceforge.net Fri Jun 25 16:15:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 16:15:52 2004 Subject: [ python-Bugs-979967 ] gethostbyname is broken on hpux 11.11 Message-ID: Bugs item #979967, was opened at 2004-06-25 14:50 Message generated for change (Comment added) made by ehab_teima You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 9 Submitted By: Ehab Teima (ehab_teima) Assigned to: Nobody/Anonymous (nobody) Summary: gethostbyname is broken on hpux 11.11 Initial Comment: This bug exists in Python 2.3.2, 2.3.3 and 2.3.4. socket.gethostbyname is broken on hpux HP-UX B.11.11 U. This is what I'm getting when I try to call socket.gethostbyname('server'): Traceback (most recent call last): File "../../test.py", line 2, in ? a=socket.gethostbyname('myserver') socket.gaierror: (8, 'host nor service provided, or not known') I got the same error when I tried getaddrinfo(server, port). ---------------------------------------------------------------------- >Comment By: Ehab Teima (ehab_teima) Date: 2004-06-25 15:15 Message: Logged In: YES user_id=1069522 The servername is supposed to be resolved via DNS query to get thje ip address. If I use the ip address instead of the servername, the function works fine [i.e. socket.gethostbyname('x.x.x.x') works fine.] 
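A minimal way to reproduce the contrast described above (the host name is a placeholder; the numeric form skips the resolver path that fails here):

    import socket

    print socket.gethostbyname("127.0.0.1")   # numeric address: reported to work
    print socket.gethostbyname("myserver")    # DNS name: raises socket.gaierror on the affected HP-UX build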
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 From noreply at sourceforge.net Fri Jun 25 17:28:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 17:28:15 2004 Subject: [ python-Bugs-980019 ] email module namespace inconsistent Message-ID: Bugs item #980019, was opened at 2004-06-25 15:28 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980019&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Charles (melicertes) Assigned to: Nobody/Anonymous (nobody) Summary: email module namespace inconsistent Initial Comment: Inconsistencies in the email module: ---------------- from email.Generator import Generator foo = Generator(...) ---------------- works as you'd expect, but the following: ---------------- import email foo = email.Generator.Generator(...) ---------------- raises AttributeError: 'module' object has no attribute 'Generator'. The two should be equivalent, shouldn't they? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980019&group_id=5470 From noreply at sourceforge.net Fri Jun 25 18:29:43 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 18:29:50 2004 Subject: [ python-Bugs-980019 ] email module namespace inconsistent Message-ID: Bugs item #980019, was opened at 2004-06-25 16:28 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980019&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None >Priority: 3 Submitted By: Charles (melicertes) >Assigned to: Barry A. Warsaw (bwarsaw) Summary: email module namespace inconsistent Initial Comment: Inconsistencies in the email module: ---------------- from email.Generator import Generator foo = Generator(...) ---------------- works as you'd expect, but the following: ---------------- import email foo = email.Generator.Generator(...) ---------------- raises AttributeError: 'module' object has no attribute 'Generator'. The two should be equivalent, shouldn't they? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980019&group_id=5470 From noreply at sourceforge.net Fri Jun 25 20:54:45 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 20:54:50 2004 Subject: [ python-Bugs-980092 ] tp_subclasses grow without bounds Message-ID: Bugs item #980092, was opened at 2004-06-25 17:54 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980092&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Eric Huss (ehuss) Assigned to: Nobody/Anonymous (nobody) Summary: tp_subclasses grow without bounds Initial Comment: Python 2.3.4 When heap allocated type objects are created, they will be added to their base class's tp_subclasses list as a weak reference. 
If, for example, your base type is PyBaseObject_Type, then the tp_subclasses list for the base object type will grow for each new object. Unfortunately remove_subclass is never called. If your newly create type objects are deleted, then you will end up with a bunch of weak reference objects in the tp_subclasses list that do not reference anything. Perhaps remove_subclass should be called inside type_dealloc? Or, better yet, tp_subclasses should be a Weak Set. I'm not certain what's the best solution. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980092&group_id=5470 From noreply at sourceforge.net Fri Jun 25 21:13:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 21:13:21 2004 Subject: [ python-Bugs-624827 ] Creation of struct_seq types Message-ID: Bugs item #624827, was opened at 2002-10-17 11:51 Message generated for change (Comment added) made by ehuss You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=624827&group_id=5470 Category: Extension Modules Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Creation of struct_seq types Initial Comment: It would be really nice to be able to create structure sequence types from within Python. MvL suggested: ------------------- ... I think all you need to do is to expose PyStructSequence_InitType. I would recommend an interface like struct_seq(name, doc, n_in_sequence, (fields)) where fields is a list of (name,doc) tuples. You will need to temporarily allocate a PyStructSequence_Field array of len(fields)+1 elements, and put the PyStructSequence_Desc on the stack. You will also need to dynamically allocate a PyTypeObject which you return. I would put this into the new module. ------------------- Assigned to me since I actually wanted to create these things. ---------------------------------------------------------------------- Comment By: Eric Huss (ehuss) Date: 2004-06-25 18:13 Message: Logged In: YES user_id=393416 Uploaded patch 980098 which implements this functionality as a separate module. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-10-22 13:26 Message: Logged In: YES user_id=3066 As part of this, struct_seq types should be made subclassable. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=624827&group_id=5470 From noreply at sourceforge.net Fri Jun 25 21:42:12 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 21:42:18 2004 Subject: [ python-Bugs-900977 ] cygwinccompiler.get_versions fails on `ld -v` output Message-ID: Bugs item #900977, was opened at 2004-02-20 04:52 Message generated for change (Settings changed) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=900977&group_id=5470 Category: Distutils Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Pearu Peterson (pearu) >Assigned to: Nobody/Anonymous (nobody) Summary: cygwinccompiler.get_versions fails on `ld -v` output Initial Comment: Under linux `ld -v` returns GNU ld version 2.14.90.0.7 20031029 Debian GNU/Linux for instance, and get_versions() function uses StrictVersion on '2.14.90.0.7'. 
This situation triggers an error: ValueError: invalid version number '2.14.90.0.7' As a fix, either use LooseVersion or the following re pattern result = re.search('(\d+\.\d+(\.\d+)?)',out_string) in `if ld_exe` block. Pearu ---------------------------------------------------------------------- Comment By: Jason Tishler (jlt63) Date: 2004-06-14 07:33 Message: Logged In: YES user_id=86216 Hmm... I botch a Cygwin Python release and I get assigned a 4 month old Distutils bug. Coincidence or punishment? :,) > Is this still a problem? I don't know. > Jason, do you have any comments? This problem seems more like a Distutils issue than a Cygwin one. Please assign to a Distutils developer (e.g., Rene). Since I'm not familiar with the issues, I'm afraid that if I try to fix this problem I may cause another one... Additionally, I cannot reproduce the problem on my Linux box unless I write a shell script to simulate the behavior of Pearu's ld -v... ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-13 17:33 Message: Logged In: YES user_id=33168 Is this still a problem? Jason, do you have any comments? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=900977&group_id=5470 From noreply at sourceforge.net Fri Jun 25 22:41:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 22:41:34 2004 Subject: [ python-Bugs-980117 ] Index error for empty lists in Difflib Message-ID: Bugs item #980117, was opened at 2004-06-25 22:41 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980117&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Rocco Moretti (roccomoretti) Assigned to: Nobody/Anonymous (nobody) Summary: Index error for empty lists in Difflib Initial Comment: When using the context_diff and unified_diff functions in difflib, submitting two empty lists to compare to each other produces an IndexError, as opposed to the expected empty diff. This error does not occur when using ndiff, or when comparing a nonempty list to an empty list. (In either direction) Example Behavior: Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import difflib >>> difflib.unified_diff([],[]) >>> list(difflib.unified_diff([],[])) Traceback (most recent call last): File "", line 1, in ? 
File "C:\Python23\lib\difflib.py", line 1149, in unified_diff for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): File "C:\Python23\lib\difflib.py", line 574, in get_grouped_opcodes if codes[0][0] == 'equal': IndexError: list index out of range >>> list(difflib.unified_diff([],[''])) ['--- \n', '+++ \n', '@@ -1,0 +1,1 @@\n', '+'] >>> list(difflib.unified_diff([''],[])) ['--- \n', '+++ \n', '@@ -1,1 +1,0 @@\n', '-'] >>> ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980117&group_id=5470 From noreply at sourceforge.net Fri Jun 25 22:46:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 22:47:01 2004 Subject: [ python-Bugs-978308 ] Spurious errors taking bool of dead proxy Message-ID: Bugs item #978308, was opened at 2004-06-23 11:48 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 6 Submitted By: Chris King (colander_man) Assigned to: Neal Norwitz (nnorwitz) Summary: Spurious errors taking bool of dead proxy Initial Comment: The following code will cause interesting errors: from weakref import proxy class a: pass bool(proxy(a())) Entered at a prompt, this will not cause a ReferenceError until another statement is entered (and will instead return True or 1) in both 2.3 and 2.2. Run as a script, this will return True and cause a fatal error in 2.3, but will return 1 and otherwise exhibit no strange behaviour in 2.2 (even with additional code accessing the return value). The equivalent code written using ref rather than proxy works as expected: bool(ref(a())()) returns False and creates no errors. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-25 22:46 Message: Logged In: YES user_id=31435 Neal, FWIW, your fix is certainly correct. ---------------------------------------------------------------------- Comment By: Chris King (colander_man) Date: 2004-06-25 09:47 Message: Logged In: YES user_id=573252 The fix works (plus gave me an excuse to download the 2.4 snapshot). Thanks! ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:57 Message: Logged In: YES user_id=33168 attaching patch w/test ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2004-06-24 22:46 Message: Logged In: YES user_id=33168 I believe the fix is in Objects/weakrefobject.c, line 358. -1 should be returned, not 1 since an error occurred in proxy_checkref(). I'll try to fix this. If anyone wants to steal it, feel free. :-) Chris, if you could test the fix and report your results, that would be great. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 16:13 Message: Logged In: YES user_id=80475 Confirmed the script behavior in Py2.4. The interactive prompt results are not consistent It returned True the first time I ran it and raised a ReferenceError in subsequent attempts. 
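For reference, a sketch of the behaviour the patch aims for, assuming a build with Neal's fix applied (matching what the ref-based spelling already does):

    from weakref import proxy

    class A(object):
        pass

    p = proxy(A())          # the referent has no other references, so it dies at once
    try:
        bool(p)
    except ReferenceError:
        print "dead proxy raises ReferenceError immediately"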
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978308&group_id=5470 From noreply at sourceforge.net Fri Jun 25 23:17:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 23:18:01 2004 Subject: [ python-Bugs-979794 ] diffutil errors when comparing 2 0 byte entries Message-ID: Bugs item #979794, was opened at 2004-06-25 11:12 Message generated for change (Comment added) made by bwarsaw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979794&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Robert M. Zigweid (rzigweid) Assigned to: Barry A. Warsaw (bwarsaw) Summary: diffutil errors when comparing 2 0 byte entries Initial Comment: difflib has a problem where if the two things that it is comparing are 0 byte/null that when it comes time to output the results, it errors because a generator appears to not be properly set up. To duplicate easily, use the diff.py utility in Tools/scripts and diff two zero byte files. This error does not occur if either of the objects being compared has content. File "diff.py", line 40, in ? sys.stdout.writelines(diff) File "/usr/local/lib/python2.3/difflib.py", line 1215, in context_diff for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): File "/usr/local/lib/python2.3/difflib.py", line 574, in get_grouped_opcodes if codes[0][0] == 'equal': IndexError: list index out of range ---------------------------------------------------------------------- >Comment By: Barry A. Warsaw (bwarsaw) Date: 2004-06-25 23:17 Message: Logged In: YES user_id=12800 Probably a dup of 980117 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979794&group_id=5470 From noreply at sourceforge.net Fri Jun 25 23:18:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 23:18:27 2004 Subject: [ python-Bugs-980117 ] Index error for empty lists in Difflib Message-ID: Bugs item #980117, was opened at 2004-06-25 22:41 Message generated for change (Comment added) made by bwarsaw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980117&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Rocco Moretti (roccomoretti) Assigned to: Nobody/Anonymous (nobody) Summary: Index error for empty lists in Difflib Initial Comment: When using the context_diff and unified_diff functions in difflib, submitting two empty lists to compare to each other produces an IndexError, as opposed to the expected empty diff. This error does not occur when using ndiff, or when comparing a nonempty list to an empty list. (In either direction) Example Behavior: Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import difflib >>> difflib.unified_diff([],[]) >>> list(difflib.unified_diff([],[])) Traceback (most recent call last): File "", line 1, in ? 
File "C:\Python23\lib\difflib.py", line 1149, in unified_diff for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): File "C:\Python23\lib\difflib.py", line 574, in get_grouped_opcodes if codes[0][0] == 'equal': IndexError: list index out of range >>> list(difflib.unified_diff([],[''])) ['--- \n', '+++ \n', '@@ -1,0 +1,1 @@\n', '+'] >>> list(difflib.unified_diff([''],[])) ['--- \n', '+++ \n', '@@ -1,1 +1,0 @@\n', '-'] >>> ---------------------------------------------------------------------- >Comment By: Barry A. Warsaw (bwarsaw) Date: 2004-06-25 23:18 Message: Logged In: YES user_id=12800 Probably a dup of 979794 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980117&group_id=5470 From noreply at sourceforge.net Fri Jun 25 23:31:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 23:31:52 2004 Subject: [ python-Bugs-980127 ] Possible error during LINKCC check in Configure.in Message-ID: Bugs item #980127, was opened at 2004-06-25 20:31 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980127&group_id=5470 Category: Build Group: Python 2.4 Status: Open Resolution: None Priority: 3 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Possible error during LINKCC check in Configure.in Initial Comment: Under OS X 10.3.4 (gcc 3.3), when I run ./configure, I get an error from ld about an undefined symbol:: checking LIBRARY... libpython$(VERSION).a checking LINKCC... ld: Undefined symbols: ___gxx_personality_v0 $(PURIFY) $(CXX) checking for --enable-shared... no checking for --enable-profiling... I think that specific symbol comes up only when there is a linking error to the Standard C++ library, but I am not sure. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980127&group_id=5470 From noreply at sourceforge.net Fri Jun 25 23:41:05 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 23:41:14 2004 Subject: [ python-Bugs-980019 ] email module namespace inconsistent Message-ID: Bugs item #980019, was opened at 2004-06-25 17:28 Message generated for change (Comment added) made by bwarsaw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980019&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 3 Submitted By: Charles (melicertes) Assigned to: Barry A. Warsaw (bwarsaw) Summary: email module namespace inconsistent Initial Comment: Inconsistencies in the email module: ---------------- from email.Generator import Generator foo = Generator(...) ---------------- works as you'd expect, but the following: ---------------- import email foo = email.Generator.Generator(...) ---------------- raises AttributeError: 'module' object has no attribute 'Generator'. The two should be equivalent, shouldn't they? ---------------------------------------------------------------------- >Comment By: Barry A. Warsaw (bwarsaw) Date: 2004-06-25 23:41 Message: Logged In: YES user_id=12800 That's not the way Python works, and the email package isn't like os here; it doesn't automatically export its sub-modules. 
You'll need to do this: import email.Generator foo = email.Generator.Generator() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980019&group_id=5470 From noreply at sourceforge.net Fri Jun 25 23:41:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Fri Jun 25 23:41:35 2004 Subject: [ python-Bugs-978662 ] can't compile _localemodule.c w/o --enable-toolbox-glue Message-ID: Bugs item #978662, was opened at 2004-06-23 18:56 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978662&group_id=5470 Category: Macintosh Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Jack Jansen (jackjansen) Summary: can't compile _localemodule.c w/o --enable-toolbox-glue Initial Comment: Line 412 of Modules/_localemodule.c calls PyMac_getscript() which is within a ``#if defined(__APPLE__)`` block. Trouble is that the code is in Python/mactoolboxglue.c which is not compiled if -- disable-toolbox-glue is a compile option (which it was on my OS X 10.3.4 box). Probably shouldn't have a compile failure thanks to ld not finding the symbol; should probably either just not compile the module, change the #if block, or change the function. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-25 20:41 Message: Logged In: YES user_id=357491 grep agrees with you, Jack: drifty@Bretts-Computer>grep -rl PyMac_getscript * Include/pymactoolbox.h Modules/_localemodule.c Python/mactoolboxglue.c So we could just copy and paste it (and fix the spelling of Fredrik's last name =). But will those CF* functions be linked properly w/o a framework build? I didn't see any macro definitions or anything to signal when a toolbox glue build was being done, but it would not hurt to define that. I get a ton of linking errors for any module that has Carbon with this ``-- disable-framework --disable-toolbox-glue`` build that I have been fiddling with. Should we try to make it so that those modules are built only if some flag is set somewhere signifying that the proper OS X- specific build options are not turned off? Or am I just being so totally non-standard with these build options (only done to try to build faster on my old iBook) that it is not worth the hassle? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-06-25 04:56 Message: Logged In: YES user_id=45365 There are two solutions: 1) Move PyMac_getscript() to _localemodule. 2) Copy PyMac_getscript() to _localemodule. 3) #ifdef PyLocale_getdefaultlocale in _localemodule on something set by enable_toolbox_module_glue. 1) means we can't really put the external declaration in macglue.h any more (_localemodule needn't be configured), but I don't think it's used by anything but localemodule, so it's probably the best solution. Still, this is an incompatible change, so we shouldn't backport it. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978662&group_id=5470 From noreply at sourceforge.net Sat Jun 26 00:11:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 00:11:10 2004 Subject: [ python-Bugs-978645 ] compiler warning for _NSGetExecutablePath() in getpath.c Message-ID: Bugs item #978645, was opened at 2004-06-23 17:49 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978645&group_id=5470 Category: Macintosh Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 3 Submitted By: Brett Cannon (bcannon) Assigned to: Jack Jansen (jackjansen) Summary: compiler warning for _NSGetExecutablePath() in getpath.c Initial Comment: On OS X 10.3.4 (gcc 3.3, compiled with --disable-framework -- disable-toolbox-glue), a warning about no function prototype for _NSGetExecutablePath() is thrown for Modules/getpath.c:399 . It looks like this has to do with the include file for the function (mach-o/dyld.h) being #ifdef'ed with WITH_NEXT_FRAMEWORK while the code using _NSGetExecutablePath() is #ifdef'ed with __APPLE__. Should probably use the same #ifdef statement, but I don't know which one. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-25 21:11 Message: Logged In: YES user_id=357491 Checked in on HEAD as rev. 1.49 and on 2.3 as rev. 1.46.14.2 . ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-06-25 04:47 Message: Logged In: YES user_id=45365 PS: if you commit the patch, could you put in a note it needs to be backported? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-06-25 03:55 Message: Logged In: YES user_id=45365 Brett, you want the _NSGetExecutablePath code for non-framework builds as well as for framework builds. I can't test right now, but I assume that using #ifdef __APPLE__ #include #endif at the top (in stead of the #ifdef WITH_NETX_FRAMEWORK that's there right now) should do the trick. Could you check this, and check in the fix if it works, please? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978645&group_id=5470 From noreply at sourceforge.net Sat Jun 26 12:29:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 12:29:20 2004 Subject: [ python-Bugs-980327 ] ntpath normpath Message-ID: Bugs item #980327, was opened at 2004-06-26 18:29 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980327&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Keyphrene (keyphrene) Assigned to: Nobody/Anonymous (nobody) Summary: ntpath normpath Initial Comment: on windows print os.path.normpath("c:\blah\\blah.exe") >>> c:\blah\blah.exe # correct syntax print os.path.normpath("c:\\blah\\blah.exe") >>> c:\\blah\blah.exe # wrong syntax this trouble gets an os error with the rename command (OSError: [Errno 2] No such file or directory) an solution: path.replace("\\","\") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980327&group_id=5470 From noreply at sourceforge.net Sat Jun 26 13:26:18 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 13:26:23 2004 Subject: [ python-Bugs-980352 ] coercion results used dangerously Message-ID: Bugs item #980352, was opened at 2004-06-26 17:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: coercion results used dangerously Initial Comment: The C core uses the result of PyNumber_CoerceEx() dangerously: it gets passed to tp_compare, and most tp_compare slots assume they get two objects of the same type. This assumption is never checked, even when a user-defined __coerce__() is called: >>> class X(object): ... def __coerce__(self, other): ... return 4, other ... >>> slice(1,2,3) == X() Segmentation fault ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 From noreply at sourceforge.net Sat Jun 26 15:21:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 15:21:26 2004 Subject: [ python-Bugs-980352 ] coercion results used dangerously Message-ID: Bugs item #980352, was opened at 2004-06-26 17:26 Message generated for change (Comment added) made by nascheme You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: coercion results used dangerously Initial Comment: The C core uses the result of PyNumber_CoerceEx() dangerously: it gets passed to tp_compare, and most tp_compare slots assume they get two objects of the same type. This assumption is never checked, even when a user-defined __coerce__() is called: >>> class X(object): ... def __coerce__(self, other): ... return 4, other ... 
>>> slice(1,2,3) == X() Segmentation fault ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2004-06-26 19:21 Message: Logged In: YES user_id=35752 This bug should obviously get fixed but in long term I think __coerce__ should go away. Do you think deprecating it for 2.4 and then removing support for it in 2.5 or 2.6 is feasible? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 From noreply at sourceforge.net Sat Jun 26 16:36:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 16:36:42 2004 Subject: [ python-Bugs-980419 ] int left-shift causes memory leak Message-ID: Bugs item #980419, was opened at 2004-06-26 16:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Erik Demaine (edemaine) Assigned to: Nobody/Anonymous (nobody) Summary: int left-shift causes memory leak Initial Comment: The following Python one-liner causes the Python image size to grow quickly (beyond 100 megabytes within 10 seconds or so on my Linux 2.4.18 box on an Intel Pentium 4): while True: 1 << 64 The point is that 1 << 64 gets automatically turned into a long. Somehow, even after all pointers to this long disappear, it (or something produced along the way) sticks around. The same effect is obtained by the following more natural code: while True: x = 1 << 64 There is an easy workaround; the following code does not cause a memory leak: while True: 1L << 64 However, a memory leak should not be intended behavior for the new int/long unification. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 From noreply at sourceforge.net Sat Jun 26 16:38:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 16:38:11 2004 Subject: [ python-Bugs-980419 ] int left-shift causes memory leak Message-ID: Bugs item #980419, was opened at 2004-06-26 16:36 Message generated for change (Comment added) made by edemaine You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Erik Demaine (edemaine) Assigned to: Nobody/Anonymous (nobody) Summary: int left-shift causes memory leak Initial Comment: The following Python one-liner causes the Python image size to grow quickly (beyond 100 megabytes within 10 seconds or so on my Linux 2.4.18 box on an Intel Pentium 4): while True: 1 << 64 The point is that 1 << 64 gets automatically turned into a long. Somehow, even after all pointers to this long disappear, it (or something produced along the way) sticks around. The same effect is obtained by the following more natural code: while True: x = 1 << 64 There is an easy workaround; the following code does not cause a memory leak: while True: 1L << 64 However, a memory leak should not be intended behavior for the new int/long unification. 
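A minimal way to put numbers on the growth described above is to watch the process's peak resident set size around the loop. This is only a sketch, assuming a Unix-like platform where the resource module is available (ru_maxrss is reported in kilobytes on Linux and bytes on OS X); on an interpreter with the leak the figure climbs steadily, on a fixed one it stays essentially flat.

    import resource

    def peak_rss():
        # Peak resident set size of this process so far (platform-dependent units).
        return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

    n = 64                 # kept in a variable so the shift is not folded away at compile time
    before = peak_rss()
    i = 0
    while i < 1000000:
        1 << n             # the expression from the report
        i += 1
    after = peak_rss()
    print("peak RSS before: %s  after: %s" % (before, after))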
---------------------------------------------------------------------- >Comment By: Erik Demaine (edemaine) Date: 2004-06-26 16:38 Message: Logged In: YES user_id=265183 Forgot to mention that this arises even on the latest CVS version of Python 2.4, and on other versions of Python 2.4 I had lying around. It does not arise on Python 2.2, which is pre-int/long unification I think. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 From noreply at sourceforge.net Sat Jun 26 18:31:52 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 18:31:57 2004 Subject: [ python-Bugs-980419 ] int left-shift causes memory leak Message-ID: Bugs item #980419, was opened at 2004-06-26 15:36 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None >Priority: 6 Submitted By: Erik Demaine (edemaine) >Assigned to: Raymond Hettinger (rhettinger) Summary: int left-shift causes memory leak Initial Comment: The following Python one-liner causes the Python image size to grow quickly (beyond 100 megabytes within 10 seconds or so on my Linux 2.4.18 box on an Intel Pentium 4): while True: 1 << 64 The point is that 1 << 64 gets automatically turned into a long. Somehow, even after all pointers to this long disappear, it (or something produced along the way) sticks around. The same effect is obtained by the following more natural code: while True: x = 1 << 64 There is an easy workaround; the following code does not cause a memory leak: while True: 1L << 64 However, a memory leak should not be intended behavior for the new int/long unification. ---------------------------------------------------------------------- Comment By: Erik Demaine (edemaine) Date: 2004-06-26 15:38 Message: Logged In: YES user_id=265183 Forgot to mention that this arises even on the latest CVS version of Python 2.4, and on other versions of Python 2.4 I had lying around. It does not arise on Python 2.2, which is pre-int/long unification I think. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 From noreply at sourceforge.net Sat Jun 26 18:42:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 18:42:46 2004 Subject: [ python-Bugs-980352 ] coercion results used dangerously Message-ID: Bugs item #980352, was opened at 2004-06-26 12:26 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: coercion results used dangerously Initial Comment: The C core uses the result of PyNumber_CoerceEx() dangerously: it gets passed to tp_compare, and most tp_compare slots assume they get two objects of the same type. This assumption is never checked, even when a user-defined __coerce__() is called: >>> class X(object): ... def __coerce__(self, other): ... return 4, other ... 
>>> slice(1,2,3) == X() Segmentation fault ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-26 17:42 Message: Logged In: YES user_id=80475 I looked back at one of my ASPN recipes, http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/265894 , and saw that the use of __coerce__ dramatically simplified the code. Also, the API for rich comparisons is not only complex, but it is not entirely sef-consistent. See Tim's "mini-bug" comment in sets.py for an example. IOW, I think it is premature to pull the plug. ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-26 14:21 Message: Logged In: YES user_id=35752 This bug should obviously get fixed but in long term I think __coerce__ should go away. Do you think deprecating it for 2.4 and then removing support for it in 2.5 or 2.6 is feasible? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 From noreply at sourceforge.net Sat Jun 26 20:01:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sat Jun 26 20:01:20 2004 Subject: [ python-Bugs-980419 ] int left-shift causes memory leak Message-ID: Bugs item #980419, was opened at 2004-06-26 15:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 Category: Python Interpreter Core Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 6 Submitted By: Erik Demaine (edemaine) Assigned to: Raymond Hettinger (rhettinger) Summary: int left-shift causes memory leak Initial Comment: The following Python one-liner causes the Python image size to grow quickly (beyond 100 megabytes within 10 seconds or so on my Linux 2.4.18 box on an Intel Pentium 4): while True: 1 << 64 The point is that 1 << 64 gets automatically turned into a long. Somehow, even after all pointers to this long disappear, it (or something produced along the way) sticks around. The same effect is obtained by the following more natural code: while True: x = 1 << 64 There is an easy workaround; the following code does not cause a memory leak: while True: 1L << 64 However, a memory leak should not be intended behavior for the new int/long unification. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-26 19:01 Message: Logged In: YES user_id=80475 Fixed. See Objects/intobject.c 2.111. Thanks for the bug report. ---------------------------------------------------------------------- Comment By: Erik Demaine (edemaine) Date: 2004-06-26 15:38 Message: Logged In: YES user_id=265183 Forgot to mention that this arises even on the latest CVS version of Python 2.4, and on other versions of Python 2.4 I had lying around. It does not arise on Python 2.2, which is pre-int/long unification I think. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980419&group_id=5470 From noreply at sourceforge.net Sun Jun 27 03:50:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 03:50:13 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? 
We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. which does which does Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inhreted a dictionnary. I want to use the eval() function as a simple expression evaluator. 
I have the follwing dictionnary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionnary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. 
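For what it's worth, later Python releases (2.5 and up) grew essentially this hook for dict subclasses under the name __missing__: it is consulted by dict.__getitem__ only after the normal C-level lookup fails, so found keys pay nothing extra. The __keyerror__ name proposed here was never adopted. A small sketch of the auto-populating behaviour described above, assuming such an interpreter:

    class AutoDict(dict):
        def __missing__(self, key):
            # Only reached when the ordinary (fast) dict lookup fails.
            value = "<made up for %r>" % (key,)
            self[key] = value      # optionally remember it for future lookups
            return value

    d = AutoDict(a=1)
    print(d["a"])   # 1 -- ordinary lookup, __missing__ never called
    print(d["b"])   # "<made up for 'b'>" -- filled in on demand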
---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Sun Jun 27 04:37:29 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 04:37:32 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None >Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 03:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. 
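Since the generalization under discussion only touches the locals argument, the spreadsheet from the initial comment works once the sheet is passed as eval()'s third argument rather than its second. A sketch of that usage, assuming an interpreter that accepts an arbitrary mapping as the locals (Python 3 does; 2.4 would if this patch goes in as described):

    class SpreadSheet:
        def __init__(self):
            self._cells = {}
        def __setitem__(self, key, formula):
            self._cells[key] = formula
        def __getitem__(self, key):
            # Pass the sheet as *locals*; globals stays a real (empty) dict.
            return eval(self._cells[key], {}, self)

    ss = SpreadSheet()
    ss['a1'] = '5'
    ss['a2'] = 'a1 * 5'
    print(ss['a2'])   # 25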
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. which does which does Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. 
The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inhreted a dictionnary. I want to use the eval() function as a simple expression evaluator. I have the follwing dictionnary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionnary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? 
Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Sun Jun 27 07:33:42 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 07:33:45 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 19:36 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Armin Rigo (arigo) Date: 2004-06-27 11:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using list comprehension. Moreover the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(), it's both slower and not what we want because it prevents subclasses of dicts to override __xxxitem__). 
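As a concrete illustration of "catch definitions while they appear": a dict subclass passed as the locals of exec sees every top-level binding as it happens, provided name stores on a non-exact dict go through the mapping protocol. This is a sketch assuming Python 3 behaviour (where exec() accepts any mapping as its locals argument), which is the behaviour the PyDict_CheckExact() approach above would give 2.4:

    class Recorder(dict):
        def __setitem__(self, name, value):
            print("defined %s = %r" % (name, value))
            dict.__setitem__(self, name, value)

    ns = Recorder()
    exec("x = 1\ny = x + 1\ndef f():\n    return 42\n", {}, ns)
    print(sorted(ns))    # ['f', 'x', 'y']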
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 08:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 07:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 16:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 21:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 07:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 16:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. 
---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 18:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. which does which does Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 09:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inhreted a dictionnary. I want to use the eval() function as a simple expression evaluator. I have the follwing dictionnary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionnary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 22:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. 
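The example above fails because the dictionary is handed to eval() as its globals (and, implicitly, locals), where the lookup uses the C-level dict routines directly and never reaches the overridden __getitem__ -- which is exactly what the traceback shows. Passing the mapping as the locals argument is what the change under discussion enables. A sketch of the same lazy-evaluating dictionary written that way, assuming an interpreter whose name lookups fall back to the mapping protocol for anything that is not exactly a dict (Python 3, or 2.4 with this patch):

    class LazyDict(dict):
        def __getitem__(self, key):
            formula = dict.__getitem__(self, key)
            # Evaluate the stored expression with this mapping as the locals,
            # so names inside it come back through __getitem__ recursively.
            return eval(formula, {}, self)

    d = LazyDict()
    d['a'] = '1'
    d['b'] = '2'
    d['c'] = 'a + b'
    print(d['c'])   # 3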
Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 19:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 15:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-23 04:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 19:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Sun Jun 27 13:36:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 13:36:22 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 12:36 Message: Logged In: YES user_id=80475 Changed to PyDict_CheckExact(). Are you sure about having to change the sets and dels. I've tried several things at the interactive prompt and can't get it to fail: >>> [locals() for i in (2,3)] Do you have any examples (in part, so I can use them as test cases)? ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-27 06:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using list comprehension. Moreover the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(), it's both slower and not what we want because it prevents subclasses of dicts to override __xxxitem__). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 03:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. 
While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. 
which does which does Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inhreted a dictionnary. I want to use the eval() function as a simple expression evaluator. I have the follwing dictionnary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionnary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. 
Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Sun Jun 27 14:08:20 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 14:08:24 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) >Assigned to: Raymond Hettinger (rhettinger) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 13:08 Message: Logged In: YES user_id=80475 Nix, that last comment. Have examples that call setitem(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 12:36 Message: Logged In: YES user_id=80475 Changed to PyDict_CheckExact(). Are you sure about having to change the sets and dels. 
I've tried several things at the interactive prompt and can't get it to fail: >>> [locals() for i in (2,3)] Do you have any examples (in part, so I can use them as test cases)? ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-27 06:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using list comprehension. Moreover the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(), it's both slower and not what we want because it prevents subclasses of dicts to override __xxxitem__). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 03:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. 
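A sketch of the kind of test case being asked for above, under the generalization discussed in this thread (eval() accepting an arbitrary mapping as its locals argument, which is the behaviour that eventually shipped in Python 2.4). In Python 2, a list comprehension compiled by eval() writes its loop variable into the locals namespace via STORE_NAME, so a __setitem__ override on a dict subclass can observe the stores. The Recorder class and names are illustrative only, not part of any patch:

    class Recorder(dict):
        # record every (name, value) pair stored through STORE_NAME
        def __init__(self):
            dict.__init__(self)
            self.stored = []
        def __setitem__(self, key, value):
            self.stored.append((key, value))
            dict.__setitem__(self, key, value)

    env = Recorder()
    eval("[i for i in (2, 3)]", {}, env)
    print env.stored
    # expect ('i', 2) and ('i', 3) to appear; depending on the version,
    # the hidden list-comprehension accumulator may be recorded as well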
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Armin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something else than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inherited a dictionary. I want to use the eval() function as a simple expression evaluator. I have the following dictionary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval.
However, eval() tried to evaluate the expression using the content of the dictionnary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double- underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. 
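Pulling the thread together, here is a minimal sketch of the spreadsheet-style usage the submitter and pfremy are after, assuming an interpreter where eval() accepts an arbitrary mapping as its locals argument (Python 2.4 and later, per the change discussed above). The key difference from pfremy's attempt is that the mapping is passed as the third (locals) argument, so name lookups go through __getitem__ and formulas see evaluated values rather than raw strings; the class name and data are illustrative:

    class FormulaDict(dict):
        # values are stored as formula strings and evaluated on lookup
        def __getitem__(self, key):
            formula = dict.__getitem__(self, key)
            return eval(formula, {}, self)

    d = FormulaDict()
    d['a'] = '1'
    d['b'] = '2'
    d['c'] = 'a + b'
    print d['c']        # prints 3

Note that the sketch does no cycle detection: a formula that refers to itself, directly or indirectly, will recurse without bound.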
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Sun Jun 27 16:59:40 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 17:00:06 2004 Subject: [ python-Bugs-979967 ] gethostbyname is broken on hpux 11.11 Message-ID: Bugs item #979967, was opened at 2004-06-25 21:50 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None >Priority: 5 Submitted By: Ehab Teima (ehab_teima) Assigned to: Nobody/Anonymous (nobody) Summary: gethostbyname is broken on hpux 11.11 Initial Comment: This bug exists in Python 2.3.2, 2.3.3 and 2.3.4. socket.gethostbyname is broken on hpux HP-UX B.11.11 U. This is what I'm getting when I try to call socket.gethostbyname('server'): Traceback (most recent call last): File "../../test.py", line 2, in ? a=socket.gethostbyname('myserver') socket.gaierror: (8, 'host nor service provided, or not known') I got the same error when I tried getaddrinfo(server, port). ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 22:59 Message: Logged In: YES user_id=21627 Can you investigate this in more detail, and suggest a patch? If not, it is unlikely that it will be fixed anytime soon, because the bug is likely specific to the operating system, and might be even specific to your installation. ---------------------------------------------------------------------- Comment By: Ehab Teima (ehab_teima) Date: 2004-06-25 22:15 Message: Logged In: YES user_id=1069522 The servername is supposed to be resolved via DNS query to get thje ip address. If I use the ip address instead of the servername, the function works fine [i.e. socket.gethostbyname('x.x.x.x') works fine.] ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979967&group_id=5470 From noreply at sourceforge.net Sun Jun 27 17:01:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 17:01:22 2004 Subject: [ python-Bugs-978952 ] Remove all email from the archives Message-ID: Bugs item #978952, was opened at 2004-06-24 15:51 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978952&group_id=5470 Category: Python Library Group: Feature Request >Status: Closed >Resolution: Rejected Priority: 5 Submitted By: Encolpe DEGOUTE (encolpe) Assigned to: Nobody/Anonymous (nobody) Summary: Remove all email from the archives Initial Comment: Hi, here come a patch for the old 2.0.11 to remove any email stuff from archives (header and body). It's a very usefull for antispam policy. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 23:01 Message: Logged In: YES user_id=21627 Please submit this patch to the mailman project; mailman is written in Python, but not part of the Python project. Rejecting it here. 
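For the HP-UX gethostbyname report above (bug 979967), a small diagnostic sketch along the lines of what Martin asks for. It only separates the resolver calls so the failing layer can be identified and assumes nothing about the actual cause; the host name and address are placeholders:

    import socket

    host = 'myserver'          # placeholder
    checks = [
        ('gethostbyname(name)', lambda: socket.gethostbyname(host)),
        ('getaddrinfo(name)',   lambda: socket.getaddrinfo(host, None)),
        ('gethostbyname(IP)',   lambda: socket.gethostbyname('10.0.0.1')),
    ]
    for label, call in checks:
        try:
            print label, '->', call()
        except socket.error, e:
            print label, 'FAILED:', e

Since the reporter says a numeric address works, comparing these results against the system's resolver configuration (nsswitch.conf or the HP-UX equivalent) may show whether only DNS-backed lookups are affected.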
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978952&group_id=5470 From noreply at sourceforge.net Sun Jun 27 17:03:11 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 17:03:16 2004 Subject: [ python-Bugs-978632 ] configure and gmake fail in openbsd 3.5 i386 Message-ID: Bugs item #978632, was opened at 2004-06-24 02:16 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978632&group_id=5470 Category: Installation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: - (panterrap) Assigned to: Nobody/Anonymous (nobody) Summary: configure and gmake fail in openbsd 3.5 i386 Initial Comment: Problem with compiler of python-2.3.4 in OpenBSD 3.5 i386 # ./configure --prefix=/usr/local/python-2.3.4 --with-cxx=/usr/bin/gcc 4 warnings sections in configure ------------ configure: WARNING: ncurses.h: present but cannot be compiled configure: WARNING: ncurses.h: check for missing prerequisite headers? configure: WARNING: ncurses.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## ------------- configure: WARNING: sys/audioio.h: present but cannot be compiled configure: WARNING: sys/audioio.h: check for missing prerequisite headers? configure: WARNING: sys/audioio.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## -------------- configure: WARNING: sys/lock.h: present but cannot be compiled configure: WARNING: sys/lock.h: check for missing prerequisite headers? configure: WARNING: sys/lock.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## -------------- configure: WARNING: sys/select.h: present but cannot be compiled configure: WARNING: sys/select.h: check for missing prerequisite headers? configure: WARNING: sys/select.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## --------------- my compilation in this platform # gmake /usr/bin/gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Modules/ccpython.o ./Modules/ccpython.cc In file included from /usr/include/sys/select.h:38, from Include/pyport.h:118, from Include/Python.h:48, from ./Modules/ccpython.cc:3: /usr/include/sys/event.h:53: syntax error before `;' /usr/include/sys/event.h:55: syntax error before `;' gmake: *** [Modules/ccpython.o] Error 1 ------------- P.D.: Python-2.2.3 in this platform ok ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 23:03 Message: Logged In: YES user_id=21627 Can you please analyse the problem in more detail, and suggest a patch? If not, can you please attach the config.log that you got when running configure? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978632&group_id=5470 From noreply at sourceforge.net Sun Jun 27 17:06:25 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 17:06:32 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-23 04:17 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 Category: IDLE Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 23:06 Message: Logged In: YES user_id=21627 I fail to see any kind of bug in this report. Yes, the path to the source code is hard-coded in the .pyc file, but that is intentional, and not a bug. If you want the tracebacks to be correct on multiple installations, you have to recreate the .pyc files every time. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 16:48 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 16:47 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. 
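A short sketch of what the report is describing, for anyone reproducing it: the source path is recorded at compile time in the code objects stored in the .pyc (co_filename), independently of where the .pyc is later imported from, and it is that recorded path which tracebacks and tools display. The module and function names here are hypothetical:

    import mymodule            # hypothetical module imported from a copied .pyc

    print mymodule.__file__    # where the .pyc was actually found on this machine
    print mymodule.some_func.func_code.co_filename
                               # source path baked in when the .pyc was compiled;
                               # this is the path tracebacks will show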
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Sun Jun 27 20:02:06 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 20:02:15 2004 Subject: [ python-Bugs-980925 ] Possible contradiction in "extending" and PyArg_ParseTuple() Message-ID: Bugs item #980925, was opened at 2004-06-27 17:02 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Possible contradiction in "extending" and PyArg_ParseTuple() Initial Comment: In section 1.3 of the "extending" tutorial (http://www.python.org/ dev/doc/devel/ext/backToExample.html), in the discussion of the use of PyArg_ParseTuple() for a string argument, it says that "in Standard C, the variable ... should properly be declared as "const char *" ". But if you look at any example code (such as in section 1.7, which covers parsing arguments; http://www.python.org/dev/ doc/devel/ext/parseTuple.html) and the docs for PyArg_ParseTuple() (found at http://www.python.org/dev/doc/ devel/api/arg-parsing.html) just use ``char *`` as the type for the argument for a string. Which is correct? I suspect that it is not required but more correct to have the variables declared ``const char *`` since you are not supposed to play with the string that is being pointed to (which is probably why the Unicode arguments in the docs for PyArg_ParseTuple() say it is ``const char *`` instead of just ``char *``). If this is true I will change the docs and the tutorial to use ``const char *``. But if it isn't, I will rip out the line saying that you should ``const char *`` since this is a contradiction at the moment of recommended practice. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 From noreply at sourceforge.net Sun Jun 27 19:00:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 20:12:17 2004 Subject: [ python-Bugs-897820 ] db4 4.2 == :-( (test_anydbm and test_bsddb3) Message-ID: Bugs item #897820, was opened at 2004-02-15 22:03 Message generated for change (Comment added) made by greg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897820&group_id=5470 Category: Python Library >Group: Python 2.3 Status: Open >Resolution: Remind Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Gregory P. Smith (greg) Summary: db4 4.2 == :-( (test_anydbm and test_bsddb3) Initial Comment: This machine, running fedora core 2 (test) has db4 4.2.52 installed. test_anydbm fails utterly with this combination. 
3 of the 4 tests fail, the failing part is the same in each case: File "Lib/anydbm.py", line 83, in open return mod.open(file, flag, mode) File "Lib/dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "Lib/bsddb/__init__.py", line 293, in hashopen d.open(file, db.DB_HASH, flags, mode) DBInvalidArgError: (22, 'Invalid argument -- DB_TRUNCATE illegal with locking specified') test_bsddb passes, but test_bsddb3 fails with a similar error: test test_bsddb3 failed -- Traceback (most recent call last): File "Lib/bsddb/test/test_compat.py", line 82, in test04_n_flag f = hashopen(self.filename, 'n') File "Lib/bsddb/__init__.py", line 293, in hashopen d.open(file, db.DB_HASH, flags, mode) DBInvalidArgError: (22, 'Invalid argument -- DB_TRUNCATE illegal with locking specified') ---------------------------------------------------------------------- >Comment By: Gregory P. Smith (greg) Date: 2004-06-27 16:00 Message: Logged In: YES user_id=413 A workaround for this BerkeleyDB change has been committed to HEAD. dist/src/Lib/bsddb/__init__.py rev 1.15 I've marked this bug as "python 2.3" now as a reminder for me to commit this to the 2.3 maintenance branch before 2.3.5. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-18 04:27 Message: Logged In: YES user_id=29957 Regardless of whether there's a workaround or not, the test suite should not fail. Fedora Core is a fairly well known distro, and while FC2 is probably the first to ship with db 4.2, I'm sure it won't be the last. ---------------------------------------------------------------------- Comment By: Gregory P. Smith (greg) Date: 2004-06-16 15:40 Message: Logged In: YES user_id=413 As Tim Peters pointed out on python-dev in march: > I suspect Sleepycat screwed us there, changing the rules in midstream. > Someone on c.l.py appeared to track down the same thing here, but in an app > instead of in our test suite: > > http://mail.python.org/pipermail/python-list/2004-May/220168.html > > The change log of Berkeley DB 4.2.52 says "9. Fix a bug to now > disallow DB_TRUNCATE on opens in locking environments, since we > cannot prevent race conditions ..." leaving the bug open until i look to see if there is a workaround. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=897820&group_id=5470 From noreply at sourceforge.net Sun Jun 27 20:33:04 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 20:34:20 2004 Subject: [ python-Bugs-980925 ] Possible contradiction in "extending" and PyArg_ParseTuple() Message-ID: Bugs item #980925, was opened at 2004-06-27 20:02 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Possible contradiction in "extending" and PyArg_ParseTuple() Initial Comment: In section 1.3 of the "extending" tutorial (http://www.python.org/ dev/doc/devel/ext/backToExample.html), in the discussion of the use of PyArg_ParseTuple() for a string argument, it says that "in Standard C, the variable ... should properly be declared as "const char *" ". 
But if you look at any example code (such as in section 1.7, which covers parsing arguments; http://www.python.org/dev/ doc/devel/ext/parseTuple.html) and the docs for PyArg_ParseTuple() (found at http://www.python.org/dev/doc/ devel/api/arg-parsing.html) just use ``char *`` as the type for the argument for a string. Which is correct? I suspect that it is not required but more correct to have the variables declared ``const char *`` since you are not supposed to play with the string that is being pointed to (which is probably why the Unicode arguments in the docs for PyArg_ParseTuple() say it is ``const char *`` instead of just ``char *``). If this is true I will change the docs and the tutorial to use ``const char *``. But if it isn't, I will rip out the line saying that you should ``const char *`` since this is a contradiction at the moment of recommended practice. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2004-06-27 20:33 Message: Logged In: YES user_id=31435 Most of Python, and its docs, were written before C89 ("Standard C") was required. const is used sporadically as a result, mostly just in newer code and docs. Changing examples to use const char * should be fine, as that is best C practice. Just make sure the examples still work . ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 From noreply at sourceforge.net Sun Jun 27 20:51:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 20:51:45 2004 Subject: [ python-Bugs-980938 ] smtplib.SMTP prints debug stuff to stdout Message-ID: Bugs item #980938, was opened at 2004-06-27 19:51 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980938&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Nobody/Anonymous (nobody) Summary: smtplib.SMTP prints debug stuff to stdout Initial Comment: The SMTP class in smtplib.py sends its debug output to stdout. This should at least be changed to stderr. Better yet would be to make the destination configurable via a set_debug_output() method. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980938&group_id=5470 From noreply at sourceforge.net Sun Jun 27 23:53:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Sun Jun 27 23:53:08 2004 Subject: [ python-Bugs-980986 ] Missing space in sec 3.3.1 Message-ID: Bugs item #980986, was opened at 2004-06-27 23:53 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980986&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Christopher Ingram (cold_drink) Assigned to: Nobody/Anonymous (nobody) Summary: Missing space in sec 3.3.1 Initial Comment: In http://www.python.org/doc/current/ref/customization.html there is a missing space under __hash__. "dictionaryoperations" is in a sentence, and I'm pretty sure that should be "dictionary operations". 
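For the smtplib report above (bug 980938), a workaround sketch until the destination is configurable: since the debug output currently goes to sys.stdout, temporarily pointing sys.stdout at sys.stderr around the debug-enabled calls reroutes it. Server and message details are placeholders:

    import sys, smtplib

    def with_debug_on_stderr(func, *args, **kwargs):
        # run func with stdout temporarily routed to stderr
        saved = sys.stdout
        sys.stdout = sys.stderr
        try:
            return func(*args, **kwargs)
        finally:
            sys.stdout = saved

    s = smtplib.SMTP('localhost')      # placeholder server
    s.set_debuglevel(1)                # debug chatter would normally hit stdout
    with_debug_on_stderr(s.sendmail, 'me@example.com', ['you@example.com'],
                         'Subject: test\r\n\r\nhello')
    s.quit()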
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980986&group_id=5470 From noreply at sourceforge.net Mon Jun 28 00:34:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 00:34:25 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-22 21:17 Message generated for change (Comment added) made by jwpolley You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 Category: IDLE Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- >Comment By: Jonathan Polley (jwpolley) Date: 2004-06-27 23:34 Message: Logged In: YES user_id=391068 This bug was reported because the encoding of the path in the .pyc generates errors in some tools (namely IDLE) as well as generates erroneous tracebacks when exceptions are raised. When some of the not very software savvy script writers see "/rfs/ proj/cse/scripts" in their traceback, instead of the "P:/scripts" that they expect, it leads to confusion. I'm not sure why the __file__ and exception mechanism use different methods for determining the path to the file, but they do. The same can be said for the value of having the fully qualified path encoded in the .pyc. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 16:06 Message: Logged In: YES user_id=21627 I fail to see any kind of bug in this report. Yes, the path to the source code is hard-coded in the .pyc file, but that is intentional, and not a bug. If you want the tracebacks to be correct on multiple installations, you have to recreate the .pyc files every time. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 09:48 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). 
There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 09:47 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Mon Jun 28 00:52:25 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 00:52:30 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-23 04:17 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 Category: IDLE Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2004-06-28 06:52 Message: Logged In: YES user_id=21627 You will have to report errors on tools one by one, and some of them may not be fixable. From this report, it is not clear what precisely the error in IDLE is. In any case, the behaviour that the .pyc compiler embeds the source file path cannot and will not change. The mechanism for __file__ is different because __file__, for a .pyc file, refers to that very .pyc file, not to the source file. OTOH, the debug information refers to the source file. If the .pyc file says the source is in /rfs/proj/cse/scripts, you should arrange that the source file really is in that location. If you move the files to a different machine, you will need to delete the .pyc files. I do not completely agree that the traceback is erroneous. 
If your complaint is that it refers to a non-existing file: this could be fixed by not printing the name of the file if it doesn't exist. However, I would not think that this would be an improvement. If you think that the traceback should magically guess where the source file is: this is not implementable in a reliable way. ---------------------------------------------------------------------- Comment By: Jonathan Polley (jwpolley) Date: 2004-06-28 06:34 Message: Logged In: YES user_id=391068 This bug was reported because the encoding of the path in the .pyc generates errors in some tools (namely IDLE) as well as generates erroneous tracebacks when exceptions are raised. When some of the not very software savvy script writers see "/rfs/ proj/cse/scripts" in their traceback, instead of the "P:/scripts" that they expect, it leads to confusion. I'm not sure why the __file__ and exception mechanism use different methods for determining the path to the file, but they do. The same can be said for the value of having the fully qualified path encoded in the .pyc. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 23:06 Message: Logged In: YES user_id=21627 I fail to see any kind of bug in this report. Yes, the path to the source code is hard-coded in the .pyc file, but that is intentional, and not a bug. If you want the tracebacks to be correct on multiple installations, you have to recreate the .pyc files every time. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 16:48 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 16:47 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Mon Jun 28 00:59:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 00:59:09 2004 Subject: [ python-Bugs-980925 ] Possible contradiction in "extending" and PyArg_ParseTuple() Message-ID: Bugs item #980925, was opened at 2004-06-27 17:02 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Possible contradiction in "extending" and PyArg_ParseTuple() Initial Comment: In section 1.3 of the "extending" tutorial (http://www.python.org/ dev/doc/devel/ext/backToExample.html), in the discussion of the use of PyArg_ParseTuple() for a string argument, it says that "in Standard C, the variable ... should properly be declared as "const char *" ". 
But if you look at any example code (such as in section 1.7, which covers parsing arguments; http://www.python.org/dev/ doc/devel/ext/parseTuple.html) and the docs for PyArg_ParseTuple() (found at http://www.python.org/dev/doc/ devel/api/arg-parsing.html) just use ``char *`` as the type for the argument for a string. Which is correct? I suspect that it is not required but more correct to have the variables declared ``const char *`` since you are not supposed to play with the string that is being pointed to (which is probably why the Unicode arguments in the docs for PyArg_ParseTuple() say it is ``const char *`` instead of just ``char *``). If this is true I will change the docs and the tutorial to use ``const char *``. But if it isn't, I will rip out the line saying that you should ``const char *`` since this is a contradiction at the moment of recommended practice. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-27 21:59 Message: Logged In: YES user_id=357491 OK, I will definitely change the code examples. How about the docs for PyArg_ParseTuple() for "s" and friends? Should that stay ``char *`` as its listed argument type, or should I change that as well (Unicode arguments already say ``const char *`` so there is precedence). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-27 17:33 Message: Logged In: YES user_id=31435 Most of Python, and its docs, were written before C89 ("Standard C") was required. const is used sporadically as a result, mostly just in newer code and docs. Changing examples to use const char * should be fine, as that is best C practice. Just make sure the examples still work . ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 From noreply at sourceforge.net Mon Jun 28 08:15:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 08:15:39 2004 Subject: [ python-Bugs-977934 ] Python compiler encodes path to source in .pyc Message-ID: Bugs item #977934, was opened at 2004-06-22 21:17 Message generated for change (Comment added) made by jwpolley You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 Category: IDLE Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Submitted By: Jonathan Polley (jwpolley) Assigned to: Nobody/Anonymous (nobody) Summary: Python compiler encodes path to source in .pyc Initial Comment: This bug was notices in version 2.0, but I was able to reproduce it in version 2.3.4. When a python module is compiled, the path to that file is encoded in the .pyc file. This causes problems when a multi-platform development environment is used, in my case it is a hybrid UNIX/Windows platform. To reproduce the problem, perform the following steps from within IDLE: 1) run python 2) import crlf 3) exit python 4) copy the crlf.py and crlf.pyc files from Tools/Scripts to another directory 5) run python 6) add the path to the copies of crlf.py* to the start of the system path. 7) import crlf 8) print crlf.__file__ (this will yield the proper path to the files) 9) using the Open Module..., try to open the crlf module. You will get an error saying that the module was not found. This is a major problem when doing multi-platform debugging. 
If an exception is raised in a module that was compiled on another system (with a different path or OS), the debugger can not find the file to open it so it can be debugged. It also makes the "Open Module..." menu option unreliable. If you look in the .pyc file you will find the path to the location where the file was originally generated. ---------------------------------------------------------------------- >Comment By: Jonathan Polley (jwpolley) Date: 2004-06-28 07:15 Message: Logged In: YES user_id=391068 >If you move the files to a different machine, you will need to delete the .pyc files. In my case, the files are not moved between machines. They are referenced by both Windows and UNIX systems simultaneously. The platform that references the .py file first will generate the .pyc, so we have a mixture of UNIX and Windows paths embedded in the .pyc files. In the case when we deliver python scripts, we only deliver compiled python files. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 23:52 Message: Logged In: YES user_id=21627 You will have to report errors on tools one by one, and some of them may not be fixable. From this report, it is not clear what precisely the error in IDLE is. In any case, the behaviour that the .pyc compiler embeds the source file path cannot and will not change. The mechanism for __file__ is different because __file__, for a .pyc file, refers to that very .pyc file, not to the source file. OTOH, the debug information refers to the source file. If the .pyc file says the source is in /rfs/proj/cse/scripts, you should arrange that the source file really is in that location. If you move the files to a different machine, you will need to delete the .pyc files. I do not completely agree that the traceback is erroneous. If your complaint is that it refers to a non-existing file: this could be fixed by not printing the name of the file if it doesn't exist. However, I would not think that this would be an improvement. If you think that the traceback should magically guess where the source file is: this is not implementable in a reliable way. ---------------------------------------------------------------------- Comment By: Jonathan Polley (jwpolley) Date: 2004-06-27 23:34 Message: Logged In: YES user_id=391068 This bug was reported because the encoding of the path in the .pyc generates errors in some tools (namely IDLE) as well as generates erroneous tracebacks when exceptions are raised. When some of the not very software savvy script writers see "/rfs/ proj/cse/scripts" in their traceback, instead of the "P:/scripts" that they expect, it leads to confusion. I'm not sure why the __file__ and exception mechanism use different methods for determining the path to the file, but they do. The same can be said for the value of having the fully qualified path encoded in the .pyc. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2004-06-27 16:06 Message: Logged In: YES user_id=21627 I fail to see any kind of bug in this report. Yes, the path to the source code is hard-coded in the .pyc file, but that is intentional, and not a bug. If you want the tracebacks to be correct on multiple installations, you have to recreate the .pyc files every time. 
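A sketch of the "recreate the .pyc files" advice for setups like the one described here: delete the stale compiled files and rebuild them in place so the recorded source paths match the local filesystem. (compileall.compile_dir() also accepts a ddir argument that is used as the directory name recorded for tracebacks, which may help when only compiled files are shipped.) The path is the one mentioned in the report, used purely as an example:

    import os, compileall

    def refresh_compiled(root):
        # remove compiled files whose embedded source paths may be stale
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.endswith('.pyc') or name.endswith('.pyo'):
                    os.remove(os.path.join(dirpath, name))
        # recompile so co_filename matches this machine's paths
        compileall.compile_dir(root)

    refresh_compiled('/rfs/proj/cse/scripts')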
---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 09:48 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- Comment By: Nick Bastin (mondragon) Date: 2004-06-25 09:47 Message: Logged In: YES user_id=430343 Changing this to be filed against IDLE, not the parser/compiler (should also fix the debugger as well). There is no magic value that the compiler could put in there that would make this right, so external tools will just have to deal with that. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=977934&group_id=5470 From noreply at sourceforge.net Mon Jun 28 11:03:24 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 11:03:28 2004 Subject: [ python-Bugs-981299 ] Rsync protocol missing from urlparse Message-ID: Bugs item #981299, was opened at 2004-06-28 15:03 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981299&group_id=5470 Category: Python Library Group: Python 2.2.3 Status: Open Resolution: None Priority: 5 Submitted By: Morten Kjeldgaard (mok0) Assigned to: Nobody/Anonymous (nobody) Summary: Rsync protocol missing from urlparse Initial Comment: The rsync protocol is missing from urlparse.py. >>> import urlparse >>> urlparse.urlparse("rsync://a.b.c/d/e") ('rsync', '', '//a.b.c/d/e', '', '', '') The machine name a.b.c should be in field 1 of the result, and the directory part /e/f in field 2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981299&group_id=5470 From noreply at sourceforge.net Mon Jun 28 11:51:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 11:51:22 2004 Subject: [ python-Bugs-980925 ] Possible contradiction in "extending" and PyArg_ParseTuple() Message-ID: Bugs item #980925, was opened at 2004-06-27 19:02 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Possible contradiction in "extending" and PyArg_ParseTuple() Initial Comment: In section 1.3 of the "extending" tutorial (http://www.python.org/ dev/doc/devel/ext/backToExample.html), in the discussion of the use of PyArg_ParseTuple() for a string argument, it says that "in Standard C, the variable ... should properly be declared as "const char *" ". But if you look at any example code (such as in section 1.7, which covers parsing arguments; http://www.python.org/dev/ doc/devel/ext/parseTuple.html) and the docs for PyArg_ParseTuple() (found at http://www.python.org/dev/doc/ devel/api/arg-parsing.html) just use ``char *`` as the type for the argument for a string. Which is correct? 
I suspect that it is not required but more correct to have the variables declared ``const char *`` since you are not supposed to play with the string that is being pointed to (which is probably why the Unicode arguments in the docs for PyArg_ParseTuple() say it is ``const char *`` instead of just ``char *``). If this is true I will change the docs and the tutorial to use ``const char *``. But if it isn't, I will rip out the line saying that you should ``const char *`` since this is a contradiction at the moment of recommended practice. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-28 10:51 Message: Logged In: YES user_id=80475 -0 on changing anything here. While technically correct, the proposed revisions can potentially create issues where none currently exist. I ignore const and things work find. The last thing I want to see are coercions like (const char *) sprouting-up here and there. --my two cents ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-27 23:59 Message: Logged In: YES user_id=357491 OK, I will definitely change the code examples. How about the docs for PyArg_ParseTuple() for "s" and friends? Should that stay ``char *`` as its listed argument type, or should I change that as well (Unicode arguments already say ``const char *`` so there is precedence). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-27 19:33 Message: Logged In: YES user_id=31435 Most of Python, and its docs, were written before C89 ("Standard C") was required. const is used sporadically as a result, mostly just in newer code and docs. Changing examples to use const char * should be fine, as that is best C practice. Just make sure the examples still work . ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 From noreply at sourceforge.net Mon Jun 28 13:43:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 13:44:02 2004 Subject: [ python-Feature Requests-950644 ] Allow any lvalue for function definitions Message-ID: Feature Requests item #950644, was opened at 2004-05-08 21:52 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=950644&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Albert Torpey (dtorp) >Assigned to: Nobody/Anonymous (nobody) Summary: Allow any lvalue for function definitions Initial Comment: A definition like: def M(x): return 2*x is the same as: M = lambda x: 2*x With the latter form, I can use any lvalue: A[0] = lambda x: 2*x B.f = lambda x: 2*x But with the first form, you're locked into just using a plain variable name. If this were fixed, it wouldn't break anything else but would be useful for making method definitons outside of a class definition: This came up when I was experimenting with David MacQuigg's ideas for prototype OO. 
I want to write something like: Account = Object.clone() Account.balance = 0 def Account.deposit(self, v): self.balance += v Unfortunately, the latter has to be written: def Account.deposit(self, v): self.balance += v Account.deposit = deposit ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-28 13:43 Message: Logged In: YES user_id=6380 This is the kind of thing that needs a lot of thought going into it to decide whether it is a net improvement to the language. Right now my gut feeling is that it's not worth the complexity, and more likely to be used towards unnecessary obfuscation. The redability gain is minimal if not negative IMO. Also, I've sometimes typed "def self.foo(args):" instead of "def foo(self, args):" suggesting that there's more than one intuitive way of interpreting the proposed syntax. Another minus point. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-24 13:08 Message: Logged In: YES user_id=80475 Guido, are you open to this? If so, I would be happy to draft a patch. I wouldn't expect it to become mainstream, but it would open the door to working with namespaces more directly. AFAICT, there is no downside to allowing this capability. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-21 01:57 Message: Logged In: YES user_id=80475 I think this should be made possible. It allows for alternate coding styles wihout harming anything else. The Lua programming language has something similar. It is a lightweight, non-OO language that revolves around making the best possible use of namespaces. Direct assignments into a namespace come up in several contexts throughout the language and are key to Lua's flexibility (using one concept to meet all needs). My only concern is that "def" and "class" statements also have the side effect of binding the __name__ attribute. We would have to come up with a lambda style placeholder for the attribute. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-05-19 20:56 Message: Logged In: YES user_id=99874 I'm highly dubious of this. I see little advantage to doing the definition and storing the value in a single line, mostly because I rarely want to do such a thing. Your example may be convincing in Prothon or some relative, but in Python the sensible way to do it is a normal method. I'd suggest that if you want this idea to go anywhere that you try posting this to c.l.py and seeing if you can drum up interest and support there. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=950644&group_id=5470 From noreply at sourceforge.net Mon Jun 28 13:54:48 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 13:54:51 2004 Subject: [ python-Bugs-970799 ] Pyton 2.3.4 Make Test Fails on Mac OS X Message-ID: Bugs item #970799, was opened at 2004-06-10 16:42 Message generated for change (Comment added) made by dekiefer You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 3 Submitted By: D. 
Evan Kiefer (dekiefer) Assigned to: Brett Cannon (bcannon) Summary: Pyton 2.3.4 Make Test Fails on Mac OS X Initial Comment: Under Mac OSX 10.3.4 with latest security update. Power Mac G4 733MHz Trying to install Zope 2.7.0 with Python 2.3.4. I first used fink to install 2.3.4 but Zope could find module 'os' to import. I then followed the instructions at: http://zope.org/Members/jens/docs/Document.2003-12-27.2431/ document_view to install Python with the default configure. Unlike under fink, doing this allowed me to run 'make test'. It too noted import problems for 'os' and 'site'. test_tempfile 'import site' failed; use -v for traceback Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/tf_inherit_check.py", line 6, in ? import os ImportError: No module named os test test_tempfile failed -- Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/test_tempfile.py", line 307, in test_noinherit self.failIf(retval > 0, "child process reports failure") File "/Volumes/Spielen/Python-2.3.4/Lib/unittest.py", line 274, in failIf if expr: raise self.failureException, msg AssertionError: child process reports failure test_atexit 'import site' failed; use -v for traceback Traceback (most recent call last): File "@test.py", line 1, in ? import atexit ImportError: No module named atexit test test_atexit failed -- '' == "handler2 (7,) {'kw': 'abc'}\nhandler2 () {}\nhandler1\n" test_audioop ---------- test_poll skipped -- select.poll not defined -- skipping test_poll test_popen 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback test_popen2 ------------------- 229 tests OK. 2 tests failed: test_atexit test_tempfile 24 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_dl test_email_codecs test_gl test_imgfile test_largefile test_linuxaudiodev test_locale test_nis test_normalization test_ossaudiodev test_pep277 test_poll test_socket_ssl test_socketserver test_sunaudiodev test_timeout test_urllibnet test_winreg test_winsound Those skips are all expected on darwin. make: *** [test] Error 1 deksmacintosh:3-> ---------------------------------------------------------------------- >Comment By: D. Evan Kiefer (dekiefer) Date: 2004-06-28 10:54 Message: Logged In: YES user_id=318754 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? See above, the second time I installed without fink and ran the tests directly. The errors are from 'make test' done without fink. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-17 14:31 Message: Logged In: YES user_id=357491 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? I suspect the test_atexit failure is a Fink-specific issue and the test_tempfile failure was just a timing quirk since it has a threaded and that can make it sensitive to timing. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 From noreply at sourceforge.net Mon Jun 28 15:31:15 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 15:31:21 2004 Subject: [ python-Bugs-979872 ] On HPUX 11i universal newlines seems to cause readline(s) to Message-ID: Bugs item #979872, was opened at 2004-06-25 13:23 Message generated for change (Comment added) made by dmcisaac You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979872&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: dmcisaac (dmcisaac) Assigned to: Nobody/Anonymous (nobody) Summary: On HPUX 11i universal newlines seems to cause readline(s) to Initial Comment: I compiled version 2.3.4 on hp-ux 11i, with shared and threads enabled, using Gnu c 3.3.3. 'make test' fails on all tests that use readline() and/or readlines() and test_univnewlines fails with a memory fault and core dump. All other tests pass that I expect to pass. If I hand modify pyconfig.h to comment out with universal newline support and recompile (after a make clean) then the readline(s) failures go away. I have also compiled without thread support and got the same failures as with using universal newlines. ---------------------------------------------------------------------- >Comment By: dmcisaac (dmcisaac) Date: 2004-06-28 15:31 Message: Logged In: YES user_id=1071078 On further investigation I got everything to compile the way I think it should if I ommit optimization. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979872&group_id=5470 From noreply at sourceforge.net Mon Jun 28 16:22:13 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 16:22:16 2004 Subject: [ python-Bugs-981530 ] UnboundLocalError in shutil.rmtree() Message-ID: Bugs item #981530, was opened at 2004-06-28 16:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981530&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Guido van Rossum (gvanrossum) Summary: UnboundLocalError in shutil.rmtree() Initial Comment: It is possible for the call to onerror(func, arg, exc) to be triggered by an os.error raised by the os.listdir() call in _build_cmdtuple() rather than in the "for func, arg in cmdtuples" loop. In that case, func isn't set yet, and an UnboundLocalError is raised rather than onerror() being called. A quick fix for this is to set func=None before calling _build_cmdtuple, but the docs guarantee that it is set to a function when onerror is called (and specifically, os.remove and os.rmdir). I propose to set func = os.listdir before calling _build_cmdtuple() and fixing the docs. I can backport this to 2.3. 
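A minimal reproduction sketch based on the description above (the handler and directory names are made up for illustration; 'no-such-dir' is assumed not to exist, so the os.listdir() inside _build_cmdtuple() fails before any (func, arg) pair has been bound by the loop):

    import shutil

    def handler(func, arg, exc):
        # exc is the sys.exc_info() triple passed along by rmtree().
        print "rmtree error in", func, "for", arg, ":", exc[1]

    # On an unpatched 2.3 this surfaces as UnboundLocalError instead of a
    # call to handler(); with the proposed fix, handler() is called with
    # func set to os.listdir.
    shutil.rmtree('no-such-dir', ignore_errors=0, onerror=handler)
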
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981530&group_id=5470 From noreply at sourceforge.net Mon Jun 28 16:32:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 16:32:48 2004 Subject: [ python-Bugs-981530 ] UnboundLocalError in shutil.rmtree() Message-ID: Bugs item #981530, was opened at 2004-06-28 16:22 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981530&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open >Resolution: Fixed Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Guido van Rossum (gvanrossum) Summary: UnboundLocalError in shutil.rmtree() Initial Comment: It is possible for the call to onerror(func, arg, exc) to be triggered by an os.error raised by the os.listdir() call in _build_cmdtuple() rather than in the "for func, arg in cmdtuples" loop. In that case, func isn't set yet, and an UnboundLocalError is raised rather than onerror() being called. A quick fix for this is to set func=None before calling _build_cmdtuple, but the docs guarantee that it is set to a function when onerror is called (and specifically, os.remove and os.rmdir). I propose to set func = os.listdir before calling _build_cmdtuple() and fixing the docs. I can backport this to 2.3. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-28 16:32 Message: Logged In: YES user_id=6380 Fixed in 2.3 and 2.4. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981530&group_id=5470 From noreply at sourceforge.net Mon Jun 28 16:33:03 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 16:33:07 2004 Subject: [ python-Bugs-981530 ] UnboundLocalError in shutil.rmtree() Message-ID: Bugs item #981530, was opened at 2004-06-28 16:22 Message generated for change (Settings changed) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981530&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed Resolution: Fixed Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Guido van Rossum (gvanrossum) Summary: UnboundLocalError in shutil.rmtree() Initial Comment: It is possible for the call to onerror(func, arg, exc) to be triggered by an os.error raised by the os.listdir() call in _build_cmdtuple() rather than in the "for func, arg in cmdtuples" loop. In that case, func isn't set yet, and an UnboundLocalError is raised rather than onerror() being called. A quick fix for this is to set func=None before calling _build_cmdtuple, but the docs guarantee that it is set to a function when onerror is called (and specifically, os.remove and os.rmdir). I propose to set func = os.listdir before calling _build_cmdtuple() and fixing the docs. I can backport this to 2.3. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2004-06-28 16:32 Message: Logged In: YES user_id=6380 Fixed in 2.3 and 2.4. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981530&group_id=5470 From noreply at sourceforge.net Mon Jun 28 17:12:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 17:12:45 2004 Subject: [ python-Bugs-970799 ] Pyton 2.3.4 Make Test Fails on Mac OS X Message-ID: Bugs item #970799, was opened at 2004-06-11 01:42 Message generated for change (Comment added) made by jackjansen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 3 Submitted By: D. Evan Kiefer (dekiefer) Assigned to: Brett Cannon (bcannon) Summary: Pyton 2.3.4 Make Test Fails on Mac OS X Initial Comment: Under Mac OSX 10.3.4 with latest security update. Power Mac G4 733MHz Trying to install Zope 2.7.0 with Python 2.3.4. I first used fink to install 2.3.4 but Zope could find module 'os' to import. I then followed the instructions at: http://zope.org/Members/jens/docs/Document.2003-12-27.2431/ document_view to install Python with the default configure. Unlike under fink, doing this allowed me to run 'make test'. It too noted import problems for 'os' and 'site'. test_tempfile 'import site' failed; use -v for traceback Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/tf_inherit_check.py", line 6, in ? import os ImportError: No module named os test test_tempfile failed -- Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/test_tempfile.py", line 307, in test_noinherit self.failIf(retval > 0, "child process reports failure") File "/Volumes/Spielen/Python-2.3.4/Lib/unittest.py", line 274, in failIf if expr: raise self.failureException, msg AssertionError: child process reports failure test_atexit 'import site' failed; use -v for traceback Traceback (most recent call last): File "@test.py", line 1, in ? import atexit ImportError: No module named atexit test test_atexit failed -- '' == "handler2 (7,) {'kw': 'abc'}\nhandler2 () {}\nhandler1\n" test_audioop ---------- test_poll skipped -- select.poll not defined -- skipping test_poll test_popen 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback test_popen2 ------------------- 229 tests OK. 2 tests failed: test_atexit test_tempfile 24 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_dl test_email_codecs test_gl test_imgfile test_largefile test_linuxaudiodev test_locale test_nis test_normalization test_ossaudiodev test_pep277 test_poll test_socket_ssl test_socketserver test_sunaudiodev test_timeout test_urllibnet test_winreg test_winsound Those skips are all expected on darwin. make: *** [test] Error 1 deksmacintosh:3-> ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2004-06-28 23:12 Message: Logged In: YES user_id=45365 My guess is that something in your environment is messing things up. Various of the tests that give the "import site failed" message use subprocesses. What I tend to do to debug issues like this is create a new dummy user (I tend to use the short name "luser" and the long name "Bill Gates":-), unpack a fresh distribution under that account and try building. ---------------------------------------------------------------------- Comment By: D. 
Evan Kiefer (dekiefer) Date: 2004-06-28 19:54 Message: Logged In: YES user_id=318754 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? See above, the second time I installed without fink and ran the tests directly. The errors are from 'make test' done without fink. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-17 23:31 Message: Logged In: YES user_id=357491 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? I suspect the test_atexit failure is a Fink-specific issue and the test_tempfile failure was just a timing quirk since it has a threaded and that can make it sensitive to timing. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 From noreply at sourceforge.net Mon Jun 28 17:28:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 17:28:49 2004 Subject: [ python-Bugs-951851 ] Crash when reading "import table" of certain windows DLLs Message-ID: Bugs item #951851, was opened at 2004-05-11 13:02 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 Category: Windows Group: Python 2.3 Status: Open Resolution: None Priority: 7 Submitted By: Mark Hammond (mhammond) >Assigned to: Nobody/Anonymous (nobody) Summary: Crash when reading "import table" of certain windows DLLs Initial Comment: As diagnosed by Thomas Heller, via the python-win32 list. On Windows 2000, if your sys.path includes the Windows system32 directory, 'import wmi' will crash Python. To reproduce, change to the system32 directory, start Python, and execute 'import wmi'. Note that Windows XP does not crash. The problem is in GetPythonImport(), in code that tries to walk the PE 'import table'. AFAIK, this is code that checks the correct Python version is used, but I've never seen this code before. I'm not sure why the code is actually crashing (ie, what assumptions made about the import table are wrong), but I have a patch that checks a the pointers are valid before they are de-referenced. After the patch is applied, the result is the correct: "ImportError: dynamic module does not define init function (initwmi)" exception. Assigning to thomas for his review, then I'm happy to check in. I propose this as a 2.3 bugfix. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-28 23:28 Message: Logged In: YES user_id=11105 It seems Mark doesn't listen (or don't have time). I'd like to check this in for 2.4. Any objections? ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-01 20:35 Message: Logged In: YES user_id=11105 This is not yet accepted. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-01 19:45 Message: Logged In: YES user_id=11105 The reason the current code crashed when Python tries to import Win2k's or XP's wmi.dll as extension is that the size of the import table in this dll is zero. The first patch 'dynload_win.c-1.patch' fixes this by returning NULL in that case. The code, however, doesn't do what is intended in a debug build of Python. 
It looks for imports of 'python23.dll', when it should look for 'python23_d.dll' instead. The second patch 'dynload_win.c-2.patch' fixes this also (and includes the first patch as well). ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 03:56 Message: Logged In: YES user_id=14198 Seeing as it was the de-referencing of 'import_name' that crashed, I think a better patch is to terminate the outer while loop as soon as we hit a bad string. Otherwise, this outer loop continues, 20 bytes at a time, until the outer pointer ('import_data') also goes bad or happens to reference \0. Attaching a slightly different patch, with comments and sizeof() change. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 01:00 Message: Logged In: YES user_id=14198 OK - will change to 12+so(WORD) And yes, I had seen this code - I meant "familiar with" :) Tim: Note that the import failure is not due to a bad import table (but the crash was). This code is trying to determine if a different version of Python is used. We are effectively skipping that check, and landing directly in the "does it have an init function?", then failing normally - ie, the code is now correctly *not* finding other Python versions linked against it. Thus, a different error message doesn't make much sense to me. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:45 Message: Logged In: YES user_id=11105 Oh, we have to give the /all option to dumpbin ;-) ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:42 Message: Logged In: YES user_id=11105 Tim, I don't think the import table format has changed, instead wmi.dll doesn't have an import table (for whatever reason). Maybe the code isn't able to handle that correctly. Since Python 2.3 as well as its extensions are still built with MSVC 6, I *think* we should be safe with this code. I'll attach the output of running MSVC.NET 2003's 'dumpbin.exe \windows\system32\wmi.dll' on my WinXP Pro SP1 for the curious. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-11 17:20 Message: Logged In: YES user_id=31435 Mark, while you may not have seen this code before, you checked it in. IIRC, though, the person who *created* the patch was never heard from again. I don't understand what the code thinks it's doing either, exactly. The obvious concern: if the import table format has changed, won't we also fail to import legit C extensions? I haven't seen that happen yet, but I haven't yet built any extensions using VC 7.1 either. In any case, I'd agree it's better to get a mysterious import failure than a crash. Maybe the detail in the ImportError could be changed to indicate when an import failure is due to a bad pointer. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 16:49 Message: Logged In: YES user_id=11105 IMO, IsBadReadPointer(import_data, 12 + sizeof(DWORD)) should be enough. Yes, please add a comment in the code. This is a little bit hackish, but it fixes the problem. And the real problem can always be fixed later, if needed. And, BTW, python 2.3.3 crashes on Windows XP as well.
---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-11 13:05 Message: Logged In: YES user_id=14198 Actually, I guess a comment regarding the pointer checks and referencing this bug would be nice :) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 From noreply at sourceforge.net Mon Jun 28 18:16:58 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 18:17:05 2004 Subject: [ python-Bugs-970799 ] Pyton 2.3.4 Make Test Fails on Mac OS X Message-ID: Bugs item #970799, was opened at 2004-06-10 16:42 Message generated for change (Comment added) made by dekiefer You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 3 Submitted By: D. Evan Kiefer (dekiefer) Assigned to: Brett Cannon (bcannon) Summary: Pyton 2.3.4 Make Test Fails on Mac OS X Initial Comment: Under Mac OSX 10.3.4 with latest security update. Power Mac G4 733MHz Trying to install Zope 2.7.0 with Python 2.3.4. I first used fink to install 2.3.4 but Zope could find module 'os' to import. I then followed the instructions at: http://zope.org/Members/jens/docs/Document.2003-12-27.2431/ document_view to install Python with the default configure. Unlike under fink, doing this allowed me to run 'make test'. It too noted import problems for 'os' and 'site'. test_tempfile 'import site' failed; use -v for traceback Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/tf_inherit_check.py", line 6, in ? import os ImportError: No module named os test test_tempfile failed -- Traceback (most recent call last): File "/Volumes/Spielen/Python-2.3.4/Lib/test/test_tempfile.py", line 307, in test_noinherit self.failIf(retval > 0, "child process reports failure") File "/Volumes/Spielen/Python-2.3.4/Lib/unittest.py", line 274, in failIf if expr: raise self.failureException, msg AssertionError: child process reports failure test_atexit 'import site' failed; use -v for traceback Traceback (most recent call last): File "@test.py", line 1, in ? import atexit ImportError: No module named atexit test test_atexit failed -- '' == "handler2 (7,) {'kw': 'abc'}\nhandler2 () {}\nhandler1\n" test_audioop ---------- test_poll skipped -- select.poll not defined -- skipping test_poll test_popen 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback 'import site' failed; use -v for traceback test_popen2 ------------------- 229 tests OK. 2 tests failed: test_atexit test_tempfile 24 tests skipped: test_al test_bsddb3 test_cd test_cl test_curses test_dl test_email_codecs test_gl test_imgfile test_largefile test_linuxaudiodev test_locale test_nis test_normalization test_ossaudiodev test_pep277 test_poll test_socket_ssl test_socketserver test_sunaudiodev test_timeout test_urllibnet test_winreg test_winsound Those skips are all expected on darwin. make: *** [test] Error 1 deksmacintosh:3-> ---------------------------------------------------------------------- >Comment By: D. Evan Kiefer (dekiefer) Date: 2004-06-28 15:16 Message: Logged In: YES user_id=318754 Thanks Jack, I'll try that. 
---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2004-06-28 14:12 Message: Logged In: YES user_id=45365 My guess is that something in your environment is messing things up. Various of the tests that give the "import site failed" message use subprocesses. What I tend to do to debug issues like this is create a new dummy user (I tend to use the short name "luser" and the long name "Bill Gates":-), unpack a fresh distribution under that account and try building. ---------------------------------------------------------------------- Comment By: D. Evan Kiefer (dekiefer) Date: 2004-06-28 10:54 Message: Logged In: YES user_id=318754 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? See above, the second time I installed without fink and ran the tests directly. The errors are from 'make test' done without fink. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-17 14:31 Message: Logged In: YES user_id=357491 Do you get these failures if you compile Python from scratch instead of using Fink? How about running the tests directly? I suspect the test_atexit failure is a Fink-specific issue and the test_tempfile failure was just a timing quirk since it has a threaded and that can make it sensitive to timing. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=970799&group_id=5470 From noreply at sourceforge.net Mon Jun 28 19:20:38 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 19:20:50 2004 Subject: [ python-Bugs-951851 ] Crash when reading "import table" of certain windows DLLs Message-ID: Bugs item #951851, was opened at 2004-05-11 21:02 Message generated for change (Comment added) made by mhammond You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 Category: Windows Group: Python 2.3 Status: Open Resolution: None Priority: 7 Submitted By: Mark Hammond (mhammond) Assigned to: Nobody/Anonymous (nobody) Summary: Crash when reading "import table" of certain windows DLLs Initial Comment: As diagnosed by Thomas Heller, via the python-win32 list. On Windows 2000, if your sys.path includes the Windows system32 directory, 'import wmi' will crash Python. To reproduce, change to the system32 directory, start Python, and execute 'import wmi'. Note that Windows XP does not crash. The problem is in GetPythonImport(), in code that tries to walk the PE 'import table'. AFAIK, this is code that checks the correct Python version is used, but I've never seen this code before. I'm not sure why the code is actually crashing (ie, what assumptions made about the import table are wrong), but I have a patch that checks a the pointers are valid before they are de-referenced. After the patch is applied, the result is the correct: "ImportError: dynamic module does not define init function (initwmi)" exception. Assigning to thomas for his review, then I'm happy to check in. I propose this as a 2.3 bugfix. ---------------------------------------------------------------------- >Comment By: Mark Hammond (mhammond) Date: 2004-06-29 09:20 Message: Logged In: YES user_id=14198 I'm sorry, but I'm not sure what I don't listen to :) You are correct about being short of time though. What would you like me to do? 
---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-29 07:28 Message: Logged In: YES user_id=11105 It seems Mark doesn't listen (or don't have time). I'd like to check this in for 2.4. Any objections? ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-02 04:35 Message: Logged In: YES user_id=11105 This is not yet accepted. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-02 03:45 Message: Logged In: YES user_id=11105 The reason the current code crashed when Python tries to import Win2k's or XP's wmi.dll as extension is that the size of the import table in this dll is zero. The first patch 'dynload_win.c-1.patch' fixes this by returning NULL in that case. The code, however, doesn't do what is intended in a debug build of Python. It looks for imports of 'python23.dll', when it should look for 'python23_d.dll' instead. The second patch 'dynload_win.c-2.patch' fixes this also (and includes the first patch as well). ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 11:56 Message: Logged In: YES user_id=14198 Seeing as it was the de-referencing of 'import_name' that crashed, I think a better patch is to terminate the outer while look as soon as we hit a bad string. Otherwise, this outer loop continues, 20 bytes at a time, until the outer pointer ('import_data') also goes bad or happens to reference \0. Attaching a slightly different patch, with comments and sizeof() change. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 09:00 Message: Logged In: YES user_id=14198 OK - will change to 12+so(WORD) And yes, I had seen this code - I meant "familiar with" :) Tim: Note that the import failure is not due to a bad import table (but the crash was). This code is trying to determine if a different version of Python is used. We are effectively skipping that check, and landing directly in the "does it have an init function?", then faling normally - ie, the code is now correctly *not* finding other Python versions linked against it. Thus, a different error message doesn't make much sense to me. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-12 01:45 Message: Logged In: YES user_id=11105 Oh, we have to give the /all option to dumpbin ;-) ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-12 01:42 Message: Logged In: YES user_id=11105 Tim, I don't think the import table format has changed, instead wmi.dll doesn't have an import table (for whatever reason). Maybe the code isn't able to handle that correctly. Since Python 2.3 as well at it's extensions are still built with MSVC 6, I *think* we should be safe with this code. I'll attach the output of running MSVC.NET 2003's 'dumpbin.exe \windows\system32\wmi.dll' on my WinXP Pro SP1 for the curious. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-12 01:20 Message: Logged In: YES user_id=31435 Mark, while you may not have seen this code before, you checked it in . IIRC, though, the person who *created* the patch was never heard from again. 
I don't understand what the code thinks it's doing either, exactly. The obvious concern: if the import table format has changed, won't we also fail to import legit C extensions? I haven't seen that happen yet, but I haven't yet built any extensions using VC 7.1 either. In any case, I'd agree it's better to get a mysterious import failure than a crash. Maybe the detail in the ImportError could be changed to indicate whan an import failure is due to a bad pointer. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-12 00:49 Message: Logged In: YES user_id=11105 IMO, IsBadReadPointer(import_data, 12 + sizeof(DWORD)) should be enough. Yes, please add a comment in the code. This is a little bit hackish, but it fixes the problem. And the real problem can always be fixed later, if needed. And, BTW, python 2.3.3 crashes on Windows XP as well. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-11 21:05 Message: Logged In: YES user_id=14198 Actually, I guess a comment regarding the pointer checks and referencing this bug would be nice :) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 From noreply at sourceforge.net Mon Jun 28 23:50:47 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Mon Jun 28 23:50:52 2004 Subject: [ python-Bugs-980925 ] Possible contradiction in "extending" and PyArg_ParseTuple() Message-ID: Bugs item #980925, was opened at 2004-06-27 17:02 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Possible contradiction in "extending" and PyArg_ParseTuple() Initial Comment: In section 1.3 of the "extending" tutorial (http://www.python.org/ dev/doc/devel/ext/backToExample.html), in the discussion of the use of PyArg_ParseTuple() for a string argument, it says that "in Standard C, the variable ... should properly be declared as "const char *" ". But if you look at any example code (such as in section 1.7, which covers parsing arguments; http://www.python.org/dev/ doc/devel/ext/parseTuple.html) and the docs for PyArg_ParseTuple() (found at http://www.python.org/dev/doc/ devel/api/arg-parsing.html) just use ``char *`` as the type for the argument for a string. Which is correct? I suspect that it is not required but more correct to have the variables declared ``const char *`` since you are not supposed to play with the string that is being pointed to (which is probably why the Unicode arguments in the docs for PyArg_ParseTuple() say it is ``const char *`` instead of just ``char *``). If this is true I will change the docs and the tutorial to use ``const char *``. But if it isn't, I will rip out the line saying that you should ``const char *`` since this is a contradiction at the moment of recommended practice. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-28 20:50 Message: Logged In: YES user_id=357491 Perhaps a comment about it then? 
So it would still say ``char *`` in bold, but in the body mention that ``const char *`` is more proper, but could lead to extra casting. Would that bump you to >= +0? Regardless of that issue I went ahead and changed Doc/ext/extending.tex sans section 1.8 where the example is accredited to someone specific. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-28 08:51 Message: Logged In: YES user_id=80475 -0 on changing anything here. While technically correct, the proposed revisions can potentially create issues where none currently exist. I ignore const and things work fine. The last thing I want to see are coercions like (const char *) sprouting up here and there. --my two cents ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-06-27 21:59 Message: Logged In: YES user_id=357491 OK, I will definitely change the code examples. How about the docs for PyArg_ParseTuple() for "s" and friends? Should that stay ``char *`` as its listed argument type, or should I change that as well (Unicode arguments already say ``const char *`` so there is precedent). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-27 17:33 Message: Logged In: YES user_id=31435 Most of Python, and its docs, were written before C89 ("Standard C") was required. const is used sporadically as a result, mostly just in newer code and docs. Changing examples to use const char * should be fine, as that is best C practice. Just make sure the examples still work. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980925&group_id=5470 From noreply at sourceforge.net Tue Jun 29 00:09:02 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 00:09:09 2004 Subject: [ python-Bugs-981299 ] Rsync protocol missing from urlparse Message-ID: Bugs item #981299, was opened at 2004-06-28 08:03 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981299&group_id=5470 Category: Python Library Group: Python 2.2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Morten Kjeldgaard (mok0) Assigned to: Nobody/Anonymous (nobody) Summary: Rsync protocol missing from urlparse Initial Comment: The rsync protocol is missing from urlparse.py. >>> import urlparse >>> urlparse.urlparse("rsync://a.b.c/d/e") ('rsync', '', '//a.b.c/d/e', '', '', '') The machine name a.b.c should be in field 1 of the result, and the directory part /d/e in field 2. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-28 21:09 Message: Logged In: YES user_id=357491 Fixed in rev. 1.45 in HEAD and rev. 1.40.10.1 in 2.3.
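Presumably the checked-in fix registers 'rsync' in urlparse's module-level scheme lists; a short sketch of the reported behaviour plus a runtime workaround for unpatched installs (uses_netloc and uses_relative are plain lists in the 2.x urlparse module):

    import urlparse

    # Unpatched behaviour, as reported: the netloc is not split out.
    print urlparse.urlparse("rsync://a.b.c/d/e")
    # ('rsync', '', '//a.b.c/d/e', '', '', '')

    # Workaround: register the scheme in the lists consulted by
    # urlsplit() and urljoin().
    for schemes in (urlparse.uses_netloc, urlparse.uses_relative):
        if 'rsync' not in schemes:
            schemes.append('rsync')

    print urlparse.urlparse("rsync://a.b.c/d/e")
    # ('rsync', 'a.b.c', '/d/e', '', '', '')
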
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=981299&group_id=5470 From noreply at sourceforge.net Tue Jun 29 00:14:28 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 00:14:32 2004 Subject: [ python-Bugs-980986 ] Missing space in sec 3.3.1 Message-ID: Bugs item #980986, was opened at 2004-06-27 20:53 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980986&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Christopher Ingram (cold_drink) Assigned to: Nobody/Anonymous (nobody) Summary: Missing space in sec 3.3.1 Initial Comment: In http://www.python.org/doc/current/ref/customization.html there is a missing space under __hash__. "dictionaryoperations" is in a sentence, and I'm pretty sure that should be "dictionary operations". ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2004-06-28 21:14 Message: Logged In: YES user_id=357491 Checked into rev. 1.120 of Doc/ref/ref3.text . ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980986&group_id=5470 From noreply at sourceforge.net Tue Jun 29 04:51:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 04:51:24 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Raymond Hettinger (rhettinger) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-29 03:51 Message: Logged In: YES user_id=80475 Okay, a new patch is ready for second review. It is essentially, Armin's patch with the following changes: * leaves the syntax for eval() intact with no automatic globals=locals trick * has the faster approach for LOAD_NAME, incorporating your suggestion for PyDict_CheckExact * omits the code to enable the same liberties for '.exec'. That is beyond the OP's request and well beyond something whose implications I've thought through. Am not opposed to it, but would like it to be left for a separate patch with thorough unittests. 
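For illustration, a sketch of what the generalized locals argument is meant to allow: the submitter's spreadsheet reworked to pass the mapping as eval()'s third argument (the class below is only an example, and it still raises TypeError on a stock 2.3):

    class SpreadSheet:
        def __init__(self):
            self._cells = {}
        def __setitem__(self, key, formula):
            self._cells[key] = formula
        def __getitem__(self, key):
            # The sheet itself is the locals mapping, so LOAD_NAME falls
            # back to this __getitem__ for names used in the formula.
            return eval(self._cells[key], {}, self)

    ss = SpreadSheet()
    ss['a1'] = '5'
    ss['a2'] = 'a1*5'
    print ss['a2']    # 25 once arbitrary mappings are accepted as locals
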
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 13:08 Message: Logged In: YES user_id=80475 Nix, that last comment. Have examples that call setitem(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 12:36 Message: Logged In: YES user_id=80475 Changed to PyDict_CheckExact(). Are you sure about having to change the sets and dels. I've tried several things at the interactive prompt and can't get it to fail: >>> [locals() for i in (2,3)] Do you have any examples (in part, so I can use them as test cases)? ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-27 06:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using list comprehension. Moreover the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(), it's both slower and not what we want because it prevents subclasses of dicts to override __xxxitem__). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 03:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). 
Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip up a quick patch so that we can explore the implications of your suggestion? Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Armin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something other than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inherited a dictionary. I want to use the eval() function as a simple expression evaluator.
I have the following dictionary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overloading the __getitem__ of the global or local dictionary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "<string>", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double-underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degradation (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is the need to do PyDict_CheckExact() each time a lookup fails.
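To make the proposal concrete, a pure-Python emulation of the intended semantics (the __keyerror__ hook is only proposed and does not exist in any released Python; the actual suggestion is that the C-level dict lookup itself would call it, so that eval()'s low-level access would see it too):

    class AutoDict(dict):
        def __getitem__(self, key):
            # Emulate "call __keyerror__ instead of raising" at the
            # Python level only.
            try:
                return dict.__getitem__(self, key)
            except KeyError:
                return self.__keyerror__(key)

        def __keyerror__(self, key):
            # Example policy: lazily populate the missing entry.
            value = 'default for %r' % (key,)
            self[key] = value
            return value

    d = AutoDict(a=1)
    print d['a']        # 1, ordinary lookup
    print d['missing']  # "default for 'missing'", via the emulated hook
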
---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Tue Jun 29 04:51:54 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 04:51:59 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) >Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-29 03:51 Message: Logged In: YES user_id=80475 Okay, a new patch is ready for second review. It is essentially, Armin's patch with the following changes: * leaves the syntax for eval() intact with no automatic globals=locals trick * has the faster approach for LOAD_NAME, incorporating your suggestion for PyDict_CheckExact * omits the code to enable the same liberties for '.exec'. That is beyond the OP's request and well beyond something whose implications I've thought through. Am not opposed to it, but would like it to be left for a separate patch with thorough unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 13:08 Message: Logged In: YES user_id=80475 Nix, that last comment. Have examples that call setitem(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 12:36 Message: Logged In: YES user_id=80475 Changed to PyDict_CheckExact(). Are you sure about having to change the sets and dels. 
I've tried several things at the interactive prompt and can't get it to fail: >>> [locals() for i in (2,3)] Do you have any examples (in part, so I can use them as test cases)? ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-27 06:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using list comprehension. Moreover the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(), it's both slower and not what we want because it prevents subclasses of dicts to override __xxxitem__). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 03:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. 
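Armin's point in this discussion, that code compiled by eval() always goes through LOAD_NAME/STORE_NAME rather than LOAD_GLOBAL or LOAD_FAST, is easy to confirm with the dis module (the output shown in the comments is typical for 2.3/2.4):

    import dis

    # Names in an eval-mode code object compile to LOAD_NAME, the already
    # dict-checking path that the patch generalizes.
    dis.dis(compile("a1 * 5", "<formula>", "eval"))
    #   1    0 LOAD_NAME         0 (a1)
    #        3 LOAD_CONST        0 (5)
    #        6 BINARY_MULTIPLY
    #        7 RETURN_VALUE
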
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1. Armin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something other than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inherited a dictionary. I want to use the eval() function as a simple expression evaluator. I have the following dictionary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overload the __getitem__ of the global or local dictionary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "<string>", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval.
However, eval() tried to evaluate the expression using the content of the dictionary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double-underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degradation (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever be done without noticeably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object.
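The __keyerror__ hook sketched by Gregory above was never added to the dict type, but the effect he describes (a dict that keeps its fast low-level lookups yet populates itself on a miss) can be approximated in pure Python with a subclass; CPython later grew a similar hook, dict.__missing__ for subclasses, in 2.5. The class and names below are only illustrative:

    # Rough pure-Python approximation of the auto-populating idea: keys that
    # are present use the normal dict lookup, a miss consults a fallback
    # callable and caches the result.
    class AutoDict(dict):
        def __init__(self, fallback):
            dict.__init__(self)
            self.fallback = fallback            # called only on a miss
        def __getitem__(self, key):
            try:
                return dict.__getitem__(self, key)
            except KeyError:
                value = self.fallback(key)      # compute the missing value
                self[key] = value               # remember it for next time
                return value

    d = AutoDict(lambda key: 'default for %s' % key)
    d['a'] = 1
    print d['a']         # 1 -- ordinary lookup
    print d['missing']   # 'default for missing' -- filled in on demand

This only helps callers that go through __getitem__; the point of the __keyerror__ proposal was to get the same hook from the low-level C lookups without giving up the exact-dict fast path.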
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Tue Jun 29 06:36:31 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 06:36:35 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 19:36 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. The following example creates a type error: eval, argument 2: expected dictionary, instance found class SpreadSheet: _cells = {} def __setitem__( self, key, formula ): self._cells[key] = formula def __getitem__( self, key ): return eval( self._cells[key], self ) ss = SpreadSheet() ss['a1'] = '5' ss['a2'] = 'a1*5' ss['a2'] ---------------------------------------------------------------------- >Comment By: Armin Rigo (arigo) Date: 2004-06-29 10:36 Message: Logged In: YES user_id=4771 Doing the type check in exec and execfile() but not in eval() is not something that seems particularly useful to me. Any program can be written as an expression in Python if you are crazy enough to do that... So it doesn't offer any extra security to be more strict in exec than in eval(). People who really want to do it would have to go through incredible pain just to work around the type check. For the implications, I believe it is sufficient (and necessary) to carefully review all usages of f_locals throughout the code, and document f_locals as no longer necessary a dictionary for those extension writers that would have used this fact. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-29 08:51 Message: Logged In: YES user_id=80475 Okay, a new patch is ready for second review. It is essentially, Armin's patch with the following changes: * leaves the syntax for eval() intact with no automatic globals=locals trick * has the faster approach for LOAD_NAME, incorporating your suggestion for PyDict_CheckExact * omits the code to enable the same liberties for '.exec'. That is beyond the OP's request and well beyond something whose implications I've thought through. Am not opposed to it, but would like it to be left for a separate patch with thorough unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 18:08 Message: Logged In: YES user_id=80475 Nix, that last comment. Have examples that call setitem(). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 17:36 Message: Logged In: YES user_id=80475 Changed to PyDict_CheckExact(). Are you sure about having to change the sets and dels. 
I've tried several things at the interactive prompt and can't get it to fail: >>> [locals() for i in (2,3)] Do you have any examples (in part, so I can use them as test cases)? ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-27 11:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using list comprehension. Moreover the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(), it's both slower and not what we want because it prevents subclasses of dicts to override __xxxitem__). ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 08:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 07:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 16:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 21:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. 
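Armin's "catch definitions while they appear" use case is easy to picture with a recording namespace handed to exec. The sketch below is only illustrative: it assumes the exec statement gets the same liberty being discussed for eval(), i.e. that STORE_NAME falls back to the mapping's __setitem__ when the namespace is not an exact dict. That part is explicitly deferred to a follow-up patch in the comments above, so treat this as the intended end state rather than current behaviour; on an unpatched interpreter the recording __setitem__ is simply bypassed.

    # A namespace that records every binding made while the code runs.
    class RecordingNamespace(dict):
        def __init__(self):
            dict.__init__(self)
            self.events = []
        def __setitem__(self, key, value):
            self.events.append((key, value))    # log the definition
            dict.__setitem__(self, key, value)

    ns = RecordingNamespace()
    exec "a = 1\nb = a + 1" in {}, ns
    print ns.events    # [('a', 1), ('b', 2)] once STORE_NAME honours __setitem__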
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 07:27 Message: Logged In: YES user_id=80475 Armin, can you whip up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 16:53 Message: Logged In: YES user_id=80475 +1. Armin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-05-19 18:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something other than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement. ---------------------------------------------------------------------- Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 09:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inherited a dictionary. I want to use the eval() function as a simple expression evaluator. I have the following dictionary: d['a']='1' d['b']='2' d['c']='a+b' I want the following results: d[a] -> 1 d[b] -> 2 d[c] -> 3 To do that, I was planning to use the eval() function and overload the __getitem__ of the global or local dictionary: class MyDict( dict ) : def __getitem__( self, key ): print "__getitem__", key val = dict.__getitem__( self, key ) print "val = '%s'" % val return eval( val , self ) But it does not work: d[a]: __getitem__ a val = '1' -> 1 d[b]: __getitem__ b val = '2' -> 2 d[c]: __getitem__ c val = 'e+1' ERROR Traceback (most recent call last): File "test_parse_jaycos_config.py", line 83, in testMyDict self.assertEquals( d['c'], 2 ) File "parse_config_file.py", line 10, in __getitem__ return eval( val , self ) File "<string>", line 0, in ? TypeError: cannot concatenate 'str' and 'int' objects d['c'] did fetch the 'a+1' value, which was passed to eval.
However, eval() tried to evaluate the expression using the content of the dictionary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2004-01-12 22:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double-underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND. Lookups that are NOT found would have a slight performance degradation (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 19:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 15:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever be done without noticeably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-23 04:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 19:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object.
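Philippe's self-evaluating dictionary becomes straightforward once the overriding subclass is passed as the locals argument, because the generalized LOAD_NAME discussed above then resolves names through __getitem__ and the stored formulas are evaluated recursively. The FormulaDict below mirrors his MyDict and is only a sketch of his use case under the patched behaviour; with the old lookup the raw strings come back instead, which is essentially the failure he describes.

    # A dict whose values are expressions; looking a key up evaluates the
    # expression, resolving other keys recursively through this same method.
    class FormulaDict(dict):
        def __getitem__(self, key):
            formula = dict.__getitem__(self, key)
            # Pass self as *locals* so names inside the formula also go
            # through __getitem__ and come back already evaluated.
            return eval(formula, {}, self)

    d = FormulaDict()
    d['a'] = '1'
    d['b'] = '2'
    d['c'] = 'a+b'
    print d['c']    # -> 3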
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Tue Jun 29 08:11:39 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 08:11:44 2004 Subject: [ python-Bugs-693416 ] 2.3a2 import after os.chdir difference Message-ID: Bugs item #693416, was opened at 2003-02-25 23:42 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=693416&group_id=5470 Category: None >Group: Not a Bug >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: James P Rutledge (jrut) >Assigned to: A.M. Kuchling (akuchling) Summary: 2.3a2 import after os.chdir difference Initial Comment: In Python 2.3a2 in interactive mode an import after an os.chdir imports the module in the new current directory after the os.chdir. This is the same as Python 2.2.2 does both in interactive and non-interactive mode. In Python 2.3a2 in non-interactive mode an import after an os.chdir does not import the module in the new current directory after the os.chdir. Instead it attempts to import a module (if present) by that name in the previous current directory before the os.chdir. If there is not a module by that name in the previous current directory, there is an ImportError exception. The above results are on a Debian Linux system using an Intel 32 bit processor. No PYTHONSTARTUP environment variable was set. No PYTHONPATH environment variable was set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-29 08:11 Message: Logged In: YES user_id=11375 Closing, because this isn't a bug. ---------------------------------------------------------------------- Comment By: Aaron Brady (insomnike) Date: 2004-06-05 16:00 Message: Logged In: YES user_id=1057404 Right, I'm nearly sure that this is the behaviour that we want. When run interactively, the first item in sys.path is '', but when run from a script, it is the path to the directory that the script lives in. If we reverted to the 2.2.2 behaviour, multi-module scripts couldn't use os.chdir() for fear of randomly breaking imports. ---------------------------------------------------------------------- Comment By: James P Rutledge (jrut) Date: 2003-02-26 09:31 Message: Logged In: YES user_id=720847 Additional Info -- I have now tried more than one os.chdir before the import in non-interactive mode and found that the words "previous current directory" in the original description should be more accurately expressed as "ORIGINAL current directory." ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=693416&group_id=5470 From noreply at sourceforge.net Tue Jun 29 09:19:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 09:20:02 2004 Subject: [ python-Bugs-912845 ] urllib2 checks for http return code 200 only. Message-ID: Bugs item #912845, was opened at 2004-03-09 11:32 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=912845&group_id=5470 Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Ahmed F. (oneofone) >Assigned to: A.M. 
Kuchling (akuchling) Summary: urllib2 checks for http return code 200 only. Initial Comment: from urllib2 import * req = Request("http://someurl/page.html", headers={'range: bytes=%s':'20-40'}) result = urlopen(req) will die with something like : File "/usr/lib/python2.3/urllib2.py", line 306, in _call_chain result = func(*args) File "/usr/lib/python2.3/urllib2.py", line 412, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 206: Partial Content line 892 in {PATH}/urllib2.py should be changed from : if code == 200: to if code in [200, 206]: peace ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-29 09:19 Message: Logged In: YES user_id=11375 Applied to CVS; thanks! ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=912845&group_id=5470 From noreply at sourceforge.net Tue Jun 29 09:35:23 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 09:35:28 2004 Subject: [ python-Bugs-978556 ] Broken URLs in sha module docs Message-ID: Bugs item #978556, was opened at 2004-06-23 17:47 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978556&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Troels Therkelsen (troelst) >Assigned to: A.M. Kuchling (akuchling) Summary: Broken URLs in sha module docs Initial Comment: The following URLs are broken: http://csrc.nist.gov/publications/fips/fips180-1/fip180- 1.txt http://csrc.nist.gov/publications/fips/fips180-1/fip180- 1.pdf Only thing I can suggest to fix this is to change the links to point to the pdf document describing the FIPS 180-2 version of the algorithm(s) as this document describes the SHA-1 algorithm in addition to the higher bit count SHA algorithms. AFAIK there's no text version, but then again, I didn't look too hard for one. FIPS 180-2 URL: http://csrc.nist.gov/publications/fips/fips180-2/fips180- 2withchangenotice.pdf ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-29 09:35 Message: Logged In: YES user_id=11375 Fixed in CVS; thanks! ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978556&group_id=5470 From noreply at sourceforge.net Tue Jun 29 09:37:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 09:38:01 2004 Subject: [ python-Bugs-976880 ] mmap needs a rfind method Message-ID: Bugs item #976880, was opened at 2004-06-21 12:59 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976880&group_id=5470 Category: Extension Modules Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Nicholas Riley (nriley) Assigned to: Nobody/Anonymous (nobody) Summary: mmap needs a rfind method Initial Comment: It would be convenient to have an 'rfind' method equivalent to the string one; the only (slow, wasteful) alternative I can find is taking slices of the mmap, or using successive seek/read, followed by rindex. ---------------------------------------------------------------------- >Comment By: A.M. 
Kuchling (akuchling) Date: 2004-06-29 09:37 Message: Logged In: YES user_id=11375 The best way of seeing an rfind() method would be to implement it and submit a patch. (Otherwise it's a feature request, and processing bugs usually takes precendence over implementing features.) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=976880&group_id=5470 From noreply at sourceforge.net Tue Jun 29 09:52:21 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 09:52:27 2004 Subject: [ python-Bugs-948970 ] PyExc_* not in index Message-ID: Bugs item #948970, was opened at 2004-05-06 02:30 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=948970&group_id=5470 Category: Documentation >Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Thomas Heller (theller) >Assigned to: A.M. Kuchling (akuchling) Summary: PyExc_* not in index Initial Comment: The PyExc_* variables are not in the Python/C API index. This prevents finding them easily. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2004-06-29 09:52 Message: Logged In: YES user_id=11375 Fixed in CVS. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=948970&group_id=5470 From noreply at sourceforge.net Tue Jun 29 11:23:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 11:23:20 2004 Subject: [ python-Bugs-982008 ] PyObject_Str() and PyObject_Repr() corrupt object type Message-ID: Bugs item #982008, was opened at 2004-06-29 08:23 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982008&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Kevin Gadd (janusfury) Assigned to: Nobody/Anonymous (nobody) Summary: PyObject_Str() and PyObject_Repr() corrupt object type Initial Comment: For background: I am working on a modification to a game engine that uses Python for automation. I am trying to expose a few new objects to python scripts. The engine already exposes a dozen or so different objects to scripts, and I am exposing these new objects the exact same way. I am unable to override str() or repr() for my objects. If I set the appropriate member in the type structure (tp_str or tp_repr), PyObject_Str()/PyObject_Repr() mangle the object when trying to convert it. If I leave the structure member empty, everything works (except for the str()/repr() override not working, naturally.) I've gone over all my code multiple times, and can't find anything that could possibly cause this. When I step through the code in the VC++ debugger, I can see the ob_type member of my object become corrupted at a certain point in PyObject_Str()'s execution. This block of code: if (v->ob_type->tp_str == NULL) return PyObject_Repr(v); res = (*v->ob_type->tp_str)(v); Is apparently the culprit. If tp_str is NULL, the line following the if is never reached. 
If it is set, however, that line is executed, and for some reason (one which I cannot determine, since tp_str isn't documented at all), that line calls a method of std::string, and once that function pointer call returns, my object's ob_type member has been changed to an invalid pointer (usually something like 0x00000032.) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982008&group_id=5470 From noreply at sourceforge.net Tue Jun 29 11:54:30 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 11:54:34 2004 Subject: [ python-Bugs-982008 ] PyObject_Str() and PyObject_Repr() corrupt object type Message-ID: Bugs item #982008, was opened at 2004-06-29 08:23 Message generated for change (Comment added) made by janusfury You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982008&group_id=5470 Category: None Group: None >Status: Closed Resolution: None Priority: 5 Submitted By: Kevin Gadd (janusfury) Assigned to: Nobody/Anonymous (nobody) Summary: PyObject_Str() and PyObject_Repr() corrupt object type Initial Comment: For background: I am working on a modification to a game engine that uses Python for automation. I am trying to expose a few new objects to python scripts. The engine already exposes a dozen or so different objects to scripts, and I am exposing these new objects the exact same way. I am unable to override str() or repr() for my objects. If I set the appropriate member in the type structure (tp_str or tp_repr), PyObject_Str()/PyObject_Repr() mangle the object when trying to convert it. If I leave the structure member empty, everything works (except for the str()/repr() override not working, naturally.) I've gone over all my code multiple times, and can't find anything that could possibly cause this. When I step through the code in the VC++ debugger, I can see the ob_type member of my object become corrupted at a certain point in PyObject_Str()'s execution. This block of code: if (v->ob_type->tp_str == NULL) return PyObject_Repr(v); res = (*v->ob_type->tp_str)(v); Is apparently the culprit. If tp_str is NULL, the line following the if is never reached. If it is set, however, that line is executed, and for some reason (one which I cannot determine, since tp_str isn't documented at all), that line calls a method of std::string, and once that function pointer call returns, my object's ob_type member has been changed to an invalid pointer (usually something like 0x00000032.) ---------------------------------------------------------------------- >Comment By: Kevin Gadd (janusfury) Date: 2004-06-29 08:54 Message: Logged In: YES user_id=313047 Sorry, this issue turned out to be a global scope problem. My tp_str method had the same name and signature as a method defined globally by a library I'm currently using, and for some reason the debugger couldn't jump into it. Sorry for the trouble. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982008&group_id=5470 From noreply at sourceforge.net Tue Jun 29 12:34:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 12:34:25 2004 Subject: [ python-Bugs-982049 ] Relative Path causing crash in RotatingFileHandler Message-ID: Bugs item #982049, was opened at 2004-06-29 10:34 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982049&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: David London (groodude) Assigned to: Nobody/Anonymous (nobody) Summary: Relative Path causing crash in RotatingFileHandler Initial Comment: When using the RotatingFileHandler class, if the working directory is changed after the logging is setup and a relative file name has been used to set up the handler, python will crash when a rollover is attempted. In my application, I set up the logger first (using a config file) and then I start reading the config file for the application. This is done so that I can log any errors found within the application config file. However, if a certain option is set, then the application has to change the working directory. But since the file name that I have included is relative, when the logger attempts to rollover the file, it crashes since the log file can no longer be found within the current directory. Is this the desired behaviour (i.e. does the logger expect to have absolute paths)? If so, this would be a good thing to add to the documentation (along with an example of a RotatingFileHandler configuration in section 6.28.7.2). It seems to make more sense that when a file name is passed in, the absolute path be stored. Even if this is the desired behaviour, I think that an example of the RotatingFileConfig handler should be included in the configuration section, complete with values for the maxbytes and the number of back ups. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982049&group_id=5470 From noreply at sourceforge.net Tue Jun 29 14:55:26 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 14:55:30 2004 Subject: [ python-Bugs-980092 ] tp_subclasses grow without bounds Message-ID: Bugs item #980092, was opened at 2004-06-26 01:54 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980092&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Eric Huss (ehuss) Assigned to: Nobody/Anonymous (nobody) Summary: tp_subclasses grow without bounds Initial Comment: Python 2.3.4 When heap allocated type objects are created, they will be added to their base class's tp_subclasses list as a weak reference. If, for example, your base type is PyBaseObject_Type, then the tp_subclasses list for the base object type will grow for each new object. Unfortunately remove_subclass is never called. If your newly create type objects are deleted, then you will end up with a bunch of weak reference objects in the tp_subclasses list that do not reference anything. Perhaps remove_subclass should be called inside type_dealloc? 
Or, better yet, tp_subclasses should be a Weak Set. I'm not certain what's the best solution. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-29 19:55 Message: Logged In: YES user_id=6656 It's not quite as bad as you might think, because add_subclass will stomp on a dead reference if it finds one. So the length of tp_subclasses is bounded by the number of bases that exist at any one time, which doesn't seem too bad to me. Still, it would seem cleaner to do the removal at type_dealloc time... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980092&group_id=5470 From noreply at sourceforge.net Tue Jun 29 14:59:22 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 14:59:27 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 00:28 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. def cmdloop(self, intro=None): ... self.preloop() try: ... finally: # Make sure postloop called self.postloop() I am attaching my patched version of cmd.py. 
It was originally from the tarball of Python 2.3.3 downloaded from Python.org some month or so ago in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py Best regards, Sverker Nilsson ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2004-06-29 19:59 Message: Logged In: YES user_id=6656 OK, so I was misreading (or reading an old version, or something). I agree with your comments about the bogosities, however it's not my problem today :-) Do you think I should check the attached patch in? I do. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-23 18:58 Message: Logged In: YES user_id=356603 The constructor takes parameters stdin and stdout, and sets self.stdin and self.stdout from them or from sys. sys.std[in,out] are then not used directly except implicitly by raw_input. This seems to have changed somewhere between Python 2.2 and 2.3. Also, I remember an old version had the cmdqueue list as a class variable which was not at all thread safe - now it is an instance variable. I am hoping/thinking it is thread safe now... It seems superfluos to have both the use_rawinput flag and stdin parameter. At least raw_input should not be used if stdin is some other file than the raw input. But I don't have a simple suggestion to fix this, for one thing, it wouldn't be sufficient to compare the stdin parameter to sys.stdin since that file could have been changed so it wasn't the raw stdin anymore. Perhaps the module could store away the original sys.stdin as it was imported... but that wouldn't quite work since there is no guarantee sys.stdin hadn't already been changed. I guess if it is worth the trouble, if someone has an idea, could be a good thing to fix, anyway... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-22 10:07 Message: Logged In: YES user_id=6656 Um. Unless I'm *hopelessly* misreading things, cmd.Cmd writes to sys.stdout unconditionally and either calls raw_input() or sys.stdin.readline(). So I'm not sure how one would "use a cmd.Cmd instance in a separate thread, talking eg via a socket file" without rewriting such huge amounts of the class that thread- safety becomes your own problem. Apologies if I'm being dumb... also, please note: I didn't write this module :-) ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-22 01:34 Message: Logged In: YES user_id=356603 Your comment about threads worries me, I am not sure I understand it. Would it be unsafe to use a cmd.Cmd instance in a separate thread, talking eg via a socket file? The instance is used only by that thread and not by others, but there may be other threads using other instances. I understand that it could be unsafe to have two threads share the same instance, but how about different instances? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-21 11:35 Message: Logged In: YES user_id=6656 Well, that would seem to be easy enough to fix (see attached). If you're using cmd.Cmd instances from different threads at the same time, mind, I think you're screwed anyway. You're certainly walking on thin ice... ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 20:13 Message: Logged In: YES user_id=356603 I think it is OK. 
Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 13:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 11:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 08:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 17:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 10:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 02:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 From noreply at sourceforge.net Tue Jun 29 15:46:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 15:46:55 2004 Subject: [ python-Bugs-951851 ] Crash when reading "import table" of certain windows DLLs Message-ID: Bugs item #951851, was opened at 2004-05-11 13:02 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 Category: Windows Group: Python 2.3 Status: Open Resolution: None Priority: 7 Submitted By: Mark Hammond (mhammond) Assigned to: Nobody/Anonymous (nobody) Summary: Crash when reading "import table" of certain windows DLLs Initial Comment: As diagnosed by Thomas Heller, via the python-win32 list. On Windows 2000, if your sys.path includes the Windows system32 directory, 'import wmi' will crash Python. To reproduce, change to the system32 directory, start Python, and execute 'import wmi'. Note that Windows XP does not crash. 
The problem is in GetPythonImport(), in code that tries to walk the PE 'import table'. AFAIK, this is code that checks the correct Python version is used, but I've never seen this code before. I'm not sure why the code is actually crashing (ie, what assumptions made about the import table are wrong), but I have a patch that checks a the pointers are valid before they are de-referenced. After the patch is applied, the result is the correct: "ImportError: dynamic module does not define init function (initwmi)" exception. Assigning to thomas for his review, then I'm happy to check in. I propose this as a 2.3 bugfix. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2004-06-29 21:46 Message: Logged In: YES user_id=11105 Sorry if I was confusing. I simply want to know if this should be checked in or not. Maybe you can review the code, and/or try it out. Thanks. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-06-29 01:20 Message: Logged In: YES user_id=14198 I'm sorry, but I'm not sure what I don't listen to :) You are correct about being short of time though. What would you like me to do? ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-28 23:28 Message: Logged In: YES user_id=11105 It seems Mark doesn't listen (or don't have time). I'd like to check this in for 2.4. Any objections? ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-01 20:35 Message: Logged In: YES user_id=11105 This is not yet accepted. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-06-01 19:45 Message: Logged In: YES user_id=11105 The reason the current code crashed when Python tries to import Win2k's or XP's wmi.dll as extension is that the size of the import table in this dll is zero. The first patch 'dynload_win.c-1.patch' fixes this by returning NULL in that case. The code, however, doesn't do what is intended in a debug build of Python. It looks for imports of 'python23.dll', when it should look for 'python23_d.dll' instead. The second patch 'dynload_win.c-2.patch' fixes this also (and includes the first patch as well). ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 03:56 Message: Logged In: YES user_id=14198 Seeing as it was the de-referencing of 'import_name' that crashed, I think a better patch is to terminate the outer while look as soon as we hit a bad string. Otherwise, this outer loop continues, 20 bytes at a time, until the outer pointer ('import_data') also goes bad or happens to reference \0. Attaching a slightly different patch, with comments and sizeof() change. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-12 01:00 Message: Logged In: YES user_id=14198 OK - will change to 12+so(WORD) And yes, I had seen this code - I meant "familiar with" :) Tim: Note that the import failure is not due to a bad import table (but the crash was). This code is trying to determine if a different version of Python is used. We are effectively skipping that check, and landing directly in the "does it have an init function?", then faling normally - ie, the code is now correctly *not* finding other Python versions linked against it. 
Thus, a different error message doesn't make much sense to me. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:45 Message: Logged In: YES user_id=11105 Oh, we have to give the /all option to dumpbin ;-) ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 17:42 Message: Logged In: YES user_id=11105 Tim, I don't think the import table format has changed, instead wmi.dll doesn't have an import table (for whatever reason). Maybe the code isn't able to handle that correctly. Since Python 2.3 as well at it's extensions are still built with MSVC 6, I *think* we should be safe with this code. I'll attach the output of running MSVC.NET 2003's 'dumpbin.exe \windows\system32\wmi.dll' on my WinXP Pro SP1 for the curious. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-11 17:20 Message: Logged In: YES user_id=31435 Mark, while you may not have seen this code before, you checked it in . IIRC, though, the person who *created* the patch was never heard from again. I don't understand what the code thinks it's doing either, exactly. The obvious concern: if the import table format has changed, won't we also fail to import legit C extensions? I haven't seen that happen yet, but I haven't yet built any extensions using VC 7.1 either. In any case, I'd agree it's better to get a mysterious import failure than a crash. Maybe the detail in the ImportError could be changed to indicate whan an import failure is due to a bad pointer. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2004-05-11 16:49 Message: Logged In: YES user_id=11105 IMO, IsBadReadPointer(import_data, 12 + sizeof(DWORD)) should be enough. Yes, please add a comment in the code. This is a little bit hackish, but it fixes the problem. And the real problem can always be fixed later, if needed. And, BTW, python 2.3.3 crashes on Windows XP as well. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2004-05-11 13:05 Message: Logged In: YES user_id=14198 Actually, I guess a comment regarding the pointer checks and referencing this bug would be nice :) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470 From noreply at sourceforge.net Tue Jun 29 15:54:57 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 15:54:59 2004 Subject: [ python-Bugs-982215 ] Next button not greyed out during file copy. Message-ID: Bugs item #982215, was opened at 2004-06-29 19:54 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982215&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Phil Rittenhouse (prittenh) Assigned to: Nobody/Anonymous (nobody) Summary: Next button not greyed out during file copy. Initial Comment: I've noticed a problem with a couple of distutils based installers (py2exe and pywin32) that I think may be a problem in all distutils based windows installers. When you click Next to start the file copy, the Next button is not greyed out or disabled. 
If you click it again, it seems to start another file copy process running, because you then get an "Overwrite Existing Files?" dialog. If you click Yes, the install may throw a Runtime Error with the message "The process cannot access the file because it is being used by another process..." and it may lock up. I'm running Python 2.3.3 and have seen this in py2exe 0.5.0 and pywin32 build 201. I have tried a number of Windows OSs including 2000 and ME. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982215&group_id=5470 From noreply at sourceforge.net Tue Jun 29 15:55:55 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 15:56:04 2004 Subject: [ python-Bugs-982215 ] Next button not greyed out during file copy. Message-ID: Bugs item #982215, was opened at 2004-06-29 19:54 Message generated for change (Settings changed) made by prittenh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982215&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Phil Rittenhouse (prittenh) >Assigned to: Thomas Heller (theller) Summary: Next button not greyed out during file copy. Initial Comment: I've noticed a problem with a couple of distutils based installers (py2exe and pywin32) that I think may be a problem in all distutils based windows installers. When you click Next to start the file copy, the Next button is not greyed out or disabled. If you click it again, it seems to start another file copy process running, because you then get an "Overwrite Existing Files?" dialog. If you click Yes, the install may throw a Runtime Error with the message "The process cannot access the file because it is being used by another process..." and it may lock up. I'm running Python 2.3.3 and have seen this in py2exe 0.5.0 and pywin32 build 201. I have tried a number of Windows OSs including 2000 and ME. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982215&group_id=5470 From noreply at sourceforge.net Tue Jun 29 16:38:36 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 16:38:45 2004 Subject: [ python-Bugs-215126 ] Over restricted type checking on eval() function Message-ID: Bugs item #215126, was opened at 2000-09-22 14:36 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 6 Submitted By: Nobody/Anonymous (nobody) Assigned to: Armin Rigo (arigo) Summary: Over restricted type checking on eval() function Initial Comment: The built-in function eval() takes a string argument and a dictionary. The second argument should allow any instance which defines __getitem__ as opposed to just dictionaries. 
The following example creates a type error: eval, argument 2: expected dictionary, instance found

    class SpreadSheet:
        _cells = {}
        def __setitem__( self, key, formula ):
            self._cells[key] = formula
        def __getitem__( self, key ):
            return eval( self._cells[key], self )

    ss = SpreadSheet()
    ss['a1'] = '5'
    ss['a2'] = 'a1*5'
    ss['a2']

----------------------------------------------------------------------

>Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-29 15:38 Message: Logged In: YES user_id=80475 Are you comfortable doing this in two steps? Commit the patch as-is so that eval() works properly and then craft a separate patch to thoroughly implement, test, and document the same thing for exec and execfile()?

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo) Date: 2004-06-29 05:36 Message: Logged In: YES user_id=4771 Doing the type check in exec and execfile() but not in eval() is not something that seems particularly useful to me. Any program can be written as an expression in Python if you are crazy enough to do that... So it doesn't offer any extra security to be more strict in exec than in eval(). People who really want to do it would have to go through incredible pain just to work around the type check. For the implications, I believe it is sufficient (and necessary) to carefully review all usages of f_locals throughout the code, and document f_locals as no longer necessarily a dictionary for those extension writers that would have used this fact.

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-29 03:51 Message: Logged In: YES user_id=80475 Okay, a new patch is ready for second review. It is essentially Armin's patch with the following changes: * leaves the syntax for eval() intact with no automatic globals=locals trick * has the faster approach for LOAD_NAME, incorporating your suggestion for PyDict_CheckExact * omits the code to enable the same liberties for '.exec'. That is beyond the OP's request and well beyond something whose implications I've thought through. Am not opposed to it, but would like it to be left for a separate patch with thorough unittests.

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 13:08 Message: Logged In: YES user_id=80475 Nix, that last comment. Have examples that call setitem().

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 12:36 Message: Logged In: YES user_id=80475 Changed to PyDict_CheckExact(). Are you sure about having to change the sets and dels? I've tried several things at the interactive prompt and can't get it to fail: >>> [locals() for i in (2,3)] Do you have any examples (in part, so I can use them as test cases)?

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo) Date: 2004-06-27 06:33 Message: Logged In: YES user_id=4771 eval() can set items in the locals, e.g. by using a list comprehension. Moreover, the ability to exec in custom dicts would be extremely useful too, e.g. to catch definitions while they appear. If you really want to avoid any performance impact, using PyDict_CheckExact() in all critical parts looks good (do not use PyDict_Check(); it's both slower and not what we want, because it prevents subclasses of dict from overriding __xxxitem__).
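For illustration, a minimal sketch (not part of any attached patch) of how the OP's example would read once eval() accepts an arbitrary mapping as its locals argument; the two-argument form stays dict-only, so the mapping is passed as the third argument:

    class SpreadSheet:
        def __init__(self):
            self._cells = {}
        def __setitem__(self, key, formula):
            self._cells[key] = formula
        def __getitem__(self, key):
            # Name lookups inside the formula come back through this
            # __getitem__, so cells can reference other cells.
            return eval(self._cells[key], {}, self)

    ss = SpreadSheet()
    ss['a1'] = '5'
    ss['a2'] = 'a1*5'
    print(ss['a2'])   # -> 25

The empty dict supplies the real globals, so only the LOAD_NAME path has to learn about general mappings, which is the trade-off discussed in the comments above.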
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 03:37 Message: Logged In: YES user_id=80475 Okay, the patch is ready for second review. Attaching the revision with unittests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-27 02:50 Message: Logged In: YES user_id=80475 Attaching my minimal version of the patch. It runs the attached demo without exception and does not measurably show on any of my timings. The approach is to restrict the generalization to eval() instead of exec(). Since eval() can't set values in the locals dict, no changes are needed to the setitem and delitem calls. Instead of using PyObject_GetItem() directly, I do a regular lookup and fallback to the generalizaiton if necessary -- this is why the normal case doesn't get slowed down (the cost is a PyDict_Check which uses values already in cache, and a branch predicatable comparison).. While the demo script runs, and the test_suite passes, it is slightly too simple and doesn't yet handle eval('dir()', globals(), M()) where M is a non-dict mapping. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-23 11:59 Message: Logged In: YES user_id=80475 The quick patch works fine. Do change the PyArg_ParseTuple() into the faster PyArg_UnpackTuple(). Does this patch show any changes to pystone or other key metrics? Would the PyDict_GetItem trick have better performance? My guess is that it would. +1 on using PyMapping_Check() for checking the locals argument to eval(). That is as good as you can get without actually running it. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2004-06-22 16:07 Message: Logged In: YES user_id=4771 Quick patch attached. I didn't try to use the PyDict_GetItem trick described, but just systematically use PyObject_GetItem/SetItem/DelItem when working with f_locals. This might confuse some extension modules that expect PyEval_GetLocals() to return a dict object. The eval trick is now: eval(code, nondict) --> eval(code, globals(), nondict). Besides eval() I removed the relevant typecheck from execfile() and the exec statement. Any other place I am missing? We might want to still somehow check the type of the locals, to avoid strange errors caused by e.g. eval("a", "b"). PyMapping_Check() is the obvious candidate, but it looks like a hack. More testing is needed. test_descrtut.py line 84 now succeeds, unexpectedly, which is interpreted as a test failure. Needs some docs, too. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-12 02:27 Message: Logged In: YES user_id=80475 Armin, can you whip-up a quick patch so that we can explore the implications of your suggestion. Ideally, I would like to see something like this go in for Py2.4. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-22 11:53 Message: Logged In: YES user_id=80475 +1 Amrin's idea provides most of the needed functionality with zero performance impact. Also, using a local dictionary for the application variables is likely to be just exactly what a programmer would want to do. 
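As a Python-level sketch of the lookup rule being proposed (illustrative only; the real change is in the C implementation of LOAD_NAME, and the helper name here is made up):

    def load_name(name, f_locals, f_globals, f_builtins):
        if type(f_locals) is dict:          # the PyDict_CheckExact fast path
            if name in f_locals:
                return f_locals[name]
        else:
            try:
                return f_locals[name]       # generic mapping lookup
            except KeyError:
                pass
        for scope in (f_globals, f_builtins):
            if name in scope:
                return scope[name]
        raise NameError("name %r is not defined" % name)

Exact dicts keep today's behaviour; any other mapping is consulted through its own __getitem__ and simply falls through to globals and builtins when it raises KeyError.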
----------------------------------------------------------------------

Comment By: Armin Rigo (arigo) Date: 2004-05-19 13:21 Message: Logged In: YES user_id=4771 With minimal work and performance impact, we could allow frame->f_locals to be something other than a real dictionary. It looks like it would be easily possible as long as we don't touch the time-critical frame->f_globals. Code compiled by eval() always uses LOAD_NAME and STORE_NAME, which is anyway allowed to be a bit slower than LOAD_FAST or LOAD_GLOBAL. Note that PyDict_GetItem() & co., as called by LOAD/STORE_NAME, do a PyDict_Check() anyway. We could just replace it with PyDict_CheckExact() and if it fails fall back to calling the appropriate ob_type->tp_as_mapping method. This would turn some of the PyDict_Xxx() API into a general mapping API, half-way between its current dict-only usage and the fully abstract PyObject_Xxx() API. This is maybe a bit strange from the user's point of view, because eval("expr", mydict) still wouldn't work: only eval("expr", {}, mydict) would. Now, there is no way I can think of that code compiled by eval() could contain LOAD_GLOBAL or STORE_GLOBAL. The only way to tell the difference between eval("expr", mydict) and eval("expr", {}, mydict) appears to be by calling globals() or somehow inspecting the frame directly. Therefore it might be acceptable to redefine the two-argument eval(expr, dict) to be equivalent not to eval(expr, dict, dict) but to eval(expr, {}, dict). This hack might be useful enough even if it won't work with the exec statement.

----------------------------------------------------------------------

Comment By: Philippe Fremy (pfremy) Date: 2004-03-03 04:40 Message: Logged In: YES user_id=233844 I have exactly the same need as the original poster. The only difference in my case is that I inherited from dict. I want to use the eval() function as a simple expression evaluator. I have the following dictionary:

    d['a'] = '1'
    d['b'] = '2'
    d['c'] = 'a+b'

I want the following results:

    d[a] -> 1
    d[b] -> 2
    d[c] -> 3

To do that, I was planning to use the eval() function and overload the __getitem__ of the global or local dictionary:

    class MyDict( dict ):
        def __getitem__( self, key ):
            print "__getitem__", key
            val = dict.__getitem__( self, key )
            print "val = '%s'" % val
            return eval( val, self )

But it does not work:

    d[a]:
    __getitem__ a
    val = '1'
    -> 1
    d[b]:
    __getitem__ b
    val = '2'
    -> 2
    d[c]:
    __getitem__ c
    val = 'a+1'
    ERROR
    Traceback (most recent call last):
      File "test_parse_jaycos_config.py", line 83, in testMyDict
        self.assertEquals( d['c'], 2 )
      File "parse_config_file.py", line 10, in __getitem__
        return eval( val , self )
      File "<string>", line 0, in ?
    TypeError: cannot concatenate 'str' and 'int' objects

d['c'] did fetch the 'a+1' value, which was passed to eval. However, eval() tried to evaluate the expression using the content of the dictionary instead of using __getitem__. So it fetched the string '1' for 'a' instead of the value 1, so the result is not suitable for the addition.

----------------------------------------------------------------------

Comment By: Michael Chermside (mcherm) Date: 2004-01-12 17:01 Message: Logged In: YES user_id=99874 Hmm... I like this! Of course, I am wary of adding *yet another* special double-underscore function to satisfy a single special purpose, but this would satisfy all of *my* needs, and without any performance impact for lookups that are FOUND.
Lookups that are NOT found would have a slight performance degrade (I know better than to speculate about the size of the effect without measuring it). I'm not really sure what percentage of dict lookups succeed. At any rate, what are these "other contexts" you mention in which a __keyerror__ would "also be useful"? Because if we can find other places where it is useful, that significantly improves the usefulness. OTOH, if the tests can be done ONLY for eval (I don't really think that's possible), then I'm certain no one cares about the performance of eval. ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2004-01-12 14:38 Message: Logged In: YES user_id=292741 This may be a compromise solution, which could also be useful in other contexts: What if the object passed is derived from dict - presumably that doesn't help because the point is to do low-level calls to the actual dict's lookup functions. Now, suppose we modify the basic dict type, so that before throwing a KeyError, it checks to see if it is really a derived object with a method __keyerror__, and if so, calls that and returns its result (or lets it throw)? Now you can make objects that look like dicts, and act like them at the low level, but can automatically populate themselves when non-existent keys are requested. Of course, __keyerror__('x') does not have to make an 'x' entry in the dict; it could make no change, or add several entries, depending on the desired semantics regarding future lookups. It could be set up so that every lookup fails and is forwarded by __keyerror__ to the __getitem__ of another object, for instance. The cost of this to the 'normal' dict lookup is that the need to do PyDict_CheckExact() each time a lookup fails. ---------------------------------------------------------------------- Comment By: Michael Chermside (mcherm) Date: 2003-12-16 10:37 Message: Logged In: YES user_id=99874 I'll just add my voice as somebody who would find this to be "darn handy" if it could ever done without noticably impacting the speed of python code that DOESN'T use eval(). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 23:18 Message: Added to PEP 42. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-09-22 14:42 Message: Changed Group to Feature Request. Should be added to PEP 42 (I'll do that if nobody beats me to it). CPython requires a genuine dict now for speed. I believe JPython allows any mapping object. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=215126&group_id=5470 From noreply at sourceforge.net Tue Jun 29 17:30:56 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 17:31:04 2004 Subject: [ python-Bugs-972724 ] Python 2.3.4, Solaris 7, socketmodule.c does not compile Message-ID: Bugs item #972724, was opened at 2004-06-14 13:36 Message generated for change (Comment added) made by brucedray You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972724&group_id=5470 Category: Build Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Bruce D. 
Ray (brucedray) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.3.4, Solaris 7, socketmodule.c does not compile Initial Comment: Build of Python 2.3.4 on Solaris 7 with SUN WorkshopPro compilers fails to compiile socketmodule.c with the following error messages on the build: "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 2979: undefined symbol: AF_INET6 "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3023: undefined symbol: INET_ADDRSTRLEN "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3023: integral constant expression expected "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3053: warning: improper pointer/integer combination: op "=" "/export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c", line 3062: warning: statement not reached cc: acomp failed for /export/home/bruce/python/Python-2.3.4/Modules/socketmodule.c As a consequence of the above, make test gave errors for every test that attempted to import _socket Other error messages in the build not related to the socketmodule.c issue were: "Objects/frameobject.c", line 301: warning: non-constant initializer: op "--" "Objects/frameobject.c", line 303: warning: non-constant initializer: op "--" "Objects/stringobject.c", line 1765: warning: statement not reached "Python/ceval.c", line 3427: warning: non-constant initializer: op "--" "Python/ceval.c", line 3550: warning: non-constant initializer: op "--" "Python/ceval.c", line 3551: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/pypcre.c", line 995: warning: non-constant initializer: op "++" "/export/home/bruce/python/Python-2.3.4/Modules/pypcre.c", line 2574: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/unicodedata.c", line 313: warning: non-constant initializer: op "--" "/export/home/bruce/python/Python-2.3.4/Modules/parsermodule.c", line 2493: warning: non-constant initializer: op "++" "/export/home/bruce/python/Python-2.3.4/Modules/expat/xmlparse.c", line 3942: warning: non-constant initializer: op "<<=" Configuration output was: checking MACHDEP... sunos5 checking EXTRAPLATDIR... checking for --without-gcc... no checking for --with-cxx=... no checking for c++... no checking for g++... no checking for gcc... no checking for CC... CC checking for C++ compiler default output... a.out checking whether the C++ compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for gcc... no checking for cc... cc checking for C compiler default output... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for egrep... egrep checking for AIX... no checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... no checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking for --with-suffix... 
checking for case-insensitive build directory... no checking LIBRARY... libpython$(VERSION).a checking LINKCC... $(PURIFY) $(CC) checking for --enable-shared... no checking LDLIBRARY... libpython$(VERSION).a checking for ranlib... ranlib checking for ar... ar checking for a BSD-compatible install... ./install-sh -c checking for --with-pydebug... no checking whether cc accepts -OPT:Olimit=0... no checking whether cc accepts -Olimit 1500... no checking whether pthreads are available without options... no checking whether cc accepts -Kpthread... no checking whether cc accepts -Kthread... no checking whether cc accepts -pthread... no checking whether CC also accepts flags for thread support... no checking for ANSI C header files... (cached) yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking grp.h usability... yes checking grp.h presence... yes checking for grp.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking ncurses.h usability... no checking ncurses.h presence... no checking for ncurses.h... no checking poll.h usability... yes checking poll.h presence... yes checking for poll.h... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking signal.h usability... yes checking signal.h presence... yes checking for signal.h... yes checking stdarg.h usability... yes checking stdarg.h presence... yes checking for stdarg.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking stropts.h usability... yes checking stropts.h presence... yes checking for stropts.h... yes checking termios.h usability... yes checking termios.h presence... yes checking for termios.h... yes checking thread.h usability... yes checking thread.h presence... yes checking for thread.h... yes checking for unistd.h... (cached) yes checking utime.h usability... yes checking utime.h presence... yes checking for utime.h... yes checking sys/audioio.h usability... yes checking sys/audioio.h presence... yes checking for sys/audioio.h... yes checking sys/bsdtty.h usability... no checking sys/bsdtty.h presence... no checking for sys/bsdtty.h... no checking sys/file.h usability... yes checking sys/file.h presence... yes checking for sys/file.h... yes checking sys/lock.h usability... yes checking sys/lock.h presence... yes checking for sys/lock.h... yes checking sys/mkdev.h usability... yes checking sys/mkdev.h presence... yes checking for sys/mkdev.h... yes checking sys/modem.h usability... no checking sys/modem.h presence... no checking for sys/modem.h... no checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... 
yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking sys/un.h usability... yes checking sys/un.h presence... yes checking for sys/un.h... yes checking sys/utsname.h usability... yes checking sys/utsname.h presence... yes checking for sys/utsname.h... yes checking sys/wait.h usability... yes checking sys/wait.h presence... yes checking for sys/wait.h... yes checking pty.h usability... no checking pty.h presence... no checking for pty.h... no checking term.h usability... no checking term.h presence... yes configure: WARNING: term.h: present but cannot be compiled configure: WARNING: term.h: check for missing prerequisite headers? configure: WARNING: term.h: proceeding with the preprocessor's result configure: WARNING: ## ------------------------------------ ## configure: WARNING: ## Report this to bug-autoconf@gnu.org. ## configure: WARNING: ## ------------------------------------ ## checking for term.h... yes checking libutil.h usability... no checking libutil.h presence... no checking for libutil.h... no checking sys/resource.h usability... yes checking sys/resource.h presence... yes checking for sys/resource.h... yes checking netpacket/packet.h usability... no checking netpacket/packet.h presence... no checking for netpacket/packet.h... no checking sysexits.h usability... yes checking sysexits.h presence... yes checking for sysexits.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking whether sys/types.h defines makedev... no checking for sys/mkdev.h... (cached) yes checking for clock_t in time.h... yes checking for makedev... no checking Solaris LFS bug... no checking for mode_t... yes checking for off_t... yes checking for pid_t... yes checking return type of signal handlers... void checking for size_t... yes checking for uid_t in sys/types.h... yes checking for int... yes checking size of int... 4 checking for long... yes checking size of long... 4 checking for void *... yes checking size of void *... 4 checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for float... yes checking size of float... 4 checking for double... yes checking size of double... 8 checking for fpos_t... yes checking size of fpos_t... 8 checking for long long support... yes checking for long long... yes checking size of long long... 8 checking for uintptr_t support... no checking size of off_t... 8 checking whether to enable large file support... yes checking size of time_t... 4 checking for pthread_t... yes checking size of pthread_t... 4 checking for --enable-toolbox-glue... no checking for --enable-framework... no checking for dyld... no checking SO... .so checking LDSHARED... $(CC) -G checking CCSHARED... checking LINKFORSHARED... checking CFLAGSFORSHARED... checking SHLIBS... $(LIBS) checking for dlopen in -ldl... yes checking for shl_load in -ldld... no checking for library containing sem_init... -lrt checking for textdomain in -lintl... yes checking for t_open in -lnsl... yes checking for socket in -lsocket... yes checking for --with-libs... no checking for --with-signal-module... yes checking for --with-dec-threads... no checking for --with-threads... yes checking for _POSIX_THREADS in unistd.h... yes checking cthreads.h usability... no checking cthreads.h presence... no checking for cthreads.h... 
no checking mach/cthreads.h usability... no checking mach/cthreads.h presence... no checking for mach/cthreads.h... no checking for --with-pth... no checking for pthread_create in -lpthread... yes checking for usconfig in -lmpc... no checking if PTHREAD_SCOPE_SYSTEM is supported... yes checking for pthread_sigmask... yes checking if --enable-ipv6 is specified... no checking for --with-universal-newlines... yes checking for --with-doc-strings... yes checking for --with-pymalloc... yes checking for --with-wctype-functions... no checking for --with-sgi-dl... no checking for --with-dl-dld... no checking for dlopen... yes checking DYNLOADFILE... dynload_shlib.o checking MACHDEP_OBJS... MACHDEP_OBJS checking for alarm... yes checking for chown... yes checking for clock... yes checking for confstr... yes checking for ctermid... yes checking for execv... yes checking for fork... yes checking for fpathconf... yes checking for ftime... yes checking for ftruncate... yes checking for gai_strerror... no checking for getgroups... yes checking for getlogin... yes checking for getloadavg... yes checking for getpeername... yes checking for getpgid... yes checking for getpid... yes checking for getpriority... yes checking for getpwent... yes checking for getwd... yes checking for kill... yes checking for killpg... yes checking for lchown... yes checking for lstat... yes checking for mkfifo... yes checking for mknod... yes checking for mktime... yes checking for mremap... no checking for nice... yes checking for pathconf... yes checking for pause... yes checking for plock... yes checking for poll... yes checking for pthread_init... no checking for putenv... yes checking for readlink... yes checking for realpath... yes checking for select... yes checking for setegid... yes checking for seteuid... yes checking for setgid... yes checking for setlocale... yes checking for setregid... yes checking for setreuid... yes checking for setsid... yes checking for setpgid... yes checking for setpgrp... yes checking for setuid... yes checking for setvbuf... yes checking for snprintf... yes checking for sigaction... yes checking for siginterrupt... yes checking for sigrelse... yes checking for strftime... yes checking for strptime... yes checking for sysconf... yes checking for tcgetpgrp... yes checking for tcsetpgrp... yes checking for tempnam... yes checking for timegm... no checking for times... yes checking for tmpfile... yes checking for tmpnam... yes checking for tmpnam_r... yes checking for truncate... yes checking for uname... yes checking for unsetenv... no checking for utimes... yes checking for waitpid... yes checking for wcscoll... yes checking for _getpty... no checking for chroot... yes checking for link... yes checking for symlink... yes checking for fchdir... yes checking for fsync... yes checking for fdatasync... yes checking for ctermid_r... no checking for flock... no checking for getpagesize... yes checking for true... true checking for inet_aton in -lc... no checking for inet_aton in -lresolv... yes checking for hstrerror... yes checking for inet_aton... yes checking for inet_pton... yes checking for setgroups... yes checking for openpty... no checking for openpty in -lutil... no checking for forkpty... no checking for forkpty in -lutil... no checking for fseek64... no checking for fseeko... yes checking for fstatvfs... yes checking for ftell64... no checking for ftello... yes checking for statvfs... yes checking for dup2... yes checking for getcwd... yes checking for strdup... 
yes checking for strerror... yes checking for memmove... yes checking for getpgrp... yes checking for setpgrp... (cached) yes checking for gettimeofday... yes checking for major... yes checking for getaddrinfo... no checking for getnameinfo... no checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for struct tm.tm_zone... no checking for tzname... yes checking for struct stat.st_rdev... yes checking for struct stat.st_blksize... yes checking for struct stat.st_blocks... yes checking for time.h that defines altzone... no checking whether sys/select.h and sys/time.h may both be included... yes checking for addrinfo... no checking for sockaddr_storage... no checking whether char is unsigned... no checking for an ANSI C-conforming const... yes checking for working volatile... yes checking for working signed char... yes checking for prototypes... yes checking for variable length prototypes and stdarg.h... yes checking for bad exec* prototypes... no checking if sockaddr has sa_len member... no checking whether va_list is an array... no checking for gethostbyname_r... yes checking gethostbyname_r with 6 args... no checking gethostbyname_r with 5 args... yes checking for __fpu_control... no checking for __fpu_control in -lieee... no checking for --with-fpectl... no checking for --with-libm=STRING... default LIBM="-lm" checking for --with-libc=STRING... default LIBC="" checking for hypot... yes checking wchar.h usability... yes checking wchar.h presence... yes checking for wchar.h... yes checking for wchar_t... yes checking size of wchar_t... 4 checking for UCS-4 tcl... no checking what type to use for unicode... unsigned short checking whether byte ordering is bigendian... yes checking whether right shift extends the sign bit... yes checking for getc_unlocked() and friends... yes checking for rl_pre_input_hook in -lreadline... no checking for rl_completion_matches in -lreadline... no checking for broken nice()... no checking for broken poll()... no checking for working tzset()... no checking for tv_nsec in struct stat... yes checking whether mvwdelch is an expression... yes checking whether WINDOW has _flags... yes checking for /dev/ptmx... yes checking for /dev/ptc... no checking for socklen_t... yes checking for build directories... done configure: creating ./config.status config.status: creating Makefile.pre config.status: creating Modules/Setup.config config.status: creating pyconfig.h creating Setup creating Setup.local creating Makefile ---------------------------------------------------------------------- >Comment By: Bruce D. Ray (brucedray) Date: 2004-06-29 16:30 Message: Logged In: YES user_id=1063363 I've checked further, and there is a problem in Sun's headers as supplied with Solaris 7 Server 5/99 edition that the standard set of free patches apparently does not address. For this edition of Solaris 7, sys/socket.h does not have either AF_INET6 or INET_ADDRSTRLEN defined (seen on multiple systems). 
socketmodule.c will compile correctly with SunWorkshopPro 5.0 compilers and passes the tests if the following lines are added prior to the first function in socketmodule.c:

    #ifndef AF_INET6          /* not present in some Solaris 7 versions */
    #define AF_INET6 26       /* use the Solaris 8 sys/socket.h definition */
    #endif

    #ifndef INET_ADDRSTRLEN   /* not present in some Solaris 7 versions */
    #define INET_ADDRSTRLEN 16   /* use the definition given for SGI above */
    #endif

----------------------------------------------------------------------

Comment By: Anthony Baxter (anthonybaxter) Date: 2004-06-18 06:36 Message: Logged In: YES user_id=29957 Hm. I don't get this on Solaris 7 using GCC. Can you try with gcc, and see if the problem is in Sun's headers on your system, or with the Sun compiler?

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=972724&group_id=5470

From noreply at sourceforge.net Tue Jun 29 17:34:51 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 17:35:04 2004 Subject: [ python-Bugs-979252 ] Trap OSError when calling RotatingFileHandler.doRollover Message-ID: Bugs item #979252, was opened at 2004-06-24 14:43 Message generated for change (Comment added) made by groodude You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979252&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Richard T. Hewitt (hewittr) Assigned to: Nobody/Anonymous (nobody) Summary: Trap OSError when calling RotatingFileHandler.doRollover

Initial Comment: I use the RotatingFileHandler in most of my scripts. The script will crash if the RotatingFileHandler encounters a locked log file. I would like to see something like:

    def emit(self, record):
        """
        Emit a record.

        Output the record to the file, catering for rollover as described
        in __init__().
        """
        if self.maxBytes > 0:   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                try:
                    self.doRollover()
                except Exception:
                    logging.FileHandler.emit(self, 'Failed to doRollover.')
        logging.FileHandler.emit(self, record)

My version of Python (2.3.2) had the wrong docstring as well, referring to a non-existent setRollover.

----------------------------------------------------------------------

Comment By: David London (groodude) Date: 2004-06-29 15:34 Message: Logged In: YES user_id=975020 Richard, I have also had the "joy" of encountering this bug (on a Windows 2000 machine), although I am still baffled at what could be locking the file. Our testing did not reveal this bug and now we have some code in a production environment that is causing me grief, so I had to come up with a solution quickly and I thought that I might share it with you (and others). I don't believe that your solution will solve the problem. In doRollover the stream is closed and then the files are manipulated. Since this is where the errors will occur, simply calling the emit method again will only cause more exceptions to be generated. Also, since the stream is never opened again, you would never get any log messages until you restarted. Sorry, after looking a bit more it seems like you will get tracebacks printed to the standard error (wherever you have designated that to be).
What I am doing is replacing my version of doRollover with this version; def doRollover(self): """ Do a rollover, as described in __init__(). """ self.stream.close() openMode = "w" if self.backupCount > 0: try: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) dfn = "%s.%d" % (self.baseFilename, i + 1) if os.path.exists(sfn): #print "%s -> %s" % (sfn, dfn) if os.path.exists(dfn): os.remove(dfn) os.rename(sfn, dfn) dfn = self.baseFilename + ".1" if os.path.exists(dfn): os.remove(dfn) os.rename(self.baseFilename, dfn) #print "%s -> %s" % (self.baseFilename, dfn) except: ## There was some problem with the file manipulationg above. ## If someone has some time, maybe they can figure out what errors ## can occur and the best way to deal with all of them. ## For my purposes, I'm just going to try and re-open ## the original file. openMode = "a" try: self.stream = open(self.baseFilename, openMode) except: ## There was some problem opening the log file. ## Again, someone with more time can look into all of the possible ## errors and figure out how best to handle them. ## I'm just going to try and open a 'problem' log file. ## The idea is that this file will exist long enough for the problems ## above to go away and the next time that we attempt to rollover ## it will work as we expect. gotError = True count = 1 while gotError: try: problemFilename = self.baseFilename + ".pblm." + str(count) self.stream = open(problemFilename, openMode) gotError = False except: count += 1 If my logic works out, this should always open a file for logging so the logging should never crash and no log messages will be lost. Hopefully you will find this helpful. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=979252&group_id=5470 From noreply at sourceforge.net Tue Jun 29 18:10:34 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Tue Jun 29 18:10:38 2004 Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects Message-ID: Bugs item #932563, was opened at 2004-04-09 21:51 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fred L. Drake, Jr. (fdrake) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: logging: need a way to discard Logger objects Initial Comment: There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error prone approach). ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2004-06-29 22:10 Message: Logged In: YES user_id=308438 I just had a further thought: is the approach below any good to you? Apart from not being able to use the root logger, it seems to meet your need. 
    import logging

    class MyLogger(logging.Logger):
        def makeRecord(self, name, level, fn, lno, msg, args, exc_info):
            record = logging.Logger.makeRecord(self, name, level, fn, lno,
                                               msg, args, exc_info)
            record.magicnumber = 0xDECAFBAD   # special number
            return record

    logging._loggerClass = MyLogger
    h = logging.StreamHandler()
    logger = logging.getLogger("mylogger")
    h.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(magicnumber)X %(message)s"))
    logger.addHandler(h)
    logger.warn("There's a custom attribute in my message!")

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip) Date: 2004-06-24 15:28 Message: Logged In: YES user_id=308438 Suppose I add a callable "recordMaker" to logging, and modify makeRecord() to call it with the logger + the args passed to makeRecord(). If it's necessary to add extra attrs to the LogRecord, this can be done by replacing recordMaker with your own callable. Seems less icky - what do you think? If you think it'll fly, are there any other args you think I need to pass into the callable?

----------------------------------------------------------------------

Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 14:06 Message: Logged In: YES user_id=3066 I've attached a file showing the class I came up with. I don't consider this to be a good wrapper, just what worked. I think one of the problems is that what I really want to override is the makeRecord() method, not the logging methods themselves. There's too much logic in those dealing with the disabling and level filtering that I don't want to duplicate, but as soon as I pass the calls on to the underlying logger, I can no longer change the makeRecord(). It would be possible to inject a new makeRecord() while my methods are active (in my definition for log() in the sample), and restore the original in a finally clause, but that feels... icky. The advantage of overriding makeRecord() is that the formatter can deal with how the additional information is added to the log, because more information is made available on the record.

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip) Date: 2004-06-24 10:58 Message: Logged In: YES user_id=308438 How about if I add a LoggerAdapter class which takes a logger in the __init__ and has logging methods debug(), info() etc. [and including _log()] which delegate to the underlying logger? Then you could subclass the Adapter and just override the methods you needed. Would that fit the bill? Of course the package can use a Logger-derived class, but this would apply to all loggers, whereas the LoggerAdapter could be used for just some of the loggers in a system.

----------------------------------------------------------------------

Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-24 04:13 Message: Logged In: YES user_id=3066 Looking at this again, after adjusting the application I have that used the connection-specific loggers, I decided that a different approach better solves the problem. What you've shown requires exactly what I wanted to avoid: having to make a gesture at each logging call (to transform the message). Instead of doing this, I ended up writing a wrapper for the logger objects that implements the methods log(), debug(), info(), warn(), warning(), error(), exception(), critical(), and fatal(). These methods each transform the message before calling the underlying logger.
It would be really nice to have something like this that isolates the final call to Logger._log() so specific implementations can simply override _log() (or some other helper routine that gets all the info) and maybe the __init__(). I don't think that's completely necessary, but would probably make it a lot easier to implement this pattern. There's probably some useful documentation improvements that could be made to help people avoid the issue of leaking loggers. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2004-06-10 16:50 Message: Logged In: YES user_id=3066 Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-06-09 16:01 Message: Logged In: YES user_id=31435 Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-06-09 09:28 Message: Logged In: YES user_id=308438 Fred, any more thoughts on this? Thanks, Vinay ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2004-05-08 19:28 Message: Logged In: YES user_id=308438 The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping. class ConnectionWrapper: def __init__(self, conn): self.conn = conn def message(self, msg): return "[connection: %s]: %s" % (self.conn, msg) and then use this like so... class Server: def get_connection(self, request): # return the connection for this request def handle_request(self, request): conn = self.get_connection(request) # we use a cache of connection wrappers if conn in self.conn_cache: cw = self.conn_cache[conn] else: cw = ConnectionWrapper(conn) self.conn_cache[conn] = cw #process request, and if events need to be logged, you can e.g. logger.debug(cw.message("Network packet truncated at %d bytes"), n) #The logged message will contain the connection ID ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470 From noreply at sourceforge.net Wed Jun 30 07:11:17 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 30 07:11:21 2004 Subject: [ python-Bugs-982679 ] Bad state of multi btree database file after large inserts Message-ID: Bugs item #982679, was opened at 2004-06-30 14:11 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982679&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Elerion Elrond (elerionelrond) Assigned to: Nobody/Anonymous (nobody) Summary: Bad state of multi btree database file after large inserts Initial Comment: The database on file is left in an bad state after inserting a large number of entries at once in a BTREE database. 
This happens when: a) multiple databases are open in a file, b) the dbtype is DB_BTREE, c) a large volume of key + data is inserted in the database. The volume varies with the pagesize of the database. No error is raised on insertion. However, if we check the database file after insertion, we get the following error: (-30980, 'DB_VERIFY_BAD: Database verification failed -- Subdatabase entry references page 4 of invalid type 5'). Moreover, running the test suite from bsddb module yields 6 failures and 2 errors (see the 'bsddb_testrun.txt' attachment). The error condition can be verified easily with the 'testdb.py' script - see attachments). It was run with Python 2.3.3 and 2.3.4 on Windows XP, also with Python 2.3.4 the cygwin version and Python 2.3+ on Suse Linux 9.0. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982679&group_id=5470 From noreply at sourceforge.net Wed Jun 30 09:38:41 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 30 09:39:32 2004 Subject: [ python-Bugs-975330 ] Inconsistent newline handling in email module Message-ID: Bugs item #975330, was opened at 2004-06-18 14:50 Message generated for change (Settings changed) made by iko You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975330&group_id=5470 >Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Anders Hammarquist (iko) Assigned to: Nobody/Anonymous (nobody) Summary: Inconsistent newline handling in email module Initial Comment: text/* parts of email messages must use \r\n as the newline separator. For unencoded messages. smtplib and friends take care of the translation from \n to \r\n in the SMTP processing. Parts which are unencoded (i.e. 7bit character sets) MUST use \n line endings, or smtplib with translate to \r\r\n. Parts that get encoded using quoted-printable can use either, because the qp-encoder assumes input data is text and reencodes with \n. However, parts which get encoded using base64 are NOT translated, and so must use \r\n line endings. This means you have to guess whether your text is going to get encoded or not (admittedly, usually not that hard), and translate the line endings appropriately before generating a Message instance. I think the fix would be for Charset.encode_body() to alway force the encoder to text mode (i.e.binary=False), since it seems unlikely to have a Charset for something which is not text. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975330&group_id=5470 From noreply at sourceforge.net Wed Jun 30 11:29:16 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 30 11:29:20 2004 Subject: [ python-Bugs-982806 ] gdbm.open () fails with a single argument Message-ID: Bugs item #982806, was opened at 2004-06-30 11:29 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982806&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Roy Smith (roysmith) Assigned to: Nobody/Anonymous (nobody) Summary: gdbm.open () fails with a single argument Initial Comment: I am running: Python 2.3.4 (#3, Jun 29 2004, 21:48:03) [GCC 3.3 20030304 (Apple Computer, Inc. 
build 1495)] on darwin Darwin Roy-Smiths-Computer.local 7.4.0 Darwin Kernel Version 7.4.0: Wed May 12 16:58:24 PDT 2004; root:xnu/xnu -517.7.7.obj~7/RELEASE_PPC Power Macintosh powerpc release 1.8.3 of GNU dbm. I've got a readable gdbm file: -rw-r--r-- 1 roy roy 12288 30 Jun 09:52 status.gdbm If I try to open it with no flag argument, it fails. Explicitly specifying 'r' as a 2nd argument works. >>> import gdbm >>> gdbm.open ('status.gdbm') Traceback (most recent call last): File "", line 1, in ? gdbm.error: Flag ' ' is not supported. >>> gdbm.open ('status.gdbm', 'r') The on-line doc says the 2nd argument is optional: http://www.python.org/doc/current/lib/module-gdbm.html It's not clear if it's the code or the doc that wrong. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=982806&group_id=5470 From noreply at sourceforge.net Wed Jun 30 12:20:44 2004 From: noreply at sourceforge.net (SourceForge.net) Date: Wed Jun 30 12:20:55 2004 Subject: [ python-Bugs-924301 ] A leak case with cmd.py & readline & exception Message-ID: Bugs item #924301, was opened at 2004-03-27 01:28 Message generated for change (Comment added) made by svenil You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470 Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Sverker Nilsson (svenil) Assigned to: Michael Hudson (mwh) Summary: A leak case with cmd.py & readline & exception Initial Comment: A leak to do with readline & cmd, in Python 2.3. I found out what hold on to my interactive objects too long ('for ever') in certain circumstances. The circumstance had to do with an exception being raised in Cmd.cmdloop and handled (or not handled) outside of Cmd.cmdloop. In cmd.py, class Cmd, in cmdloop(), if an exception is raised and propagated out from the interior of cmdloop, the function postloop() is not called. The default function of this, (in 2.3) when the readline library is present, is to restore the completer, via: readline.set_completer(self.old_completer) If this is not done, the newly (by preloop) inserted completer will remain. Even if the loop is called again and run without exception, the new completer will remain, because then in postloop the old completer will be set to our new completer. When we exit, the completer will remain the one we set. This will hold on to our object, aka 'leak'. - In cmd.py in 2.2 no attempt was made to restore the completer, so this was also a kind of leak, but it was replaced the next time a Cmd instance was initialized. Now, however, the next time we will not replace the old completer, but both of them will remain in memory. The old one will be stored as self.old_completer. If we terminate with an exception, bad luck... the stored completer retains both of the instances. If we terminate normally, the old one will be retained. In no case do we restore the space of the first instance. The only way that would happen, would be if the second instance first exited the loop with an exception, and then entered the loop again an exited normally. But then, the second instance is retained instead! If each instance happens to terminate with an exception, it seems well possible that an ever increasing chain of leaking instances will be accumulated. My fix is to always call the postloop, given the preloop succeeded. This is done via a try:finally clause. 
    def cmdloop(self, intro=None):
        ...
        self.preloop()
        try:
            ...
        finally:
            # Make sure postloop is called
            self.postloop()

I am attaching my patched version of cmd.py. It was originally from the tarball of Python 2.3.3 downloaded from Python.org a month or so ago, in which cmd.py had this size & date: 14504 Feb 19 2003 cmd.py

Best regards, Sverker Nilsson

----------------------------------------------------------------------

>Comment By: Sverker Nilsson (svenil) Date: 2004-06-30 18:20 Message: Logged In: YES user_id=356603 I agree you should check it in.

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh) Date: 2004-06-29 20:59 Message: Logged In: YES user_id=6656 OK, so I was misreading (or reading an old version, or something). I agree with your comments about the bogosities, however it's not my problem today :-) Do you think I should check the attached patch in? I do.

----------------------------------------------------------------------

Comment By: Sverker Nilsson (svenil) Date: 2004-06-23 19:58 Message: Logged In: YES user_id=356603 The constructor takes parameters stdin and stdout, and sets self.stdin and self.stdout from them or from sys. sys.std[in,out] are then not used directly except implicitly by raw_input. This seems to have changed somewhere between Python 2.2 and 2.3. Also, I remember an old version had the cmdqueue list as a class variable, which was not at all thread safe - now it is an instance variable. I am hoping/thinking it is thread safe now... It seems superfluous to have both the use_rawinput flag and the stdin parameter. At least raw_input should not be used if stdin is some other file than the raw input. But I don't have a simple suggestion to fix this; for one thing, it wouldn't be sufficient to compare the stdin parameter to sys.stdin, since that file could have been changed so it wasn't the raw stdin anymore. Perhaps the module could store away the original sys.stdin as it was imported... but that wouldn't quite work since there is no guarantee sys.stdin hadn't already been changed. I guess, if it is worth the trouble and someone has an idea, it could be a good thing to fix anyway...

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh) Date: 2004-06-22 11:07 Message: Logged In: YES user_id=6656 Um. Unless I'm *hopelessly* misreading things, cmd.Cmd writes to sys.stdout unconditionally and either calls raw_input() or sys.stdin.readline(). So I'm not sure how one would "use a cmd.Cmd instance in a separate thread, talking eg via a socket file" without rewriting such huge amounts of the class that thread-safety becomes your own problem. Apologies if I'm being dumb... also, please note: I didn't write this module :-)

----------------------------------------------------------------------

Comment By: Sverker Nilsson (svenil) Date: 2004-06-22 02:34 Message: Logged In: YES user_id=356603 Your comment about threads worries me, I am not sure I understand it. Would it be unsafe to use a cmd.Cmd instance in a separate thread, talking eg via a socket file? The instance is used only by that thread and not by others, but there may be other threads using other instances. I understand that it could be unsafe to have two threads share the same instance, but how about different instances?
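To make the original report concrete, here is a small sketch of the leak (hypothetical class name; it assumes the readline module is importable, and the behaviour shown is that of the unpatched 2.3 cmd.py):

    import cmd, readline

    class Repl(cmd.Cmd):
        def do_boom(self, arg):
            raise RuntimeError("simulated failure inside the command loop")

    r = Repl()
    r.cmdqueue = ['boom']     # queue a command so no interactive input is needed
    try:
        r.cmdloop()
    except RuntimeError:
        pass

    # In the unpatched 2.3 cmd.py, postloop() is never reached, so the completer
    # installed by preloop() still references the instance:
    print(readline.get_completer())   # bound method r.complete, so r stays alive

    # With the proposed try/finally, the old completer is restored here instead.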
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-21 12:35 Message: Logged In: YES user_id=6656 Well, that would seem to be easy enough to fix (see attached). If you're using cmd.Cmd instances from different threads at the same time, mind, I think you're screwed anyway. You're certainly walking on thin ice... ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-06-18 21:13 Message: Logged In: YES user_id=356603 I think it is OK. Just noting that it changes the completer (just like my version) even if use_rawinput is false. I guess one should remember to pass a null completekey in that case, in case some other thread was using raw_input. Perhaps a check for use_rawinput could be added in cmd.py to avoid changing the completer in that case, for less risk of future mistakes. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:29 Message: Logged In: YES user_id=6656 yay, that appears to have worked. let me know what you think. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-11 14:26 Message: Logged In: YES user_id=6656 trying again... ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-06-01 12:10 Message: Logged In: YES user_id=6656 Bah. I don't have the laptop with the patch with me, I'll try uploading again in a couple of days. ---------------------------------------------------------------------- Comment By: Sverker Nilsson (svenil) Date: 2004-05-29 09:43 Message: Logged In: YES user_id=356603 I couldn't find a new attached file. I acknowledge some problems with my original patch, but have no other suggestion at the moment. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-26 18:36 Message: Logged In: YES user_id=6656 What do you think of the attached? This makes the documentation of pre & post loop more accurate again, which I think is nice. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2004-05-19 11:38 Message: Logged In: YES user_id=6656 This is where I go "I wish I'd reviewed that patch more carefully". In particular, the documentation of {pre,post}loop is now out of date. I wonder setting/getting the completer in these functions was a good idea. Hmm. This bug report confuses me :-) but I can certainly see the intent of the patch... ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2004-05-19 03:52 Message: Logged In: YES user_id=80475 Michael, this touches some of your code. Do you want to handle this one? 
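On the use_rawinput/stdin point above, a small sketch of how the constructor parameters are meant to combine (hypothetical class; io.StringIO stands in for something like a socket makefile() object):

    import cmd, io

    class Echo(cmd.Cmd):
        prompt = '> '
        def do_say(self, arg):
            self.stdout.write(arg + '\n')
        def do_EOF(self, arg):
            return True              # an exhausted input stream ends the loop

    inp = io.StringIO('say hello\n')
    out = io.StringIO()
    shell = Echo(stdin=inp, stdout=out)
    shell.use_rawinput = False       # read self.stdin instead of calling raw_input()
    shell.cmdloop(intro='')
    print(repr(out.getvalue()))      # "'> hello\n> '"

With use_rawinput left at its default, raw_input() reads the real standard input and the stdin argument is effectively ignored for reading, which is the redundancy being pointed out.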
----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=924301&group_id=5470

From noreply at sourceforge.net Wed Jun 30 18:50:36 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Wed Jun 30 18:50:40 2004
Subject: [ python-Feature Requests-983069 ] md5 and sha1 modules should use openssl implementation
Message-ID:

Feature Requests item #983069, was opened at 2004-06-30 15:50
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=983069&group_id=5470

Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Gregory P. Smith (greg)
Assigned to: Nobody/Anonymous (nobody)
Summary: md5 and sha1 modules should use openssl implementation

Initial Comment:
The MD5 and SHA-1 modules should use the OpenSSL library to perform the hashing when it is available (always these days; Python ships with basic SSL socket support using OpenSSL). OpenSSL includes and keeps up with highly optimized versions of the hash algorithms for various architectures. They should always perform the same as or better than the Python shamodule and md5module implementations.

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=983069&group_id=5470

From noreply at sourceforge.net Wed Jun 30 20:07:53 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Wed Jun 30 20:08:09 2004
Subject: [ python-Bugs-951851 ] Crash when reading "import table" of certain windows DLLs
Message-ID:

Bugs item #951851, was opened at 2004-05-11 21:02
Message generated for change (Comment added) made by mhammond
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470

Category: Windows
Group: Python 2.3
Status: Open
>Resolution: Accepted
Priority: 7
Submitted By: Mark Hammond (mhammond)
Assigned to: Nobody/Anonymous (nobody)
Summary: Crash when reading "import table" of certain windows DLLs

Initial Comment:
As diagnosed by Thomas Heller, via the python-win32 list. On Windows 2000, if your sys.path includes the Windows system32 directory, 'import wmi' will crash Python. To reproduce, change to the system32 directory, start Python, and execute 'import wmi'. Note that Windows XP does not crash.

The problem is in GetPythonImport(), in code that tries to walk the PE 'import table'. AFAIK, this is code that checks that the correct Python version is used, but I've never seen this code before. I'm not sure why the code is actually crashing (i.e., which assumptions made about the import table are wrong), but I have a patch that checks that the pointers are valid before they are de-referenced. After the patch is applied, the result is the correct "ImportError: dynamic module does not define init function (initwmi)" exception.

Assigning to thomas for his review, then I'm happy to check in. I propose this as a 2.3 bugfix.

----------------------------------------------------------------------

>Comment By: Mark Hammond (mhammond)
Date: 2004-07-01 10:07

Message:
Logged In: YES user_id=14198

Sorry about that - I thought my "IsBadReadPtr()" patch did end up getting checked in :( It is still sitting as a diff in my 2.3 tree, which is a shame. Your patch works fine with the 2.4 branch, and does also stop the crash, so please check it in.
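One way to see the behaviour described in this item is to load the offending DLL through imp.load_dynamic(), which goes through the same dynamic-load machinery: on an unpatched 2.3 on Windows 2000 this reportedly crashes the interpreter, while with the pointer checks in place it should raise the ImportError quoted above. This is only an illustrative sketch, not part of any patch, and the path below is an example to be adjusted for the machine at hand.

import imp

try:
    # Not a Python extension at all, but the import machinery has to
    # walk its PE import table to find out.
    imp.load_dynamic("wmi", r"C:\WINNT\system32\wmi.dll")
except ImportError, e:
    print "refused cleanly:", e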
----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-06-30 05:46

Message:
Logged In: YES user_id=11105

Sorry if I was confusing. I simply want to know if this should be checked in or not. Maybe you can review the code, and/or try it out. Thanks.

----------------------------------------------------------------------

Comment By: Mark Hammond (mhammond)
Date: 2004-06-29 09:20

Message:
Logged In: YES user_id=14198

I'm sorry, but I'm not sure what I don't listen to :) You are correct about being short of time though. What would you like me to do?

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-06-29 07:28

Message:
Logged In: YES user_id=11105

It seems Mark doesn't listen (or doesn't have time). I'd like to check this in for 2.4. Any objections?

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-06-02 04:35

Message:
Logged In: YES user_id=11105

This is not yet accepted.

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-06-02 03:45

Message:
Logged In: YES user_id=11105

The reason the current code crashed when Python tries to import Win2k's or XP's wmi.dll as an extension is that the size of the import table in this DLL is zero. The first patch 'dynload_win.c-1.patch' fixes this by returning NULL in that case.

The code, however, doesn't do what is intended in a debug build of Python. It looks for imports of 'python23.dll', when it should look for 'python23_d.dll' instead. The second patch 'dynload_win.c-2.patch' fixes this also (and includes the first patch as well).

----------------------------------------------------------------------

Comment By: Mark Hammond (mhammond)
Date: 2004-05-12 11:56

Message:
Logged In: YES user_id=14198

Seeing as it was the de-referencing of 'import_name' that crashed, I think a better patch is to terminate the outer while loop as soon as we hit a bad string. Otherwise, this outer loop continues, 20 bytes at a time, until the outer pointer ('import_data') also goes bad or happens to reference \0. Attaching a slightly different patch, with comments and the sizeof() change.

----------------------------------------------------------------------

Comment By: Mark Hammond (mhammond)
Date: 2004-05-12 09:00

Message:
Logged In: YES user_id=14198

OK - will change to 12+so(WORD). And yes, I had seen this code - I meant "familiar with" :)

Tim: Note that the import failure is not due to a bad import table (but the crash was). This code is trying to determine if a different version of Python is used. We are effectively skipping that check, and landing directly in the "does it have an init function?" check, then failing normally - i.e., the code is now correctly *not* finding other Python versions linked against it. Thus, a different error message doesn't make much sense to me.

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-05-12 01:45

Message:
Logged In: YES user_id=11105

Oh, we have to give the /all option to dumpbin ;-)

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-05-12 01:42

Message:
Logged In: YES user_id=11105

Tim, I don't think the import table format has changed; instead, wmi.dll doesn't have an import table (for whatever reason). Maybe the code isn't able to handle that correctly.
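The zero-size import table Thomas describes can also be checked from Python without a debugger or dumpbin. The sketch below is only an illustration, not part of either dynload_win.c patch; it assumes a 32-bit PE32 DLL, reads e_lfanew from the DOS header, skips the PE signature and COFF header, and pulls out the Size field of the import data directory (entry 1) in the optional header.

import struct

def import_table_size(path):
    f = open(path, "rb")
    data = f.read()
    f.close()
    pe_offset = struct.unpack("<I", data[0x3C:0x40])[0]   # e_lfanew
    opt_header = pe_offset + 4 + 20                       # skip "PE\0\0" + COFF header
    # PE32 optional header: 96 fixed bytes, then 16 data directories of
    # 8 bytes each (RVA, Size); the import directory is entry 1.
    size_offset = opt_header + 96 + 1 * 8 + 4
    return struct.unpack("<I", data[size_offset:size_offset + 4])[0]

# Expected to print 0 for the wmi.dll described above (path is an example).
print import_table_size(r"C:\WINNT\system32\wmi.dll")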
Since Python 2.3 as well as its extensions are still built with MSVC 6, I *think* we should be safe with this code. I'll attach the output of running MSVC.NET 2003's 'dumpbin.exe \windows\system32\wmi.dll' on my WinXP Pro SP1 for the curious.

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2004-05-12 01:20

Message:
Logged In: YES user_id=31435

Mark, while you may not have seen this code before, you checked it in. IIRC, though, the person who *created* the patch was never heard from again. I don't understand what the code thinks it's doing either, exactly. The obvious concern: if the import table format has changed, won't we also fail to import legit C extensions? I haven't seen that happen yet, but I haven't yet built any extensions using VC 7.1 either.

In any case, I'd agree it's better to get a mysterious import failure than a crash. Maybe the detail in the ImportError could be changed to indicate when an import failure is due to a bad pointer.

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-05-12 00:49

Message:
Logged In: YES user_id=11105

IMO, IsBadReadPtr(import_data, 12 + sizeof(DWORD)) should be enough. Yes, please add a comment in the code. This is a little bit hackish, but it fixes the problem. And the real problem can always be fixed later, if needed. And, BTW, Python 2.3.3 crashes on Windows XP as well.

----------------------------------------------------------------------

Comment By: Mark Hammond (mhammond)
Date: 2004-05-11 21:05

Message:
Logged In: YES user_id=14198

Actually, I guess a comment regarding the pointer checks and referencing this bug would be nice :)

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=951851&group_id=5470

From noreply at sourceforge.net Wed Jun 30 21:41:13 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Wed Jun 30 21:41:21 2004
Subject: [ python-Bugs-644744 ] bdist_rpm fails when installing man pages
Message-ID:

Bugs item #644744, was opened at 2002-11-27 09:30
Message generated for change (Settings changed) made by fdrake
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=644744&group_id=5470

Category: Distutils
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Wummel (calvin)
Assigned to: Nobody/Anonymous (nobody)
>Summary: bdist_rpm fails when installing man pages

Initial Comment:
When a man page is in data_files, the rpm installer compresses it with gzip (done by the brp-compress script). I attached a little hack for the install_data command to add a ".gz" to such man pages. Then it works for me. Of course, the proper fix would be to detect whether brp-compress is run and which files were compressed by the script. But I am not an rpm guru, so it's up to you how you want to fix it.
I am using a Debian Linux unstable box with Python 2.2.2-2, and rpm 4.0.4-11.

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=644744&group_id=5470

From noreply at sourceforge.net Wed Jun 30 23:53:31 2004
From: noreply at sourceforge.net (SourceForge.net)
Date: Wed Jun 30 23:53:37 2004
Subject: [ python-Bugs-932563 ] logging: need a way to discard Logger objects
Message-ID:

Bugs item #932563, was opened at 2004-04-09 17:51
Message generated for change (Comment added) made by fdrake
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470

Category: Python Library
Group: Feature Request
Status: Open
Resolution: None
>Priority: 6
Submitted By: Fred L. Drake, Jr. (fdrake)
Assigned to: Fred L. Drake, Jr. (fdrake)
Summary: logging: need a way to discard Logger objects

Initial Comment:
There needs to be a way to tell the logging package that an application is done with a particular logger object. This is important for long-running processes that want to use a logger to represent a related set of activities over a relatively short period of time (compared to the life of the process). This is useful to allow creating per-connection loggers for Internet servers, for example. Using a connection-specific logger allows the application to provide an identifier for the session that can be automatically included in the logs without having the application encode it into each message (a far more error-prone approach).

----------------------------------------------------------------------

>Comment By: Fred L. Drake, Jr. (fdrake)
Date: 2004-06-30 23:53

Message:
Logged In: YES user_id=3066

Vinay: I don't think that will work. Another issue that crops up once I start looking into the Logger class is that findCaller() won't do (what I think is) the Right Thing when wrappers and subclasses are involved.

After reviewing my application, I think the only thing the application really needs to control is the creation of the record objects, but that has to happen on the wrapper, or there's no way to get the necessary information into the record (without seriously performing surgery on the underlying logger).

I think I've come up with a base class that does the Right Thing, but I need to write up an explanation of why it works the way it does. It's not massively mysterious, but it does end up dealing with more than I'd really like to worry about. I don't have any more time for this tonight, but will write up what I have and post it here in the next few days.

It shouldn't be hard to refactor what's in logging.Logger and my base class to share most of the code. Having the base class in the logging package would avoid having to use a separate findCaller() implementation.

Boosting the priority to make sure this stays on my radar.

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip)
Date: 2004-06-29 18:10

Message:
Logged In: YES user_id=308438

I just had a further thought: is the approach below any good to you? Apart from not being able to use the root logger, it seems to meet your need.
import logging

class MyLogger(logging.Logger):
    def makeRecord(self, name, level, fn, lno, msg, args, exc_info):
        record = logging.Logger.makeRecord(self, name, level, fn, lno,
                                           msg, args, exc_info)
        record.magicnumber = 0xDECAFBAD  # special number
        return record

logging._loggerClass = MyLogger
h = logging.StreamHandler()
logger = logging.getLogger("mylogger")
h.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(magicnumber)X %(message)s"))
logger.addHandler(h)
logger.warn("There's a custom attribute in my message!")

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip)
Date: 2004-06-24 11:28

Message:
Logged In: YES user_id=308438

Suppose I add a callable "recordMaker" to logging, and modify makeRecord() to call it with the logger plus the args passed to makeRecord(). If it's necessary to add extra attrs to the LogRecord, this can be done by replacing recordMaker with your own callable. Seems less icky - what do you think? If you think it'll fly, are there any other args you think I need to pass into the callable?

----------------------------------------------------------------------

Comment By: Fred L. Drake, Jr. (fdrake)
Date: 2004-06-24 10:06

Message:
Logged In: YES user_id=3066

I've attached a file showing the class I came up with. I don't consider this to be a good wrapper, just what worked.

I think one of the problems is that what I really want to override is the makeRecord() method, not the logging methods themselves. There's too much logic in those dealing with the disabling and level filtering that I don't want to duplicate, but as soon as I pass the calls on to the underlying logger, I can no longer change the makeRecord(). It would be possible to inject a new makeRecord() while my methods are active (in my definition for log() in the sample), and restore the original in a finally clause, but that feels... icky.

The advantage of overriding makeRecord() is that the formatter can deal with how the additional information is added to the log, because more information is made available on the record.

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip)
Date: 2004-06-24 06:58

Message:
Logged In: YES user_id=308438

How about if I add a LoggerAdapter class which takes a logger in the __init__ and has logging methods debug(), info() etc. [and including _log()] which delegate to the underlying logger? Then you could subclass the Adapter and just override the methods you needed. Would that fit the bill? Of course the package can use a Logger-derived class, but this would apply to all loggers, whereas the LoggerAdapter could be used for just some of the loggers in a system.

----------------------------------------------------------------------

Comment By: Fred L. Drake, Jr. (fdrake)
Date: 2004-06-24 00:13

Message:
Logged In: YES user_id=3066

Looking at this again, after adjusting the application I have that used the connection-specific loggers, I decided that a different approach better solves the problem. What you've shown requires exactly what I wanted to avoid: having to make a gesture at each logging call (to transform the message). Instead of doing this, I ended up writing a wrapper for the logger objects that implements the methods log(), debug(), info(), warn(), warning(), error(), exception(), critical(), and fatal(). These methods each transform the message before calling the underlying logger.
It would be really nice to have something like this that isolates the final call to Logger._log(), so specific implementations can simply override _log() (or some other helper routine that gets all the info) and maybe the __init__(). I don't think that's completely necessary, but it would probably make it a lot easier to implement this pattern.

There are probably some useful documentation improvements that could be made to help people avoid the issue of leaking loggers.

----------------------------------------------------------------------

Comment By: Fred L. Drake, Jr. (fdrake)
Date: 2004-06-10 12:50

Message:
Logged In: YES user_id=3066

Sorry for the delay in following up. I'll re-visit the software where I wanted this to see how it'll work out in practice.

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2004-06-09 12:01

Message:
Logged In: YES user_id=31435

Assigned to Fred, because Vinay wants his input (in general, a bug should be assigned to the next person who needs to "do something" about it, and that can change over time).

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip)
Date: 2004-06-09 05:28

Message:
Logged In: YES user_id=308438

Fred, any more thoughts on this? Thanks, Vinay

----------------------------------------------------------------------

Comment By: Vinay Sajip (vsajip)
Date: 2004-05-08 15:28

Message:
Logged In: YES user_id=308438

The problem with disposing of Logger objects programmatically is that you don't know who is referencing them. How about the following approach? I'm making no assumptions about the actual connection classes used; if you need to make it even less error prone, you can create delegating methods in the server class which do the appropriate wrapping.

class ConnectionWrapper:
    def __init__(self, conn):
        self.conn = conn
    def message(self, msg):
        return "[connection: %s]: %s" % (self.conn, msg)

and then use this like so...

class Server:
    def get_connection(self, request):
        # return the connection for this request
    def handle_request(self, request):
        conn = self.get_connection(request)
        # we use a cache of connection wrappers
        if conn in self.conn_cache:
            cw = self.conn_cache[conn]
        else:
            cw = ConnectionWrapper(conn)
            self.conn_cache[conn] = cw
        # process request, and if events need to be logged, you can e.g.
        logger.debug(cw.message("Network packet truncated at %d bytes"), n)
        # The logged message will contain the connection ID

----------------------------------------------------------------------

You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=932563&group_id=5470
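The wrapper Fred describes a few comments up (transform the message, then delegate each call to the underlying logger) might look roughly like the sketch below. This is an illustration, not Fred's attached file; the class name, the "[connection: ...]" prefix format, and the usage at the end are all invented for the example.

import logging

class ConnectionLogger:
    # Hypothetical wrapper: every call stamps the message with a
    # per-connection identifier and delegates to the wrapped logger.
    def __init__(self, logger, conn_id):
        self._logger = logger
        self._conn_id = conn_id

    def _decorate(self, msg):
        return "[connection: %s] %s" % (self._conn_id, msg)

    def log(self, level, msg, *args):
        self._logger.log(level, self._decorate(msg), *args)

    def debug(self, msg, *args):
        self._logger.debug(self._decorate(msg), *args)

    def info(self, msg, *args):
        self._logger.info(self._decorate(msg), *args)

    def warning(self, msg, *args):
        self._logger.warning(self._decorate(msg), *args)
    warn = warning

    def error(self, msg, *args):
        self._logger.error(self._decorate(msg), *args)

    def exception(self, msg, *args):
        self._logger.exception(self._decorate(msg), *args)

    def critical(self, msg, *args):
        self._logger.critical(self._decorate(msg), *args)
    fatal = critical

# Usage sketch: one wrapper per connection, no per-call gesture needed.
logging.basicConfig()
log = ConnectionLogger(logging.getLogger("server"), "10.0.0.7:1234")
log.warning("Network packet truncated at %d bytes", 512)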