From noreply@sourceforge.net Fri Jun 1 01:20:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 31 May 2001 17:20:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-429193 ] CGIHTTPServer crashes Explorer in WinME Message-ID: Bugs item #429193, was updated on 2001-05-31 17:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429193&group_id=5470 Category: Windows Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: CGIHTTPServer crashes Explorer in WinME Initial Comment: Python Versions: BeOpen 2.0 & ActiveState Python 2.1 Windows Version: Windows Millenium Edition PC: Dell Inspiron 8000 Laptop Issue: Invoking CGIHTTPServer.py either by itself or subclassed leads to critical problems when serving CGI content (*.py) CGIHTTPServer will serve HTML with no issues. If a CGI link is clicked, Windows sounds "critical stop" system sound, the console shows that the CGI is being called ("C:\>Python.exe -u foo.py") and will hang Internet Explorer 5.5 CGIHTTPServer cannot, at that point, be killed through Ctrl-C. If the console Window is simply closed, Windows will cause Explorer to sometimes crash, preventing a clean shutdown of the machine. This problem exists even if "#!C:\Python20\Python.exe" is included as first line of CGI, and if w9xpopen is included in the CGI directory. Problem does not exist with Apache or Xitami servers calling Python CGI scripts. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429193&group_id=5470 From noreply@sourceforge.net Fri Jun 1 03:58:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 31 May 2001 19:58:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-428555 ] IDLE crashes Message-ID: Bugs item #428555, was updated on 2001-05-29 23:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428555&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: IDLE crashes Initial Comment: On Windows 2000 (havent tested other platforms) IDLE will crash if i run the following code (ditto for PythonWin) from Tkinter import * root = Tk() img = PhotoImage(root,file="somefile.gif") # the error here is that i used root as the name ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-05-31 19:58 Message: Logged In: NO I get the same problem. Idle locks up when I close the window created by this application. I'm using Python 2.1 on Win98SE with IDLE 0.8 and Tcl 8.3 Happens every time for me. I can't get IDLE back to the >>> prompt when I close the window. HELP! 
----
from Tkinter import *
def testwindow(mess):
    root = Tk()
    w = Label(root, text="[ %s ]" % mess)
    w.pack()
    root.mainloop()
testwindow("Hello World")
----
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428555&group_id=5470 From noreply@sourceforge.net Fri Jun 1 16:20:41 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Jun 2001 08:20:41 -0700 Subject: [Python-bugs-list] [ python-Bugs-429329 ] actual-parameters *arg, **kws not doc'd Message-ID: Bugs item #429329, was updated on 2001-06-01 08:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429329&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alex Martelli (aleax) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: actual-parameters *arg, **kws not doc'd Initial Comment: 5.3.4 in the language reference should document the forms *args and **kwds for actual parameters, but it makes no mention of them and does not allow for them in the syntax productions. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429329&group_id=5470 From noreply@sourceforge.net Fri Jun 1 16:38:22 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Jun 2001 08:38:22 -0700 Subject: [Python-bugs-list] [ python-Bugs-429329 ] actual-parameters *arg, **kws not doc'd Message-ID: Bugs item #429329, was updated on 2001-06-01 08:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429329&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alex Martelli (aleax) >Assigned to: Jeremy Hylton (jhylton) Summary: actual-parameters *arg, **kws not doc'd Initial Comment: 5.3.4 in the language reference should document the forms *args and **kwds for actual parameters, but it makes no mention of them and does not allow for them in the syntax productions. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-01 08:38 Message: Logged In: YES user_id=3066 Assigned to Jeremy, since he shepherded the patch into the Python release. Changes should be integrated with the 2.1.1 and head branches. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429329&group_id=5470 From noreply@sourceforge.net Fri Jun 1 17:29:19 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Jun 2001 09:29:19 -0700 Subject: [Python-bugs-list] [ python-Bugs-429357 ] non-greedy regexp duplicating match bug Message-ID: Bugs item #429357, was updated on 2001-06-01 09:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 Category: Regular Expressions Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Mueller (donut) Assigned to: Nobody/Anonymous (nobody) Summary: non-greedy regexp duplicating match bug Initial Comment: I found a weird bug: when a non-greedy match doesn't match anything, it will duplicate the rest of the string instead of being None.
#pyrebug.py:
import re
urlrebug = re.compile("""
    (.*?)://          #scheme
    (
        (.*?)         #user
        (?:
            :(.*)     #pass
        )?
    @)?
    (.*?)             #addr
    (?::([0-9]+))?
                      #port
    (/.*)?$           #path
    """, re.VERBOSE)
testbad='foo://bah:81/pth'
print urlrebug.match(testbad).groups()

Bug Output:
>python2.1 pyrebug.py
('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth')
>python-cvs pyrebug.py
('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth')

Good (expected) Output:
>python1.5 pyrebug.py
('foo', None, None, None, 'bah', '81', '/pth')
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 From noreply@sourceforge.net Fri Jun 1 17:45:47 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Jun 2001 09:45:47 -0700 Subject: [Python-bugs-list] [ python-Bugs-429361 ] popen2.Popen3.wait() exit code Message-ID: Bugs item #429361, was updated on 2001-06-01 09:45 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429361&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: eXom (jkuan) Assigned to: Nobody/Anonymous (nobody) Summary: popen2.Popen3.wait() exit code Initial Comment: I found no documentation for this behaviour -- or is there some that I have missed? Why is the value returned from wait() the return code of the child process multiplied by 256? E.g.
a = popen2.Popen3("sh -c 'exit 0'")
a.wait() ---> 0
a = popen2.Popen3("sh -c 'exit 1'")
a.wait() ---> 256
a = popen2.Popen3("sh -c 'exit 2'")
a.wait() ---> 512
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429361&group_id=5470 From noreply@sourceforge.net Sat Jun 2 01:40:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Jun 2001 17:40:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-405837 ] getting PyRun_String() true result Message-ID: Bugs item #405837, was updated on 2001-03-04 06:53 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405837&group_id=5470 Category: Python Interpreter Core Group: Not a Bug Status: Closed Resolution: Invalid Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Guido van Rossum (gvanrossum) Summary: getting PyRun_String() true result Initial Comment: It seems impossible to build an embedded Python interpreter extension which actually allows getting the result of the evaluation of a string from the interpreter, as is done in interactive mode, as in the following:
def f():
    pass
f
prints the function object at the interactive prompt, but in C (called twice with the 2 above strings):
PyRun_String(string, Py_file_input, globals, globals)
returns None. I found a workaround by patching the core in ceval.c, eval_code2() (inspired by the PRINT_EXPR case):
...
case POP_TOP:
    v = POP();
    PyDict_SetItemString(f->f_globals, "_", v); /* added */
    Py_DECREF(v);
    continue;
...
and then:
PyRun_String(string, Py_file_input, globals, globals)
result = PyDict_GetItemString(globals, "_")
returns the correct result. My goal is to allow the tclpython extension (at http://jfontain.free.fr/) to work without having to insert print statements on the Python side to be able to pass data to the Tcl interpreter.
Please forgive me if there is an obvious way to do the above without patching the core, but I am new to Python (I like it already though :-) Jean-Luc Fontaine ---------------------------------------------------------------------- Comment By: David Gravereaux (davygrvy) Date: 2001-06-01 17:40 Message: Logged In: YES user_id=7549 >> But maybe Python is not meant to be also used as a C extension? >Nonsense. You just have to use it properly. >> Once loaded in the embedded interpreter, how do you get the >> result of the library Python functions that you invoke? >You load it using file_input (up to the end of foo), then >you evaluate the expressions (e.g. foo()) using eval_input >(passing the same locals and globals). This is also >(roughly) what the interactive interpreter does (it always >passes the dictionary of __main__). But foo() was the last invoked. Where's it's result? Is eval_input a function, or a start token as in Py_eval_input? How can the operation of PyRun_String() place into its returning PyObject* the result of the operation run? This is a central theme in Tcl and Perl. such as -> Tcl_Interp *interp; int code; interp = Tcl_CreateInterp(); code = Tcl_Eval(interp, "proc foo {} {}; expr {rand()}"); if (code == TCL_OK) { printf("the random number is %s", interp->result); } else { printf("We bombed with %s", interp->result); } How is it possible to do the same thing in python to get the result of last invoked command as a char*? >IMO, there is no meaningful result except for "success or exception". And upon success, where is the result placed? POP_TOP case in eval_code2() discarding the result object is either a bug in the core, or a new start token for return actual results of operations needs to added. Important data is being discarded. >> my very simple patch >Which patch? *** ceval.c.orig Fri Jun 1 17:11:52 2001 --- ceval.c Fri Jun 1 17:12:56 2001 *************** *** 771,776 **** --- 771,777 ---- case POP_TOP: v = POP(); + PyDict_SetItemString(f- >f_globals, "_", v); Py_DECREF(v); continue; ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-03-24 09:51 Message: Logged In: YES user_id=21627 > But maybe Python is not meant to be also used as a C extension? Nonsense. You just have to use it properly. > Once loaded in the embedded interpreter, how do you get the > result of the library Python functions that you invoke? You load it using file_input (up to the end of foo), then you evaluate the expressions (e.g. foo()) using eval_input (passing the same locals and globals). This is also (roughly) what the interactive interpreter does (it always passes the dictionary of __main__). > my very simple patch Which patch? ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-03-24 08:20 Message: Logged In: NO >i = 1 >def foo(): > return i >class C: > pass >What should be the result of executing this statement list >(i.e. suite)? IMO, there is no meaningful result except for >"success or exception". True. But you want to go further than that: actually calling a function and getting its result, as it is done in interactive mode. Let us say that you have a nice library written in Python, that you want to invoke from C. Once loaded in the embedded interpreter, how do you get the result of the library Python functions that you invoke? Again, that is something that is readily done in other interpreted languages. 
But maybe Python is not meant to be also used as a C extension? I think my very simple patch demonstrates otherwise, and furthermore, that Python when run in interactive mode behaves as I (and I guess most people) expect. For example, when typing: i = 1 def foo(): return i foo() one gets 1 as result. Now if you pass the following C string to an embedded interpreter: char *code = "i = 1\ndef foo()\n return i\nfoo()"; how to you get the result "1"? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-03-20 13:40 Message: Logged In: YES user_id=21627 What kind of result would you expect from evaluating a file_input? E.g. given i = 1 def foo(): return i class C: pass What should be the result of executing this statement list (i.e. suite)? IMO, there is no meaningful result except for "success or exception". You may also consider discussing this on python-list@python.org. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-03-20 13:34 Message: Logged In: NO I agree that this is not a bug per se. I am puzzled though, that other scripting languages, such as Perl and Tcl can readily do this. I still have no answer to my request, so I guess I will try help@python.org as you recommend. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-03-20 10:49 Message: Logged In: YES user_id=6380 This is not a bug. Closing the bug report now. If you need more help still, wrote help@python.org. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-03-05 10:51 Message: Logged In: NO > To evaluate a string, use Py_RunString with Py_eval_input, > or perhaps Py_single_input. Py_eval_input is for "isolated expressions", and Py_single_input "for a single statement", so how do I execute whole modules except by using Py_file_input, the only remaining option? I actually tested all the above options thoroughly and found that only Py_file_input did the job, but without a way to get at the result. Please let me know whether there is something that I missed, as I am stuck at the moment. If needed, I will be happy to send you sample code that illustrates the problem. Thank you very much for your prompt response. Jean-Luc PS: passing "def f(): pass\n" to Py_eval_input returns a "SyntaxError: invalid syntax" ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-03-04 10:22 Message: Logged In: YES user_id=21627 Sure there is. PyRun_SimpleString executes a string in "file mode"; this has no result. The interactive interpreter, when it prints a result, runs the string in "eval mode" - only evaluation gives a result. To evaluate a string, use Py_RunString with Py_eval_input, or perhaps Py_single_input. 
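A minimal Python-level sketch of the two-step pattern described above -- run the statements first, then evaluate the expression whose value you want against the same namespace. It uses only the built-in compile/exec/eval; the source strings and the "<embedded>" filename are made up for illustration. The C API mirrors the same split with Py_file_input for the first step and Py_eval_input for the second:

    # Statements have no value; only an expression does.
    # compile(..., 'exec') ~ Py_file_input; compile(..., 'eval') ~ Py_eval_input.
    namespace = {}
    definitions = "i = 1\ndef foo():\n    return i\n"
    exec compile(definitions, "<embedded>", "exec") in namespace
    result = eval(compile("foo()", "<embedded>", "eval"), namespace)
    print result    # -> 1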
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405837&group_id=5470 From noreply@sourceforge.net Sat Jun 2 01:41:55 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Jun 2001 17:41:55 -0700 Subject: [Python-bugs-list] [ python-Bugs-416526 ] Regular expression tests: SEGV on Mac OS Message-ID: Bugs item #416526, was updated on 2001-04-16 14:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416526&group_id=5470 Category: Regular Expressions Group: Platform-specific Status: Open Resolution: None Priority: 7 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fredrik Lundh (effbot) Summary: Regular expression tests: SEGV on Mac OS Initial Comment: The regular expression regression tests 'test_re' causes a SEGV failure on Mac OS X version 10.0.1 when using Python 2.1c2 (and earlier). This is caused by the test trying to recurse 50,000 levels deep. Workaround: A workaround is to limit how deep the regular expression library can recurse (this is already done for Win32). This can be achieved by changing file './Modules/_sre.c' using the following patch: --- ./orig/_sre.c Sun Apr 15 19:00:58 2001 +++ ./new/_sre.c Mon Apr 16 21:39:29 2001 @@ -75,6 +75,9 @@ Win64 (MS_WIN64), Linux64 (__LP64__), Monterey (64-bit AIX) (_LP64) */ /* FIXME: maybe the limit should be 40000 / sizeof(void*) ? */ #define USE_RECURSION_LIMIT 7500 +#elif defined(__APPLE_CC__) +/* Apple 'cc' compiler eg. for Mac OS X */ +#define USE_RECURSION_LIMIT 4000 #else #define USE_RECURSION_LIMIT 10000 #endif ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-06-01 17:41 Message: Logged In: NO >An alternate (and perhaps better) workaround could be to increase the stack size on Mac OS X. it's been tried and shot down as a will not fix. :-( instead of +#elif defined(__APPLE_CC__) perhaps we should use __APPLE__ as per the documentation: There are two relatively new macros: __APPLE__ and __APPLE_CC__.The former refers to any Apple platform, though at present it is only predefined in Apple's gcc-based Mac OS X and Mac OS X Server compilers. The value of the latter is an integer that corresponds to the version number of the compiler. This should allow one to distinguish, for example, between compilers based on the same version of gcc, but with different bug fixes or features. At present, larger values denote (chronologically) later compilers. - D ---------------------------------------------------------------------- Comment By: Fredrik Lundh (effbot) Date: 2001-04-26 15:01 Message: Logged In: YES user_id=38376 An alternate (and perhaps better) workaround could be to increase the stack size on Mac OS X. But in either case, my plan is to get rid of the recursion limit in 2.2 (stackless SRE may still run out of memory, but it shouldn't have to run out of stack). Cheers /F ---------------------------------------------------------------------- Comment By: Dan Wolfe (dkwolfe) Date: 2001-04-17 16:52 Message: Logged In: YES user_id=80173 Instead of relying on a compiler variable, we should probably set a environment variable as part of the ./configure and use that to determine when to reduce the USE_RECURSION_LIMIT. 
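For reference, a rough sketch (not the exact regression test) of the kind of pattern that drives the old recursive SRE engine roughly one C stack frame per repetition, which is what overflows the Mac OS X thread stack here. On a build whose USE_RECURSION_LIMIT fits its stack, the same input shows up as a clean RuntimeError rather than a SEGV:

    import re
    try:
        # ~50,000 repetitions of a captured group -> ~50,000 levels of recursion
        re.match(r'(x)*', 'x' * 50000)
    except RuntimeError, e:
        print 'recursion limit hit cleanly:', e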
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416526&group_id=5470 From noreply@sourceforge.net Sat Jun 2 11:21:59 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Jun 2001 03:21:59 -0700 Subject: [Python-bugs-list] [ python-Bugs-416944 ] 2.0: cum sympt; w/gdb bktr; OBSD2.8. Message-ID: Bugs item #416944, was updated on 2001-04-17 21:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416944&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 3 Submitted By: Brad Allen (valaulmo) Assigned to: Martin v. Löwis (loewis) Summary: 2.0: cum sympt; w/gdb bktr; OBSD2.8. Initial Comment: Reverted to Python 2.0 + all patches on patch pages, still with OpenBSD 2.8. Only non-default module set is zlib (needed for Mojo Nation according to the docs). System is Pentium 133MHz. Ok, I'm getting better at this now. This time, I ran GDB on it, since I noticed that one of the bugs may have cumulative dependent symptoms. Also, I'm typing this bug report into the web browser so that it wraps correctly (I'll cross my fingers that it doesn't crash though; that's why I prefer Emacs; I suppose I could try SSL W3 Emacs but that is hard; not now.) Here is the gdb output where it did die: [... lots of tests ...] test_long Program received signal SIGSEGV, Segmentation fault. 0x4017bc2f in _thread_machdep_switch () (gdb) bt #0 0x4017bc2f in _thread_machdep_switch () #1 0x401c8308 in _sigq_check_reqd () #2 0x4017ba66 in _thread_kern_sig_undefer () #3 0x4017eb68 in pthread_cond_signal () #4 0x1510d in PyThread_release_lock (lock=0x27c320) at thread_pthread.h:344 #5 0x43006 in eval_code2 (co=0x3c7d80, globals=0x3c104c, locals=0x0, args=0x25cd58, argcount=2, kws=0x25cd60, kwcount=0, defs=0x0, defcount=0, owner=0x0) at ceval.c:617 #6 0x450b3 in eval_code2 (co=0x3c7f40, globals=0x3c104c, locals=0x0, args=0x1a4d60, argcount=1, kws=0x1a4d64, kwcount=0, defs=0x0, defcount=0, owner=0x0) at ceval.c:1850 #7 0x450b3 in eval_code2 (co=0x4221c0, globals=0x3c104c, locals=0x0, args=0x39a140, argcount=0, kws=0x39a140, kwcount=0, defs=0x29c1b8, defcount=1, owner=0x0) at ceval.c:1850 #8 0x450b3 in eval_code2 (co=0x41c900, globals=0x3c104c, locals=0x3c104c, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, owner=0x0) at ceval.c:1850 #9 0x42625 in PyEval_EvalCode (co=0x41c900, globals=0x3c104c, locals=0x3c104c) at ceval.c:319 #10 0xacff in PyImport_ExecCodeModuleEx (name=0xdfbfd180 "test_long", co=0x41c900, pathname=0xdfbfcce0 "./Lib/test/test_long.py") at import.c:495 #11 0xb30a in load_source_module (name=0xdfbfd180 "test_long", pathname=0xdfbfcce0 "./Lib/test/test_long.py", fp=0x401fa83c) at import.c:758 #12 0xbc23 in load_module (name=0xdfbfd180 "test_long", fp=0x401fa83c, buf=0xdfbfcce0 "./Lib/test/test_long.py", type=1) at import.c:1227 #13 0xc8fb in import_submodule (mod=0x1140fc, subname=0xdfbfd180 "test_long", fullname=0xdfbfd180 "test_long") at import.c:1755 #14 0xc4ca in load_next (mod=0x1140fc, altmod=0x1140fc, p_name=0xdfbfd58c, buf=0xdfbfd180 "test_long", p_buflen=0xdfbfd17c) at import.c:1611 #15 0xc142 in import_module_ex (name=0x0, globals=0x14624c, locals=0x1a538c, fromlist=0x29c02c) at import.c:1462 ---Type to continue, or q to quit--- #16 0xc277 in PyImport_ImportModuleEx (name=0x19f0d4 "test_long", globals=0x14624c, locals=0x1a538c, fromlist=0x29c02c) at import.c:1503 #17 0x3ce47 in builtin___import__ 
(self=0x0, args=0x3b198c) at bltinmodule.c:31 #18 0x4668d in call_builtin (func=0x141070, arg=0x3b198c, kw=0x0) at ceval.c:2650 #19 0x46507 in PyEval_CallObjectWithKeywords (func=0x141070, arg=0x3b198c, kw=0x0) at ceval.c:2618 #20 0x453ac in eval_code2 (co=0x1cc240, globals=0x14624c, locals=0x0, args=0x15fdcc, argcount=5, kws=0x15fde0, kwcount=0, defs=0x1c4738, defcount=1, owner=0x0) at ceval.c:1951 #21 0x450b3 in eval_code2 (co=0x1c9f00, globals=0x14624c, locals=0x0, args=0x140f44, argcount=0, kws=0x140f44, kwcount=0, defs=0x18d918, defcount=10, owner=0x0) at ceval.c:1850 #22 0x450b3 in eval_code2 (co=0x1c9f80, globals=0x14624c, locals=0x14624c, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, owner=0x0) at ceval.c:1850 #23 0x42625 in PyEval_EvalCode (co=0x1c9f80, globals=0x14624c, locals=0x14624c) at ceval.c:319 #24 0x1272f in run_node (n=0x155400, filename=0xdfbfdb5a "./Lib/test/regrtest.py", globals=0x14624c, locals=0x14624c) at pythonrun.c:886 #25 0x126e8 in run_err_node (n=0x155400, filename=0xdfbfdb5a "./Lib/test/regrtest.py", globals=0x14624c, locals=0x14624c) at pythonrun.c:874 #26 0x126c4 in PyRun_FileEx (fp=0x401fa7e4, filename=0xdfbfdb5a "./Lib/test/regrtest.py", start=257, globals=0x14624c, locals=0x14624c, closeit=1) at pythonrun.c:866 #27 0x11cfe in PyRun_SimpleFileEx (fp=0x401fa7e4, filename=0xdfbfdb5a "./Lib/test/regrtest.py", closeit=1) at pythonrun.c:579 #28 0x118f3 in PyRun_AnyFileEx (fp=0x401fa7e4, filename=0xdfbfdb5a "./Lib/test/regrtest.py", closeit=1) at pythonrun.c:459 #29 0x24bb in Py_Main (argc=2, argv=0xdfbfdac8) at main.c:289 ---Type to continue, or q to quit--- #30 0x17b5 in main (argc=4, argv=0xdfbfdac8) at python.c:10 (gdb) Once again, I will attach the config.* (config.cache, etc.) to this in a file. Brad Allen ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-02 03:21 Message: Logged In: YES user_id=21627 Without access to an OpenBSD machine, this is difficult to analyze. I agree with Jeremy that it looks like a bug in your C library and/or operating system, and I also recommend to build Python with --disable-threads. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-04-24 10:44 Message: Logged In: YES user_id=21627 To me, this likes like a bug in the multithread support of your operating system. To work around this bug, you should configure python with --disable-threads. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416944&group_id=5470 From noreply@sourceforge.net Sat Jun 2 11:23:19 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Jun 2001 03:23:19 -0700 Subject: [Python-bugs-list] [ python-Bugs-418314 ] __eprintf undefined on Sun-OS 5.6 Message-ID: Bugs item #418314, was updated on 2001-04-23 11:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=418314&group_id=5470 Category: Build Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Martin v. Löwis (loewis) Summary: __eprintf undefined on Sun-OS 5.6 Initial Comment: I downloaded python-2.1, configured, built and installed it. 
When I try to use the C-API, I get this error: Undefined first referenced symbol in file __eprintf /home/aalen/python/lib/python2.1/config/libpython2.1.a(classobject.o) Program (interp.C): #include main () { Py_Initialize(); } Linking: /opt/SUNWspro4.2/bin/CC -o ./interp interp.o -L/wherever/python/lib/python2.1/config -R/wherever/python/lib/python2.1/config -lpython2.1 -ldl ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-02 03:23 Message: Logged In: YES user_id=21627 Since there was no feedback on my questions, I close this report as "won't fix". ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-04-24 10:10 Message: Logged In: YES user_id=21627 Could it be that you have been using gcc to build Python? If so, you also need to use gcc to link your executables, or explicitly link libgcc.a. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=418314&group_id=5470 From noreply@sourceforge.net Sat Jun 2 12:11:21 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Jun 2001 04:11:21 -0700 Subject: [Python-bugs-list] [ python-Bugs-429554 ] PyList_SET_ITEM documentation omission Message-ID: Bugs item #429554, was updated on 2001-06-02 04:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429554&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ernst Jan Plugge (rmc) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: PyList_SET_ITEM documentation omission Initial Comment: The Python/C API documentation doesn't document the fact that PyList_SET_ITEM does not DECREF the list item being replaced (if any), but PyList_SetItem does. from listobject.h: #define PyList_SET_ITEM(op, i, v) (((PyListObject *) (op))->ob_item[i] = (v)) from listobject.c: [...] olditem = *p; *p = newitem; Py_XDECREF(olditem); [...] ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429554&group_id=5470 From noreply@sourceforge.net Sat Jun 2 12:40:48 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Jun 2001 04:40:48 -0700 Subject: [Python-bugs-list] [ python-Bugs-429570 ] GC objects are tracked prematurely Message-ID: Bugs item #429570, was updated on 2001-06-02 04:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429570&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ernst Jan Plugge (rmc) Assigned to: Nobody/Anonymous (nobody) Summary: GC objects are tracked prematurely Initial Comment: Under certain circumstances, the Python Garbage Collector tracks objects that haven't yet been added to the GC chain. This causes hard to debug segfaults when GC occurs. I've reduced the problem to this simple case: a simple extension module with one extension type that holds a reference to another object. The single module method creates two objects, lets them point to eachother, and returns one of them. This creates a cycle to be broken by the GC. In the attached source, the second PyObject_GC_Init() call is done after the references have been set up. 
If called in a 'while 1: a = trouble.createT( "foo" )' loop, it segfaults as soon as a GC cycle is performed while the method is still setting up the objects. If the PyObject_GC_Init() call is moved up to immediately after the obj field is set to Py_None, no segfault occurs. I believe this should not happen because one should be free to mess with tracked objects as long as they haven't been added to the GC chain. If it is in fact working as designed, this should be documented. I'm using Python 2.1 on Linux/Intel. 2.0 has the same problem, but it shows in slightly different places. 2.1 made it easier to create this simple minimal case. It took a few long, frustrating days to track this one down, because it didn't show up until the module was up to about 20000 twisty lines of code, all interconnected... :-( ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429570&group_id=5470 From noreply@sourceforge.net Sun Jun 3 04:17:27 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Jun 2001 20:17:27 -0700 Subject: [Python-bugs-list] [ python-Bugs-429554 ] PyList_SET_ITEM documentation omission Message-ID: Bugs item #429554, was updated on 2001-06-02 04:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429554&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Ernst Jan Plugge (rmc) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: PyList_SET_ITEM documentation omission Initial Comment: The Python/C API documentation doesn't document the fact that PyList_SET_ITEM does not DECREF the list item being replaced (if any), but PyList_SetItem does. from listobject.h: #define PyList_SET_ITEM(op, i, v) (((PyListObject *) (op))->ob_item[i] = (v)) from listobject.c: [...] olditem = *p; *p = newitem; Py_XDECREF(olditem); [...] ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-02 20:17 Message: Logged In: YES user_id=3066 Fixed in Doc/api/api.tex revisions 1.126 and 1.117.2.4. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429554&group_id=5470 From noreply@sourceforge.net Sun Jun 3 07:06:49 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Jun 2001 23:06:49 -0700 Subject: [Python-bugs-list] [ python-Bugs-429570 ] GC objects are tracked prematurely Message-ID: Bugs item #429570, was updated on 2001-06-02 04:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429570&group_id=5470 >Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Ernst Jan Plugge (rmc) >Assigned to: Neil Schemenauer (nascheme) Summary: GC objects are tracked prematurely Initial Comment: Under certain circumstances, the Python Garbage Collector tracks objects that haven't yet been added to the GC chain. This causes hard to debug segfaults when GC occurs. I've reduced the problem to this simple case: a simple extension module with one extension type that holds a reference to another object. The single module method creates two objects, lets them point to eachother, and returns one of them. This creates a cycle to be broken by the GC. 
In the attached source, the second PyObject_GC_Init() call is done after the references have been set up. If called in a 'while 1: a = trouble.createT( "foo" )' loop, it segfaults as soon as a GC cycle is performed while the method is still setting up the objects. If the PyObject_GC_Init() call is moved up to immediately after the obj field is set to Py_None, no segfault occurs. I believe this should not happen because one should be free to mess with tracked objects as long as they haven't been added to the GC chain. If it is in fact working as designed, this should be documented. I'm using Python 2.1 on Linux/Intel. 2.0 has the same problem, but it shows in slightly different places. 2.1 made it easier to create this simple minimal case. It took a few long, frustrating days to track this one down, because it didn't show up until the module was up to about 20000 twisty lines of code, all interconnected... :-( ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-02 23:06 Message: Logged In: YES user_id=31435 Assigned to Neil. The docs should at least be more explicit about that when a container C is added to the GC list via PyObject_GC_Init(), all objects reachable from C then and until PyObject_GC_Fini() is called must either (a) not participate in GC at all, or (b) themselves have been PyObject_GC_Init()'ed before becoming reachable from C. It looks like this particular case would not have blown up if, in _PyGC_Insert, op were added to generation0 *before* checking to see whether collection should run. Think that's more robust? Well, maybe in a case where A references B references A, but if the cycle being created were longer than that this style of programming would still lead to problems. Ernst, independent of all that, when doing x = y; get into the rigid habit of incref'ing y before decref'ing x. Sooner or later they're going to point to the same object without you realizing it, and then decref'ing x first can leave y pointing at trash. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429570&group_id=5470 From noreply@sourceforge.net Mon Jun 4 21:46:47 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 13:46:47 -0700 Subject: [Python-bugs-list] [ python-Bugs-231273 ] [windows] os.popen doens't kill subprocess when interrupted Message-ID: Bugs item #231273, was updated on 2001-02-06 08:43 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231273&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Christophe Gouiran (cgouiran) Assigned to: Mark Hammond (mhammond) Summary: [windows] os.popen doens't kill subprocess when interrupted Initial Comment: Hi, in the following script I liked to make an interface to the contig program(http://www.sysinternals.com) As the popen invocation can be a long time process (since it walks recursively trough directories) I press CTRL-C sometimes and the contig continues to run. I use Python 2.0 (BeOpen version) under WinNT 4.0(SP 4) Maybe I made a mistake in the following script ? ------------------------------------------------- #! 
/usr/bin/env python import sys; import os; import re; content = "" mm = re.compile("Processing (.+)?:\nFragments: (\d+)?"); output = os.popen("contig -a -s *.*"); while(1): line = output.readline(); if line == '': break content += line; status = output.close() if status: print("Error contig : "+`status`+"("+os.strerror(status)+")"); sys.exit(12); print mm.findall(content) ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-04 13:46 Message: Logged In: YES user_id=21627 A patch for this problem is in progress at https://sourceforge.net/tracker/index.php?func=detail&aid=403743&group_id=5470&atid=305470 I can't see the bug here, though - why *should* terminating the parent process terminate the child processes also? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-02-09 15:50 Message: Poor Mark. I assign anything with "popen + Windows" to you, because you're the only one who ever makes progress on them . Offhand, I can't see how Ctrl+C directed at Python *could* interrupt a spawned process. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231273&group_id=5470 From noreply@sourceforge.net Mon Jun 4 22:29:19 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 14:29:19 -0700 Subject: [Python-bugs-list] [ python-Bugs-420720 ] Starting many threads causes core dump Message-ID: Bugs item #420720, was updated on 2001-05-02 07:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=420720&group_id=5470 Category: Threads Group: None Status: Open Resolution: None Priority: 4 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Starting many threads causes core dump Initial Comment: If I start more than 1020 threads simultaneously, using the threading module, Python 2.1 causes segmentation faults, or will not exit, under Linux. I have not tested this on Windows. I don't know whether the problems are caused by the threads themselves, or the threading.Event for which they are waiting. I have attached a program threadKill3.py which demonstrates this (including sample runs and output). ---------------------------------------------------------------------- Comment By: Thomas Hazel (thazel) Date: 2001-06-04 14:29 Message: Logged In: YES user_id=127523 Python Support/Developers, I believe the fault is in Pthreads and not in the python interpreter. Pthreads under Linux has a few issues. Running this python test under Windows should not have this problem. I have run similar tests in python with txObject ATK's abstraction layer to native(pthreads/Windows) threads. I have run these python tests on Linux, Windows and Solaris. Only Linux has this problem. If folks are interested, the project txObject ATK (txobject.sourceforge.net) has home grown threads that can scale to many thousand of threads (or as much memory as you have). txObject ATK also provides python wrappers to these features. However these threads are non-preemptive. Hope this helps Tom ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2001-05-09 08:53 Message: Logged In: YES user_id=31392 We need two things in order to offer any useful help: 1. A stack trace from the core file. 2. 
Evidence that a C program that starts 1024 threads won't do the same thing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=420720&group_id=5470 From noreply@sourceforge.net Tue Jun 5 01:09:05 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 17:09:05 -0700 Subject: [Python-bugs-list] [ python-Bugs-430160 ] CGIHTTPServer.py POST bug using IE Message-ID: Bugs item #430160, was updated on 2001-06-04 17:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430160&group_id=5470 Category: Windows Group: None Status: Open Resolution: None Priority: 5 Submitted By: Kevin Altis (kasplat) Assigned to: Nobody/Anonymous (nobody) Summary: CGIHTTPServer.py POST bug using IE Initial Comment: >From the readme included in the zip: This set of files shows a bug that occurs when doing POST requests via the CGIHTTPServer.py module in Python 2.1 The testpost.html file when accessed via Internet Explorer 5.5 from webserver.py should show this bug. On short POST requests IE will end up doing a second POST and then displaying an error message while longer POSTs will be followed by a second POST and then a GET. The problem appears to be specific to the interaction of IE and the handling of windows sockets in Python in the CGIHTTPServer.py module which relies on BaseHTTPServer.py, SocketServer.py... posttestwebserver.py is currently setup to use C:\tmp\testpost as the document root, so either move the "testpost" folder to C:\tmp or change the directory to wherever the testpost folder is located. Start the server using the .bat file and bring up .html page with something like: http://localhost/testpost.html The bug should occur when you try: Test short CGI response with POST or Test long CGI response with POST The other requests should work fine. The bug will occur regardless of whether IE is set to use HTTP/1.0 or HTTP/1.1. The bug doesn't appear to occur when going through a simple proxy. You can also get the bug to occur using a remote IE client (either on a LAN or over the Net). In addition, it doesn't appear to matter whether running with unbuffered binary pipes (python -u) or not. I also tested against my modified CGIHTTPServer.py See the bug I posted at: http://sourceforge.net/tracker/? func=detail&atid=105470&aid=427345&group_id=5470 My configuration: Windows 2000 Pro, SP2 AMD 1.2GHz 256MB RAM ActiveStatet Python 2.1 (build 210) Internet Explorer 5.5 (5.50.4522.1800) ka --- Mark Lutz said: "FWIW, I noticed today (in between lecturing a class) that on Windows, Python actually uses a special Python- coded socket.py library module, not the standard C- coded socket extension module. socket.py lives in the library directory; it does unique things for closes and deletes that may not make sense in all cases (e.g., the makefile call generates a class instance, not a true file object). It may also be trying to close the underlying socket twice. 
I don't have" ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430160&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:10:34 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:10:34 -0700 Subject: [Python-bugs-list] [ python-Bugs-430200 ] corrupt floats in lists & tuples Message-ID: Bugs item #430200, was updated on 2001-06-04 22:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430200&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: corrupt floats in lists & tuples Initial Comment: C Python 2.1 on GNU/Linux Mandrake-7.2. Intel PIII. Compiled from sources. Assigning a tuple or list of floating-point numbers returns corrupted numbers. In the interpreter with nothing imported: >>> t = (0.5, 0.6, 0.76, 0.1) >>> t (0.5, 0.59999999999999998, 0.76000000000000001, 0.10000000000000001) >>> t = (1.5, 2.26, 3.76, 4.1) >>> t (1.5, 2.2599999999999998, 3.7599999999999998, 4.0999999999999996) >>> l = [1.5, 2.26, 3.76, 4.1] >>> l [1.5, 2.2599999999999998, 3.7599999999999998, 4.0999999999999996] At least one other identical case so far exists in my Python Authors Group. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430200&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:11:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:11:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-418392 ] METH_OLDARGS allows bogus keywords Message-ID: Bugs item #418392, was updated on 2001-04-23 16:02 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=418392&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 3 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Barry Warsaw (bwarsaw) Summary: METH_OLDARGS allows bogus keywords Initial Comment: fileobj.close() is a method that's implemented using PyArg_NoArgs() and METH_OLDARGS. It allows bogus keyword arguments, which are ignored, e.g.: >>> fp = open('/tmp/foo', 'w') >>> fp.close(bogus=1) Also, >>> fp = open('/tmp/foo', 'w') >>> fp.write('hello', bogus=1) TypeError: argument must be string or read-only character buffer, not int >>> fp.write('hello', bogus='world') >>> ^D % cat /tmp/foo hello The fix is to convert these to use METH_VARARGS. I'm submitting this bug report so it doesn't get forgotten. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-04 22:11 Message: Logged In: YES user_id=21627 Fixed with 2.245 of ceval.c. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=418392&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:17:46 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:17:46 -0700 Subject: [Python-bugs-list] [ python-Bugs-416462 ] .pyo files missing in mimetypes.py? 
Message-ID: Bugs item #416462, was updated on 2001-04-16 09:01 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416462&group_id=5470 Category: Python Library Group: Feature Request >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: David M. Beazley (beazley) Assigned to: Barry Warsaw (bwarsaw) Summary: .pyo files missing in mimetypes.py? Initial Comment: This is a very minor nit caught by Paul Dubois in reviewing Python Essential Reference 2nd Ed. The .py and .pyc files are both recognized in the mimetypes module as : '.py': 'text/x-python', '.pyc': 'application/x-python-code', However, no entry appears for '.pyo' files. Should they also appear? '.pyo' : 'application/x-python-code', Admittedly it's pretty minor---I don't think I would have noticed this had it not been pointed out. -- Dave ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-04 22:17 Message: Logged In: YES user_id=21627 Fixed with 1.14 of mimetypes.py. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416462&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:31:54 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:31:54 -0700 Subject: [Python-bugs-list] [ python-Bugs-430200 ] corrupt floats in lists & tuples Message-ID: Bugs item #430200, was updated on 2001-06-04 22:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430200&group_id=5470 Category: Python Interpreter Core >Group: Not a Bug >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: corrupt floats in lists & tuples Initial Comment: C Python 2.1 on GNU/Linux Mandrake-7.2. Intel PIII. Compiled from sources. Assigning a tuple or list of floating-point numbers returns corrupted numbers. In the interpreter with nothing imported: >>> t = (0.5, 0.6, 0.76, 0.1) >>> t (0.5, 0.59999999999999998, 0.76000000000000001, 0.10000000000000001) >>> t = (1.5, 2.26, 3.76, 4.1) >>> t (1.5, 2.2599999999999998, 3.7599999999999998, 4.0999999999999996) >>> l = [1.5, 2.26, 3.76, 4.1] >>> l [1.5, 2.2599999999999998, 3.7599999999999998, 4.0999999999999996] At least one other identical case so far exists in my Python Authors Group. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-04 22:31 Message: Logged In: YES user_id=31435 This is not a bug. Binary floating point cannot represent decimal fractions exactly, so some rounding always occurs (even in Python 1.5.2). What changed is that Python 2.0 shows more precision than before in certain circumstances (repr() and the interactive prompt). 
You can use str() or print to get the old, rounded output: >>> print 0.1+0.1 0.2 >>> Follow the link for a detailed example: http://www.python.org/cgi-bin/moinmoin/RepresentationError ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430200&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:33:22 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:33:22 -0700 Subject: [Python-bugs-list] [ python-Bugs-430200 ] corrupt floats in lists & tuples Message-ID: Bugs item #430200, was updated on 2001-06-04 22:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430200&group_id=5470 Category: Python Interpreter Core Group: Not a Bug Status: Closed Resolution: Invalid Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: corrupt floats in lists & tuples Initial Comment: C Python 2.1 on GNU/Linux Mandrake-7.2. Intel PIII. Compiled from sources. Assigning a tuple or list of floating-point numbers returns corrupted numbers. In the interpreter with nothing imported: >>> t = (0.5, 0.6, 0.76, 0.1) >>> t (0.5, 0.59999999999999998, 0.76000000000000001, 0.10000000000000001) >>> t = (1.5, 2.26, 3.76, 4.1) >>> t (1.5, 2.2599999999999998, 3.7599999999999998, 4.0999999999999996) >>> l = [1.5, 2.26, 3.76, 4.1] >>> l [1.5, 2.2599999999999998, 3.7599999999999998, 4.0999999999999996] At least one other identical case so far exists in my Python Authors Group. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-04 22:33 Message: Logged In: YES user_id=31435 I closed this with a boilerplate reply set up for it. Note that this (well, things "like this") has been a very active discussion topic on comp.lang.python over the past week. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-04 22:31 Message: Logged In: YES user_id=31435 This is not a bug. Binary floating point cannot represent decimal fractions exactly, so some rounding always occurs (even in Python 1.5.2). What changed is that Python 2.0 shows more precision than before in certain circumstances (repr() and the interactive prompt). You can use str() or print to get the old, rounded output: >>> print 0.1+0.1 0.2 >>> Follow the link for a detailed example: http://www.python.org/cgi-bin/moinmoin/RepresentationError ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430200&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:33:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:33:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-422702 ] dbhash.open default Message-ID: Bugs item #422702, was updated on 2001-05-09 10:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=422702&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Grant Griffin (dspguru) Assigned to: Nobody/Anonymous (nobody) Summary: dbhash.open default Initial Comment: The function "dbhash.open" is described in the docs as: open(path, flag[, mode]) ... The flag argument can be 'r' (the default),... These two statements are inconsistent because 'flag' is not shown in the usage as a default. 
Also, the dbhash.open function is declared in the code as: def open(file, flag, mode=0666): which does not currently implement flag as a default. Therefore, I recommend the following: 1) Change the declaration to: def open(file, flag='r', mode=0666): This is consistent with the documented default, and also with anydbm.open. 2) Change the dbmhash doc to show the usage as: open(path[, flag[, mode]]) This notation is consistent with the current description, and with the suggested change in implementation. Thanks! =g2 ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-04 22:33 Message: Logged In: YES user_id=21627 Fixed with dbhash.py 1.6 and libdbhash.tex 1.4. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=422702&group_id=5470 From noreply@sourceforge.net Tue Jun 5 06:59:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 22:59:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-428419 ] include/rangeobject.h needs extern "C" Message-ID: Bugs item #428419, was updated on 2001-05-29 12:53 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428419&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Barry Alan Scott (barry-scott) Assigned to: Nobody/Anonymous (nobody) >Summary: include/rangeobject.h needs extern "C" Initial Comment: include/rangeobject.h needs extern "C" if compiling for C++ as the other .h files have. The workaround is to add these lines to the my app code: extern "C" DL_IMPORT(PyTypeObject) PyRange_Type; extern "C" DL_IMPORT(PyObject *) PyRange_New(long, long, long, int); ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-04 22:59 Message: Logged In: YES user_id=21627 Fixed with 2.16 of rangeobject.h. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428419&group_id=5470 From noreply@sourceforge.net Tue Jun 5 07:05:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Jun 2001 23:05:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-428972 ] any plan for ConfigParser supports XML? Message-ID: Bugs item #428972, was updated on 2001-05-31 05:08 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428972&group_id=5470 Category: Extension Modules Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: eXom (jkuan) Assigned to: Nobody/Anonymous (nobody) Summary: any plan for ConfigParser supports XML? Initial Comment: There are more applications using XML for configuration file. Is there any plan to support this feature in ConfigParser? Thanks Joe ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-04 23:05 Message: Logged In: YES user_id=21627 ConfigParser is specifically designed to read .INI files, so I doubt that it will be extended to XML. If you need to read an XML configuration file, you best use xml.dom.minidom. 
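For anyone who does want the XML route, a small sketch using the standard xml.dom.minidom module; the <config>/<option> layout and the read_options name are invented for illustration, not an existing format:

    from xml.dom import minidom

    def read_options(path):
        # expects something like:
        #   <config>
        #     <option name="host" value="localhost"/>
        #     <option name="port" value="8080"/>
        #   </config>
        doc = minidom.parse(path)
        options = {}
        for node in doc.getElementsByTagName('option'):
            options[node.getAttribute('name')] = node.getAttribute('value')
        return options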
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428972&group_id=5470 From noreply@sourceforge.net Tue Jun 5 11:56:43 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Jun 2001 03:56:43 -0700 Subject: [Python-bugs-list] [ python-Bugs-430269 ] python -U breaks import with 2.1 Message-ID: Bugs item #430269, was updated on 2001-06-05 03:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Nobody/Anonymous (nobody) Summary: python -U breaks import with 2.1 Initial Comment: python -U under Windows is broken with Python 2.1: D:\>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib >>> ^C D:\>python -U Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib Traceback (most recent call last): File "", line 1, in ? ImportError: No module named urllib ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 From noreply@sourceforge.net Wed Jun 6 02:04:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Jun 2001 18:04:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-430269 ] python -U breaks import with 2.1 Message-ID: Bugs item #430269, was updated on 2001-06-05 03:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) >Assigned to: M.-A. Lemburg (lemburg) Summary: python -U breaks import with 2.1 Initial Comment: python -U under Windows is broken with Python 2.1: D:\>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib >>> ^C D:\>python -U Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib Traceback (most recent call last): File "", line 1, in ? ImportError: No module named urllib ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-05 18:04 Message: Logged In: YES user_id=31435 Assigned to Marc-Andre. M-A, do you expect -U to be useful at this point? I thought I saw docs at one point, but can't seem to find them again ... 
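As a generic illustration (the specific stdlib culprit is identified in the follow-up update below), -U makes every plain string literal a unicode object, so byte-table-building code of this shape stops importing -- the concatenation fails with an ASCII decoding error once it reaches chr(128):

    # Under -U the literal '' is really u'', so this raises UnicodeError
    # partway through the loop instead of building a 256-byte table.
    _table = ''
    for i in range(256):
        _table = _table + chr(i)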
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 From noreply@sourceforge.net Wed Jun 6 07:58:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Jun 2001 23:58:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-428289 ] Integrates log4p Message-ID: Bugs item #428289, was updated on 2001-05-29 05:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428289&group_id=5470 Category: Python Interpreter Core Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: eXom (jkuan) Assigned to: Nobody/Anonymous (nobody) Summary: Integrates log4p Initial Comment: I have read about log4j which is great for developers. Also notice that has been ported to python, log4p, log4p.sourceforge.net. I personally think this will be one of the great tools if it integrates into python core. At least more open source python developer will definitely want to use it, rather than they have to maintain their own log4p. Now python has a module that provides auto test framework, I deeply think the next thing should be logging. Thanks Joe ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-05 23:58 Message: Logged In: YES user_id=21627 I hope this won't integrated into Python. Mirroring the Java package structure is not a good idea (from java.text import DateFormat???) Instead, it seems that all the underlying support libraries are already there: you have the syslog module on Unix, and the win32evtlog on Windows. It might be that some universal wrapper is desirable - but log4p currently does not look like material for the Python core. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=428289&group_id=5470 From noreply@sourceforge.net Wed Jun 6 08:21:37 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 00:21:37 -0700 Subject: [Python-bugs-list] [ python-Bugs-430269 ] python -U breaks import with 2.1 Message-ID: Bugs item #430269, was updated on 2001-06-05 03:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: M.-A. Lemburg (lemburg) Summary: python -U breaks import with 2.1 Initial Comment: python -U under Windows is broken with Python 2.1: D:\>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib >>> ^C D:\>python -U Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib Traceback (most recent call last): File "", line 1, in ? ImportError: No module named urllib ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-06 00:21 Message: Logged In: YES user_id=21627 The (first) problem appears to be in string.py, which has _idmap = '' for i in range(256): _idmap = _idmap + chr(i) With -U, _idmap is a unicode string, and adding chr(128) to it will give an ASCII decoding error. Therefore, importing string fails. 
In turn, many other things fail as well. That can be solved by writing _idmap=str('') but then it will still complain that distutils.util cannot be imported in site. One problem may be that site.py changes the strings of sys.path into Unicode strings. In fact, when starting Python with -U -S, it will properly locate urllib. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-05 18:04 Message: Logged In: YES user_id=31435 Assigned to Marc-Andre. M-A, do you expect -U to be useful at this point? I thought I saw docs at one point, but can't seem to find them again ... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 From noreply@sourceforge.net Wed Jun 6 10:57:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 02:57:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-430627 ] Fixes for templates/module.tex file Message-ID: Bugs item #430627, was updated on 2001-06-06 02:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430627&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Fixes for templates/module.tex file Initial Comment: Hello, I am writing some docs for a module I implemented using the templates/module.tex file (great file BTW). I found some minor glitches which you may want to fix: Line 16: Comment says 'descrition' instead of 'description' (one p extra) Line 134: in "\subsection{Example \label{...}}" is the space not wanted IMHO. Line 134 is also not consistent with line 154, where the \label{..} tag is after the \subsection instead of within it. I don't know what to do with global constants (I have a variable NOMATCH='no match' at the global level of the module. Although the variable may be modified, the idea is that it is a constant. I don't know how to document that. The closest match is with \begin{datadesc} I think, but that is not entirely correct. While writing I simply assumed that stuff like \class{xx} or \function{xx} existed. Since LaTeX does not complain, obviously it does. I know that documentation about these macro's exists somewhere but until now, I have not run into it (I haven't been looking too hard for it though). For other users, you may want to add a comment near the top of the file that refers to the exact document that covers what macro's exist. Finally, you may want to add some notes how to LaTeX the module documentation on its own. Basic problem is that the template file on its own will not be parsed by LaTeX. I solved the problem by hacking lib/lib.tex and stripping away most of the existing stuff. A more elegant approach would be to create a wrapper tex file. All in all, I think you all did a great job of making the writing of documentation easier. The existence of docs for many modules as a large factor in the success of Python IMHO !! 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430627&group_id=5470 From noreply@sourceforge.net Wed Jun 6 16:46:12 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 08:46:12 -0700 Subject: [Python-bugs-list] [ python-Bugs-424680 ] distutils module version # not StrictVer Message-ID: Bugs item #424680, was updated on 2001-05-16 15:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=424680&group_id=5470 Category: Distutils Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: A.M. Kuchling (akuchling) Summary: distutils module version # not StrictVer Initial Comment: There is a fairly detailed definition in the distutils module (in version.py) about valid (or Strict) version numbers. However the version number of the distutils module does not conform to this numbering scheme. Specifically Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import distutils >>> distutils.__version__ '1.0.2pre' >>> from distutils.version import StrictVersion >>> StrictVersion(distutils.__version__) Traceback (most recent call last): File "", line 1, in ? File "c:\program files\python21 \lib\distutils\version.py", line 42, in __init__ self.parse(vstring) File "c:\program files\python21 \lib\distutils\version.py", line 109, in parse raise ValueError, "invalid version number '%s'" % vstring ValueError: invalid version number '1.0.2pre' >>> ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2001-06-06 08:46 Message: Logged In: YES user_id=11375 Fixed in the current CVS. (This is also an obvious fix for 2.1.1.) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=424680&group_id=5470 From noreply@sourceforge.net Wed Jun 6 17:14:45 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 09:14:45 -0700 Subject: [Python-bugs-list] [ python-Bugs-430627 ] Fixes for templates/module.tex file Message-ID: Bugs item #430627, was updated on 2001-06-06 02:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430627&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Fixes for templates/module.tex file Initial Comment: Hello, I am writing some docs for a module I implemented using the templates/module.tex file (great file BTW). I found some minor glitches which you may want to fix: Line 16: Comment says 'descrition' instead of 'description' (one p extra) Line 134: in "\subsection{Example \label{...}}" is the space not wanted IMHO. Line 134 is also not consistent with line 154, where the \label{..} tag is after the \subsection instead of within it. I don't know what to do with global constants (I have a variable NOMATCH='no match' at the global level of the module. Although the variable may be modified, the idea is that it is a constant. I don't know how to document that. The closest match is with \begin{datadesc} I think, but that is not entirely correct. 
While writing I simply assumed that stuff like \class{xx} or \function{xx} existed. Since LaTeX does not complain, obviously it does. I know that documentation about these macro's exists somewhere but until now, I have not run into it (I haven't been looking too hard for it though). For other users, you may want to add a comment near the top of the file that refers to the exact document that covers what macro's exist. Finally, you may want to add some notes how to LaTeX the module documentation on its own. Basic problem is that the template file on its own will not be parsed by LaTeX. I solved the problem by hacking lib/lib.tex and stripping away most of the existing stuff. A more elegant approach would be to create a wrapper tex file. All in all, I think you all did a great job of making the writing of documentation easier. The existence of docs for many modules as a large factor in the success of Python IMHO !! ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-06 09:14 Message: Logged In: YES user_id=3066 Fixed in the files: Doc/templates/howto.tex (revisions 1.5, 1.4.12.1) Doc/templates/manual.tex (revisions 1.3, 1.2.12.1) Doc/templates/module.tex (revisions 1.22, 1.21.6.1) The space before the \label{} is considered acceptable, and is given more often than omitted in the Python documentation. This is done primarily for readability; no problems with formatting have been reported which relate to this. It is not an error for LaTeX. "datadesc" is the appropriate markup for module-level constants as well as mutable data. The description should indicate whether changes from outside the module are respected. You can use \constant{} to mark the names of constants in running text. The file Doc/templates/howto.tex includes an example showing how to include a module section in a formattable document. If you think this is not sufficient, please file a separate request for an example showing a wrapper. Thanks for your comments on the Python documentation! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430627&group_id=5470 From noreply@sourceforge.net Wed Jun 6 18:20:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 10:20:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-430753 ] Python 2.1 build fails on hp-ux 11 Message-ID: Bugs item #430753, was updated on 2001-06-06 10:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430753&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.1 build fails on hp-ux 11 Initial Comment: I'm building on HP-UX 11.00 with HP C B.11.02.02. I've tried it on both Series 700 and 800 hardware. I've attached the make logfile. 
There are problems with: build.py not adding +z when compiling modules termios.c - may be fixed according to another bug report _cursesmodule.c Please send any fixes to joshua.weage@arup.com Thanks ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430753&group_id=5470 From noreply@sourceforge.net Wed Jun 6 20:28:50 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 12:28:50 -0700 Subject: [Python-bugs-list] [ python-Bugs-430269 ] python -U breaks import with 2.1 Message-ID: Bugs item #430269, was updated on 2001-06-05 03:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: M.-A. Lemburg (lemburg) Summary: python -U breaks import with 2.1 Initial Comment: python -U under Windows is broken with Python 2.1: D:\>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib >>> ^C D:\>python -U Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib Traceback (most recent call last): File "", line 1, in ? ImportError: No module named urllib ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-06 12:28 Message: Logged In: YES user_id=31435 Martin, see email to Python-Dev: best I can tell, nobody expects -U to work yet. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-06 00:21 Message: Logged In: YES user_id=21627 The (first) problem appears to be in string.py, which has _idmap = '' for i in range(256): _idmap = _idmap + chr(i) With -U, _idmap is a unicode string, and adding chr(128) to it will give an ASCII decoding error. Therefore, importing string fails. In turn, many other things fail as well. That can be solved by writing _idmap=str('') but then it will still complain that distutils.util cannot be imported in site. One problem may be that site.py changes the strings of sys.path into Unicode strings. In fact, when starting Python with -U -S, it will properly locate urllib. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-05 18:04 Message: Logged In: YES user_id=31435 Assigned to Marc-Andre. M-A, do you expect -U to be useful at this point? I thought I saw docs at one point, but can't seem to find them again ... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 From noreply@sourceforge.net Wed Jun 6 22:17:46 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 14:17:46 -0700 Subject: [Python-bugs-list] [ python-Bugs-419390 ] base64.py could be smarter... 
Message-ID: Bugs item #419390, was updated on 2001-04-26 23:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Nobody/Anonymous (nobody) Summary: base64.py could be smarter... Initial Comment: base64.encodestring and decodestring take the provided string, wrap it in a StringIO, then pass it to encode/decode which uses read() to pull it back out again. Seems pretty inefficient. Replacing decodestring with: return binascii.a2b_base64(s) results in a speedup of a factor of 16 or so. (my sample: a 2Mb encoded voice message - takes an average of 10s in the current form, and 0.6s using just binascii.) A similar speedup for encodestring seems possible. ---------------------------------------------------------------------- >Comment By: Peter Schneider-Kamp (nowonder) Date: 2001-06-06 14:17 Message: Logged In: YES user_id=14463 Looks good to me. Uploaded (extremely small) patch #430846. Unfortunately speeding up encoding of a String seems to be harder (binascii.b2a_base64 accepts at most 76 bytes). Ideas anyone? ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2001-05-07 21:27 Message: Logged In: YES user_id=31392 Anthony, Could you submit a patch? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 From noreply@sourceforge.net Wed Jun 6 22:19:41 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 14:19:41 -0700 Subject: [Python-bugs-list] [ python-Bugs-419390 ] base64.py could be smarter... Message-ID: Bugs item #419390, was updated on 2001-04-26 23:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) >Assigned to: Peter Schneider-Kamp (nowonder) Summary: base64.py could be smarter... Initial Comment: base64.encodestring and decodestring take the provided string, wrap it in a StringIO, then pass it to encode/decode which uses read() to pull it back out again. Seems pretty inefficient. Replacing decodestring with: return binascii.a2b_base64(s) results in a speedup of a factor of 16 or so. (my sample: a 2Mb encoded voice message - takes an average of 10s in the current form, and 0.6s using just binascii.) A similar speedup for encodestring seems possible. ---------------------------------------------------------------------- Comment By: Peter Schneider-Kamp (nowonder) Date: 2001-06-06 14:17 Message: Logged In: YES user_id=14463 Looks good to me. Uploaded (extremely small) patch #430846. Unfortunately speeding up encoding of a String seems to be harder (binascii.b2a_base64 accepts at most 76 bytes). Ideas anyone? ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2001-05-07 21:27 Message: Logged In: YES user_id=31392 Anthony, Could you submit a patch? 
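A minimal sketch of the decodestring() change described in the report, assuming nothing beyond the binascii module itself (this is the reporter's one-liner wrapped in a function, not the committed patch #430846). Note that binascii.a2b_base64("") raised a SystemError before the fix tracked as item #430849 elsewhere in this digest:

import binascii

def decodestring(s):
    # Decode the whole base64 string in one C call instead of routing it
    # through StringIO and base64.decode().
    return binascii.a2b_base64(s)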
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 From noreply@sourceforge.net Wed Jun 6 22:29:12 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 14:29:12 -0700 Subject: [Python-bugs-list] [ python-Bugs-430849 ] binascii.a2b_base64(""): internal error Message-ID: Bugs item #430849, was updated on 2001-06-06 14:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430849&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Peter Schneider-Kamp (nowonder) Assigned to: Nobody/Anonymous (nobody) Summary: binascii.a2b_base64(""): internal error Initial Comment: On trying to decode an empty string with binascii.a2b_base64 a SystemError is encountered: >>> binascii.a2b_base64("") Traceback (most recent call last): File "", line 1, in ? SystemError: Objects/stringobject.c:2589: bad argument to internal function The function that raises the SystemError is _PyString_Resize of stringobject.c fame. A quick&dirty fix is attached. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430849&group_id=5470 From noreply@sourceforge.net Wed Jun 6 22:41:02 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 14:41:02 -0700 Subject: [Python-bugs-list] [ python-Bugs-419390 ] base64.py could be smarter... Message-ID: Bugs item #419390, was updated on 2001-04-26 23:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Peter Schneider-Kamp (nowonder) Summary: base64.py could be smarter... Initial Comment: base64.encodestring and decodestring take the provided string, wrap it in a StringIO, then pass it to encode/decode which uses read() to pull it back out again. Seems pretty inefficient. Replacing decodestring with: return binascii.a2b_base64(s) results in a speedup of a factor of 16 or so. (my sample: a 2Mb encoded voice message - takes an average of 10s in the current form, and 0.6s using just binascii.) A similar speedup for encodestring seems possible. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-06 14:41 Message: Logged In: YES user_id=31435 Give this a whirl (remove leading periods, inserted to default SF space-mangling): .def encodestring(s): . pieces = [] . for i in range(0, len(s), MAXBINSIZE): . chunk = s[i : i + MAXBINSIZE] . pieces.append(binascii.b2a_base64(chunk)) . return "".join(pieces) ---------------------------------------------------------------------- Comment By: Peter Schneider-Kamp (nowonder) Date: 2001-06-06 14:17 Message: Logged In: YES user_id=14463 Looks good to me. Uploaded (extremely small) patch #430846. Unfortunately speeding up encoding of a String seems to be harder (binascii.b2a_base64 accepts at most 76 bytes). Ideas anyone? ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2001-05-07 21:27 Message: Logged In: YES user_id=31392 Anthony, Could you submit a patch? 
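The same encodestring() sketch without the leading-period workaround, for readability. MAXBINSIZE is the chunk size base64.py already defines (57 input bytes, giving 76 output characters per line; treat the exact constant here as an assumption):

import binascii

MAXBINSIZE = 57   # assumed value of the chunk size defined in base64.py

def encodestring(s):
    # Encode in MAXBINSIZE chunks so each b2a_base64() call emits one
    # line of output, then join the lines at the end.
    pieces = []
    for i in range(0, len(s), MAXBINSIZE):
        chunk = s[i : i + MAXBINSIZE]
        pieces.append(binascii.b2a_base64(chunk))
    return "".join(pieces)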
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 From noreply@sourceforge.net Wed Jun 6 22:47:25 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 14:47:25 -0700 Subject: [Python-bugs-list] [ python-Bugs-430849 ] binascii.a2b_base64(""): internal error Message-ID: Bugs item #430849, was updated on 2001-06-06 14:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430849&group_id=5470 Category: Extension Modules Group: None Status: Open >Resolution: Accepted Priority: 5 Submitted By: Peter Schneider-Kamp (nowonder) >Assigned to: Peter Schneider-Kamp (nowonder) >Summary: binascii.a2b_base64(""): internal error Initial Comment: On trying to decode an empty string with binascii.a2b_base64 a SystemError is encountered: >>> binascii.a2b_base64("") Traceback (most recent call last): File "", line 1, in ? SystemError: Objects/stringobject.c:2589: bad argument to internal function The function that raises the SystemError is _PyString_Resize of stringobject.c fame. A quick&dirty fix is attached. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-06 14:47 Message: Logged In: YES user_id=31435 Accepted, although if the error condition is that the input is empty, it would be better to change the error msg to *say* that instead of the much vaguer "not enough data". That is, don't make the user guess what "not enough" means. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430849&group_id=5470 From noreply@sourceforge.net Wed Jun 6 23:46:25 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 15:46:25 -0700 Subject: [Python-bugs-list] [ python-Bugs-429329 ] actual-parameters *arg, **kws not doc'd Message-ID: Bugs item #429329, was updated on 2001-06-01 08:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429329&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alex Martelli (aleax) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: actual-parameters *arg, **kws not doc'd Initial Comment: 5.3.4 in the language reference should document the forms *args and **kwds for actual parameters, but it makes no mention of them and does not allow for them in the syntax productions. ---------------------------------------------------------------------- >Comment By: Jeremy Hylton (jhylton) Date: 2001-06-06 15:46 Message: Logged In: YES user_id=31392 I made a little progress on this pre-parenthood. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-01 08:38 Message: Logged In: YES user_id=3066 Assigned to Jeremy, since he shepharded the patch into the Python release. Changes should be integrated with the 2.1.1 and head branches. 
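For reference, the call-site forms that section 5.3.4 should cover, using only the extended call syntax that has been in the language since 2.0:

def f(a, b, c=0, **rest):
    print a, b, c, rest

args = (1, 2)
kwds = {'c': 3, 'd': 4}

f(*args)           # equivalent to apply(f, args)
f(*args, **kwds)   # equivalent to apply(f, args, kwds)
f(1, *(2,), **{'c': 3})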
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429329&group_id=5470 From noreply@sourceforge.net Wed Jun 6 23:51:07 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 15:51:07 -0700 Subject: [Python-bugs-list] [ python-Bugs-430269 ] python -U breaks import with 2.1 Message-ID: Bugs item #430269, was updated on 2001-06-05 03:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 Category: Unicode >Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: M.-A. Lemburg (lemburg) Summary: python -U breaks import with 2.1 Initial Comment: python -U under Windows is broken with Python 2.1: D:\>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib >>> ^C D:\>python -U Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib Traceback (most recent call last): File "", line 1, in ? ImportError: No module named urllib ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-06 15:51 Message: Logged In: YES user_id=31435 Changed Group from Platform-Specific (since it's got nothing to do with Windows specifically) to Feature Request (since nobody believes it *should* work now). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-06 12:28 Message: Logged In: YES user_id=31435 Martin, see email to Python-Dev: best I can tell, nobody expects -U to work yet. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-06 00:21 Message: Logged In: YES user_id=21627 The (first) problem appears to be in string.py, which has _idmap = '' for i in range(256): _idmap = _idmap + chr(i) With -U, _idmap is a unicode string, and adding chr(128) to it will give an ASCII decoding error. Therefore, importing string fails. In turn, many other things fail as well. That can be solved by writing _idmap=str('') but then it will still complain that distutils.util cannot be imported in site. One problem may be that site.py changes the strings of sys.path into Unicode strings. In fact, when starting Python with -U -S, it will properly locate urllib. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-05 18:04 Message: Logged In: YES user_id=31435 Assigned to Marc-Andre. M-A, do you expect -U to be useful at this point? I thought I saw docs at one point, but can't seem to find them again ... 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 From noreply@sourceforge.net Thu Jun 7 06:54:03 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 22:54:03 -0700 Subject: [Python-bugs-list] [ python-Bugs-430849 ] binascii.a2b_base64(""): internal error Message-ID: Bugs item #430849, was updated on 2001-06-06 14:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430849&group_id=5470 Category: Extension Modules Group: None >Status: Closed Resolution: Accepted Priority: 5 Submitted By: Peter Schneider-Kamp (nowonder) Assigned to: Peter Schneider-Kamp (nowonder) >Summary: binascii.a2b_base64(""): internal error Initial Comment: On trying to decode an empty string with binascii.a2b_base64 a SystemError is encountered: >>> binascii.a2b_base64("") Traceback (most recent call last): File "", line 1, in ? SystemError: Objects/stringobject.c:2589: bad argument to internal function The function that raises the SystemError is _PyString_Resize of stringobject.c fame. A quick&dirty fix is attached. ---------------------------------------------------------------------- >Comment By: Peter Schneider-Kamp (nowonder) Date: 2001-06-06 22:54 Message: Logged In: YES user_id=14463 checked in with "Cannot decode empty input" ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-06 14:47 Message: Logged In: YES user_id=31435 Accepted, although if the error condition is that the input is empty, it would be better to change the error msg to *say* that instead of the much vaguer "not enough data". That is, don't make the user guess what "not enough" means. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430849&group_id=5470 From noreply@sourceforge.net Thu Jun 7 07:03:53 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Jun 2001 23:03:53 -0700 Subject: [Python-bugs-list] [ python-Bugs-419390 ] base64.py could be smarter... Message-ID: Bugs item #419390, was updated on 2001-04-26 23:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 Category: Python Library Group: Feature Request >Status: Closed Resolution: None Priority: 5 Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Peter Schneider-Kamp (nowonder) Summary: base64.py could be smarter... Initial Comment: base64.encodestring and decodestring take the provided string, wrap it in a StringIO, then pass it to encode/decode which uses read() to pull it back out again. Seems pretty inefficient. Replacing decodestring with: return binascii.a2b_base64(s) results in a speedup of a factor of 16 or so. (my sample: a 2Mb encoded voice message - takes an average of 10s in the current form, and 0.6s using just binascii.) A similar speedup for encodestring seems possible. ---------------------------------------------------------------------- >Comment By: Peter Schneider-Kamp (nowonder) Date: 2001-06-06 23:03 Message: Logged In: YES user_id=14463 Gave it a good whirl. Phew! 6 times faster than the original, 4 times faster than my best attempt. Updating patch #430846. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-06 14:41 Message: Logged In: YES user_id=31435 Give this a whirl (remove leading periods, inserted to default SF space-mangling): .def encodestring(s): . pieces = [] . for i in range(0, len(s), MAXBINSIZE): . chunk = s[i : i + MAXBINSIZE] . pieces.append(binascii.b2a_base64(chunk)) . return "".join(pieces) ---------------------------------------------------------------------- Comment By: Peter Schneider-Kamp (nowonder) Date: 2001-06-06 14:17 Message: Logged In: YES user_id=14463 Looks good to me. Uploaded (extremely small) patch #430846. Unfortunately speeding up encoding of a String seems to be harder (binascii.b2a_base64 accepts at most 76 bytes). Ideas anyone? ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2001-05-07 21:27 Message: Logged In: YES user_id=31392 Anthony, Could you submit a patch? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419390&group_id=5470 From noreply@sourceforge.net Thu Jun 7 11:44:33 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 03:44:33 -0700 Subject: [Python-bugs-list] [ python-Bugs-430991 ] wrong co_lnotab Message-ID: Bugs item #430991, was updated on 2001-06-07 03:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430991&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: wrong co_lnotab Initial Comment: The compiler produces a buggy co_lnotab in code objects if a single source line produces more than 255 bytes of bytecode. This can lead to tracebacks pointing to wrong source lines in "python -O". For example, to emit the information "266 bytes of bytecode, next line is 5 source code lines below", it writes in co_lnotab 255 5 11 0 althought it should write 255 0 11 5 Because of this an exception occurring in the last 11 bytes of code will be incorrectly reported 5 lines below. The problem is even more confusing if the number of lines to skip is itself larger than 255 (see attached example file). 
Fix: in compile.c correct the function com_set_lineno by replacing the inner while loop with the following : while (incr_addr > 255) { com_add_lnotab(c, 255, 0); incr_addr -= 255; } while (incr_line > 255) { com_add_lnotab(c, incr_addr, 255); incr_line -= 255; incr_addr = 0; } if (incr_line > 0 || incr_addr > 0) { com_add_lnotab(c, incr_addr, incr_line); } ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430991&group_id=5470 From noreply@sourceforge.net Thu Jun 7 12:21:08 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 04:21:08 -0700 Subject: [Python-bugs-list] [ python-Bugs-431000 ] PyMapping_DelItem[String]() is broken Message-ID: Bugs item #431000, was updated on 2001-06-07 04:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431000&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 3 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: PyMapping_DelItem[String]() is broken Initial Comment: Somehow nobody ever noticed this, so I doubt this is a big problem in practice. But it's a bug nevertheless. In abstract.h, the abstract APIs PyMapping_DelItem() is equivalenced to PyDict_DelItem(); likewise for ...DelItemString(). This is broken because the PyMapping_ family of functions should work for any mapping type, not just for dictionaries! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431000&group_id=5470 From noreply@sourceforge.net Thu Jun 7 15:26:33 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 07:26:33 -0700 Subject: [Python-bugs-list] [ python-Bugs-431060 ] print 'foo',;readline() softspace error Message-ID: Bugs item #431060, was updated on 2001-06-07 07:26 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431060&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: print 'foo',;readline() softspace error Initial Comment: python 2.1 (and 1.5), intel linux and sparc solaris. def f(): print 'foo: ', sys.stdin.readline() print 'bar: ' f() foo: george bar: A print with trailing comma, followed by a readline and another print, puts an extra space at the beginning of the second printed line. An explicit setting of sys.stdout.softspace=0 after the first print averts this error. 
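A minimal sketch of the workaround mentioned at the end of the report; the prompt() helper is invented for illustration, and the key line is clearing sys.stdout.softspace right after the print that ends with a comma:

import sys

def prompt(label):
    # print with a trailing comma sets sys.stdout.softspace = 1, so the
    # next print would start with a space even though a readline()
    # happened in between.  Clearing the flag avoids the stray space.
    print label,
    sys.stdout.softspace = 0
    return sys.stdin.readline()

name = prompt('foo: ')
print 'bar: '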
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431060&group_id=5470 From noreply@sourceforge.net Thu Jun 7 19:59:08 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 11:59:08 -0700 Subject: [Python-bugs-list] [ python-Bugs-431191 ] termios, Python 2.1 and 1.5.2, AIX, SCO Message-ID: Bugs item #431191, was updated on 2001-06-07 11:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431191&group_id=5470 Category: Build Group: None Status: Open Resolution: None Priority: 5 Submitted By: Mike Kent (mikent) Assigned to: Nobody/Anonymous (nobody) Summary: termios, Python 2.1 and 1.5.2, AIX, SCO Initial Comment: Getting the 'termios' module to compile or work on various platforms is like trying to pass a bowling ball. We need to input no-echoing passwords, therefore the need for termios. Building Python 2.1 on aix-4.3-2.1, with termios turned on in Modules/Setup, results in undefined symbol errors for 'VDISCARD' and 'VWERASE'. Building Python 1.5.2 on the same platform, with termios turned on, works fine, and gives us non-echoing passwords via getpass. Building Python 1.5.2 on SCO Open Server 5, with termios turned on, yields echoing passwords. Hmm. Upon further investingation (browsing the getpass.py source code), we found that the configure script believes this platform to be sco_sv3, and that the necessary file Lib/plat_sco_sv3/TERMIOS.py was missing. Running the 'regen' script in that directory created TERMIOS.py, which allowed a build of python with termios turned on. However, when testing getpass, it would now generate a termios.error exception. From investigating this (debugging termios.c), we determined that a call to tcsetattr was returning an error code and setting errno to 22 (Invalid argument). We gave up at that point. Wasn't one of the selling points of Python the ability to write code that would run on multiple platforms? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431191&group_id=5470 From noreply@sourceforge.net Thu Jun 7 23:20:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 15:20:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-421212 ] PythonPath registry value ignored in Py2 Message-ID: Bugs item #421212, was updated on 2001-05-03 18:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=421212&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Mark Hammond (mhammond) Summary: PythonPath registry value ignored in Py2 Initial Comment: The difference I see between this and 229584 is that there are subkeys under ..\2.1\PythonPath as I installed win32extensions (bld 139). I still cannot pick up these paths. I resorted to simply hard codeing them in the path for windows 2k. Is this no longer supported? Esp since 229584 seems closed wi works for me. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-07 15:20 Message: Logged In: YES user_id=21627 If I understand the report correctly, the user complains that HKLM\Software\Python\PythonCore\2.1\PythonPath is not used to build sys.path. 
According to the commentary in PC/getpathp.c, this is not a bug: If Python can locate its home from PYTHONHOME or by finding the landmark, it will ignore the PythonPath default value, and only consider the subkeys. So I close this as Won't Fix. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-05-03 19:56 Message: Logged In: YES user_id=31435 Assigned to Mark. Unclear what this is about, or even what "Py2" means. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=421212&group_id=5470 From noreply@sourceforge.net Thu Jun 7 23:26:05 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 15:26:05 -0700 Subject: [Python-bugs-list] [ python-Bugs-416704 ] More robust freeze Message-ID: Bugs item #416704, was updated on 2001-04-17 07:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416704&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Toby Dickenson (htrd) Assigned to: Mark Hammond (mhammond) Summary: More robust freeze Initial Comment: This patch addresses three issues, all relating to robustness of frozen programs. Specifically, this patch allows explicit and complete control over which modules may be loaded from source on the filesystem of the host system where the frozen program is run, and which may not. Without this patch it is impossible to create a non- trivial frozen program which will *never* load a module from source on the filesystem. 1. A patch to correct bug #404545 (frozen package import uses wrong files). Under this change, submodules of a frozen package must themselves be frozen modules. Previously, the import machinery may also try to import submodules from curiously named files (packagename.modulename.py) from directories in sys.path 2. A patch to add an extra command line option -E to freeze.py, which forces freeze to terminate with an error message if there are modules that it can not locate. If this switch is not specified then the default behaviour is unchanged: modules which can not be found by freeze will not be included in the frozen program, and the import machinery will try to load them from source on sys.path when the frozen program is run. In practice we have found that a missing module is probably an error (and it is a fairly frequent error too!). The -E switch can be used to detect this error; any missing modules will cause freeze.py to fail. In the rare case of a frozen module importing a non- frozen one (ie one which should be loaded from source when the program is run), the non-frozen module must be excluded from the freeze using the -x option. 3. A patch to add an extra command line option -X to freeze.py, which indicates that a specified module is excluded from the freeze, and also that the frozen program should not try to load the module from sys.path when it is imported. Importing the specified module will always trigger an ImportError. This is useful if a module used by a frozen program can optionally use a submodule... try: import optional_submodule except ImportError: pass It may be preferable for the frozen program's behaviour to not depend on whether optional_submodule happens to be installed on the host system, and that the 'import optional_submodule' should always fail with an ImportError. 
This can be achieved using the '- X optional_submodule' command line switch to freeze.py This is implemented by including the excluded module in the frozen imports table (_PyImport_FrozenModules), with the code pointer set to NULL. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-07 15:26 Message: Logged In: YES user_id=21627 Why is this assigned to Mark? I cannot see anything windows-specific in it. Mark, if you are not interested in reviewing this patch, I recommend to unassign this from yourself. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416704&group_id=5470 From noreply@sourceforge.net Fri Jun 8 06:19:47 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Jun 2001 22:19:47 -0700 Subject: [Python-bugs-list] [ python-Bugs-231249 ] cgi.py opens too many (temporary) files Message-ID: Bugs item #231249, was updated on 2001-02-06 04:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Richard van de Stadt (stadt) Assigned to: Guido van Rossum (gvanrossum) Summary: cgi.py opens too many (temporary) files Initial Comment: cgi.FieldStorage() is used to get the contents of a webform. It turns out that for each line, a new temporary file is opened. This causes the script that is using cgi.FieldStorage() to reach the webserver's limit of number of opened files, as described by 'ulimit -n'. The standard value for Solaris systems seems to be 64, so webforms with that many fields cannot be dealt with. A solution would seem to use the same temporary filename, since only a maxmimum one file is (temporarily) used at the same time. I did an "ls|wc -l" while the script was running, which showed only zeroes and ones. (I'm using Python for CyberChair, an online paper submission and reviewing system. The webform under discussion has one input field for each reviewer, stating the papers he or she is supposed to be reviewing. One conference that is using CyberChair has almost 140 reviewers. Their system's open file limit is 64. Using the same data on a system with an open file limit of 260 _is_ able to deal with this.) ---------------------------------------------------------------------- Comment By: Richard Jones (richard) Date: 2001-06-07 22:19 Message: Logged In: YES user_id=6405 I've just encountered this bug myself on Mac OS X. The default number for "ulimit -n" is 256, so you can imagine that it's a little worrying that I ran out :) As has been discussed, the multipart/form-data sumission sends a sub-part for every form name=value pair. I ran into the bug in cgi.py because I have a select list with >256 options - which I selected all entries in. This tips me over the 256 open file limit. I have two half-baked alternative suggestions for a solution: 1. use a single tempfile, opened when the multipart parsing is started. That tempfile may then be passed to the child FieldStorage instances and used by the parse_single calls. The child instances just keep track of their index and length in the tempfile. 2. use StringIO for parts of type "text/plain" and use a tempfile for all the rest. This has the problem that someone could cut-paste a core image into a text field though. I might have a crack at a patch for approach #1 this weekend... 
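A minimal sketch of the StringIO idea (closer to option 2 above than to the single-tempfile plan): subclass FieldStorage and override make_file() so that ordinary fields stay in memory and only real uploads get a temporary file. The make_file(binary) signature and the self.filename attribute follow cgi.py as shipped with 2.1; the subclass name is invented, and code that calls item.file.fileno() would still need a real file:

import cgi
from StringIO import StringIO

class LeanFieldStorage(cgi.FieldStorage):
    def make_file(self, binary=None):
        if not self.filename:
            # No filename in the Content-Disposition header, so this is a
            # plain form field: a StringIO costs no file descriptor.
            return StringIO()
        # A genuine file upload still goes to a temporary file.
        return cgi.FieldStorage.make_file(self, binary)

# form = LeanFieldStorage()   # used like cgi.FieldStorage()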
---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 21:04 Message: Logged In: YES user_id=149084 The patch posted 11 Apr is a neat and compact solution! The only thing I can imagine would be a problem would be if a form had a large number of (small) fields which set the content-length attribute. I don't have an example of such, though. Text fields perhaps? If that was a realistic problem, a solution might be for make_file() to maintain a pool of temporary files; if the field (binary or not) turned out to be small a StringIO could be created and the temporary file returned to the pool. There are a couple of things I've been thinking about in cgi.py; the patch doesn't seem to change the situation one way or the other: There doesn't seem to be any RFC requirement that a file upload be accompanied by a content-length attribute, regardless of whether it is binary or ascii. In fact, some of the RFC examples I've seen omit it. If content-length is not specified, the upload will be processed by file.readline(). Can this cause problems for arbitrary binary files? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-12 11:59 Message: Logged In: YES user_id=6380 Uploading a new patch, more complicated. I don't like it as much. But it works even if the caller uses item.file.fileno(). ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 10:05 Message: Logged In: YES user_id=149084 I have a thought on this, but it will be about 10 hours before I can submit it. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-11 13:20 Message: Logged In: YES user_id=6380 Here's a proposed patch. Can anyone think of a reason why this should not be checked in as part of 2.1? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 11:54 Message: Logged In: YES user_id=6380 I wish I'd heard about this sooner. It does seem a problem and it does make sense to use StringIO unless there's a lot of data. But we can't fix this in time for 2.1... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-04-10 10:54 Message: Logged In: YES user_id=11375 Unassigning so someone else can take a look at it. ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-02-18 23:32 Message: In the particular HTML form referenced it appears that a workaround might be to eliminate the enctype attribute in the
tag and take the application/x-www-form-urlencoded default since no files are being uploaded. When make_file creates the temporary files they are immediately unlinked. There is probably a brief period before the unlink is finalized during which the ls process might see a file; that would account for the output of ls | wc. It appears that the current cgi.py implementation leaves all the (hundreds of) files open until the cgi process releases the FieldStorage object or exits. My first thought was, if the filename recovered from the header is None have make_file create a StringIO object instead of a temp file. That way a temp file is only created when a file is uploaded. This is not inconsistent with the cgi.py docs. Unfortunately, RFC2388 4.4 states that a filename is not required to be sent, so it looks like your solution based on the size of the data received is the correct one. Below 1K you could copy the temp file contents to a StringIO and assign it to self.file, then explicitly close the temp file via its descriptor. If only I understood the module better ::-(( and had a way of tunnel testing it I might have had the temerity to offer a patch. (I'm away for a couple of weeks starting tomorrow.) ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-18 14:08 Message: Ah, I see; the traceback makes this much clearer. When you're uploading a file, everything in the form is sent as a MIME document in the body; every field is accompanied by a boundary separator and Content-Disposition header. In multipart mode, cgi.py copies each field into a temporary file. The first idea I had was to only use tempfiles for the actual upload field; unfortunately, that doesn't help because the upload field isn't special, and cgi.py has no way to know which it is ahead of time. Possible second approach: measure the size of the resulting file; if it's less than some threshold (1K? 10K?), read its contents into memory and close the tempfile. This means only the largest fields will require that a file descriptor be kept open. I'll explore this more after beta1. ---------------------------------------------------------------------- Comment By: Richard van de Stadt (stadt) Date: 2001-02-17 18:37 Message: I do *not* mean file upload fields. I stumbled upon this with a webform that contains 141 'simple' input fields like the form you can see here (which 'only' contains 31 of those input fields): http://www.cyberchair.org/cgi-cyb/genAssignPageReviewerPapers.py (use chair/chair to login) When the maximum number of file descriptors used per process was increased to 160 (by the sysadmins), the problem did not occur anymore, and the webform could be processed. 
This was the error message I got: Traceback (most recent call last): File "/usr/local/etc/httpd/DocumentRoot/ICML2001/cgi-bin/submitAssignRP.py", line 144, in main File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 504, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 593, in read_multi File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 506, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 603, in read_single File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 623, in read_lines File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 713, in make_file File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/tempfile.py", line 144, in TemporaryFile OSError: [Errno 24] Too many open files: '/home/yara/brodley/icml2001/tmp/@26048.61' I understand why you assume that it would concern *file* uploads, but this is not the case. As I reported before, it turns out that for each 'simple' field a temporary file is used in to transfer the contents to the script that uses the cgi.FieldStorage() method, even if no files are being uploaded. The problem is not that too many files are open at the same time (which is 1 at most). It is the *amount* of files that is causing the troubles. If the same temporary file would be used, this problem would probably not have happened. My colleague Fred Gansevles wrote a possible solution, but mentioned that this might introduce the need for protection against a 'symlink attack' (whatever that may be). This solution(?) concentrates on the open file descriptor's problem, while Fred suggests a redesign of FieldStorage() would probably be better. import os, tempfile AANTAL = 50 class TemporaryFile: def __init__(self): self.name = tempfile.mktemp("") open(self.name, 'w').close() self.offset = 0 def seek(self, offset): self.offset = offset def read(self): fd = open(self.name, 'w+b', -1) fd.seek(self.offset) data = fd.read() self.offset = fd.tell() fd.close() return data def write(self, data): fd = open(self.name, 'w+b', -1) fd.seek(self.offset) fd.write(data) self.offset = fd.tell() fd.close() def __del__(self): os.unlink(self.name) def add_fd(l, n) : map(lambda x,l=l: l.append(open('/dev/null')), range(n)) def add_tmp(l, n) : map(lambda x,l=l: l.append(TemporaryFile()), range(n)) def main (): import getopt, sys try: import resource soft, hard = resource.getrlimit (resource.RLIMIT_NOFILE) resource.setrlimit (resource.RLIMIT_NOFILE, (hard, hard)) except ImportError: soft, hard = 64, 1024 opts, args = getopt.getopt(sys.argv[1:], 'n:t') aantal = AANTAL tmp = add_fd for o, a in opts: if o == '-n': aantal = int(a) elif o == '-t': tmp = add_tmp print "aantal te gebruiken fd's:", aantal #dutch; English: 'number of fds to be used' print 'tmp:', tmp.func_name tmp_files = [] files=[] tmp(tmp_files, aantal) try: add_fd(files,hard) except IOError: pass print "aantal vrije gebruiken fd's:", len(files) #enlish: 'number of free fds' main() Running the above code: python ulimit.py [-n number] [-t] default number = 50, while using 'real' fd-s for temporary files. When using the '-t' flag 'smart' temporary files are used. 
Output: $ python ulimit.py aantal te gebruiken fd's: 50 tmp: add_fd aantal vrije gebruiken fd's: 970 $ python ulimit.py -t aantal te gebruiken fd's: 50 tmp: add_tmp aantal vrije gebruiken fd's: 1020 $ python ulimit.py -n 1000 aantal te gebruiken fd's: 1000 tmp: add_fd aantal vrije gebruiken fd's: 20 $ python ulimit.py -n 1000 -t aantal te gebruiken fd's: 1000 tmp: add_tmp aantal vrije gebruiken fd's: 1020 ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-16 21:41 Message: I assume you mean 64 file upload fields, right? Can you provide a small test program that triggers the problem? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 From noreply@sourceforge.net Fri Jun 8 20:37:05 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Jun 2001 12:37:05 -0700 Subject: [Python-bugs-list] [ python-Bugs-411374 ] [Irix] SIGINT causes crash Message-ID: Bugs item #411374, was updated on 2001-03-26 06:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=411374&group_id=5470 Category: Python Interpreter Core Group: Platform-specific Status: Open >Resolution: Works For Me Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Sjoerd Mullender (sjoerd) Summary: [Irix] SIGINT causes crash Initial Comment: python version 2.1b2 on Irix 6.5.8f: dumps core on a segfault when SIGINT is sent to the process, either by keystroke or using the kill command. ---------------------------------------------------------------------- >Comment By: Sjoerd Mullender (sjoerd) Date: 2001-06-08 12:37 Message: Logged In: YES user_id=43607 Always first try recompiling without -O option. I have never seen a problem like this, and I use python quite a lot on an SGI (currently IRIX 6.5.12m), but I never compile with -O. If this doesn't help, please supply a stack trace. (If it does help, complain to SGI :-). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=411374&group_id=5470 From noreply@sourceforge.net Sat Jun 9 03:16:46 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Jun 2001 19:16:46 -0700 Subject: [Python-bugs-list] [ python-Bugs-431557 ] issue with include/cStringIO.h and C++ Message-ID: Bugs item #431557, was updated on 2001-06-08 19:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431557&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Gravereaux (davygrvy) Assigned to: Nobody/Anonymous (nobody) Summary: issue with include/cStringIO.h and C++ Initial Comment: The #define for PycString_IMPORT, should probably be wrapped in a pre-processor check for __cplusplus. what is: #define PycString_IMPORT \ PycStringIO=xxxPyCObject_Import ("cStringIO", "cStringIO_CAPI") could be: #ifdef __cplusplus # define PycString_IMPORT \ PycStringIO=static_cast(xxxPyCObject_Import ("cStringIO", "cStringIO_CAPI")) #else # define PycString_IMPORT \ PycStringIO=xxxPyCObject_Import ("cStringIO", "cStringIO_CAPI") #endif to avoid a compiler warning about not being able to place a void* where a struct PycStringIO_CAPI* should go. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431557&group_id=5470 From noreply@sourceforge.net Sat Jun 9 09:00:14 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 01:00:14 -0700 Subject: [Python-bugs-list] [ python-Bugs-431557 ] issue with include/cStringIO.h and C++ Message-ID: Bugs item #431557, was updated on 2001-06-08 19:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431557&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: David Gravereaux (davygrvy) Assigned to: Nobody/Anonymous (nobody) Summary: issue with include/cStringIO.h and C++ Initial Comment: The #define for PycString_IMPORT, should probably be wrapped in a pre-processor check for __cplusplus. what is: #define PycString_IMPORT \ PycStringIO=xxxPyCObject_Import ("cStringIO", "cStringIO_CAPI") could be: #ifdef __cplusplus # define PycString_IMPORT \ PycStringIO=static_cast(xxxPyCObject_Import ("cStringIO", "cStringIO_CAPI")) #else # define PycString_IMPORT \ PycStringIO=xxxPyCObject_Import ("cStringIO", "cStringIO_CAPI") #endif to avoid a compiler warning about not being able to place a void* where a struct PycStringIO_CAPI* should go. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-09 01:00 Message: Logged In: YES user_id=21627 Fixed with cStringIO.h 2.15. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431557&group_id=5470 From noreply@sourceforge.net Sat Jun 9 10:13:00 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 02:13:00 -0700 Subject: [Python-bugs-list] [ python-Bugs-430991 ] wrong co_lnotab Message-ID: Bugs item #430991, was updated on 2001-06-07 03:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430991&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: Tim Peters (tim_one) Summary: wrong co_lnotab Initial Comment: The compiler produces a buggy co_lnotab in code objects if a single source line produces more than 255 bytes of bytecode. This can lead to tracebacks pointing to wrong source lines in "python -O". For example, to emit the information "266 bytes of bytecode, next line is 5 source code lines below", it writes in co_lnotab 255 5 11 0 althought it should write 255 0 11 5 Because of this an exception occurring in the last 11 bytes of code will be incorrectly reported 5 lines below. The problem is even more confusing if the number of lines to skip is itself larger than 255 (see attached example file). Fix: in compile.c correct the function com_set_lineno by replacing the inner while loop with the following : while (incr_addr > 255) { com_add_lnotab(c, 255, 0); incr_addr -= 255; } while (incr_line > 255) { com_add_lnotab(c, incr_addr, 255); incr_line -= 255; incr_addr = 0; } if (incr_line > 0 || incr_addr > 0) { com_add_lnotab(c, incr_addr, incr_line); } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-09 02:13 Message: Logged In: YES user_id=31435 Assigned to me. Good eye! I agree with your analysis. 
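For readers skimming the archive, the lnotab encoding being corrected here is easy to inspect from Python itself. The following is a minimal sketch (not part of the tracker item), assuming a CPython 2.x interpreter where code objects expose co_lnotab as a string of (address-increment, line-increment) byte pairs; the helper name decode_lnotab is made up for the example:

# Decode co_lnotab: pairs of (bytecode-offset increment, line-number increment).
# A single source line that compiles to more than 255 bytes of bytecode must be
# split across several pairs, which is exactly the case the compile.c fix handles.
def decode_lnotab(code):
    pairs = []
    addr, line = 0, code.co_firstlineno
    tab = code.co_lnotab
    for i in range(0, len(tab), 2):
        addr_incr, line_incr = ord(tab[i]), ord(tab[i + 1])
        addr = addr + addr_incr
        line = line + line_incr
        pairs.append((addr_incr, line_incr, addr, line))
    return pairs

# One logical line producing well over 255 bytes of bytecode, followed by a short line.
src = "x = [" + ", ".join(["%d" % n for n in range(200)]) + "]\ny = 1\n"
for entry in decode_lnotab(compile(src, "<lnotab demo>", "exec")):
    print entry

On an interpreter with the fix applied, the oversized offset jump should appear as (255, 0) pairs followed by the real line increment, rather than in the reversed order described in the report.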
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430991&group_id=5470 From noreply@sourceforge.net Sat Jun 9 10:27:32 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 02:27:32 -0700 Subject: [Python-bugs-list] [ python-Bugs-430991 ] wrong co_lnotab Message-ID: Bugs item #430991, was updated on 2001-06-07 03:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430991&group_id=5470 Category: Parser/Compiler Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Tim Peters (tim_one) Summary: wrong co_lnotab Initial Comment: The compiler produces a buggy co_lnotab in code objects if a single source line produces more than 255 bytes of bytecode. This can lead to tracebacks pointing to wrong source lines in "python -O". For example, to emit the information "266 bytes of bytecode, next line is 5 source code lines below", it writes in co_lnotab 255 5 11 0 althought it should write 255 0 11 5 Because of this an exception occurring in the last 11 bytes of code will be incorrectly reported 5 lines below. The problem is even more confusing if the number of lines to skip is itself larger than 255 (see attached example file). Fix: in compile.c correct the function com_set_lineno by replacing the inner while loop with the following : while (incr_addr > 255) { com_add_lnotab(c, 255, 0); incr_addr -= 255; } while (incr_line > 255) { com_add_lnotab(c, incr_addr, 255); incr_line -= 255; incr_addr = 0; } if (incr_line > 0 || incr_addr > 0) { com_add_lnotab(c, incr_addr, incr_line); } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-09 02:27 Message: Logged In: YES user_id=31435 Fixed as you suggested (thanks!), in Misc/ACKS 1.97 Python/compile.c 2.201 Tools/compiler/compiler/pyassem.py 1.20 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-09 02:13 Message: Logged In: YES user_id=31435 Assigned to me. Good eye! I agree with your analysis. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430991&group_id=5470 From noreply@sourceforge.net Sat Jun 9 11:21:46 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 03:21:46 -0700 Subject: [Python-bugs-list] [ python-Bugs-431597 ] Code being copied into shelve and UserDi Message-ID: Bugs item #431597, was updated on 2001-06-09 03:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431597&group_id=5470 Category: Python Library Group: 3rd Party Status: Open Resolution: None Priority: 5 Submitted By: Rick Jones (rjones33410) Assigned to: Nobody/Anonymous (nobody) Summary: Code being copied into shelve and UserDi Initial Comment: I've been writing a contacts program that uses a UserDict as a direct interface to the record which is then placed into a shelve. When I view the DB file created by the shelve module I can see my code in the Table. This is a sample from the very start of the shelve file, before the first record. This apears in the begining like this no matter how many times I re-run it with varying number of records. 
­•n}ã ®“Å} €ä£t%v YÐ磀%v CurKeys=db.keys() Test=len(CurKeys) Temp=1 NewList=[] for list in CurKeys: One=int(list) NewList.append(One) if Test in NewList: Test=Test+1 Temp=Test NextID='%s' % Temp GetEntry=ThisEntry() db[NextID]=GetEntry print 'New Entry no.',NextID db.close() print 'Doneÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ This is the method that is actually being looped. You'll note at the end the shelve is closed after each record is added yet only a single copy of the code. I'm baffled as to how this can be? Not to mention it copied the entire document/module without regard to scope. I discovered this when I did a record loop test to stuff 1000 records into the shelve. I opened the file to see if there were any noticable problems since I don't know what kind of density the shelve can handle and saw my code in the table along with the record I looped in. Now I figured I had somehow made the module scope a variable that was being looped in with the record so I went through the entire 1000 record entries and my code only apears once, although it is somewhat broken up and much more binary looking than my record. I'm sure you can understand how much of a problem this can be in a comercial app. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431597&group_id=5470 From noreply@sourceforge.net Sat Jun 9 12:48:16 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 04:48:16 -0700 Subject: [Python-bugs-list] [ python-Bugs-416288 ] infrequent memory leak in pyexpat Message-ID: Bugs item #416288, was updated on 2001-04-15 10:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416288&group_id=5470 Category: XML Group: None Status: Open Resolution: None Priority: 5 Submitted By: douglas orr (dougbo) Assigned to: Martin v. Löwis (loewis) Summary: infrequent memory leak in pyexpat Initial Comment: In pyexpat.c, the macro call for the handler dispatch (my##NAME##Handler) for CharacterHandler allocates an object implicitly by calling one of the conversion-to- unicode routines. If there is a problem in the PyBuildValue, resulting in args == NULL, that object will be leaked. Low priority, but the macros probably need some reworking to handle this. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-09 04:48 Message: Logged In: YES user_id=21627 That seems to be a bug in Py_BuildValue: It should decref its N arguments if it can't create a tuple. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=416288&group_id=5470 From noreply@sourceforge.net Sat Jun 9 19:11:34 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 11:11:34 -0700 Subject: [Python-bugs-list] [ python-Bugs-430269 ] python -U breaks import with 2.1 Message-ID: Bugs item #430269, was updated on 2001-06-05 03:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 Category: Unicode Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: M.-A. 
Lemburg (lemburg) Summary: python -U breaks import with 2.1 Initial Comment: python -U under Windows is broken with Python 2.1: D:\>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib >>> ^C D:\>python -U Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import urllib Traceback (most recent call last): File "", line 1, in ? ImportError: No module named urllib ---------------------------------------------------------------------- >Comment By: Walter Dörwald (doerwalter) Date: 2001-06-09 11:11 Message: Logged In: YES user_id=89016 To make -U more useful as a testbed for Unicode migration, it might be useful to change str(), repr() and chr(), so that they return Unicode objects when running with python - U. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-06 15:51 Message: Logged In: YES user_id=31435 Changed Group from Platform-Specific (since it's got nothing to do with Windows specifically) to Feature Request (since nobody believes it *should* work now). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-06 12:28 Message: Logged In: YES user_id=31435 Martin, see email to Python-Dev: best I can tell, nobody expects -U to work yet. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-06 00:21 Message: Logged In: YES user_id=21627 The (first) problem appears to be in string.py, which has _idmap = '' for i in range(256): _idmap = _idmap + chr(i) With -U, _idmap is a unicode string, and adding chr(128) to it will give an ASCII decoding error. Therefore, importing string fails. In turn, many other things fail as well. That can be solved by writing _idmap=str('') but then it will still complain that distutils.util cannot be imported in site. One problem may be that site.py changes the strings of sys.path into Unicode strings. In fact, when starting Python with -U -S, it will properly locate urllib. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-05 18:04 Message: Logged In: YES user_id=31435 Assigned to Marc-Andre. M-A, do you expect -U to be useful at this point? I thought I saw docs at one point, but can't seem to find them again ... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=430269&group_id=5470 From noreply@sourceforge.net Sun Jun 10 08:55:33 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Jun 2001 00:55:33 -0700 Subject: [Python-bugs-list] [ python-Bugs-431772 ] traceback.print_exc() causes traceback Message-ID: Bugs item #431772, was updated on 2001-06-10 00:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431772&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Atsuo Ishimoto (atsuoi) Assigned to: Nobody/Anonymous (nobody) Summary: traceback.print_exc() causes traceback Initial Comment: Executing this code: import traceback try: comp = compile('def aaa (a=1,b):pass', '','exec') except: traceback.print_exc() causes exception with Python 2.1/Win32. 
Traceback (most recent call last):
  File "a.py", line 3, in ?
    comp = compile('def aaa (a=1,b):pass', '','exec')
Traceback (most recent call last):
  File "a.py", line 5, in ?
    traceback.print_exc()
  File "c:\tools\python21\lib\traceback.py", line 209, in print_exc
    print_exception(etype, value, tb, limit, file)
  File "c:\tools\python21\lib\traceback.py", line 124, in print_exception
    lines = format_exception_only(etype, value)
  File "c:\tools\python21\lib\traceback.py", line 175, in format_exception_only
    while i < len(line) and line[i].isspace():
TypeError: len() of unsized object

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431772&group_id=5470

From noreply@sourceforge.net Sun Jun 10 12:50:28 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Sun, 10 Jun 2001 04:50:28 -0700
Subject: [Python-bugs-list] [ python-Bugs-431772 ] traceback.print_exc() causes traceback
Message-ID:

Bugs item #431772, was updated on 2001-06-10 00:55
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431772&group_id=5470

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Atsuo Ishimoto (atsuoi)
Assigned to: Nobody/Anonymous (nobody)
Summary: traceback.print_exc() causes traceback

Initial Comment:
Executing this code:

import traceback
try:
    comp = compile('def aaa (a=1,b):pass', '','exec')
except:
    traceback.print_exc()

causes exception with Python 2.1/Win32.

Traceback (most recent call last):
  File "a.py", line 3, in ?
    comp = compile('def aaa (a=1,b):pass', '','exec')
Traceback (most recent call last):
  File "a.py", line 5, in ?
    traceback.print_exc()
  File "c:\tools\python21\lib\traceback.py", line 209, in print_exc
    print_exception(etype, value, tb, limit, file)
  File "c:\tools\python21\lib\traceback.py", line 124, in print_exception
    lines = format_exception_only(etype, value)
  File "c:\tools\python21\lib\traceback.py", line 175, in format_exception_only
    while i < len(line) and line[i].isspace():
TypeError: len() of unsized object

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh)
Date: 2001-06-10 04:50

Message:
Logged In: YES user_id=6656

I think this patch:

Index: traceback.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/traceback.py,v
retrieving revision 1.25
diff -c -r1.25 traceback.py
*** traceback.py    2001/03/29 04:36:08    1.25
--- traceback.py    2001/06/10 11:48:51
***************
*** 171,189 ****
          if not filename: filename = ""
          list.append('  File "%s", line %d\n' % (filename, lineno))
!         i = 0
!         while i < len(line) and line[i].isspace():
!             i = i+1
!         list.append('    %s\n' % line.strip())
!         if offset is not None:
!             s = '    '
!             for c in line[i:offset-1]:
!                 if c.isspace():
!                     s = s + c
!                 else:
!                     s = s + ' '
!             list.append('%s^\n' % s)
!         value = msg
      s = _some_str(value)
      if s:
          list.append('%s: %s\n' % (str(stype), s))
--- 171,190 ----
          if not filename: filename = ""
          list.append('  File "%s", line %d\n' % (filename, lineno))
!         if line is not None:
!             i = 0
!             while i < len(line) and line[i].isspace():
!                 i = i+1
!             list.append('    %s\n' % line.strip())
!             if offset is not None:
!                 s = '    '
!                 for c in line[i:offset-1]:
!                     if c.isspace():
!                         s = s + c
!                     else:
!                         s = s + ' '
!                 list.append('%s^\n' % s)
!             value = msg
      s = _some_str(value)
      if s:
          list.append('%s: %s\n' % (str(stype), s))

fixes this.
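To see why the patch guards on line being None, the failing case can be reproduced directly. A minimal sketch (not from the report; it assumes the Python 2.1-era SyntaxError layout that traceback.py itself unpacks):

# The SyntaxError raised by this compile() call carries no source text, so the
# 'line' slot unpacked by format_exception_only is None; len(None) is what
# produced the "len() of unsized object" error before the guard was added.
import sys

try:
    compile('def aaa (a=1,b):pass', '', 'exec')
except SyntaxError:
    value = sys.exc_info()[1]
    msg, (filename, lineno, offset, line) = value
    print repr(line)    # None on the interpreter described in the report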
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431772&group_id=5470 From noreply@sourceforge.net Sun Jun 10 19:58:49 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Jun 2001 11:58:49 -0700 Subject: [Python-bugs-list] [ python-Bugs-431772 ] traceback.print_exc() causes traceback Message-ID: Bugs item #431772, was updated on 2001-06-10 00:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431772&group_id=5470 >Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Atsuo Ishimoto (atsuoi) >Assigned to: Tim Peters (tim_one) Summary: traceback.print_exc() causes traceback Initial Comment: Executing this code: import traceback try: comp = compile('def aaa (a=1,b):pass', '','exec') except: traceback.print_exc() causes exception with Python 2.1/Win32. Traceback (most recent call last): File "a.py", line 3, in ? comp = compile('def aaa (a=1,b):pass', '','exec') Traceback (most recent call last): File "a.py", line 5, in ? traceback.print_exc() File "c:\tools\python21\lib\traceback.py", line 209, in print_exc print_exception(etype, value, tb, limit, file) File "c:\tools\python21\lib\traceback.py", line 124, in print_exception lines = format_exception_only(etype, value) File "c:\tools\python21\lib\traceback.py", line 175, in format_exception_only while i < len(line) and line[i].isspace(): TypeError: len() of unsized object ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-10 11:58 Message: Logged In: YES user_id=31435 Thank you, Michael! I checked this in, Lib/traceback.py, rev 1.26. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-06-10 04:50 Message: Logged In: YES user_id=6656 I think this patch: Index: traceback.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/traceback.py,v retrieving revision 1.25 diff -c -r1.25 traceback.py *** traceback.py 2001/03/29 04:36:08 1.25 --- traceback.py 2001/06/10 11:48:51 *************** *** 171,189 **** if not filename: filename = "" list.append(' File "%s", line %d\n' % (filename, lineno)) ! i = 0 ! while i < len(line) and line[i].isspace(): ! i = i+1 ! list.append(' %s\n' % line.strip()) ! if offset is not None: ! s = ' ' ! for c in line[i:offset-1]: ! if c.isspace(): ! s = s + c ! else: ! s = s + ' ' ! list.append('%s^\n' % s) ! value = msg s = _some_str(value) if s: list.append('%s: %s\n' % (str(stype), s)) --- 171,190 ---- if not filename: filename = "" list.append(' File "%s", line %d\n' % (filename, lineno)) ! if line is not None: ! i = 0 ! while i < len(line) and line[i].isspace(): ! i = i+1 ! list.append(' %s\n' % line.strip()) ! if offset is not None: ! s = ' ' ! for c in line[i:offset-1]: ! if c.isspace(): ! s = s + c ! else: ! s = s + ' ' ! list.append('%s^\n' % s) ! value = msg s = _some_str(value) if s: list.append('%s: %s\n' % (str(stype), s)) fixes this. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431772&group_id=5470 From noreply@sourceforge.net Sat Jun 9 07:34:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Jun 2001 23:34:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-431886 ] listcomp syntax too confusing (tuples) Message-ID: Bugs item #431886, was updated on 2001-06-08 23:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431886&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Nobody/Anonymous (nobody) Summary: listcomp syntax too confusing (tuples) Initial Comment: We were careful to make sure that tuple targets in listcomps required parens, i.e. [x, x+1 for x in s] # rejected [(x, x+1) for x in s] # OK but we didn't anticipate other "surprise tuple" cases. Most recently from c.l.py, """ I tried the one-line command in a interaction mode: [x for x in [1, 2, 3], y for y in [4, 5, 6]] and the result surprised me, that is: [[1,2,3],[1,2,3],[1,2,3],9,9,9] Who can explain the behavior? Since I expected the result should be: [[1,4],[1,5],[1,6],[2,4],...] """ This is too surprising; we should require that the listcomp be spelled [x for x in ([1, 2, 3], y) for y in [4, 5, 6]] instead if that's really what they want (which it almost certainly isn't!). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431886&group_id=5470 From noreply@sourceforge.net Sun Jun 10 21:20:34 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Jun 2001 13:20:34 -0700 Subject: [Python-bugs-list] [ python-Bugs-431899 ] tkfileDialog on NT makes float fr specif Message-ID: Bugs item #431899, was updated on 2001-06-10 13:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431899&group_id=5470 Category: Tkinter Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: tkfileDialog on NT makes float fr specif Initial Comment: If I use the line: (Tkinter 8.3 for Python 2.0) file = tkFileDialog.askopenfilename(...) on an NT french workstation, that turn off floats using dot but comma separator for Tcl... then if your have defined a Text widget, calling self.yview('moveto', '1.0') failed with an unavailable type error: TclError: expected floating-point number but got "1.0" this appends in lib-tk\Tkinter.py line 2846 in yview self.tk.call((self._w, 'yview') + what) But the bugs in my opinion comes from Tcl tkFileDialog which activate a flag about float memory representation for tcl. The problem is that I'm unable to find the turnarround i.e. finding tcl methode to turn on US float representation. All help may be pleased. 
Jerry alias the foolish dracomorpheus python french fan ;-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431899&group_id=5470 From noreply@sourceforge.net Mon Jun 11 05:56:21 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Jun 2001 21:56:21 -0700 Subject: [Python-bugs-list] [ python-Bugs-409430 ] pydoc shouldn't use #!/usr/bin/env Message-ID: Bugs item #409430, was updated on 2001-03-17 10:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=409430&group_id=5470 Category: Installation Group: None Status: Open Resolution: None Priority: 3 Submitted By: Michael Hudson (mwh) Assigned to: Ka-Ping Yee (ping) Summary: pydoc shouldn't use #!/usr/bin/env Initial Comment: I've moaned about this on python-dev but I want to make sure it doesn't get forgotten. I've just built from CVS, installed in /usr/local, and: $ pydoc -g Traceback (most recent call last): File "/usr/local/bin/pydoc", line 3, in ? import pydoc ImportError: No module named pydoc because the /usr/bin/env python thing hits the older python in /usr first. Don't really know how best to implement this, not being a distutils whiz. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-06-10 21:56 Message: Logged In: NO I am having similar problems with the win 2.1 ver it seems none of my saved .py files are recognised. Is this simply a case of not configed as an executable or a version clash? (solutions as well as opinions would be nice) I can however import string,sys and other modules if that muddies the water any. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 12:11 Message: Logged In: YES user_id=6380 OK, ping-pong. Ping, do you have any bright ideas? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-04-10 10:57 Message: Logged In: YES user_id=11375 The pydoc script is Ping's, really. Fixing this requires Distutils hackery, and I don't see that this is worth fixing. Leaving it to someone else to make the decision to close it, though. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-17 22:23 Message: Logged In: YES user_id=31435 Assigned to Andrew because I seem to recall he wrote the pydoc script. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=409430&group_id=5470 From noreply@sourceforge.net Sat Jun 9 23:06:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Jun 2001 15:06:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-231249 ] cgi.py opens too many (temporary) files Message-ID: Bugs item #231249, was updated on 2001-02-06 04:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Richard van de Stadt (stadt) Assigned to: Guido van Rossum (gvanrossum) Summary: cgi.py opens too many (temporary) files Initial Comment: cgi.FieldStorage() is used to get the contents of a webform. It turns out that for each line, a new temporary file is opened. 
This causes the script that is using cgi.FieldStorage() to reach the webserver's limit of number of opened files, as described by 'ulimit -n'. The standard value for Solaris systems seems to be 64, so webforms with that many fields cannot be dealt with. A solution would seem to use the same temporary filename, since only a maxmimum one file is (temporarily) used at the same time. I did an "ls|wc -l" while the script was running, which showed only zeroes and ones. (I'm using Python for CyberChair, an online paper submission and reviewing system. The webform under discussion has one input field for each reviewer, stating the papers he or she is supposed to be reviewing. One conference that is using CyberChair has almost 140 reviewers. Their system's open file limit is 64. Using the same data on a system with an open file limit of 260 _is_ able to deal with this.) ---------------------------------------------------------------------- Comment By: douglas bagnall (dbagnall) Date: 2001-06-09 15:06 Message: Logged In: YES user_id=107204 This has been causing me trouble too, on various machines. The patch from 2001-04-12 08:20 fixed the problem, but since then I haven't been able to upload files bigger than about 1k. I will try using 2.1 before I investigate that tho. Guido mentioned another more complicated, less likable, patch on 2001-04-13, which doesn't seem to have been uploaded. Or do I just not know where to look? ---------------------------------------------------------------------- Comment By: Richard Jones (richard) Date: 2001-06-07 22:19 Message: Logged In: YES user_id=6405 I've just encountered this bug myself on Mac OS X. The default number for "ulimit -n" is 256, so you can imagine that it's a little worrying that I ran out :) As has been discussed, the multipart/form-data sumission sends a sub-part for every form name=value pair. I ran into the bug in cgi.py because I have a select list with >256 options - which I selected all entries in. This tips me over the 256 open file limit. I have two half-baked alternative suggestions for a solution: 1. use a single tempfile, opened when the multipart parsing is started. That tempfile may then be passed to the child FieldStorage instances and used by the parse_single calls. The child instances just keep track of their index and length in the tempfile. 2. use StringIO for parts of type "text/plain" and use a tempfile for all the rest. This has the problem that someone could cut-paste a core image into a text field though. I might have a crack at a patch for approach #1 this weekend... ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 21:04 Message: Logged In: YES user_id=149084 The patch posted 11 Apr is a neat and compact solution! The only thing I can imagine would be a problem would be if a form had a large number of (small) fields which set the content-length attribute. I don't have an example of such, though. Text fields perhaps? If that was a realistic problem, a solution might be for make_file() to maintain a pool of temporary files; if the field (binary or not) turned out to be small a StringIO could be created and the temporary file returned to the pool. There are a couple of things I've been thinking about in cgi.py; the patch doesn't seem to change the situation one way or the other: There doesn't seem to be any RFC requirement that a file upload be accompanied by a content-length attribute, regardless of whether it is binary or ascii. 
In fact, some of the RFC examples I've seen omit it. If content-length is not specified, the upload will be processed by file.readline(). Can this cause problems for arbitrary binary files? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-12 11:59 Message: Logged In: YES user_id=6380 Uploading a new patch, more complicated. I don't like it as much. But it works even if the caller uses item.file.fileno(). ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 10:05 Message: Logged In: YES user_id=149084 I have a thought on this, but it will be about 10 hours before I can submit it. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-11 13:20 Message: Logged In: YES user_id=6380 Here's a proposed patch. Can anyone think of a reason why this should not be checked in as part of 2.1? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 11:54 Message: Logged In: YES user_id=6380 I wish I'd heard about this sooner. It does seem a problem and it does make sense to use StringIO unless there's a lot of data. But we can't fix this in time for 2.1... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-04-10 10:54 Message: Logged In: YES user_id=11375 Unassigning so someone else can take a look at it. ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-02-18 23:32 Message: In the particular HTML form referenced it appears that a workaround might be to eliminate the enctype attribute in the tag and take the application/x-www-form-urlencoded default since no files are being uploaded. When make_file creates the temporary files they are immediately unlinked. There is probably a brief period before the unlink is finalized during which the ls process might see a file; that would account for the output of ls | wc. It appears that the current cgi.py implementation leaves all the (hundreds of) files open until the cgi process releases the FieldStorage object or exits. My first thought was, if the filename recovered from the header is None have make_file create a StringIO object instead of a temp file. That way a temp file is only created when a file is uploaded. This is not inconsistent with the cgi.py docs. Unfortunately, RFC2388 4.4 states that a filename is not required to be sent, so it looks like your solution based on the size of the data received is the correct one. Below 1K you could copy the temp file contents to a StringIO and assign it to self.file, then explicitly close the temp file via its descriptor. If only I understood the module better ::-(( and had a way of tunnel testing it I might have had the temerity to offer a patch. (I'm away for a couple of weeks starting tomorrow.) ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-18 14:08 Message: Ah, I see; the traceback makes this much clearer. When you're uploading a file, everything in the form is sent as a MIME document in the body; every field is accompanied by a boundary separator and Content-Disposition header. In multipart mode, cgi.py copies each field into a temporary file. 
The first idea I had was to only use tempfiles for the actual upload field; unfortunately, that doesn't help because the upload field isn't special, and cgi.py has no way to know which it is ahead of time. Possible second approach: measure the size of the resulting file; if it's less than some threshold (1K? 10K?), read its contents into memory and close the tempfile. This means only the largest fields will require that a file descriptor be kept open. I'll explore this more after beta1. ---------------------------------------------------------------------- Comment By: Richard van de Stadt (stadt) Date: 2001-02-17 18:37 Message: I do *not* mean file upload fields. I stumbled upon this with a webform that contains 141 'simple' input fields like the form you can see here (which 'only' contains 31 of those input fields): http://www.cyberchair.org/cgi-cyb/genAssignPageReviewerPapers.py (use chair/chair to login) When the maximum number of file descriptors used per process was increased to 160 (by the sysadmins), the problem did not occur anymore, and the webform could be processed. This was the error message I got: Traceback (most recent call last): File "/usr/local/etc/httpd/DocumentRoot/ICML2001/cgi-bin/submitAssignRP.py", line 144, in main File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 504, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 593, in read_multi File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 506, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 603, in read_single File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 623, in read_lines File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 713, in make_file File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/tempfile.py", line 144, in TemporaryFile OSError: [Errno 24] Too many open files: '/home/yara/brodley/icml2001/tmp/@26048.61' I understand why you assume that it would concern *file* uploads, but this is not the case. As I reported before, it turns out that for each 'simple' field a temporary file is used in to transfer the contents to the script that uses the cgi.FieldStorage() method, even if no files are being uploaded. The problem is not that too many files are open at the same time (which is 1 at most). It is the *amount* of files that is causing the troubles. If the same temporary file would be used, this problem would probably not have happened. My colleague Fred Gansevles wrote a possible solution, but mentioned that this might introduce the need for protection against a 'symlink attack' (whatever that may be). This solution(?) concentrates on the open file descriptor's problem, while Fred suggests a redesign of FieldStorage() would probably be better. 
import os, tempfile

AANTAL = 50

class TemporaryFile:
    def __init__(self):
        self.name = tempfile.mktemp("")
        open(self.name, 'w').close()
        self.offset = 0

    def seek(self, offset):
        self.offset = offset

    def read(self):
        fd = open(self.name, 'w+b', -1)
        fd.seek(self.offset)
        data = fd.read()
        self.offset = fd.tell()
        fd.close()
        return data

    def write(self, data):
        fd = open(self.name, 'w+b', -1)
        fd.seek(self.offset)
        fd.write(data)
        self.offset = fd.tell()
        fd.close()

    def __del__(self):
        os.unlink(self.name)

def add_fd(l, n):
    map(lambda x, l=l: l.append(open('/dev/null')), range(n))

def add_tmp(l, n):
    map(lambda x, l=l: l.append(TemporaryFile()), range(n))

def main():
    import getopt, sys
    try:
        import resource
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    except ImportError:
        soft, hard = 64, 1024
    opts, args = getopt.getopt(sys.argv[1:], 'n:t')
    aantal = AANTAL
    tmp = add_fd
    for o, a in opts:
        if o == '-n':
            aantal = int(a)
        elif o == '-t':
            tmp = add_tmp
    print "aantal te gebruiken fd's:", aantal    # Dutch; English: 'number of fds to be used'
    print 'tmp:', tmp.func_name
    tmp_files = []
    files = []
    tmp(tmp_files, aantal)
    try:
        add_fd(files, hard)
    except IOError:
        pass
    print "aantal vrije gebruiken fd's:", len(files)    # English: 'number of free fds'

main()

Running the above code: python ulimit.py [-n number] [-t]
The default number is 50, using 'real' fds for temporary files. When the '-t' flag is given, the 'smart' temporary files are used instead.

Output:

$ python ulimit.py
aantal te gebruiken fd's: 50
tmp: add_fd
aantal vrije gebruiken fd's: 970
$ python ulimit.py -t
aantal te gebruiken fd's: 50
tmp: add_tmp
aantal vrije gebruiken fd's: 1020
$ python ulimit.py -n 1000
aantal te gebruiken fd's: 1000
tmp: add_fd
aantal vrije gebruiken fd's: 20
$ python ulimit.py -n 1000 -t
aantal te gebruiken fd's: 1000
tmp: add_tmp
aantal vrije gebruiken fd's: 1020

----------------------------------------------------------------------

Comment By: A.M. Kuchling (akuchling)
Date: 2001-02-16 21:41

Message:
I assume you mean 64 file upload fields, right? Can you provide a small test program that triggers the problem?

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470

From noreply@sourceforge.net Mon Jun 11 22:24:16 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Mon, 11 Jun 2001 14:24:16 -0700
Subject: [Python-bugs-list] [ python-Bugs-432208 ] dict.keys() and dict.values() not new
Message-ID:

Bugs item #432208, was updated on 2001-06-11 14:24
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432208&group_id=5470

Category: Documentation
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Fred L. Drake, Jr. (fdrake)
Summary: dict.keys() and dict.values() not new

Initial Comment:
In the development docs at http://python.sourceforge.net/devel-docs/lib/typesmapping.html both dict.keys() and dict.values() have note "(2)" which is "New in version 2.2."
That does not seem correct :-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432208&group_id=5470 From noreply@sourceforge.net Sun Jun 10 12:06:31 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Jun 2001 04:06:31 -0700 Subject: [Python-bugs-list] [ python-Bugs-432247 ] Deprecated Module regsub Message-ID: Bugs item #432247, was updated on 2001-06-10 04:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432247&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: carlos herrera (shakari) Assigned to: Nobody/Anonymous (nobody) Summary: Deprecated Module regsub Initial Comment: Hi i have installed python 2.1 on my linux server (slackware 7) and i have installed mailman 2.0.3 i have configurated and when i call make install it sends a warnign message as follows: /usr/local/lib/python2.1/regsub.py:15: DeprecationWarning: the regsub module is deprecated; please use re.sub() DeprecationWarning) i would like to know what is the problem if is the python or the mailman ? thank you ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432247&group_id=5470 From noreply@sourceforge.net Tue Jun 12 04:32:21 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Jun 2001 20:32:21 -0700 Subject: [Python-bugs-list] [ python-Bugs-432208 ] dict.keys() and dict.values() not new Message-ID: Bugs item #432208, was updated on 2001-06-11 14:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432208&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: dict.keys() and dict.values() not new Initial Comment: In the development docs at http://python.sourceforge.net/devel-docs/lib/typesmapping.html both dict.keys() and dict.values() have note "(2)" which is "New in version 2.2." That does not seem correct :-) ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-11 20:32 Message: Logged In: YES user_id=3066 You're right -- they've been there a while! It looks like this mixup happened when I renumbered the notes when I added the iter*() methods to the table. I've fixed this in Doc/lib/libstdtypes.tex revision 1.60. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432208&group_id=5470 From noreply@sourceforge.net Tue Jun 12 11:54:41 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 03:54:41 -0700 Subject: [Python-bugs-list] [ python-Bugs-432369 ] ConfigParser: problem w/ mixed-case opts Message-ID: Bugs item #432369, was updated on 2001-06-12 03:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432369&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: ConfigParser: problem w/ mixed-case opts Initial Comment: When using mixed-case option-names, ConfigParser raises a KeyError on multi-line options like this one: """ Symptoms: bla blubber; some symptom; some other symptom; yet another symptom """ Reason: 'optname' is not converted permanently but only when storing the first value part. The following patch solves the problem. Regards +++hartmut ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432369&group_id=5470 From noreply@sourceforge.net Tue Jun 12 12:09:51 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 04:09:51 -0700 Subject: [Python-bugs-list] [ python-Bugs-432373 ] file.tell() gives wrong value Message-ID: Bugs item #432373, was updated on 2001-06-12 04:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Elmar Sonnenschein (eso) Assigned to: Nobody/Anonymous (nobody) Summary: file.tell() gives wrong value Initial Comment: Invoking tell() on a file object will return a wrong (arbitrary?) value if called before seeking. Example: The following script f = open('c:\test.xyz') print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.seek(0) print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.close() will yield the following result: pos: 0 read: XYZ pos: 3587 <-- wrong value pos: 0 read: XYZ pos: 3 Only the return value of tell is wrong, not the actual file position, i. e. a consecutive read() will return the correct bytes. It doesn't help to seek before reading, only seeking _after_ reading will set the return value of tell() correctly. File size of 'test.xyz' was 3.822.167 Bytes. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 From noreply@sourceforge.net Tue Jun 12 12:15:37 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 04:15:37 -0700 Subject: [Python-bugs-list] [ python-Bugs-432373 ] file.tell() gives wrong value Message-ID: Bugs item #432373, was updated on 2001-06-12 04:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Elmar Sonnenschein (eso) Assigned to: Nobody/Anonymous (nobody) Summary: file.tell() gives wrong value Initial Comment: Invoking tell() on a file object will return a wrong (arbitrary?) 
value if called before seeking. Example: The following script f = open('c:\test.xyz') print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.seek(0) print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.close() will yield the following result: pos: 0 read: XYZ pos: 3587 <-- wrong value pos: 0 read: XYZ pos: 3 Only the return value of tell is wrong, not the actual file position, i. e. a consecutive read() will return the correct bytes. It doesn't help to seek before reading, only seeking _after_ reading will set the return value of tell() correctly. File size of 'test.xyz' was 3.822.167 Bytes. ---------------------------------------------------------------------- >Comment By: Elmar Sonnenschein (eso) Date: 2001-06-12 04:15 Message: Logged In: YES user_id=145214 Checked on Python 2.0, 2.1, and ActivePython 2.1 - always the same. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 From noreply@sourceforge.net Tue Jun 12 13:52:30 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 05:52:30 -0700 Subject: [Python-bugs-list] [ python-Bugs-432384 ] Recursion in PyString_AsEncodedString? Message-ID: Bugs item #432384, was updated on 2001-06-12 05:52 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432384&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Nobody/Anonymous (nobody) Summary: Recursion in PyString_AsEncodedString? Initial Comment: The deprecated function PyString_AsEncodedString seems to contain an endless recursion: PyObject *PyString_AsEncodedString( PyObject *str, const char *encoding, const char *errors) { PyObject *v; v = PyString_AsEncodedString(str, encoding, errors); if (v == NULL) goto onError; ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432384&group_id=5470 From noreply@sourceforge.net Tue Jun 12 14:15:35 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 06:15:35 -0700 Subject: [Python-bugs-list] [ python-Bugs-432384 ] Recursion in PyString_AsEncodedString? Message-ID: Bugs item #432384, was updated on 2001-06-12 05:52 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432384&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Walter Dörwald (doerwalter) >Assigned to: M.-A. Lemburg (lemburg) Summary: Recursion in PyString_AsEncodedString? Initial Comment: The deprecated function PyString_AsEncodedString seems to contain an endless recursion: PyObject *PyString_AsEncodedString( PyObject *str, const char *encoding, const char *errors) { PyObject *v; v = PyString_AsEncodedString(str, encoding, errors); if (v == NULL) goto onError; ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-12 06:15 Message: Logged In: YES user_id=38388 Thanks for spotting this one. 
A fix was checked in as /cvsroot/python/python/dist/src/Objects/stringobject.c,v <-- stringobject.c new revision: 2.118; previous revision: 2.117 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432384&group_id=5470 From noreply@sourceforge.net Tue Jun 12 14:52:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 06:52:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-432373 ] file.tell() gives wrong value Message-ID: Bugs item #432373, was updated on 2001-06-12 04:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Elmar Sonnenschein (eso) Assigned to: Nobody/Anonymous (nobody) Summary: file.tell() gives wrong value Initial Comment: Invoking tell() on a file object will return a wrong (arbitrary?) value if called before seeking. Example: The following script f = open('c:\test.xyz') print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.seek(0) print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.close() will yield the following result: pos: 0 read: XYZ pos: 3587 <-- wrong value pos: 0 read: XYZ pos: 3 Only the return value of tell is wrong, not the actual file position, i. e. a consecutive read() will return the correct bytes. It doesn't help to seek before reading, only seeking _after_ reading will set the return value of tell() correctly. File size of 'test.xyz' was 3.822.167 Bytes. ---------------------------------------------------------------------- Comment By: Hans Nowak (zephyrfalcon) Date: 2001-06-12 06:52 Message: Logged In: YES user_id=173607 Works fine for me... I'm using Python 2.1 on Windows NT 4, sp 5. :-/ Maybe it's platform dependent? ---------------------------------------------------------------------- Comment By: Elmar Sonnenschein (eso) Date: 2001-06-12 04:15 Message: Logged In: YES user_id=145214 Checked on Python 2.0, 2.1, and ActivePython 2.1 - always the same. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 From noreply@sourceforge.net Tue Jun 12 14:56:59 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 06:56:59 -0700 Subject: [Python-bugs-list] [ python-Bugs-432373 ] file.tell() gives wrong value Message-ID: Bugs item #432373, was updated on 2001-06-12 04:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Elmar Sonnenschein (eso) Assigned to: Nobody/Anonymous (nobody) Summary: file.tell() gives wrong value Initial Comment: Invoking tell() on a file object will return a wrong (arbitrary?) value if called before seeking. Example: The following script f = open('c:\test.xyz') print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.seek(0) print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.close() will yield the following result: pos: 0 read: XYZ pos: 3587 <-- wrong value pos: 0 read: XYZ pos: 3 Only the return value of tell is wrong, not the actual file position, i. e. a consecutive read() will return the correct bytes. 
It doesn't help to seek before reading, only seeking _after_ reading will set the return value of tell() correctly. File size of 'test.xyz' was 3.822.167 Bytes. ---------------------------------------------------------------------- >Comment By: Elmar Sonnenschein (eso) Date: 2001-06-12 06:56 Message: Logged In: YES user_id=145214 Just found out that it only happens if it is a binary file which is opened without the 'b' mode flag. Therefore it is not severe but still strange behavior. Platform is Windows 2000. ---------------------------------------------------------------------- Comment By: Hans Nowak (zephyrfalcon) Date: 2001-06-12 06:52 Message: Logged In: YES user_id=173607 Works fine for me... I'm using Python 2.1 on Windows NT 4, sp 5. :-/ Maybe it's platform dependent? ---------------------------------------------------------------------- Comment By: Elmar Sonnenschein (eso) Date: 2001-06-12 04:15 Message: Logged In: YES user_id=145214 Checked on Python 2.0, 2.1, and ActivePython 2.1 - always the same. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 From noreply@sourceforge.net Tue Jun 12 18:13:37 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 10:13:37 -0700 Subject: [Python-bugs-list] [ python-Bugs-432247 ] Deprecated Module regsub Message-ID: Bugs item #432247, was updated on 2001-06-10 04:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432247&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: carlos herrera (shakari) Assigned to: Nobody/Anonymous (nobody) Summary: Deprecated Module regsub Initial Comment: Hi i have installed python 2.1 on my linux server (slackware 7) and i have installed mailman 2.0.3 i have configurated and when i call make install it sends a warnign message as follows: /usr/local/lib/python2.1/regsub.py:15: DeprecationWarning: the regsub module is deprecated; please use re.sub() DeprecationWarning) i would like to know what is the problem if is the python or the mailman ? thank you ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-12 10:13 Message: Logged In: YES user_id=21627 This is a mailman problem; it uses a deprecated function. You can filter the warnings out if you want, see http://www.python.org/doc/current/lib/module-warnings.html for details. Most recent mailman releases don't use regsub anymore, so updating mailman might be another option. 
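For anyone silencing the warning by porting their own code rather than waiting on a Mailman upgrade, the regsub-to-re mapping is mostly mechanical. A minimal sketch (the input string and patterns are illustrative, not taken from Mailman):

import re

line = 'spam,  eggs;\tham'    # illustrative input

# regsub.sub(pat, repl, s)  replaced only the first match; re.sub(..., 1) does the same.
# regsub.gsub(pat, repl, s) replaced every match;          plain re.sub replaces every match.
# regsub.split(s, pat)      split on a pattern;            re.split(pat, s) swaps the argument order.
first_only = re.sub(r'[ \t]+', ' ', line, 1)
all_runs   = re.sub(r'[ \t]+', ' ', line)
fields     = re.split(r'[,;]', line)
print first_only
print all_runs
print fields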
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432247&group_id=5470 From noreply@sourceforge.net Tue Jun 12 18:19:00 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 10:19:00 -0700 Subject: [Python-bugs-list] [ python-Bugs-432247 ] Deprecated Module regsub Message-ID: Bugs item #432247, was updated on 2001-06-10 04:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432247&group_id=5470 Category: None >Group: Not a Bug Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: carlos herrera (shakari) >Assigned to: Barry Warsaw (bwarsaw) Summary: Deprecated Module regsub Initial Comment: Hi i have installed python 2.1 on my linux server (slackware 7) and i have installed mailman 2.0.3 i have configurated and when i call make install it sends a warnign message as follows: /usr/local/lib/python2.1/regsub.py:15: DeprecationWarning: the regsub module is deprecated; please use re.sub() DeprecationWarning) i would like to know what is the problem if is the python or the mailman ? thank you ---------------------------------------------------------------------- >Comment By: Barry Warsaw (bwarsaw) Date: 2001-06-12 10:18 Message: Logged In: YES user_id=12800 Martin's right. Upgrade to Mailman 2.0.5 which is Python 2.1 compatible. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-12 10:13 Message: Logged In: YES user_id=21627 This is a mailman problem; it uses a deprecated function. You can filter the warnings out if you want, see http://www.python.org/doc/current/lib/module-warnings.html for details. Most recent mailman releases don't use regsub anymore, so updating mailman might be another option. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432247&group_id=5470 From noreply@sourceforge.net Tue Jun 12 19:41:51 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 11:41:51 -0700 Subject: [Python-bugs-list] [ python-Bugs-432497 ] curses module doesn't build on HP-UX Message-ID: Bugs item #432497, was updated on 2001-06-12 11:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432497&group_id=5470 Category: Extension Modules Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Nobody/Anonymous (nobody) Summary: curses module doesn't build on HP-UX Initial Comment: The curses module does not build on HP-UX 11.00 (don't know about other versions). The reason according to Peter Stoldt (peter_stoldt@hp.com) who provided the fix below is to look for the curses header file in a different directory. Here is his fix: In py_curses.h exchange the line with #include with #include This would have to be done in a platform specific way of course. Perhaps all it takes is adding the curses_colr/ dir to the compiler call as -I option... not sure. 
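One way to act on that -I suggestion without editing py_curses.h would be in the build machinery. The following is a rough, untested sketch against the distutils-based setup.py that builds the extension modules in 2.1; the variable name and the commented Extension line are illustrative only:

import os

curses_include_dirs = []
if os.path.isdir('/usr/include/curses_colr'):
    # HP-UX keeps its colour-capable curses.h under curses_colr/, so adding the
    # directory lets the existing #include <curses.h> resolve there.
    curses_include_dirs.append('/usr/include/curses_colr')

# ...later, where the _curses extension is declared (sketch only):
# exts.append(Extension('_curses', ['_cursesmodule.c'],
#                       include_dirs=curses_include_dirs))

Whether a matching colour-curses library also needs to be linked is left open here, since the report does not say.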
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432497&group_id=5470 From noreply@sourceforge.net Tue Jun 12 19:48:48 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 11:48:48 -0700 Subject: [Python-bugs-list] [ python-Bugs-432501 ] Problem with urllib and proxies / Win32 Message-ID: Bugs item #432501, was updated on 2001-06-12 11:48 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Mark Hammond (mhammond) Summary: Problem with urllib and proxies / Win32 Initial Comment: There's a problem with urllib on Windows. Here's a quote which relates to the problem: """ In the newest version of the urllib. They added a section which pulls the http web_proxy from the windows NT registry. Unfortunately they did not think to check for 127.0.0.1 and remove it from the proxy list and they also did not handle (no proxy addresses). As a result the new library reads the proxy settings from the Windows NT Registry for IE and attempts to use them. """ Any thoughts on how to solve this ? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 From noreply@sourceforge.net Tue Jun 12 22:50:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 14:50:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-432552 ] PyLong_AsLongLong() problems Message-ID: Bugs item #432552, was updated on 2001-06-12 14:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432552&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Tim Peters (tim_one) Summary: PyLong_AsLongLong() problems Initial Comment: The C API function PyLong_AsLongLong() botches some overflow endcases. Most obviously, if the Python long contains the most-negative C long long (-(2**63) when sizeof(long long) == 8), on most boxes signed right shifts sign-extend (C doesn't define this), and then the if ((x >> SHIFT) != prev) overflow test triggers by mistake (because the leading 1-bit gets misinterpreted as "a sign bit", and the signed right shift then duplicates it 15 times). Bumped into this while writing a LONG_LONG API test for _testcapimodule.c. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432552&group_id=5470 From noreply@sourceforge.net Tue Jun 12 23:35:42 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 15:35:42 -0700 Subject: [Python-bugs-list] [ python-Bugs-432570 ] overlapping groups ? Message-ID: Bugs item #432570, was updated on 2001-06-12 15:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432570&group_id=5470 Category: Regular Expressions Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: overlapping groups ? 
Initial Comment: with perl: $tag = ''; $tag =~ /<(\/|\?)?(.*?)(\/|\?)?>/; print "1=$1\n"; print "2=$2\n"; print "3=$3\n"; you get: 1= 2=abc xyz="def/ghi" 3= with python (ActivePython 2.1, build 210 ActiveState based on Python 2.1 (#15, Apr 19 2001, 10:28:27) [MSC 32 bit (Intel)] on win32): import re p = re.compile("<(/|\?)?(.*?)(/|\?)?>") tag = '' m = p.search(tag) print '1='+str(m.group(1)) print '2='+str(m.group(2)) print '3='+str(m.group(3)) you get: 1=None 2=abc xyz="def/ghi" 3=abc xyz="def/ uups ... cu, jk. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432570&group_id=5470 From noreply@sourceforge.net Wed Jun 13 01:36:50 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 17:36:50 -0700 Subject: [Python-bugs-list] [ python-Bugs-432552 ] PyLong_AsLongLong() problems Message-ID: Bugs item #432552, was updated on 2001-06-12 14:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432552&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Tim Peters (tim_one) Summary: PyLong_AsLongLong() problems Initial Comment: The C API function PyLong_AsLongLong() botches some overflow endcases. Most obviously, if the Python long contains the most-negative C long long (-(2**63) when sizeof(long long) == 8), on most boxes signed right shifts sign-extend (C doesn't define this), and then the if ((x >> SHIFT) != prev) overflow test triggers by mistake (because the leading 1-bit gets misinterpreted as "a sign bit", and the signed right shift then duplicates it 15 times). Bumped into this while writing a LONG_LONG API test for _testcapimodule.c. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-12 17:36 Message: Logged In: YES user_id=31435 Repaired, in Modules/_testcapimodule.c, rev 1.5 Objects/longobject.c, rev 1.75 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432552&group_id=5470 From noreply@sourceforge.net Wed Jun 13 03:20:08 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 19:20:08 -0700 Subject: [Python-bugs-list] [ python-Bugs-432621 ] httplib: multiple Set-Cookie headers Message-ID: Bugs item #432621, was updated on 2001-06-12 19:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432621&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: httplib: multiple Set-Cookie headers Initial Comment: httplib does not support multiple headers of the same name, because the headers are stored in a dictionary. This causes a problem because an HTTP response can contain multiple "Set-Cookie". It is stated in RFC2109 - HTTP State Management Mechanism that "An origin server may include multiple Set-Cookie headers in a response. " With the current python implementation, only the last "Set-Cookie" header is included in the headers dictionary, effectively meaning that the other cookies were lost. 
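A detail that may help whoever picks this up: the HTTPResponse object also carries the raw message object, whose getallmatchingheaders() does return every header line of a given name, so a workaround sketch (host and path are placeholders) looks like this:

    import httplib

    conn = httplib.HTTPConnection("www.example.com")   # placeholder host
    conn.request("GET", "/")
    resp = conn.getresponse()

    # getheader("set-cookie") collapses to one value, but the underlying
    # rfc822-style message keeps all of the original header lines.
    for line in resp.msg.getallmatchingheaders("set-cookie"):
        print line.strip()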
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432621&group_id=5470 From noreply@sourceforge.net Wed Jun 13 07:17:15 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Jun 2001 23:17:15 -0700 Subject: [Python-bugs-list] [ python-Bugs-432501 ] Problem with urllib and proxies / Win32 Message-ID: Bugs item #432501, was updated on 2001-06-12 11:48 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Mark Hammond (mhammond) Summary: Problem with urllib and proxies / Win32 Initial Comment: There's a problem with urllib on Windows. Here's a quote which relates to the problem: """ In the newest version of the urllib. They added a section which pulls the http web_proxy from the windows NT registry. Unfortunately they did not think to check for 127.0.0.1 and remove it from the proxy list and they also did not handle (no proxy addresses). As a result the new library reads the proxy settings from the Windows NT Registry for IE and attempts to use them. """ Any thoughts on how to solve this ? ---------------------------------------------------------------------- >Comment By: Mark Hammond (mhammond) Date: 2001-06-12 23:17 Message: Logged In: YES user_id=14198 This is not a problem with the win32 proxy detection code, but with urllib in general. urllib itself does not handle the concept of "proxy exclude list", and nor does it handle the localhost case - if a proxy is configured, it uses it. So either you are after an enhancement to urllib to allow certain addresses to bypass the proxy, or a technique to allow the registry to be ignored. I believe the latter can be handled by setting "ignored_proxy=something" in the environment. Can you clarify exactly what you want here? If it is the urllib enhancement then I am not the best person for this - I don't have a proxy server available, and don't have much code that uses urllib. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 From noreply@sourceforge.net Wed Jun 13 08:28:59 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Jun 2001 00:28:59 -0700 Subject: [Python-bugs-list] [ python-Bugs-432501 ] Problem with urllib and proxies / Win32 Message-ID: Bugs item #432501, was updated on 2001-06-12 11:48 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Mark Hammond (mhammond) Summary: Problem with urllib and proxies / Win32 Initial Comment: There's a problem with urllib on Windows. Here's a quote which relates to the problem: """ In the newest version of the urllib. They added a section which pulls the http web_proxy from the windows NT registry. Unfortunately they did not think to check for 127.0.0.1 and remove it from the proxy list and they also did not handle (no proxy addresses). As a result the new library reads the proxy settings from the Windows NT Registry for IE and attempts to use them. """ Any thoughts on how to solve this ? 
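For what it's worth, callers can already sidestep the registry lookup by handing urllib an explicit, empty proxy mapping instead of letting it call getproxies(); a sketch (the URL is a placeholder):

    import urllib

    # An empty proxies dict disables proxy detection for this opener only.
    opener = urllib.FancyURLopener(proxies={})
    f = opener.open("http://127.0.0.1:8000/")   # placeholder local URL
    print f.read()
    f.close()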
---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-13 00:28 Message: Logged In: YES user_id=38388 Well the point is that if you have IE configured to use a proxy then urllib will automagically use it for all requests. This is obviously not ideal for certain requests like one to the localhost. Looking at the code I cannot find any way to switch off proxies by using environment variables (ok, you can specify "http_proxy=", but that will only result in an error that the proxy is not found). So in the end, I think this is a bug in the sense that you cannot turn proxy handling off and a feature request in the sense that it should be possible to turn it off ;-) IMHO, the localhost and 127.0.0.1 should always be excluded from the proxy handling. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2001-06-12 23:17 Message: Logged In: YES user_id=14198 This is not a problem with the win32 proxy detection code, but with urllib in general. urllib itself does not handle the concept of "proxy exclude list", and nor does it handle the localhost case - if a proxy is configured, it uses it. So either you are after an enhancement to urllib to allow certain addresses to bypass the proxy, or a technique to allow the registry to be ignored. I believe the latter can be handled by setting "ignored_proxy=something" in the environment. Can you clarify exactly what you want here? If it is the urllib enhancement then I am not the best person for this - I don't have a proxy server available, and don't have much code that uses urllib. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 From noreply@sourceforge.net Wed Jun 13 08:38:12 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Jun 2001 00:38:12 -0700 Subject: [Python-bugs-list] [ python-Bugs-432501 ] Problem with urllib and proxies / Win32 Message-ID: Bugs item #432501, was updated on 2001-06-12 11:48 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Mark Hammond (mhammond) Summary: Problem with urllib and proxies / Win32 Initial Comment: There's a problem with urllib on Windows. Here's a quote which relates to the problem: """ In the newest version of the urllib. They added a section which pulls the http web_proxy from the windows NT registry. Unfortunately they did not think to check for 127.0.0.1 and remove it from the proxy list and they also did not handle (no proxy addresses). As a result the new library reads the proxy settings from the Windows NT Registry for IE and attempts to use them. """ Any thoughts on how to solve this ? ---------------------------------------------------------------------- >Comment By: Mark Hammond (mhammond) Date: 2001-06-13 00:38 Message: Logged In: YES user_id=14198 Looking at the code, I suspect "blah_proxy" will disable the registry. This will setup the scheme "blah://" to use a proxy, and avoid the registry code completely. As "blah://" is invalid, http etc requests should work fine. While I agree in general with the fact that localhost should never be proxied, that makes this bug no longer Window specific, and not related to the IE code at all. 
Hence I am probably not the best person to have this assigned to - if you want it fixed that is :) ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-13 00:28 Message: Logged In: YES user_id=38388 Well the point is that if you have IE configured to use a proxy then urllib will automagically use it for all requests. This is obviously not ideal for certain requests like one to the localhost. Looking at the code I cannot find any way to switch off proxies by using environment variables (ok, you can specify "http_proxy=", but that will only result in an error that the proxy is not found). So in the end, I think this is a bug in the sense that you cannot turn proxy handling off and a feature request in the sense that it should be possible to turn it off ;-) IMHO, the localhost and 127.0.0.1 should always be excluded from the proxy handling. ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2001-06-12 23:17 Message: Logged In: YES user_id=14198 This is not a problem with the win32 proxy detection code, but with urllib in general. urllib itself does not handle the concept of "proxy exclude list", and nor does it handle the localhost case - if a proxy is configured, it uses it. So either you are after an enhancement to urllib to allow certain addresses to bypass the proxy, or a technique to allow the registry to be ignored. I believe the latter can be handled by setting "ignored_proxy=something" in the environment. Can you clarify exactly what you want here? If it is the urllib enhancement then I am not the best person for this - I don't have a proxy server available, and don't have much code that uses urllib. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432501&group_id=5470 From noreply@sourceforge.net Wed Jun 13 15:58:48 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Jun 2001 07:58:48 -0700 Subject: [Python-bugs-list] [ python-Bugs-432786 ] Python 2.1 test_locale fails Message-ID: Bugs item #432786, was updated on 2001-06-13 07:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432786&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Paul M. Dubuc (dubuc) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.1 test_locale fails Initial Comment: I'm building Python 2.1 on Solaris 2.6. When I 'make test', the test_locale module is the only one that fails: test test_locale failed -- Writing: "'%f' % 1024 == '1024.000000' != '1,024.000000'", expected: '' ... The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ********************************************************************** test_locale ********************************************************************** Then ... 
We expected (repr): '' But instead we got: "'%f' % 1024 == '1024.000000' != '1,024.000000'" ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432786&group_id=5470 From noreply@sourceforge.net Wed Jun 13 18:12:30 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Jun 2001 10:12:30 -0700 Subject: [Python-bugs-list] [ python-Bugs-429357 ] non-greedy regexp duplicating match bug Message-ID: Bugs item #429357, was updated on 2001-06-01 09:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 Category: Regular Expressions Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Mueller (donut) Assigned to: Nobody/Anonymous (nobody) Summary: non-greedy regexp duplicating match bug Initial Comment: I found some weird bug, where when a non-greedy match doesn't match anything, it will duplicate the rest of the string instead of being None. #pyrebug.py: import re urlrebug=re.compile(""" (.*?):// #scheme ( (.*?) #user (?: :(.*) #pass )? @)? (.*?) #addr (?::([0-9]+))? #port (/.*)?$ #path """, re.VERBOSE) testbad='foo://bah:81/pth' print urlrebug.match(testbad).groups() Bug Output: >python2.1 pyrebug.py ('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth') >python-cvs pyrebug.py ('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth') Good (expected) Output: >python1.5 pyrebug.py ('foo', None, None, None, 'bah', '81', '/pth') ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-06-13 10:12 Message: Logged In: NO What's happening makes sense, on one level. When the regex engine gets to the user:pass@ part ((.*?)(?::(.*))?@)? which fill groups 2, 3, and 4, the .*? of group 3 has to try at every character in the rest of the string before admitting overall defeat. In doing that, the last time that group 3 successfully completely locally, it has the rest of the string matched. Of course, overall, group three is enclosed within group 2, and when group two couldn't complete successfully, the engine knows it can skip group two (due to the ? modifying it), so it totally bails on groups 2, 3 and 4 to continue with the rest of the expression. What you'd like to happen is when that "bailing" happens for group 2, the enclosing groups 3 and 4 would get zereoed out (since they didn't participate in the *final* overall match). That makes sense, and is what I would expect to happen. However, what *is* happening is that group 3 is keeping the string that *it* last matched (even thought that last match didn't contribute to the final, overall match). I'm not explaining this well -- I hope you can understand it despite that. Sorry. Jeffrey ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 From noreply@sourceforge.net Thu Jun 14 06:28:46 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Jun 2001 22:28:46 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific >Status: Open Resolution: Wont Fix Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. 
Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. 
In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Thu Jun 14 06:48:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Jun 2001 22:48:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-432373 ] [Windows] file.tell() gives wrong value Message-ID: Bugs item #432373, was updated on 2001-06-12 04:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 Category: Python Library >Group: Platform-specific Status: Open Resolution: None >Priority: 3 Submitted By: Elmar Sonnenschein (eso) Assigned to: Nobody/Anonymous (nobody) >Summary: [Windows] file.tell() gives wrong value Initial Comment: Invoking tell() on a file object will return a wrong (arbitrary?) value if called before seeking. Example: The following script f = open('c:\test.xyz') print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.seek(0) print 'pos: ' + `f.tell()` print 'read: ' + f.read(3) print 'pos: ' + `f.tell()` f.close() will yield the following result: pos: 0 read: XYZ pos: 3587 <-- wrong value pos: 0 read: XYZ pos: 3 Only the return value of tell is wrong, not the actual file position, i. e. a consecutive read() will return the correct bytes. It doesn't help to seek before reading, only seeking _after_ reading will set the return value of tell() correctly. File size of 'test.xyz' was 3.822.167 Bytes. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:48 Message: Logged In: YES user_id=31435 Changed to Platform-specific (I'm sure this doesn't happen under Unix variants). What happens if you write this little program in C instead? My guess it will do the same thing. 
If so, it's a Microsoft library problem Python can't hide (Python .tell () and .seek() simply call the platform C library functions). Reduced the priority until there's evidence this is actually a Python (not mscvrt.dll) inelegance. ---------------------------------------------------------------------- Comment By: Elmar Sonnenschein (eso) Date: 2001-06-12 06:56 Message: Logged In: YES user_id=145214 Just found out that it only happens if it is a binary file which is opened without the 'b' mode flag. Therefore it is not severe but still strange behavior. Platform is Windows 2000. ---------------------------------------------------------------------- Comment By: Hans Nowak (zephyrfalcon) Date: 2001-06-12 06:52 Message: Logged In: YES user_id=173607 Works fine for me... I'm using Python 2.1 on Windows NT 4, sp 5. :-/ Maybe it's platform dependent? ---------------------------------------------------------------------- Comment By: Elmar Sonnenschein (eso) Date: 2001-06-12 04:15 Message: Logged In: YES user_id=145214 Checked on Python 2.0, 2.1, and ActivePython 2.1 - always the same. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432373&group_id=5470 From noreply@sourceforge.net Thu Jun 14 08:59:54 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 00:59:54 -0700 Subject: [Python-bugs-list] [ python-Bugs-429357 ] non-greedy regexp duplicating match bug Message-ID: Bugs item #429357, was updated on 2001-06-01 09:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 Category: Regular Expressions Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Mueller (donut) Assigned to: Nobody/Anonymous (nobody) Summary: non-greedy regexp duplicating match bug Initial Comment: I found some weird bug, where when a non-greedy match doesn't match anything, it will duplicate the rest of the string instead of being None. #pyrebug.py: import re urlrebug=re.compile(""" (.*?):// #scheme ( (.*?) #user (?: :(.*) #pass )? @)? (.*?) #addr (?::([0-9]+))? #port (/.*)?$ #path """, re.VERBOSE) testbad='foo://bah:81/pth' print urlrebug.match(testbad).groups() Bug Output: >python2.1 pyrebug.py ('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth') >python-cvs pyrebug.py ('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth') Good (expected) Output: >python1.5 pyrebug.py ('foo', None, None, None, 'bah', '81', '/pth') ---------------------------------------------------------------------- >Comment By: Matthew Mueller (donut) Date: 2001-06-14 00:59 Message: Logged In: YES user_id=65253 I think I understand what you are saying, and in the context of the test, it doesn't seem too bad. BUT, my original code (and what I'd like to have) did not have the surrounding group. So I'd just get: ('foo', 'bah:81/pth', None, 'bah', '81', '/pth') Knowing the general ease of messing up regexs when writing them, I'm sure you can image the pain I went through before actually realizing it was a python bug :) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-06-13 10:12 Message: Logged In: NO What's happening makes sense, on one level. When the regex engine gets to the user:pass@ part ((.*?)(?::(.*))?@)? which fill groups 2, 3, and 4, the .*? of group 3 has to try at every character in the rest of the string before admitting overall defeat. 
In doing that, the last time that group 3 successfully completely locally, it has the rest of the string matched. Of course, overall, group three is enclosed within group 2, and when group two couldn't complete successfully, the engine knows it can skip group two (due to the ? modifying it), so it totally bails on groups 2, 3 and 4 to continue with the rest of the expression. What you'd like to happen is when that "bailing" happens for group 2, the enclosing groups 3 and 4 would get zereoed out (since they didn't participate in the *final* overall match). That makes sense, and is what I would expect to happen. However, what *is* happening is that group 3 is keeping the string that *it* last matched (even thought that last match didn't contribute to the final, overall match). I'm not explaining this well -- I hope you can understand it despite that. Sorry. Jeffrey ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:19:43 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:19:43 -0700 Subject: [Python-bugs-list] [ python-Bugs-408936 ] Python2.0 re module: greedy regexp bug 2 Message-ID: Bugs item #408936, was updated on 2001-03-15 13:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=408936&group_id=5470 >Category: Regular Expressions Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Bastian Kleineidam (calvin) Assigned to: Fredrik Lundh (effbot) Summary: Python2.0 re module: greedy regexp bug 2 Initial Comment: Yeah, try this: re.search(r"") and it does not match, but it should match, no? In more complicated examples I even get infinite recursion, if youre interested, I will make a script for this. The above example should be in the Regression Test Suite. Look also at [ #405358 ] Python2.0 re module: greedy regexp bug, perhaps this is somehow related? I dont know. ---------------------------------------------------------------------- >Comment By: Fredrik Lundh (effbot) Date: 2001-06-14 01:19 Message: Logged In: YES user_id=38376 fixed as a side-effect of some other patch (probably the big 2001-03-20 update) ---------------------------------------------------------------------- Comment By: Bastian Kleineidam (calvin) Date: 2001-04-10 06:11 Message: Logged In: YES user_id=9205 Ok, its closed in recent builds, but this is still a candidate for the Python 2.01 bugfix release. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-04-10 05:52 Message: Logged In: YES user_id=6656 Works for me in recent builds, so I guess it's fixed. Someone want to mark it closed? 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=408936&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:20:25 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:20:25 -0700 Subject: [Python-bugs-list] [ python-Bugs-210665 ] Compiling python on hpux 11.00 with threads (PR#360) Message-ID: Bugs item #210665, was updated on 2000-07-31 14:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210665&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: None Priority: 3 Submitted By: Nobody/Anonymous (nobody) Assigned to: Guido van Rossum (gvanrossum) Summary: Compiling python on hpux 11.00 with threads (PR#360) Initial Comment: Jitterbug-Id: 360 Submitted-By: philipp.jocham@salomon.at Date: Fri, 16 Jun 2000 08:47:06 -0400 (EDT) Version: 1.5.2 OS: HP-UX 11.00 There are two missing details in the configure process to make this work out of the box. First: The function pthread_create isn't found in library libpthread.a but in libcma.a, because pthread_create is just a macro in sys/pthread.h pointing to __pthread_create_system After patching ./configure directly and running ./configure --with-thread (now detecting the correct library /usr/lib/libpthread.a) I also added -lcl to Modules/Makefile at LIBS= -lnet -lnsl -ldld -lpthread -lcl otherwise importing of modules with threads didn't work (in this case oci_.sl from DCOracle). I'm not sure about the correct syntax or wether it's the correct place and method, but I would suggest a solution like the following code snippet. [...] AC_MSG_CHECKING(for --with-thread) [...] AC_DEFINE(_POSIX_THREADS) LIBS="$LIBS -lpthread -lcl" LIBOBJS="$LIBOBJS thread.o"], [ AC_CHECK_LIB(pthread, __pthread_create_system, [AC_DEFINE(WITH_THREAD) [...] I hope this helps to make installation process smoother. Fell free to contact me, if there are further questions. Philipp -- I confirm that, to the best of my knowledge and belief, this contribution is free of any claims of third parties under copyright, patent or other rights or interests ("claims"). To the extent that I have any such claims, I hereby grant to CNRI a nonexclusive, irrevocable, royalty-free, worldwide license to reproduce, distribute, perform and/or display publicly, prepare derivative versions, and otherwise use this contribution as part of the Python software and its related documentation, or any derivative versions thereof, at no cost to CNRI or its licensed users, and to authorize others to do so. I acknowledge that CNRI may, at its sole discretion, decide whether or not to incorporate this contribution in the Python software and its related documentation. I further grant CNRI permission to use my name and other identifying information provided to CNRI by me for use in connection with the Python software and its related documentation. ==================================================================== Audit trail: Tue Jul 11 08:26:01 2000 guido moved from incoming to open ---------------------------------------------------------------------- Comment By: Richard Townsend (rptownsend) Date: 2001-06-14 01:20 Message: Logged In: YES user_id=200117 I applied the patch from thewrittenword's site, but when I ran autoconf it generated a corrupt configure script. 
There problem occurs around lines 3895-3906: if test "$USE_THREAD_MODULE" != "#" then # If the above checks didn't disable threads, (at least) OSF1 # needs this '-threads' argument during linking. case $ac_sys_system in OSF1 fi LDLAST=-threads;; esac fi fi fi The case statement has been trashed by the extra 'fi' token. I tried manually editing it like this: if test "$USE_THREAD_MODULE" != "#" then # If the above checks didn't disable threads, (at least) OSF1 # needs this '-threads' argument during linking. case $ac_sys_system in OSF1) LDLAST=-threads;; esac fi fi fi But it still fails with an 'else' not matched at line 3422. I can't see where the extra 'fi' should go. ---------------------------------------------------------------------- Comment By: The Written Word (china) (tww-china) Date: 2001-05-06 22:36 Message: Logged In: YES user_id=119770 You can find a patch to fix this against python 2.1 at: ftp://ftp.thewrittenword.com/outgoing/pub/python-2.1-416696.patch You'll need to rerun autoconf to test. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-11-02 08:38 Message: Reopened because there's a dissenting opinion. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-11-02 07:25 Message: Ok, check out the configure.in patch I created against Python 2.0: ftp://ftp.thewrittenword.com/outgoing/pub/python-2.0.patch I tested it under HP-UX 11.00 and it works just fine. The thread test worked too. -- albert chin (china@thewrittenword.com) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-11-01 09:18 Message: Ick! Why check for anything with __ prepended to the name? Isn't that like checking for a "hidden" function, which might not be there in a followup version? On HP-UX 11.00, pthread_create is in /usr/lib/libc.sl anyway. The proper way to check for pthread_create is: AC_TRY_LINK([#include void * start_routine (void *arg) { exit (0); }], [ pthread_create (NULL, NULL, start_routine, NULL)], [ AC_MSG_RESULT(yes)], [ AC_MSG_RESULT(no)]) I modified configure.in in 2.0 to remove the patch you included in CVS 1.175 and added a test to include similar to the above (linked without -lpthread and with -lpthread). I'm testing now. Will provide a patch when things are tested. Also, I don't think threads on HP-UX 10.20 will work unless you have the DCE libraries installed. Anyhow, I'd probably avoid threads on 10.20. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-30 09:48 Message: Philipp submitted a patch to configure.in that fixes the problem for him and doesn't look like it would break things for others. configure.in, CVS revision 1.175. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-13 07:45 Message: OK, so the correct thing to do seems to be to somehow add #include to the tests for thread libraries. I'm afraid I won't be able to fix this in 2.0final, but I'll think about fixing it in 2.1 (or 2.0.1, whichever comes first :-). ---------------------------------------------------------------------- Comment By: Eddy De Greef (edg) Date: 2000-10-10 04:55 Message: I can confirm that the bug still exists in 2.0c1. It works fine on HP-UX 10.20 (which only has libcma), but not on HP-UX 11 (which both has libcma and libpthread). 
The pthread_create function is not defined as a macro though, but as a static function: static int pthread_create(pthread_t *tid, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) { return(__pthread_create_system(tid, attr, start_routine, arg)); } I don't see an easy way to work around this. I'm not a configure expert, but perhaps the script should first check whether this code compiles and links: #include int main() { pthread_create(0,0,0,0); return 0; } and if not, fall back on the other tests ? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-06 10:40 Message: I have two reports from people for whom configure, build and run with threads now works on HP-UX 10 and 11. I'm not sure what to do about this report... What's different on Philipp's system??? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-09-25 06:10 Message: I'm hoping that this was fixed by recent changes. Sent an email to the original submittor to verify. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-09-22 02:56 Message: Taking this because I'm considering to redesign the thread configuration section in configure.in anyway -- there's a similar bug report for Alpha OSF1. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2000-09-07 15:05 Message: Please do triage on this bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210665&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:26:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:26:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-433024 ] SRE: (?flag) isn't properly scoped Message-ID: Bugs item #433024, was updated on 2001-06-14 01:26 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433024&group_id=5470 Category: Regular Expressions Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) Assigned to: Fredrik Lundh (effbot) Summary: SRE: (?flag) isn't properly scoped Initial Comment: from the jeffrey friedl report: The way (?i) works now is that if it appears anywhere in the regex, it turns on case-insensative matching for the entire regex. This is different (and less useful) than how Perl or Sun's Java package does it [I'm pretty sure SRE does it this way to exactly match the version of PCRE used in 1.5.2, but maybe it's time to move forward... /F] ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433024&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:27:15 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:27:15 -0700 Subject: [Python-bugs-list] [ python-Bugs-433027 ] SRE: (?-flag) is not supported. 
Message-ID: Bugs item #433027, was updated on 2001-06-14 01:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433027&group_id=5470 Category: Regular Expressions Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) Assigned to: Fredrik Lundh (effbot) Summary: SRE: (?-flag) is not supported. Initial Comment: from the jeffrey friedl report: (?-i) is not supported. It'd be nice to have ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433027&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:28:32 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:28:32 -0700 Subject: [Python-bugs-list] [ python-Bugs-433028 ] SRE: (?flag:...) is not supported Message-ID: Bugs item #433028, was updated on 2001-06-14 01:28 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433028&group_id=5470 Category: Regular Expressions Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) Assigned to: Fredrik Lundh (effbot) Summary: SRE: (?flag:...) is not supported Initial Comment: from the jeffrey friedl report: (?flag:...) and (?-flag:...) are not supported. They'd be nice. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433028&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:29:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:29:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-433029 ] SRE: posix classes aren't supported Message-ID: Bugs item #433029, was updated on 2001-06-14 01:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433029&group_id=5470 Category: Regular Expressions Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) Assigned to: Fredrik Lundh (effbot) Summary: SRE: posix classes aren't supported Initial Comment: from the jeffrey friedl report: Posix class stuff aren't supported. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433029&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:30:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:30:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-433030 ] SRE: (?>...) is not supported Message-ID: Bugs item #433030, was updated on 2001-06-14 01:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433030&group_id=5470 Category: Regular Expressions Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) Assigned to: Fredrik Lundh (effbot) Summary: SRE: (?>...) is not supported Initial Comment: from the jeffrey friedl report: (?>...) is not supported [this is a "stand-alone pattern". the engine has code for this, but the parser doesn't recognize this yet. 
shouldn't be too hard to fix; I just need a couple of good test cases before I start /F] ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433030&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:32:30 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:32:30 -0700 Subject: [Python-bugs-list] [ python-Bugs-433031 ] SRE: x++ isn't supported Message-ID: Bugs item #433031, was updated on 2001-06-14 01:32 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433031&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) Assigned to: Nobody/Anonymous (nobody) Summary: SRE: x++ isn't supported Initial Comment: from jeffrey friedl: [perl has] added some interesting things that you might want to consider. In particular, posessive quantifiers X++ (which acts exactly like (?>X+), but is much easier to grok). Very nice. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433031&group_id=5470 From noreply@sourceforge.net Thu Jun 14 09:33:39 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 01:33:39 -0700 Subject: [Python-bugs-list] [ python-Bugs-433031 ] SRE: x++ isn't supported Message-ID: Bugs item #433031, was updated on 2001-06-14 01:32 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433031&group_id=5470 >Category: Regular Expressions >Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Lundh (effbot) >Assigned to: Fredrik Lundh (effbot) Summary: SRE: x++ isn't supported Initial Comment: from jeffrey friedl: [perl has] added some interesting things that you might want to consider. In particular, posessive quantifiers X++ (which acts exactly like (?>X+), but is much easier to grok). Very nice. 
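Until (?>...) or X++ are supported, the usual emulation is a capturing lookahead plus a backreference; a sketch, assuming SRE's lookahead (like Perl's) never backtracks once it has succeeded:

    import re

    # (?=(a+))\1 acts like the proposed (?>a+) / a++: the lookahead grabs the
    # longest run of "a", and the backreference cannot give any of it back.
    atomic = re.compile(r"(?=(a+))\1ab")
    greedy = re.compile(r"a+ab")

    print atomic.match("aaab")   # None: no backtracking into the a+
    print greedy.match("aaab")   # matches: a+ gives one "a" back to "ab"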
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433031&group_id=5470 From noreply@sourceforge.net Thu Jun 14 10:17:22 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 02:17:22 -0700 Subject: [Python-bugs-list] [ python-Bugs-433047 ] missing args to PyArg_ParseTuple Message-ID: Bugs item #433047, was updated on 2001-06-14 02:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: missing args to PyArg_ParseTuple Initial Comment: The following calls to PyArg_ParseTuple are missing an argument, according to their format string: Modules/_codecmodule.c:443: in utf_16_le_encode Modules/_codecmodule.c:466: in utf_16_be_encode Modules/pcremodule.c:77: in PyPcre_exec ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 From noreply@sourceforge.net Thu Jun 14 11:47:32 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 03:47:32 -0700 Subject: [Python-bugs-list] [ python-Bugs-419062 ] python 2.1 : building pbs on AIX 4.3.2 Message-ID: Bugs item #419062, was updated on 2001-04-25 23:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419062&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: python 2.1 : building pbs on AIX 4.3.2 Initial Comment: I met what could be called "regression"? problems : the "make all" process stopped at the beginning of the building of the shared modules under AIX 4.3.2. 1. it is looking for a "ld_so_aix" under the destination directory (${prefix}/lib/python2.1}) ... which, for an obvious reason is not there during the compiling phase 2. 
if you try to force the process by creating the directory and putting the "so needed" program in it, a new stop occurs : it is unable to find the "Python.exp" or whatever other *.exp file needed by the linking process Those problems didn't occur during the build of python 2.0 wich has been compiled fine under AIX 4.3.2 ---------------------------------------------------------------------- Comment By: The Written Word (china) (tww-china) Date: 2001-06-14 03:47 Message: Logged In: YES user_id=119770 Try the patch at ftp://ftp.thewrittenword.com/outgoing/pub/python-2.1-419062.patch ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=419062&group_id=5470 From noreply@sourceforge.net Thu Jun 14 12:18:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 04:18:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-420476 ] Python 2.1 'make test' failures: AIX 4.2 Message-ID: Bugs item #420476, was updated on 2001-05-01 09:02 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=420476&group_id=5470 Category: None Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Barry Warsaw (bwarsaw) Summary: Python 2.1 'make test' failures: AIX 4.2 Initial Comment: Successfully built and installed Python 2.1 on AIX 4.2. However, 3 regression tests failed. The output from 'make test' (edited for brevity) is shown below: make test ...successful tests deleted... test___all__ test test___all__ failed -- tty has no __all__ attribute ...successful tests deleted... test_coercion The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ******************************************************* *************** test_coercion 2 + 2 = 4 2 += 2 => 4 2 - 2 = 0 2 -= 2 => 0 2 * 2 = 4 2 *= 2 => 4 2 / 2 = 1 2 /= 2 => 1 2 ** 2 = 4 2 **= 2 => 4 2 % 2 = 0 2 %= 2 => 0 2 + 4.0 = 6.0 2 += 4.0 => 6.0 2 - 4.0 = -2.0 2 -= 4.0 => -2.0 2 * 4.0 = 8.0 2 *= 4.0 => 8.0 2 / 4.0 = 0.5 2 /= 4.0 => 0.5 2 ** 4.0 = 16.0 2 **= 4.0 => 16.0 2 % 4.0 = 2.0 2 %= 4.0 => 2.0 2 + 2 = 4 2 += 2 => 4 2 - 2 = 0 2 -= 2 => 0 2 * 2 = 4 2 *= 2 => 4 2 / 2 = 1 2 /= 2 => 1 2 ** 2 = 4 2 **= 2 => 4 2 % 2 = 0 2 %= 2 => 0 2 + (2+0j) = (4+0j) 2 += (2+0j) => (4+0j) 2 - (2+0j) = 0j 2 -= (2+0j) => 0j 2 * (2+0j) = (4+0j) 2 *= (2+0j) => (4+0j) 2 / (2+0j) = ******************************************************* *************** Then ... We expected (repr): '(1+0j)' But instead we got: '(1-0j)' test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)' ...successful tests deleted... test_pty test test_pty failed -- Tail of expected stdout unseen: 'y pet fish, Eric.\n' ...successful tests deleted... test_zlib test test_zlib skipped -- No module named zlib 112 tests OK. 3 tests failed: test___all__ test_coercion test_pty 22 tests skipped: test_al test_bsddb test_cd test_cl test_dl test_gdbm test_gl test_gzip test_imgfile test_largefile test_linuxaudio dev test_minidom test_nis test_openpty test_pyexpat test_sax test_sunaudiodev test_sundry test_winreg test_winsound test_zipfile test_zlib make: 1254-004 The error code from the last command is 1. ---------------------------------------------------------------------- Comment By: The Written Word (china) (tww-china) Date: 2001-06-14 04:18 Message: Logged In: YES user_id=119770 If you have the IBM C compiler, the magic flag is -qfloat=nomaf. 
This should probably be documented in Misc/AIX-NOTES. According to the C for AIX manual: The nomaf option is provided for cases where it is necessary to exactly duplicate the double results of an implmeentation that does not have multiply-add operations. The nomaf option prevents the compiler from generating any multiply-add operations. Not using multiply-add operations decreases accuracy and performance but strictly conforms to the IEEE standard for double-precision arithmetic. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-05-11 11:27 Message: Logged In: YES user_id=31435 test___all__ failures are always spurious (due to incomplete module objects left behind in sys.modules for other reasons -- in this case presumably related to the test_pty failure). The test_coerce failure is a +0 versus -0 IEEE-754 thing. Platforms that produce -0 here are not computing the correct sign bit for (+0)-(+0) (according to the 754 rules). First make sure you're using whatever gibberish your platform C requires to produce 754-conforming code. If it still fails, and the HW supports any sort of fused multiply+add or multiply+subtract instructions, tell the compiler not to use them. If it still fails, we're in for long and tedious platform-specific debugging work. Assigned to Barry in case he has a clue about test_pty (I don't). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=420476&group_id=5470 From noreply@sourceforge.net Thu Jun 14 20:25:10 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 12:25:10 -0700 Subject: [Python-bugs-list] [ python-Bugs-433223 ] LICENSE file in 2.0.1c has three typos Message-ID: Bugs item #433223, was updated on 2001-06-14 12:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433223&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gregor Hoffleit (flight) Assigned to: Nobody/Anonymous (nobody) Summary: LICENSE file in 2.0.1c has three typos Initial Comment: As Thomas Wouters already pointed out, the LICENSE file in 2.0.1 is a little bit odd: In lines 23 and 85, it refers to "Python 2.1" instead of "Python 2.0.1". This should be changed before the release of 2.0.1. Also, in the LICENSE file in CVS (the one that's going to be used starting with 2.1.1), you should change line 28+29 to the version used in 2.0.1, i.e. it should now read "including Python 2.0.1" instead of "starting with Python 2.1". 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433223&group_id=5470 From noreply@sourceforge.net Thu Jun 14 20:27:52 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 12:27:52 -0700 Subject: [Python-bugs-list] [ python-Bugs-433223 ] LICENSE file in 2.0.1c1 has three typos Message-ID: Bugs item #433223, was updated on 2001-06-14 12:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433223&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gregor Hoffleit (flight) >Assigned to: Guido van Rossum (gvanrossum) >Summary: LICENSE file in 2.0.1c1 has three typos Initial Comment: As Thomas Wouters already pointed out, the LICENSE file in 2.0.1 is a little bit odd: In lines 23 and 85, it refers to "Python 2.1" instead of "Python 2.0.1". This should be changed before the release of 2.0.1. Also, in the LICENSE file in CVS (the one that's going to be used starting with 2.1.1), you should change line 28+29 to the version used in 2.0.1, i.e. it should now read "including Python 2.0.1" instead of "starting with Python 2.1". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433223&group_id=5470 From noreply@sourceforge.net Thu Jun 14 20:37:49 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 12:37:49 -0700 Subject: [Python-bugs-list] [ python-Bugs-433228 ] repr(list) woes when len(list) big Message-ID: Bugs item #433228, was updated on 2001-06-14 12:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Nobody/Anonymous (nobody) Summary: repr(list) woes when len(list) big Initial Comment: This code fatally confuses Win2K: ints = range(100000) x = map(ints, ints) "ints" isn't a callable object, so call_object does PyObject_Repr on it in order to produce an error msg. In the bowels of list_repr, i gets to 21069 and that's all: string_concat's call to PyObject_MALLOC never returns. size==136374 at this point, so it's not like we're asking for an unreasonable amount of memory, Win2K is just lost. Hitting Ctrl+C does interrupt the program, but it dies immediately then with a memory fault inside MS's runtime libraries. The simpler x = repr(range(100000)) is much the same, except list_repr's i sticks at 15713 then, and hitting Ctrl+C confuses the debugger. On Linux there are no memory faults, but on Fred's laptop the second program snippet showed no sign of completing. Since list_repr uses a quadratic-time algorithm, that much was expected; whether it's reasonable is open to debate. 
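An illustrative way to see the quadratic behaviour described above from the interpreter (this sketch is not part of the tracker item; the sizes are arbitrary and timings will vary by machine):

import time

for n in (5000, 10000, 20000, 40000):
    lst = range(n)                    # a plain list of ints, as in the report
    t0 = time.clock()
    repr(lst)                         # built by repeated string concatenation
    print n, round(time.clock() - t0, 3)

If repr() really is quadratic here, each doubling of n should roughly quadruple the measured time.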
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 From noreply@sourceforge.net Thu Jun 14 20:38:45 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 12:38:45 -0700 Subject: [Python-bugs-list] [ python-Bugs-433223 ] LICENSE file in 2.0.1c1 has three typos Message-ID: Bugs item #433223, was updated on 2001-06-14 12:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433223&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Gregor Hoffleit (flight) Assigned to: Guido van Rossum (gvanrossum) Summary: LICENSE file in 2.0.1c1 has three typos Initial Comment: As Thomas Wouters already pointed out, the LICENSE file in 2.0.1 is a little bit odd: In lines 23 and 85, it refers to "Python 2.1" instead of "Python 2.0.1". This should be changed before the release of 2.0.1. Also, in the LICENSE file in CVS (the one that's going to be used starting with 2.1.1), you should change line 28+29 to the version used in 2.0.1, i.e. it should now read "including Python 2.0.1" instead of "starting with Python 2.1". ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-14 12:38 Message: Logged In: YES user_id=6380 The 2.0.1 license is fixed in CVS. (This was fixed after I pushed the release files out but before I announced the release. It will be corrected in 2.0.1 final.) The 2.1.1 CVS license is also fixed now. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433223&group_id=5470 From noreply@sourceforge.net Thu Jun 14 21:35:52 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Jun 2001 13:35:52 -0700 Subject: [Python-bugs-list] [ python-Bugs-433234 ] Problems building under HP-UX 11.0 Message-ID: Bugs item #433234, was updated on 2001-06-14 13:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433234&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ivan Munoz (imbunche) Assigned to: Nobody/Anonymous (nobody) Summary: Problems building under HP-UX 11.0 Initial Comment: My system: HP-UX dev1 B.11.00 A 9000/782 2015744842 two-user license I'm using gcc, gcc version 2.95.2 19991024 (release) Trying to compile Python 2.1 I ran into a number of problems. After the customary configure; make I get the following error while making the modules ld: DP relative code in file build/temp.hp-ux-B.11.00-9000/782-2.1/_weakref.o - shared library must be position independent. Use +z or +Z to recompile. WARNING: building of extension "_weakref" failed: command 'ld' failed with exit status 1 The error message is the same for all extensions. I fixed it by defining in the Makefile CFLAGSFORSHARED=-fpic tried again ... better but: gcc -Wl,-E -Wl,+s -Wl,+b/home/imunoz/local/hp/lib/python2.1/lib-dynload -o python \ Modules/python.o \ libpython2.1.a -lnsl -ldld -lpthread -lm /usr/bin/ld: Data Linkage Table (+z) overflow in file libpython2.1.a(exceptions.o) - use +Z option to recompile collect2: ld returned 1 exit status make: *** [python] Error 1 Then I changed -fpic by -fPIC and voila! (tip found thanx to a google search). 
still problems when generating the module's shared libraries. Fixed by: Makefile:301 # Build the shared modules sharedmods: $(PYTHON) PY_CFLAGS= $(CFLAGS) $(CFLAGSFORSHARED) -fPIC PYTHONPATH= ./$(PYTHON) $(srcdir)/setup.py build This last step I don't understand but Python won't make the modules without it. Finally got Python 2.1 in HP-UX 11. Does anyone knows a better way? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433234&group_id=5470 From noreply@sourceforge.net Fri Jun 15 17:40:51 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Jun 2001 09:40:51 -0700 Subject: [Python-bugs-list] [ python-Bugs-433228 ] repr(list) woes when len(list) big Message-ID: Bugs item #433228, was updated on 2001-06-14 12:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Nobody/Anonymous (nobody) Summary: repr(list) woes when len(list) big Initial Comment: This code fatally confuses Win2K: ints = range(100000) x = map(ints, ints) "ints" isn't a callable object, so call_object does PyObject_Repr on it in order to produce an error msg. In the bowels of list_repr, i gets to 21069 and that's all: string_concat's call to PyObject_MALLOC never returns. size==136374 at this point, so it's not like we're asking for an unreasonable amount of memory, Win2K is just lost. Hitting Ctrl+C does interrupt the program, but it dies immediately then with a memory fault inside MS's runtime libraries. The simpler x = repr(range(100000)) is much the same, except list_repr's i sticks at 15713 then, and hitting Ctrl+C confuses the debugger. On Linux there are no memory faults, but on Fred's laptop the second program snippet showed no sign of completing. Since list_repr uses a quadratic-time algorithm, that much was expected; whether it's reasonable is open to debate. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-15 09:40 Message: Logged In: YES user_id=6380 Seem to be two things: 1) The error message in call_object() uses repr() of an unknown object. THIS IS EVIL. Error messages should NEVER use the repr of an object unless they know for sure that the repr fits in a few hundred bytes. They should show the type of the unknown object instead. 2) Repr of a very long list is inefficient. I can live with that; it falls in the category "then don't do that". it can be interrupted with ^C. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 From noreply@sourceforge.net Fri Jun 15 18:06:25 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Jun 2001 10:06:25 -0700 Subject: [Python-bugs-list] [ python-Bugs-433481 ] No way to link python itself with C++ Message-ID: Bugs item #433481, was updated on 2001-06-15 10:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Stephan A. 
Fiedler (sfiedler) Assigned to: Nobody/Anonymous (nobody) Summary: No way to link python itself with C++ Initial Comment: I'm running on Solaris 2.7 with the Sun Workshop compiler, version 4.2. I have built an extension module in C++ as a shared object. When I attempt to import it into Python, I get an error about missing symbols related to C++ exception handling: ImportError: ld.so.1: python: fatal: relocation error: file /home/saf/pymidas/m2k/solaris_debug/comp/m2kapi.so: symbol _ex_keylock: referenced symbol not found This symbol lives in the C++ runtime, libC.so. 'ldd python' shows that this library is not available to the Python executable itself, because the C compiler linked the executable. If I manually edit the makefile for building python so that LINKCC is $(PURIFY) $(CXX) instead of $(PURIFY) $(CC) and then relink just the Python executable, I can see (with ldd) that the C++ runtime libC.so is now linked with Python, and I am able to load my module. (I believe it is actually no problem to build the entire system with LINKCC calling CXX instead of CC.) In case it's relevant, my extension module itself is compiled with these flags: -DDEBUG -DSUNCC_ -mt -pto -PIC -xildoff +w2 -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 and linked with these: -G -z text Bug #413582 may be related to this in some way. So the short of it is that I would like a configure option to link the final python executable using the C++ compiler on Solaris, so that I can get the C++ runtime linked in with python itself. Note that this doesn't seem to matter on Compaq Tru64 Unix systems, where the default Python build works just fine with my extension module. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 From noreply@sourceforge.net Sat Jun 16 01:15:56 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Jun 2001 17:15:56 -0700 Subject: [Python-bugs-list] [ python-Bugs-433228 ] repr(list) woes when len(list) big Message-ID: Bugs item #433228, was updated on 2001-06-14 12:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Nobody/Anonymous (nobody) Summary: repr(list) woes when len(list) big Initial Comment: This code fatally confuses Win2K: ints = range(100000) x = map(ints, ints) "ints" isn't a callable object, so call_object does PyObject_Repr on it in order to produce an error msg. In the bowels of list_repr, i gets to 21069 and that's all: string_concat's call to PyObject_MALLOC never returns. size==136374 at this point, so it's not like we're asking for an unreasonable amount of memory, Win2K is just lost. Hitting Ctrl+C does interrupt the program, but it dies immediately then with a memory fault inside MS's runtime libraries. The simpler x = repr(range(100000)) is much the same, except list_repr's i sticks at 15713 then, and hitting Ctrl+C confuses the debugger. On Linux there are no memory faults, but on Fred's laptop the second program snippet showed no sign of completing. Since list_repr uses a quadratic-time algorithm, that much was expected; whether it's reasonable is open to debate. 
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-15 17:15 Message: Logged In: YES user_id=31435 WRT #1, I patched call_object() (ceval.c rev 2.247), and now it displays TypeError: object of type 'list' is not callable WRT #2, I'm leaving this report open, because interrupting via Ctrl+C can lead to memory faults on Win2K (see original report), and because the 2.2 Python pprint.pprint(x) is much faster than builtin repr(x) for large x of list, tuple and dict types (on both Windows and Linux). This makes "an excuse" less appealing than it was in 2.1. Making repr() linear-time in these cases is straightforward, but requires a Python-level way to get at string.join. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-15 09:40 Message: Logged In: YES user_id=6380 Seem to be two things: 1) The error message in call_object() uses repr() of an unknown object. THIS IS EVIL. Error messages should NEVER use the repr of an object unless they know for sure that the repr fits in a few hundred bytes. They should show the type of the unknown object instead. 2) Repr of a very long list is inefficient. I can live with that; it falls in the category "then don't do that". it can be interrupted with ^C. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 From noreply@sourceforge.net Sat Jun 16 03:17:17 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Jun 2001 19:17:17 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Nobody/Anonymous (nobody) Summary: bug in PyThread_release_lock() Initial Comment: The mutex should be held when calling pthread_cond_signal().
This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Sat Jun 16 06:14:28 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Jun 2001 22:14:28 -0700 Subject: [Python-bugs-list] [ python-Bugs-433228 ] repr(list) woes when len(list) big Message-ID: Bugs item #433228, was updated on 2001-06-14 12:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Tim Peters (tim_one) >Assigned to: Tim Peters (tim_one) Summary: repr(list) woes when len(list) big Initial Comment: This code fatally confuses Win2K: ints = range(100000) x = map(ints, ints) "ints" isn't a callable object, so call_object does PyObject_Repr on it in order to produce an error msg. In the bowels of list_repr, i gets to 21069 and that's all: string_concat's call to PyObject_MALLOC never returns. size==136374 at this point, so it's not like we're asking for an unreasonable amount of memory, Win2K is just lost. Hitting Ctrl+C does interrupt the program, but it dies immediately then with a memory fault inside MS's runtime libraries. The simpler x = repr(range(100000)) is much the same, except list_repr's i sticks at 15713 then, and hitting Ctrl+C confuses the debugger. On Linux there are no memory faults, but on Fred's laptop the second program snippet showed no sign of completing. Since list_repr uses a quadratic-time algorithm, that much was expected; whether it's reasonable is open to debate. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-15 22:14 Message: Logged In: YES user_id=31435 Closed. I gave Python linear-time repr() implementations for dicts, lists and tuples: Include/stringobject.h; new revision: 2.28 Objects/dictobject.c; new revision: 2.104 Objects/listobject.c,v new revision: 2.96 Objects/stringobject.c; new revision 2.119 Objects/tupleobject.c; new revision: 2.53 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-15 17:15 Message: Logged In: YES user_id=31435 WRT #1, I patched call_object() (ceval.c rev 2.247), and now it displays TypeError: object of type 'list' is not callable WRT #2, I'm leaving this report open, because interrupting via Ctrl+C can lead to memory faults on Win2K (see original report), and because the 2.2 Python pprint.pprint(x) is much faster than builtin repr(x) for large x of list, tuple and dict types (on both Windows and Linux). This makes "an excuse" less appealing than it was in 2.1. Making repr() linear-time in these cases is straightforward, but requires a Python-level way to get at string.join. 
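A Python-level sketch of the difference Tim is pointing at here (illustrative only; the real fix was made in C, in the files listed in the closing comment above):

import string

def list_repr_quadratic(lst):
    # Repeated '+' copies the growing result on every step: O(n**2) overall.
    s = '['
    for i in range(len(lst)):
        if i:
            s = s + ', '
        s = s + repr(lst[i])
    return s + ']'

def list_repr_linear(lst):
    # Collect the pieces first and join them once: O(n) overall.
    return '[' + string.join(map(repr, lst), ', ') + ']'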
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-15 09:40 Message: Logged In: YES user_id=6380 Seem to be two things: 1) The error message in call_object() uses repr() of an unknown object. THIS IS EVIL. Error messages should NEVER use the repr of an object unless they know for sure that the repr fits in a few hundred bytes. They should show the type of the unknown object instead. 2) Repr of a very long list is inefficient. I can live with that; it falls in the category "then don't do that". it can be interrupted with ^C. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433228&group_id=5470 From noreply@sourceforge.net Sat Jun 16 08:53:44 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Jun 2001 00:53:44 -0700 Subject: [Python-bugs-list] [ python-Bugs-432786 ] Python 2.1 test_locale fails Message-ID: Bugs item #432786, was updated on 2001-06-13 07:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432786&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Paul M. Dubuc (dubuc) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.1 test_locale fails Initial Comment: I'm building Python 2.1 on Solaris 2.6. When I 'make test', the test_locale module is the only one that fails: test test_locale failed -- Writing: "'%f' % 1024 == '1024.000000' != '1,024.000000'", expected: '' ... The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ********************************************************************** test_locale ********************************************************************** Then ... We expected (repr): '' But instead we got: "'%f' % 1024 == '1024.000000' != '1,024.000000'" ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-16 00:53 Message: Logged In: YES user_id=21627 That appears to be a bug in Solaris 2.6. To see the problem, please try the following program import locale locale.setlocale(locale.LC_ALL,"en_US") c=locale.localeconv() print c['grouping'],repr(c['thousands_sep']) In the en_US locale, the thousands separator *should* be a comma, but Solaris 2.6 reports that this locale has no thousands separator. For locale information, Python relies on what the operating system reports. As it is an OS bug, I'm closing the report as "won't fix". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432786&group_id=5470 From noreply@sourceforge.net Sat Jun 16 09:03:38 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Jun 2001 01:03:38 -0700 Subject: [Python-bugs-list] [ python-Bugs-433481 ] No way to link python itself with C++ Message-ID: Bugs item #433481, was updated on 2001-06-15 10:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Stephan A. Fiedler (sfiedler) Assigned to: Nobody/Anonymous (nobody) Summary: No way to link python itself with C++ Initial Comment: I'm running on Solaris 2.7 with the Sun Workshop compiler, version 4.2. 
I have built an extension module in C++ as a shared object. When I attempt to import it into Python, I get an error about missing symbols related to C++ exception handling: ImportError: ld.so.1: python: fatal: relocation error: file /home/saf/pymidas/m2k/solaris_debug/comp/m2kapi.so: symbol _ex_keylock: referenced symbol not found This symbol lives in the C++ runtime, libC.so. 'ldd python' shows that this library is not available to the Python executable itself, because the C compiler linked the executable. If I manually edit the makefile for building python so that LINKCC is $(PURIFY) $(CXX) instead of $(PURIFY) $(CC) and then relink just the Python executable, I can see (with ldd) that the C++ runtime libC.so is now linked with Python, and I am able to load my module. (I believe it is actually no problem to build the entire system with LINKCC calling CXX instead of CC.) In case it's relevant, my extension module itself is compiled with these flags: -DDEBUG -DSUNCC_ -mt -pto -PIC -xildoff +w2 -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 and linked with these: -G -z text Bug #413582 may be related to this in some way. So the short of it is that I would like a configure option to link the final python executable using the C++ compiler on Solaris, so that I can get the C++ runtime linked in with python itself. Note that this doesn't seem to matter on Compaq Tru64 Unix systems, where the default Python build works just fine with my extension module. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-16 01:03 Message: Logged In: YES user_id=21627 I believe the right fix to your problem would be to link your extension module using CC, not using ld. In theory, that should provide all required libraries to the shared object itself. Please report whether this solves the problem. As for the configure option: This is already configurable. Just set LINKCC when making python. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 From noreply@sourceforge.net Sat Jun 16 18:21:03 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Jun 2001 10:21:03 -0700 Subject: [Python-bugs-list] [ python-Bugs-433775 ] module build dir first in test import Message-ID: Bugs item #433775, was updated on 2001-06-16 10:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433775&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Sjoerd Mullender (sjoerd) Assigned to: Nobody/Anonymous (nobody) Summary: module build dir first in test import Initial Comment: This problem was found on a Linux (RedHat 7.1) system, but applies to all Unix systems. In the step "python setup.py build", after a shared module is built, setup checks whether it can import the module. During this test, the build directory should be at the beginning of sys.path to make sure you get the module that was actually built. Currently, the build directory is at the end. 
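The change being asked for amounts to prepending rather than appending the build directory before the trial import; a minimal sketch of the idea (hypothetical helper, not the actual distutils code, which uses different names):

import sys

def check_built_module(name, build_dir):
    # Put the build directory first so the freshly built extension shadows
    # any older copy that may already be installed elsewhere on sys.path.
    sys.path.insert(0, build_dir)     # rather than sys.path.append(build_dir)
    try:
        __import__(name)
    finally:
        del sys.path[0]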
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433775&group_id=5470 From noreply@sourceforge.net Sun Jun 17 08:06:37 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 00:06:37 -0700 Subject: [Python-bugs-list] [ python-Bugs-433854 ] Wrong sys.path in weird situation Message-ID: Bugs item #433854, was updated on 2001-06-17 00:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433854&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nicholas Riley (nriley) Assigned to: Nobody/Anonymous (nobody) Summary: Wrong sys.path in weird situation Initial Comment: I swear, this was my normal setup! :-) Python 2.1 on IRIX 6.5.10. % echo $PYTHONPATH /home/reed/njriley/usr/lib/python2.1/site-packages % echo $PATH /home/reed/njriley/bin/sgi:/home/reed/njriley/bin:/usr/bin/X11:/usr/local/bin:/local/bin:/usr/dcs/software/unsupported/bin:/usr/bsd:/usr/sbin:/usr/bin:/bin:/usr/etc:/etc % python Python 2.1 (#2, Jun 17 2001, 01:43:04) [C] on irix6 Type "copyright", "credits" or "license" for more information. >>> import sys; print sys.path ['', '/home/reed/njriley/usr/lib/python2.1/site-packages', '/home/reed/njriley/encap/python-2.1/lib/python2.1', '/home/reed/njriley/encap/python-2.1/lib/python2.1/plat-irix6', '/home/reed/njriley/encap/python-2.1/lib/python2.1/lib-tk', '/home/reed/njriley/encap/python-2.1/lib/python2.1/lib-dynload'] >>> ^D % ls -d /home/reed/njriley/encap gls: /home/reed/njriley/encap: No such file or directory So, how did it get there? This way: lrwx--x--x 1 njriley reed 7 Jun 17 01:55 bin -> usr/bin/ lrwxr-xr-x 1 njriley reed 30 Jun 17 01:46 bin/python -> ../encap/python-2.1/bin/python* If I replace ~/bin with ~/usr/bin in my PATH, everything is fine. Python is trying to resolve the second symlink before resolving the first one, thereby causing a problem. 
--Nicholas ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433854&group_id=5470 From noreply@sourceforge.net Sun Jun 17 11:42:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 03:42:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-433875 ] 2.1 nocaret.py: SyntaxError Message-ID: Bugs item #433875, was updated on 2001-06-17 03:42 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433875&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: 2.1 nocaret.py: SyntaxError Initial Comment: when running compileall on a 2.1 distribution: File "/usr/local/lib/python2.1/test/nocaret.py", line 2 [x for x in x] = x SyntaxError: can't assign to list comprehension SyntaxError: from __future__ imports must occur at the beginning of the file (test_future3.py, line 3) SyntaxError: from __future__ imports must occur at the beginning of the file (test_future4.py, line 3) SyntaxError: from __future__ imports must occur at the beginning of the file (test_future5.py, line 4) SyntaxError: from __future__ imports must occur at the beginning of the file (test_future6.py, line 3) SyntaxError: from __future__ imports must occur at the beginning of the file (test_future7.py, line 3) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433875&group_id=5470 From noreply@sourceforge.net Sun Jun 17 12:27:45 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 04:27:45 -0700 Subject: [Python-bugs-list] [ python-Bugs-433882 ] UTF-8: unpaired surrogates mishandled Message-ID: Bugs item #433882, was updated on 2001-06-17 04:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: UTF-8: unpaired surrogates mishandled Initial Comment: Two bugs: 1. UTF-8 encoding of unpaired high surrogate produces an invalid UTF-8 byte sequence. 2. UTF-8 decoding of any unpaired surrogate produces an exception ("illegal encoding") instead of the corresponding 16-bit scalar value. See attached file utf8bugs.py for example plus detailed remarks. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 From noreply@sourceforge.net Sun Jun 17 14:04:55 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 06:04:55 -0700 Subject: [Python-bugs-list] [ python-Bugs-433904 ] rexec: all s_* methods return None only Message-ID: Bugs item #433904, was updated on 2001-06-17 06:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alex Martelli (aleax) Assigned to: Nobody/Anonymous (nobody) Summary: rexec: all s_* methods return None only Initial Comment: D:\py21>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. 
>>> import rexec >>> r=rexec.RExec() >>> x=r.r_eval('2+2') >>> print x 4 >>> x=r.s_eval('2+2') >>> print x None >>> Cause: method s_apply lacks a 'return r' at the end, and all the other s_* methods should be 'return self.s_apply(...' but are in fact lacking the return keyword (they just call s_apply but ignore its result). Alex ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 From noreply@sourceforge.net Sun Jun 17 18:42:12 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 10:42:12 -0700 Subject: [Python-bugs-list] [ python-Bugs-433047 ] missing args to PyArg_ParseTuple Message-ID: Bugs item #433047, was updated on 2001-06-14 02:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 >Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: M.-A. Lemburg (lemburg) Summary: missing args to PyArg_ParseTuple Initial Comment: The following calls to PyArg_ParseTuple are missing an argument, according to their format string: Modules/_codecmodule.c:443: in utf_16_le_encode Modules/_codecmodule.c:466: in utf_16_be_encode Modules/pcremodule.c:77: in PyPcre_exec ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-17 10:42 Message: Logged In: YES user_id=31435 Assigned to MAL for the _codecsmodule.c snafus. Suggest assigning to AMK next for the PCRE one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 From noreply@sourceforge.net Sun Jun 17 19:33:37 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 11:33:37 -0700 Subject: [Python-bugs-list] [ python-Bugs-433047 ] missing args to PyArg_ParseTuple Message-ID: Bugs item #433047, was updated on 2001-06-14 02:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: A.M. Kuchling (akuchling) Summary: missing args to PyArg_ParseTuple Initial Comment: The following calls to PyArg_ParseTuple are missing an argument, according to their format string: Modules/_codecmodule.c:443: in utf_16_le_encode Modules/_codecmodule.c:466: in utf_16_be_encode Modules/pcremodule.c:77: in PyPcre_exec ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:33 Message: Logged In: YES user_id=38388 Fixed the codec part... Andrew is next in line ;-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 10:42 Message: Logged In: YES user_id=31435 Assigned to MAL for the _codecsmodule.c snafus. Suggest assigning to AMK next for the PCRE one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 From noreply@sourceforge.net Sun Jun 17 19:47:29 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 11:47:29 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? 
Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open >Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. 
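For what it's worth, the situation the report describes can be inspected from a Python prompt (illustrative only; the numbers printed are whatever the local C compiler uses for its native types):

import struct

# Native sizes of the C integer types in this build.  On most platforms
# 'h' prints 2; the report says the Cray T3E has no 16-bit type at all.
print 'short:', struct.calcsize('h')
print 'int:  ', struct.calcsize('i')
print 'long: ', struct.calcsize('l')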
In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Sun Jun 17 20:44:07 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 12:44:07 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. 
We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. 
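The binascii trick Tim mentions in his 12:44 comment (complement via XOR with a 32-bit mask instead of '~') can be mimicked with ordinary Python integers to see why it stays inside 32 bits (illustrative arithmetic only):

crc = 0x12345678
print hex(crc ^ 0xFFFFFFFFL)      # 0xEDCBA987L -- still a 32-bit value
print hex(~crc & 0xFFFFFFFFL)     # the '~' form needs an extra mask to match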
---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Sun Jun 17 20:57:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 12:57:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? 
Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. 
Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. 
Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Sun Jun 17 21:54:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 13:54:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Deleted >Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Nobody/Anonymous (nobody) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Sun Jun 17 22:05:35 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 14:05:35 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. 
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:05 Message: Logged In: YES user_id=31435 The code snippet there will work fine with any integral type >= 2 bytes if you just add the line ch &= 0xffff; between the computation and the "if". It will actually work fine even if you *don't* put in that mask, but deducing that required analysis of the specific operations (you shift 4 bits left 12, 6 bits left 6 so they don't overlap with the first chunk and so the "+" can't cause a carry, and then add another chunk of non- overlapping 6 bits, so again there's no carry, and therefore the infinite-precision result fits in no more than 16 bits, and so there's no need to mask). About pointers, I don't see a problem there either, unless you're casting a Py_UNICODE* to a char* then adding a hardcoded 2. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. 
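To make the masking suggestion above concrete, here is a minimal sketch of the quoted three-byte UTF-8 computation with the proposed ch &= 0xffff added. It is illustrative only: unichar_t stands in for Py_UNICODE and is deliberately wider than 16 bits; this is not the unicodeobject.c code.

#include <stdio.h>

/* Stand-in for Py_UNICODE on a platform with no 16-bit integer type. */
typedef unsigned long unichar_t;

/* Decode one 3-byte UTF-8 sequence (no validation of the continuation
   bytes -- this only illustrates the arithmetic).  The mask is a no-op
   when the type is exactly 16 bits wide and, as noted above, the sum
   cannot exceed 16 bits anyway; it just makes the assumption explicit. */
static unichar_t decode_utf8_3(const unsigned char *s)
{
    unichar_t ch = ((s[0] & 0x0f) << 12) +
                   ((s[1] & 0x3f) << 6) +
                    (s[2] & 0x3f);
    ch &= 0xffff;
    return ch;
}

int main(void)
{
    const unsigned char euro[3] = { 0xE2, 0x82, 0xAC };  /* U+20AC EURO SIGN */
    printf("U+%04lX\n", (unsigned long) decode_utf8_3(euro));
    return 0;
}

Run with a 4-byte unichar_t this prints U+20AC, the same value a 2-byte Py_UNICODE would hold.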
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 
2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Sun Jun 17 22:16:32 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 14:16:32 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific Status: Deleted Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Nobody/Anonymous (nobody) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Sun Jun 17 22:38:10 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 14:38:10 -0700 Subject: [Python-bugs-list] [ python-Bugs-433904 ] rexec: all s_* methods return None only Message-ID: Bugs item #433904, was updated on 2001-06-17 06:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alex Martelli (aleax) >Assigned to: Guido van Rossum (gvanrossum) Summary: rexec: all s_* methods return None only Initial Comment: D:\py21>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. 
>>> import rexec >>> r=rexec.RExec() >>> x=r.r_eval('2+2') >>> print x 4 >>> x=r.s_eval('2+2') >>> print x None >>> Cause: method s_apply lacks a 'return r' at the end, and all the other s_* methods should be 'return self.s_apply(...' but are in fact lacking the return keyword (they just call s_apply but ignore its result). Alex ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:38 Message: Logged In: YES user_id=6380 Good catch. You see, rexec hasn't exactly been miss popularity... Does the attached patch fix this for you? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 From noreply@sourceforge.net Sun Jun 17 22:51:41 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 14:51:41 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Closed Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Nobody/Anonymous (nobody) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Sun Jun 17 23:01:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 15:01:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Open Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) >Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). 
This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Mon Jun 18 00:02:35 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 16:02:35 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Closed Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. 
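For comparison, a minimal standalone lock written with the ordering proposed in the initial comment -- predicate changed and condition signalled while the mutex is held, waiter using the usual predicate loop. The names are made up for illustration; this is not the code in Python's thread_pthread.h.

#include <pthread.h>

typedef struct {
    pthread_mutex_t mut;
    pthread_cond_t  released;
    int             locked;
} toy_lock;

void toy_init(toy_lock *lk)
{
    pthread_mutex_init(&lk->mut, NULL);
    pthread_cond_init(&lk->released, NULL);
    lk->locked = 0;
}

void toy_acquire(toy_lock *lk)
{
    pthread_mutex_lock(&lk->mut);
    while (lk->locked)                    /* predicate loop required by POSIX */
        pthread_cond_wait(&lk->released, &lk->mut);
    lk->locked = 1;
    pthread_mutex_unlock(&lk->mut);
}

void toy_release(toy_lock *lk)
{
    pthread_mutex_lock(&lk->mut);
    lk->locked = 0;
    pthread_cond_signal(&lk->released);   /* signalled before the mutex is dropped */
    pthread_mutex_unlock(&lk->mut);
}

Note that POSIX does allow pthread_cond_signal to be called without holding the mutex; locking it around the signal is only required for predictable scheduling behaviour, which is essentially the trade-off discussed in the comments below.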
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Mon Jun 18 03:03:24 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 19:03:24 -0700 Subject: [Python-bugs-list] [ python-Bugs-433882 ] UTF-8: unpaired surrogates mishandled Message-ID: Bugs item #433882, was updated on 2001-06-17 04:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: UTF-8: unpaired surrogates mishandled Initial Comment: Two bugs: 1. UTF-8 encoding of unpaired high surrogate produces an invalid UTF-8 byte sequence. 2. UTF-8 decoding of any unpaired surrogate produces an exception ("illegal encoding") instead of the corresponding 16-bit scalar value. See attached file utf8bugs.py for example plus detailed remarks. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-17 19:03 Message: Logged In: YES user_id=21627 I think the codec should reject unpaired surrogates both when encoding and when decoding. I don't have a copy of ISO 10646, but Unicode 3.1 points out # ISO/IEC 10646 does not allow mapping of unpaired surrogates, nor U+FFFE and U+FFFF (but it does allow other noncharacters). So apparently, encoding unpaired surrogates as UTF-8 is not allowed according to ISO 10646. I think Python should follow this rule, instead of the Unicode one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 From noreply@sourceforge.net Mon Jun 18 07:01:19 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 23:01:19 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Open Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). 
This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable would ensure that PyThread_release_lock protects the condition variable, and I was wrong. The linuxthreads man page on pthread_cond_signal says: "A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it." This means you can't call pthread_cond_signal and pthread_cond_wait on the same condition variable at the same time, and using a mutex to protect them is a good idea. Here is how things might go wrong with the current implementation (thread 1 holds the lock; thread 2 calls PyThread_acquire_lock with waitflag set):

1. thread 2 enters PyThread_acquire_lock; since the lock was acquired by thread 1, the first pthread_mutex_lock/pthread_mutex_unlock pair around the test leaves success at 0, and thread 2 is then suspended.
2. thread 1 enters PyThread_release_lock: it locks thelock->mut, sets thelock->locked = 0, unlocks thelock->mut, and is suspended just before pthread_cond_signal.
3. thread 2 resumes: because !success && waitflag, it locks thelock->mut again and calls pthread_cond_wait(&thelock->lock_released, &thelock->mut); it is suspended while pthread_cond_wait is still updating the condition variable's shared data.
4. thread 1 resumes and calls pthread_cond_signal(&thelock->lock_released), updating the same shared data and possibly corrupting it.

Not sure what the effect would be. It wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursue this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held.
Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Mon Jun 18 07:59:14 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Jun 2001 23:59:14 -0700 Subject: [Python-bugs-list] [ python-Bugs-433904 ] rexec: all s_* methods return None only Message-ID: Bugs item #433904, was updated on 2001-06-17 06:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alex Martelli (aleax) Assigned to: Guido van Rossum (gvanrossum) Summary: rexec: all s_* methods return None only Initial Comment: D:\py21>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import rexec >>> r=rexec.RExec() >>> x=r.r_eval('2+2') >>> print x 4 >>> x=r.s_eval('2+2') >>> print x None >>> Cause: method s_apply lacks a 'return r' at the end, and all the other s_* methods should be 'return self.s_apply(...' but are in fact lacking the return keyword (they just call s_apply but ignore its result). Alex ---------------------------------------------------------------------- >Comment By: Alex Martelli (aleax) Date: 2001-06-17 23:59 Message: Logged In: YES user_id=60314 Yep, the patch is of course perfect, thanks. Hope it makes it into 2.1.1 as it's so obviously flawless & necessary. (I had never used the s_* methods, but I'm religiously checking everything I'm writing about for the Nutshell book, so...:-). Alex ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:38 Message: Logged In: YES user_id=6380 Good catch. You see, rexec hasn't exactly been miss popularity... Does the attached patch fix this for you? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 From noreply@sourceforge.net Mon Jun 18 09:19:21 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 01:19:21 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. 
Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 01:19 Message: Logged In: YES user_id=38388 Ok, I agree that the math will probably work in most cases due to the fact that UTF-16 will never produce values outside the 16-bit range, but you still have the problem with iterating over Py_UNICODE arrays: the compiler will assume that ch++ means to move the pointer by sizeof(Py_UNICODE) bytes and this breaks in case you use e.g. a 32-bit integer type for Py_UNICODE. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:05 Message: Logged In: YES user_id=31435 The code snippet there will work fine with any integral type >= 2 bytes if you just add the line ch &= 0xffff; between the computation and the "if". It will actually work fine even if you *don't* put in that mask, but deducing that required analysis of the specific operations (you shift 4 bits left 12, 6 bits left 6 so they don't overlap with the first chunk and so the "+" can't cause a carry, and then add another chunk of non- overlapping 6 bits, so again there's no carry, and therefore the infinite-precision result fits in no more than 16 bits, and so there's no need to mask). About pointers, I don't see a problem there either, unless you're casting a Py_UNICODE* to a char* then adding a hardcoded 2. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. 
Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. 
Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Mon Jun 18 12:56:53 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 04:56:53 -0700 Subject: [Python-bugs-list] [ python-Bugs-434143 ] calendar module broken for 1900 Message-ID: Bugs item #434143, was updated on 2001-06-18 04:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434143&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Alexandre Fayolle (afayolle) Assigned to: Nobody/Anonymous (nobody) Summary: calendar module broken for 1900 Initial Comment: Hi there, this is a 'feature' I met on both 1.5.2 and 2.1. >>> import calendar >>> calendar.prcal(1865) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.1/calendar.py", line 160, in prcal print calendar(year, w, l, c), File "/usr/lib/python2.1/calendar.py", line 179, in calendar cal = monthcalendar(year, amonth) File "/usr/lib/python2.1/calendar.py", line 85, in monthcalendar day1, ndays = monthrange(year, month) File "/usr/lib/python2.1/calendar.py", line 78, in monthrange day1 = weekday(year, month, 1) File "/usr/lib/python2.1/calendar.py", line 69, in weekday secs = mktime((year, month, day, 0, 0, 0, 0, 0, 0)) ValueError: year out of range (00-99, 1900-*) (note that the documentation only refers to 1970 as a possible limit, and does not mention how dates in 00-99 range are processed) Now if I try to get the calendar for year 1900 (which is supposed to work according to the message hereabove), I get >>> calendar.prcal(1900) Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.1/calendar.py", line 160, in prcal print calendar(year, w, l, c), File "/usr/lib/python2.1/calendar.py", line 179, in calendar cal = monthcalendar(year, amonth) File "/usr/lib/python2.1/calendar.py", line 85, in monthcalendar day1, ndays = monthrange(year, month) File "/usr/lib/python2.1/calendar.py", line 78, in monthrange day1 = weekday(year, month, 1) File "/usr/lib/python2.1/calendar.py", line 69, in weekday secs = mktime((year, month, day, 0, 0, 0, 0, 0, 0)) OverflowError: mktime argument out of range I guess this is low priority. Cheers Alexandre Fayolle ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434143&group_id=5470 From noreply@sourceforge.net Mon Jun 18 13:34:45 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 05:34:45 -0700 Subject: [Python-bugs-list] [ python-Bugs-433904 ] rexec: all s_* methods return None only Message-ID: Bugs item #433904, was updated on 2001-06-17 06:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Alex Martelli (aleax) Assigned to: Guido van Rossum (gvanrossum) Summary: rexec: all s_* methods return None only Initial Comment: D:\py21>python Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import rexec >>> r=rexec.RExec() >>> x=r.r_eval('2+2') >>> print x 4 >>> x=r.s_eval('2+2') >>> print x None >>> Cause: method s_apply lacks a 'return r' at the end, and all the other s_* methods should be 'return self.s_apply(...' but are in fact lacking the return keyword (they just call s_apply but ignore its result). Alex ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-18 05:34 Message: Logged In: YES user_id=6380 Fixed, rexec.py 1.29 and 1.28.4.1. ---------------------------------------------------------------------- Comment By: Alex Martelli (aleax) Date: 2001-06-17 23:59 Message: Logged In: YES user_id=60314 Yep, the patch is of course perfect, thanks. Hope it makes it into 2.1.1 as it's so obviously flawless & necessary. (I had never used the s_* methods, but I'm religiously checking everything I'm writing about for the Nutshell book, so...:-). Alex ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:38 Message: Logged In: YES user_id=6380 Good catch. You see, rexec hasn't exactly been miss popularity... Does the attached patch fix this for you? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433904&group_id=5470 From noreply@sourceforge.net Mon Jun 18 13:38:16 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 05:38:16 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. 
Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-18 05:38 Message: Logged In: YES user_id=6380 Huh? That depends on how ch is declared, and what kind of data is in the array. If it's an array of Py_UNICODE elements, and ch is declared as "Py_UNICODE *ch;", then ch++ will do the right thing (increment it by one Py_UNICODE unit). Now, the one thing you can NOT assume is that if you read external 16-bit data into a character buffer, that the Unicode characters correspond to Py_UNICODE characters -- perhaps this is what you're after? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 01:19 Message: Logged In: YES user_id=38388 Ok, I agree that the math will probably work in most cases due to the fact that UTF-16 will never produce values outside the 16-bit range, but you still have the problem with iterating over Py_UNICODE arrays: the compiler will assume that ch++ means to move the pointer by sizeof(Py_UNICODE) bytes and this breaks in case you use e.g. a 32-bit integer type for Py_UNICODE. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:05 Message: Logged In: YES user_id=31435 The code snippet there will work fine with any integral type >= 2 bytes if you just add the line ch &= 0xffff; between the computation and the "if". It will actually work fine even if you *don't* put in that mask, but deducing that required analysis of the specific operations (you shift 4 bits left 12, 6 bits left 6 so they don't overlap with the first chunk and so the "+" can't cause a carry, and then add another chunk of non- overlapping 6 bits, so again there's no carry, and therefore the infinite-precision result fits in no more than 16 bits, and so there's no need to mask). About pointers, I don't see a problem there either, unless you're casting a Py_UNICODE* to a char* then adding a hardcoded 2. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. 
Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. 
There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Mon Jun 18 14:17:53 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 06:17:53 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was updated on 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. 
We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 06:17 Message: Logged In: YES user_id=38388 Of course, you could declare Py_UNICODE as "unsigned int" and then store Unicode characters in e.g. 4 bytes each on platforms which don't have a 16-bit integer type. The reason for being picky about the 16 bits is that we chose UTF-16 as internal data storage format and that format defines the byte stream in terms of entities which have 2 bytes for each character. This format provides the best low-level integration with other Unicode storage formats such as wchar_t on Windows. That's why I would like to keep this compatibility if at all possible. I am not sure, but I think that sre also makes the 2-byte assumption internally in some places. A simple test for this would be to define Py_UNICODE as unsigned long and then run the regression suite... ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-18 05:38 Message: Logged In: YES user_id=6380 Huh? That depends on how ch is declared, and what kind of data is in the array. If it's an array of Py_UNICODE elements, and ch is declared as "Py_UNICODE *ch;", then ch++ will do the right thing (increment it by one Py_UNICODE unit). Now, the one thing you can NOT assume is that if you read external 16-bit data into a character buffer, that the Unicode characters correspond to Py_UNICODE characters -- perhaps this is what you're after? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 01:19 Message: Logged In: YES user_id=38388 Ok, I agree that the math will probably work in most cases due to the fact that UTF-16 will never produce values outside the 16-bit range, but you still have the problem with iterating over Py_UNICODE arrays: the compiler will assume that ch++ means to move the pointer by sizeof(Py_UNICODE) bytes and this breaks in case you use e.g. a 32-bit integer type for Py_UNICODE. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:05 Message: Logged In: YES user_id=31435 The code snippet there will work fine with any integral type >= 2 bytes if you just add the line ch &= 0xffff; between the computation and the "if". It will actually work fine even if you *don't* put in that mask, but deducing that required analysis of the specific operations (you shift 4 bits left 12, 6 bits left 6 so they don't overlap with the first chunk and so the "+" can't cause a carry, and then add another chunk of non- overlapping 6 bits, so again there's no carry, and therefore the infinite-precision result fits in no more than 16 bits, and so there's no need to mask). About pointers, I don't see a problem there either, unless you're casting a Py_UNICODE* to a char* then adding a hardcoded 2. ---------------------------------------------------------------------- Comment By: M.-A. 
Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. 
+ Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! 
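For reference, a minimal sketch (not part of the tracker discussion, and not the actual Python source) of the mask-in-a-macro idea suggested above, applied to the three-byte UTF-8 case quoted from the codecs. The macro name, the "_sketch" typedef, and the USHRT_MAX test are assumptions made only for illustration:

/* Sketch only: the macro name and the wide-type fallback are assumptions. */
#include <limits.h>

#if USHRT_MAX == 0xffff
typedef unsigned short Py_UNICODE_sketch;   /* exact 16-bit type exists */
#define UNICODE_MASK(ch) (ch)               /* no-op on such platforms  */
#else
typedef unsigned int Py_UNICODE_sketch;     /* wider type, e.g. Cray T3E */
#define UNICODE_MASK(ch) ((ch) & 0xffff)    /* clamp to the UTF-16 range */
#endif

static Py_UNICODE_sketch
decode_utf8_3byte(const unsigned char *s)
{
    /* 1110xxxx 10xxxxxx 10xxxxxx: the shifted chunks never overlap, so the
       sum already fits in 16 bits; the mask is only a safety net on
       platforms where the character type is wider than 2 bytes. */
    unsigned int ch = ((s[0] & 0x0f) << 12)
                    + ((s[1] & 0x3f) << 6)
                    +  (s[2] & 0x3f);
    return (Py_UNICODE_sketch)UNICODE_MASK(ch);
}

This parallels the binascii_crc32 point made above, where writing crc ^ 0xFFFFFFFFUL instead of ~crc keeps a 32-bit checksum correct on platforms with 8-byte longs.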
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Mon Jun 18 16:14:02 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 08:14:02 -0700 Subject: [Python-bugs-list] [ python-Bugs-434186 ] 0x80000000/2 != 0x80000000>>1 Message-ID: Bugs item #434186, was updated on 2001-06-18 08:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434186&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: 0x80000000/2 != 0x80000000>>1 Initial Comment: [16:07:29 toby@ruislip-manor] $ python Python 2.1 (#2, May 15 2001, 11:04:28) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> 0x80000000>>1 -1073741824 >>> 0x80000000/2 1073741824 >>> 0x80000000/-2 -1073741824 >>> Pretty much says it all. the problem seems to be computing -xi in intobject.c:i_divmod causing an overflow. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434186&group_id=5470 From noreply@sourceforge.net Mon Jun 18 16:55:22 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 08:55:22 -0700 Subject: [Python-bugs-list] [ python-Bugs-434199 ] ftplib changes CRLF to LF on Windows Message-ID: Bugs item #434199, was updated on 2001-06-18 08:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434199&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: ftplib changes CRLF to LF on Windows Initial Comment: Version 2.0 Putting a file to an FTP server (NT 4.0) seems to change the line endings from CRLF to LF only. It does not matter if the transfer is ascii or binary. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434199&group_id=5470 From noreply@sourceforge.net Mon Jun 18 17:28:11 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 09:28:11 -0700 Subject: [Python-bugs-list] [ python-Bugs-434186 ] 0x80000000/2 != 0x80000000>>1 Message-ID: Bugs item #434186, was updated on 2001-06-18 08:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434186&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None >Priority: 7 Submitted By: Nobody/Anonymous (nobody) >Assigned to: Tim Peters (tim_one) Summary: 0x80000000/2 != 0x80000000>>1 Initial Comment: [16:07:29 toby@ruislip-manor] $ python Python 2.1 (#2, May 15 2001, 11:04:28) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> 0x80000000>>1 -1073741824 >>> 0x80000000/2 1073741824 >>> 0x80000000/-2 -1073741824 >>> Pretty much says it all. the problem seems to be computing -xi in intobject.c:i_divmod causing an overflow. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-18 09:28 Message: Logged In: YES user_id=31435 Very curious! 
On Windows, >>> 0x80000000 >> 1 -1073741824 >>> 0x80000000 / 2 -1073741824 >>> 0x80000000 / -2 1073741824 >>> That is, it works as expected. However, that appears to be an accident due to the way the MS compiler optimizes this. In a debug build, the Windows results match yours: Python 2.2a0 (#16, Jun 18 2001, 11:17:03) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> 0x80000000 / 2 1073741824 [5509 refs] >>> 0x80000000 / -2 -1073741824 [5509 refs] >>> Certainly agreed this is a bug, and boosted the priority. Until it's fixed, you won't see the problem if you use long ints instead. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434186&group_id=5470 From noreply@sourceforge.net Mon Jun 18 19:18:32 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 11:18:32 -0700 Subject: [Python-bugs-list] [ python-Bugs-433481 ] No way to link python itself with C++ Message-ID: Bugs item #433481, was updated on 2001-06-15 10:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Stephan A. Fiedler (sfiedler) Assigned to: Nobody/Anonymous (nobody) Summary: No way to link python itself with C++ Initial Comment: I'm running on Solaris 2.7 with the Sun Workshop compiler, version 4.2. I have built an extension module in C++ as a shared object. When I attempt to import it into Python, I get an error about missing symbols related to C++ exception handling: ImportError: ld.so.1: python: fatal: relocation error: file /home/saf/pymidas/m2k/solaris_debug/comp/m2kapi.so: symbol _ex_keylock: referenced symbol not found This symbol lives in the C++ runtime, libC.so. 'ldd python' shows that this library is not available to the Python executable itself, because the C compiler linked the executable. If I manually edit the makefile for building python so that LINKCC is $(PURIFY) $(CXX) instead of $(PURIFY) $(CC) and then relink just the Python executable, I can see (with ldd) that the C++ runtime libC.so is now linked with Python, and I am able to load my module. (I believe it is actually no problem to build the entire system with LINKCC calling CXX instead of CC.) In case it's relevant, my extension module itself is compiled with these flags: -DDEBUG -DSUNCC_ -mt -pto -PIC -xildoff +w2 -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 and linked with these: -G -z text Bug #413582 may be related to this in some way. So the short of it is that I would like a configure option to link the final python executable using the C++ compiler on Solaris, so that I can get the C++ runtime linked in with python itself. Note that this doesn't seem to matter on Compaq Tru64 Unix systems, where the default Python build works just fine with my extension module. ---------------------------------------------------------------------- >Comment By: Stephan A. Fiedler (sfiedler) Date: 2001-06-18 11:18 Message: Logged In: YES user_id=246063 I should have given the full link line like this: CC -G -z text -o pyapi_launch.so $(OTHER_LIBS) -xildoff -ldl -lposix4 -lnsl -lsocket -lfftw_threads -lrfftw_threads -lfftw -lrfftw -lreadline -ltermcap $(OTHER_LIBS) just expands to a bunch of .so's that were themselves linked in the same way. pyapi_launch.so is my extension module. 
This does not solve the problem. The news about LINKCC is delightful. To make sure I understand, is it merely (csh syntax): setenv LINKCC CC make ? Or would I also/instead need to do /bin/env LINKCC=CC ./configure ... make ? This may well be all I need. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-16 01:03 Message: Logged In: YES user_id=21627 I believe the right fix to your problem would be to link your extension module using CC, not using ld. In theory, that should provide all required libraries to the shared object itself. Please report whether this solves the problem. As for the configure option: This is already configurable. Just set LINKCC when making python. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 From noreply@sourceforge.net Mon Jun 18 19:56:50 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 11:56:50 -0700 Subject: [Python-bugs-list] [ python-Bugs-434199 ] ftplib changes CRLF to LF on Windows Message-ID: Bugs item #434199, was updated on 2001-06-18 08:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434199&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: ftplib changes CRLF to LF on Windows Initial Comment: Version 2.0 Putting a file to an FTP server (NT 4.0) seems to change the line endings from CRLF to LF only. It does not matter if the transfer is ascii or binary. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-18 11:56 Message: Logged In: YES user_id=6380 Sorry, there's not enough information in this bug report to be able to track it down. Can you show a brief self-contained program that demonstrates the bug? Since ftplib doesn't open the file to be transferred, my suspicion is that the problem is actually in your code calling ftplib: if you used ``f = open(filename, "r")'' to open the file that you pass to ftplib's storbinary() method, you are opening it in text mode. Try ``f = open(filename, "rb")'' to open it in binary mode. Please report back here otherwise we'll close this bug report next week for lack of information. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434199&group_id=5470 From noreply@sourceforge.net Mon Jun 18 20:04:43 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 12:04:43 -0700 Subject: [Python-bugs-list] [ python-Bugs-433047 ] missing args to PyArg_ParseTuple Message-ID: Bugs item #433047, was updated on 2001-06-14 02:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: A.M. 
Kuchling (akuchling) Summary: missing args to PyArg_ParseTuple Initial Comment: The following calls to PyArg_ParseTuple are missing an argument, according to their format string: Modules/_codecmodule.c:443: in utf_16_le_encode Modules/_codecmodule.c:466: in utf_16_be_encode Modules/pcremodule.c:77: in PyPcre_exec ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2001-06-18 12:04 Message: Logged In: YES user_id=11375 Fixed in revision 2.26 of pcremodule.c. Thanks for reporting this! ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:33 Message: Logged In: YES user_id=38388 Fixed the codec part... Andrew is next in line ;-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 10:42 Message: Logged In: YES user_id=31435 Assigned to MAL for the _codecsmodule.c snafus. Suggest assigning to AMK next for the PCRE one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433047&group_id=5470 From noreply@sourceforge.net Mon Jun 18 20:22:48 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Jun 2001 12:22:48 -0700 Subject: [Python-bugs-list] [ python-Bugs-434186 ] 0x80000000/2 != 0x80000000>>1 Message-ID: Bugs item #434186, was updated on 2001-06-18 08:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434186&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 7 Submitted By: Nobody/Anonymous (nobody) Assigned to: Tim Peters (tim_one) Summary: 0x80000000/2 != 0x80000000>>1 Initial Comment: [16:07:29 toby@ruislip-manor] $ python Python 2.1 (#2, May 15 2001, 11:04:28) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> 0x80000000>>1 -1073741824 >>> 0x80000000/2 1073741824 >>> 0x80000000/-2 -1073741824 >>> Pretty much says it all. the problem seems to be computing -xi in intobject.c:i_divmod causing an overflow. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-18 12:22 Message: Logged In: YES user_id=31435 Fixed via a simpler algorithm, in Lib/test/test_b1.py revision: 1.35 Objects/intobject.c revision: 2.57 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-18 09:28 Message: Logged In: YES user_id=31435 Very curious! On Windows, >>> 0x80000000 >> 1 -1073741824 >>> 0x80000000 / 2 -1073741824 >>> 0x80000000 / -2 1073741824 >>> That is, it works as expected. However, that appears to be an accident due to the way the MS compiler optimizes this. In a debug build, the Windows results match yours: Python 2.2a0 (#16, Jun 18 2001, 11:17:03) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> 0x80000000 / 2 1073741824 [5509 refs] >>> 0x80000000 / -2 -1073741824 [5509 refs] >>> Certainly agreed this is a bug, and boosted the priority. Until it's fixed, you won't see the problem if you use long ints instead. 
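For reference, a sketch (not the committed change, which is described above only as "a simpler algorithm") of why the report's diagnosis matters: on a two's-complement machine, negating the most negative long overflows, so a sign-safe floor divmod can instead adjust the truncated C result. The function name is made up for illustration:

/* Illustrative only: floor division/modulo for C longs without computing -x.
   Not necessarily the code committed to Objects/intobject.c. */
static void
floor_divmod_sketch(long x, long y, long *pdiv, long *pmod)
{
    /* C's / and % truncate toward zero; Python wants floor semantics.
       Negating x (as the old code did) overflows when x == LONG_MIN. */
    long xdivy = x / y;       /* still undefined for LONG_MIN / -1; that
                                 case needs separate handling */
    long xmody = x - xdivy * y;
    if (xmody != 0 && ((xmody ^ y) < 0)) {  /* remainder and divisor differ in sign */
        xmody += y;
        --xdivy;
    }
    *pdiv = xdivy;
    *pmod = xmody;
}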
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434186&group_id=5470 From noreply@sourceforge.net Tue Jun 19 11:07:04 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 03:07:04 -0700 Subject: [Python-bugs-list] [ python-Bugs-210665 ] Compiling python on hpux 11.00 with threads (PR#360) Message-ID: Bugs item #210665, was updated on 2000-07-31 14:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210665&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: None Priority: 3 Submitted By: Nobody/Anonymous (nobody) Assigned to: Guido van Rossum (gvanrossum) Summary: Compiling python on hpux 11.00 with threads (PR#360) Initial Comment: Jitterbug-Id: 360 Submitted-By: philipp.jocham@salomon.at Date: Fri, 16 Jun 2000 08:47:06 -0400 (EDT) Version: 1.5.2 OS: HP-UX 11.00 There are two missing details in the configure process to make this work out of the box. First: The function pthread_create isn't found in library libpthread.a but in libcma.a, because pthread_create is just a macro in sys/pthread.h pointing to __pthread_create_system After patching ./configure directly and running ./configure --with-thread (now detecting the correct library /usr/lib/libpthread.a) I also added -lcl to Modules/Makefile at LIBS= -lnet -lnsl -ldld -lpthread -lcl otherwise importing of modules with threads didn't work (in this case oci_.sl from DCOracle). I'm not sure about the correct syntax or wether it's the correct place and method, but I would suggest a solution like the following code snippet. [...] AC_MSG_CHECKING(for --with-thread) [...] AC_DEFINE(_POSIX_THREADS) LIBS="$LIBS -lpthread -lcl" LIBOBJS="$LIBOBJS thread.o"], [ AC_CHECK_LIB(pthread, __pthread_create_system, [AC_DEFINE(WITH_THREAD) [...] I hope this helps to make installation process smoother. Fell free to contact me, if there are further questions. Philipp -- I confirm that, to the best of my knowledge and belief, this contribution is free of any claims of third parties under copyright, patent or other rights or interests ("claims"). To the extent that I have any such claims, I hereby grant to CNRI a nonexclusive, irrevocable, royalty-free, worldwide license to reproduce, distribute, perform and/or display publicly, prepare derivative versions, and otherwise use this contribution as part of the Python software and its related documentation, or any derivative versions thereof, at no cost to CNRI or its licensed users, and to authorize others to do so. I acknowledge that CNRI may, at its sole discretion, decide whether or not to incorporate this contribution in the Python software and its related documentation. I further grant CNRI permission to use my name and other identifying information provided to CNRI by me for use in connection with the Python software and its related documentation. ==================================================================== Audit trail: Tue Jul 11 08:26:01 2000 guido moved from incoming to open ---------------------------------------------------------------------- Comment By: Richard Townsend (rptownsend) Date: 2001-06-19 03:07 Message: Logged In: YES user_id=200117 I have now applied the new patch file 'python-2.1.patch' available at: ftp://ftp.thewrittenword.com/outgoing/pub I can now successfully build Python 2.1 with threads enabled, on HP-UX 11. 
---------------------------------------------------------------------- Comment By: Richard Townsend (rptownsend) Date: 2001-06-14 01:20 Message: Logged In: YES user_id=200117 I applied the patch from thewrittenword's site, but when I ran autoconf it generated a corrupt configure script. There problem occurs around lines 3895-3906: if test "$USE_THREAD_MODULE" != "#" then # If the above checks didn't disable threads, (at least) OSF1 # needs this '-threads' argument during linking. case $ac_sys_system in OSF1 fi LDLAST=-threads;; esac fi fi fi The case statement has been trashed by the extra 'fi' token. I tried manually editing it like this: if test "$USE_THREAD_MODULE" != "#" then # If the above checks didn't disable threads, (at least) OSF1 # needs this '-threads' argument during linking. case $ac_sys_system in OSF1) LDLAST=-threads;; esac fi fi fi But it still fails with an 'else' not matched at line 3422. I can't see where the extra 'fi' should go. ---------------------------------------------------------------------- Comment By: The Written Word (china) (tww-china) Date: 2001-05-06 22:36 Message: Logged In: YES user_id=119770 You can find a patch to fix this against python 2.1 at: ftp://ftp.thewrittenword.com/outgoing/pub/python-2.1-416696.patch You'll need to rerun autoconf to test. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-11-02 08:38 Message: Reopened because there's a dissenting opinion. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-11-02 07:25 Message: Ok, check out the configure.in patch I created against Python 2.0: ftp://ftp.thewrittenword.com/outgoing/pub/python-2.0.patch I tested it under HP-UX 11.00 and it works just fine. The thread test worked too. -- albert chin (china@thewrittenword.com) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-11-01 09:18 Message: Ick! Why check for anything with __ prepended to the name? Isn't that like checking for a "hidden" function, which might not be there in a followup version? On HP-UX 11.00, pthread_create is in /usr/lib/libc.sl anyway. The proper way to check for pthread_create is: AC_TRY_LINK([#include void * start_routine (void *arg) { exit (0); }], [ pthread_create (NULL, NULL, start_routine, NULL)], [ AC_MSG_RESULT(yes)], [ AC_MSG_RESULT(no)]) I modified configure.in in 2.0 to remove the patch you included in CVS 1.175 and added a test to include similar to the above (linked without -lpthread and with -lpthread). I'm testing now. Will provide a patch when things are tested. Also, I don't think threads on HP-UX 10.20 will work unless you have the DCE libraries installed. Anyhow, I'd probably avoid threads on 10.20. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-30 09:48 Message: Philipp submitted a patch to configure.in that fixes the problem for him and doesn't look like it would break things for others. configure.in, CVS revision 1.175. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-13 07:45 Message: OK, so the correct thing to do seems to be to somehow add #include to the tests for thread libraries. I'm afraid I won't be able to fix this in 2.0final, but I'll think about fixing it in 2.1 (or 2.0.1, whichever comes first :-). 
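For reference, the angle-bracket include in the AC_TRY_LINK fragment above (and in the test program quoted further down) appears to have been stripped by the tracker's formatting; from context it can only be <pthread.h>. Written out as the plain C program such a configure check would try to compile and link, the proposed test is roughly the following sketch, not an actual patch:

/* Sketch of the configure-time link test proposed above; the <pthread.h>
   include is inferred from context. */
#include <pthread.h>
#include <stdlib.h>

void *start_routine(void *arg)
{
    exit(0);
}

int main(void)
{
    /* On HP-UX 11, pthread_create can be a macro or a static wrapper around
       __pthread_create_system, so only a test that goes through <pthread.h>
       and actually links will detect thread support reliably. */
    pthread_create(NULL, NULL, start_routine, NULL);
    return 0;
}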
---------------------------------------------------------------------- Comment By: Eddy De Greef (edg) Date: 2000-10-10 04:55 Message: I can confirm that the bug still exists in 2.0c1. It works fine on HP-UX 10.20 (which only has libcma), but not on HP-UX 11 (which both has libcma and libpthread). The pthread_create function is not defined as a macro though, but as a static function: static int pthread_create(pthread_t *tid, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) { return(__pthread_create_system(tid, attr, start_routine, arg)); } I don't see an easy way to work around this. I'm not a configure expert, but perhaps the script should first check whether this code compiles and links: #include int main() { pthread_create(0,0,0,0); return 0; } and if not, fall back on the other tests ? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-06 10:40 Message: I have two reports from people for whom configure, build and run with threads now works on HP-UX 10 and 11. I'm not sure what to do about this report... What's different on Philipp's system??? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-09-25 06:10 Message: I'm hoping that this was fixed by recent changes. Sent an email to the original submittor to verify. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-09-22 02:56 Message: Taking this because I'm considering to redesign the thread configuration section in configure.in anyway -- there's a similar bug report for Alpha OSF1. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2000-09-07 15:05 Message: Please do triage on this bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210665&group_id=5470 From noreply@sourceforge.net Tue Jun 19 15:48:41 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 07:48:41 -0700 Subject: [Python-bugs-list] [ python-Bugs-434479 ] os.listdir loses on linux w/NTFS vols Message-ID: Bugs item #434479, was updated on 2001-06-19 07:48 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434479&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: jeremy bornstein (ukekuma) Assigned to: Nobody/Anonymous (nobody) Summary: os.listdir loses on linux w/NTFS vols Initial Comment: os.listdir() on a directory which is on an NTFS volume omits one entry from the directory listing. Example: planet {188}: grep ntfs /etc/fstab /dev/hda1 /lose ntfs uid=500,gid=500,umask=555 1 2 planet {189}: ls /lose Documents and Settings/ My Music/ Program Files/ PUTTY.RND $Secure unzipped/ WINNT/ planet {190}: python2.1 Python 2.1 (#1, Jun 19 2001, 00:32:28) [GCC 2.96 20000731 (Red Hat Linux 7.1 2.96-81)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import os >>> os.listdir('/lose') ['$Secure', 'Documents and Settings', 'My Music', 'Program Files', 'PUTTY.RND', 'unzipped'] >>> planet {191}: (In the example, note that the directory 'WINNT' is not returned by os.listdir.) I have verified this bug with/1.5.2, 1.6.1, and 2.1 on Linux (RH7.1) only. I have only tested it on this one NTFS volume and this one computer. 
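As a diagnostic sketch (not part of the report), a raw readdir() loop over the same mount point would show whether the missing 'WINNT' entry is dropped by the kernel's ntfs driver or by Python's posixmodule.c wrapper; the "/lose" default is taken from the report's example:

/* Diagnostic sketch only: list a directory with raw readdir(). */
#include <stdio.h>
#include <dirent.h>

int main(int argc, char **argv)
{
    DIR *dir = opendir(argc > 1 ? argv[1] : "/lose");
    struct dirent *ent;
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((ent = readdir(dir)) != NULL)
        printf("%s\n", ent->d_name);  /* includes "." and "..", unlike os.listdir */
    closedir(dir);
    return 0;
}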
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434479&group_id=5470 From noreply@sourceforge.net Tue Jun 19 19:18:47 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 11:18:47 -0700 Subject: [Python-bugs-list] [ python-Bugs-434547 ] Problems with C++ ext. on Tru64 Message-ID: Bugs item #434547, was updated on 2001-06-19 11:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434547&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Problems with C++ ext. on Tru64 Initial Comment: { I post this letter to comp.lang.python for discussion, python sourceforge bugtracker to make sure someone reads it and omniorb@uk.research.att.com as important appendix to the letter about compiling omniORB on Tru64 } I am currently trying to compile Python/OmniORB/OmniORBpython suite on Tru64 Unix (the new name for Digital Unix/OSF) with DEC CXX 6.2. For the longer story search for my next post, but I have some important observations about Python. Are they bugs? Anyone skilled to check it further is welcome. In case I should post this somewhere else, please let me know. The tests described below used Python 2.1. The problem which forced me to perform this analysis happened during compilation of omniORB 3.0.3. I start from the less important things going to the more important. 1) While compiling Python with DEC CXX (below you will find why I did it), I got the error message (on Include/structmember.h) about incorrect usage of language extension (probably they in some situations use 'readonly' in the way similar to 'const'). I have not diagnosed it in the great detail (seems that compiler options and pragmas set by python makefiles influence the situation somehow) but changing readonly to - say - read_only should not spoil anything and will help. I worked around the problemy by using cxx -Dreadonly=_readonly as the compiler name. 2) In contrary to most configure scripts, Python configure script ignores environment variable CC. The problem is in case switch checking wheter --with-gcc or --without-gcc is specified: if test "${with_gcc+set}" = set; then (....) else case $ac_sys_system in OSF1) CC=cc without_gcc=;; (...) To compile python with cxx I manually edited the line above but I think compiling python with compiler different than cc and gcc should be possible in the natural way. In case people dislike CC checking, maybe --with-cc=<...> could be done? 3) So, let's tell why I needed to compile python with DEC CXX. While using 'default' (compiled with cc) python, I was unable to use python extension modules written in C++ (I got the problem while trying to compile and use _omniidl module from omniORB but seems it would be the same for others): - the '_omniidlmodule.so' file links correctly and is correct - attempts to import it results in python -c 'import _omniidl' Traceback (innermost last): File "", line 1, in ? ImportError: dlopen: Unresolved symbols The problem is caused by the lack of symbols from libcxx.so (C++ compiler shared library). I am not expert regarding dlopen but seems that python, while loading the module, does not load shared libraries the module depends on (at least on Tru64). 
After I recompiled python with cxx (mainly to get the python executable linked permanently with libcxx.so so this library is present while my module is being imported) the problem disappeared and the module imported and worked correctly. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434547&group_id=5470 From noreply@sourceforge.net Tue Jun 19 21:52:02 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 13:52:02 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-19 13:52 Message: Logged In: YES user_id=31435 It appears you're concerned that the signal will "get lost" in this scenario. I agree that it may, but it doesn't matter: thread 2's "while (thelock->locked)" test fails because thread 1 already set thelock->locked to 0, so thread 2 doesn't execute its pthread_cond_wait, so it doesn't matter that nobody is *waiting* to see thread 1's pthread_cond_signal. IOW, the condition thread 1 will try to signal has *already* been detected by thread 2, and the signal is useless (because redundant) information. Indeed, it's a beauty of the condition protocol that signals can be sloppy. The linuxthread man page is worded strangely here, but note that it does not say the mutex *must* be locked. They can't, either, because POSIX doesn't require it; while POSIX stds aren't available freely online, some derived specs are, and are much clearer about this; e.g., see the Single Unix Specification, here: I'll tell you why I don't *want* to change this: in Python's use of the global interpreter lock, it's almost always the case that someone is waiting on the lock. By releasing the mutex before signaling, this gives a waiter a chance to run immediately upon calling pthread_cond_signal. Else, because pthread_cond_wait (which the waiters are executing) has to lock the mutex, if the signaler is holding the mutex during the signal, pthread_cond_signal can't finish the job -- it was to go back to the signaler and let it unlock the mutex first before a waiter can proceed. This makes the region of exclusion longer than it needs to be. So if there's not an actual race problem here (& I still don't see one), I don't want to change this. 
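For reference, a sketch of the acquire path that the argument above and the timeline below keep referring to, reconstructed from the fragments quoted in this thread rather than copied from Python/thread_pthread.h, so details may differ; CHECK_STATUS error checking and mutex/condvar initialization are omitted, and the struct name is made up:

/* Reconstructed sketch of the acquire side; not the actual source. */
#include <pthread.h>

typedef struct {
    char            locked;        /* 0 = unlocked, 1 = locked */
    pthread_cond_t  lock_released;
    pthread_mutex_t mut;
} pthread_lock_sketch;

int acquire_lock_sketch(pthread_lock_sketch *thelock, int waitflag)
{
    int success;

    pthread_mutex_lock(&thelock->mut);
    success = (thelock->locked == 0);
    if (success)
        thelock->locked = 1;
    pthread_mutex_unlock(&thelock->mut);

    if (!success && waitflag) {
        /* The mutex must be held around the wait -- condition protocol. */
        pthread_mutex_lock(&thelock->mut);
        while (thelock->locked)
            pthread_cond_wait(&thelock->lock_released, &thelock->mut);
        thelock->locked = 1;
        pthread_mutex_unlock(&thelock->mut);
        success = 1;
    }
    return success;
}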
---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable will ensure that the PyThread_release_lock will help to protect the condition variable and I was wrong. The linuxthread man page on pthread_cond_signal: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it. which means you can't call pthread_cond_signal & pthread_cond_wait on the same condition variable at the same time. And using a mutex to protect them is a good idea. Here is how thing might go wrong with current implementation: thread 1 thread 2 |int PyThread_acquire_lock _ |/** assume lock was acquired | by thread 1, hence locked=0 | & success would be 0 **/ |{ | ... | status = pthread_mutex_lo | CHECK_STATUS("pthread_mut | success = thelock->locked | if (success) thelock->loc | status = pthread_mutex_un | /** thread 2 suspended **/ void PyThread_release_lock _| { | ... | status = pthread_mutex_loc| CHECK_STATUS("pthread_mute| | thelock->locked = 0; | | status = pthread_mutex_unl| /** thread 1 suspend **/ | | CHECK_STATUS("pthread_mut | | if ( !success && waitflag | /* continue trying unti | | /* mut must be locked b | * protocol */ | status = pthread_mutex_ | CHECK_STATUS("pthread_m | while ( thelock->locked | status = pthread_cond |/** thread 2 suspended while | updating shared data ** CHECK_STATUS("pthread_mute| | /* wake up someone (anyone| status = pthread_cond_sign| /** thread 1 update shared | data and corrupt it. **/ | Not sure what the effect would be. It's wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. 
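And, for comparison with the rewrite proposed in the initial comment, a sketch of the existing release path being defended above, again reconstructed from the discussion rather than copied from the source, using the same pthread_lock_sketch type as the acquire sketch earlier in this thread:

/* Sketch of the current behaviour described above: clear the flag under the
   mutex, drop the mutex, then signal.  POSIX permits signaling a condition
   variable without holding the associated mutex. */
void release_lock_sketch(pthread_lock_sketch *thelock)
{
    pthread_mutex_lock(&thelock->mut);
    thelock->locked = 0;
    pthread_mutex_unlock(&thelock->mut);

    /* Wake up someone (anyone, if any) waiting on the lock; signaling after
       the unlock keeps the region of exclusion short, which is the point
       argued above. */
    pthread_cond_signal(&thelock->lock_released);
}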
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Tue Jun 19 22:27:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 14:27:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Closed Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 13:52 Message: Logged In: YES user_id=31435 It appears you're concerned that the signal will "get lost" in this scenario. I agree that it may, but it doesn't matter: thread 2's "while (thelock->locked)" test fails because thread 1 already set thelock->locked to 0, so thread 2 doesn't execute its pthread_cond_wait, so it doesn't matter that nobody is *waiting* to see thread 1's pthread_cond_signal. IOW, the condition thread 1 will try to signal has *already* been detected by thread 2, and the signal is useless (because redundant) information. Indeed, it's a beauty of the condition protocol that signals can be sloppy. The linuxthread man page is worded strangely here, but note that it does not say the mutex *must* be locked. They can't, either, because POSIX doesn't require it; while POSIX stds aren't available freely online, some derived specs are, and are much clearer about this; e.g., see the Single Unix Specification, here: I'll tell you why I don't *want* to change this: in Python's use of the global interpreter lock, it's almost always the case that someone is waiting on the lock. By releasing the mutex before signaling, this gives a waiter a chance to run immediately upon calling pthread_cond_signal. Else, because pthread_cond_wait (which the waiters are executing) has to lock the mutex, if the signaler is holding the mutex during the signal, pthread_cond_signal can't finish the job -- it was to go back to the signaler and let it unlock the mutex first before a waiter can proceed. This makes the region of exclusion longer than it needs to be. So if there's not an actual race problem here (& I still don't see one), I don't want to change this. 
---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable will ensure that the PyThread_release_lock will help to protect the condition variable and I was wrong. The linuxthread man page on pthread_cond_signal: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it. which means you can't call pthread_cond_signal & pthread_cond_wait on the same condition variable at the same time. And using a mutex to protect them is a good idea. Here is how thing might go wrong with current implementation: thread 1 thread 2 |int PyThread_acquire_lock _ |/** assume lock was acquired | by thread 1, hence locked=0 | & success would be 0 **/ |{ | ... | status = pthread_mutex_lo | CHECK_STATUS("pthread_mut | success = thelock->locked | if (success) thelock->loc | status = pthread_mutex_un | /** thread 2 suspended **/ void PyThread_release_lock _| { | ... | status = pthread_mutex_loc| CHECK_STATUS("pthread_mute| | thelock->locked = 0; | | status = pthread_mutex_unl| /** thread 1 suspend **/ | | CHECK_STATUS("pthread_mut | | if ( !success && waitflag | /* continue trying unti | | /* mut must be locked b | * protocol */ | status = pthread_mutex_ | CHECK_STATUS("pthread_m | while ( thelock->locked | status = pthread_cond |/** thread 2 suspended while | updating shared data ** CHECK_STATUS("pthread_mute| | /* wake up someone (anyone| status = pthread_cond_sign| /** thread 1 update shared | data and corrupt it. **/ | Not sure what the effect would be. It's wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Tue Jun 19 22:41:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 14:41:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Open Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Shih-Hao Liu (shihao) Date: 2001-06-19 14:41 Message: Logged In: YES user_id=246388 Oops. The problem will be arised if there is a thread 3 called PyThread_acquire_lock after thread 1 set thelock->locked to 0 and before thread 2 calling pthread_mutex_lock. "while (thelock->locked)" will success for thread 2 and it will call pthread_cond_wait and might collide with thread 1's pthread_cond_signal. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 13:52 Message: Logged In: YES user_id=31435 It appears you're concerned that the signal will "get lost" in this scenario. I agree that it may, but it doesn't matter: thread 2's "while (thelock->locked)" test fails because thread 1 already set thelock->locked to 0, so thread 2 doesn't execute its pthread_cond_wait, so it doesn't matter that nobody is *waiting* to see thread 1's pthread_cond_signal. IOW, the condition thread 1 will try to signal has *already* been detected by thread 2, and the signal is useless (because redundant) information. Indeed, it's a beauty of the condition protocol that signals can be sloppy. The linuxthread man page is worded strangely here, but note that it does not say the mutex *must* be locked. They can't, either, because POSIX doesn't require it; while POSIX stds aren't available freely online, some derived specs are, and are much clearer about this; e.g., see the Single Unix Specification, here: I'll tell you why I don't *want* to change this: in Python's use of the global interpreter lock, it's almost always the case that someone is waiting on the lock. By releasing the mutex before signaling, this gives a waiter a chance to run immediately upon calling pthread_cond_signal. 
Else, because pthread_cond_wait (which the waiters are executing) has to lock the mutex, if the signaler is holding the mutex during the signal, pthread_cond_signal can't finish the job -- it was to go back to the signaler and let it unlock the mutex first before a waiter can proceed. This makes the region of exclusion longer than it needs to be. So if there's not an actual race problem here (& I still don't see one), I don't want to change this. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable will ensure that the PyThread_release_lock will help to protect the condition variable and I was wrong. The linuxthread man page on pthread_cond_signal: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it. which means you can't call pthread_cond_signal & pthread_cond_wait on the same condition variable at the same time. And using a mutex to protect them is a good idea. Here is how thing might go wrong with current implementation: thread 1 thread 2 |int PyThread_acquire_lock _ |/** assume lock was acquired | by thread 1, hence locked=0 | & success would be 0 **/ |{ | ... | status = pthread_mutex_lo | CHECK_STATUS("pthread_mut | success = thelock->locked | if (success) thelock->loc | status = pthread_mutex_un | /** thread 2 suspended **/ void PyThread_release_lock _| { | ... | status = pthread_mutex_loc| CHECK_STATUS("pthread_mute| | thelock->locked = 0; | | status = pthread_mutex_unl| /** thread 1 suspend **/ | | CHECK_STATUS("pthread_mut | | if ( !success && waitflag | /* continue trying unti | | /* mut must be locked b | * protocol */ | status = pthread_mutex_ | CHECK_STATUS("pthread_m | while ( thelock->locked | status = pthread_cond |/** thread 2 suspended while | updating shared data ** CHECK_STATUS("pthread_mute| | /* wake up someone (anyone| status = pthread_cond_sign| /** thread 1 update shared | data and corrupt it. **/ | Not sure what the effect would be. It's wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Tue Jun 19 23:06:04 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 15:06:04 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-19 15:06 Message: Logged In: YES user_id=31435 > The problem will be arised if there is a thread 3 > called PyThread_acquire_lock You never mentioned thread 3 again, so I don't know what it has to do with this. > after thread 1 set thelock->locked to 0 and before > thread 2 calling pthread_mutex_lock. I understand both of those. Are you assuming that, e.g., thread 3's PyThread_acquire_lock completes in whole during this gap? I don't know what else you could mean, so let's assume that. > "while (thelock->locked)" will success for thread 2 Sure. > and it will call pthread_cond_wait Yup. > and might collide with thread 1's pthread_cond_signal. What does "collide" mean to you? All the pthread_cond_xxx functions must be implemented as if atomic, so there's no meaningful sense (to me) in which they can collide -- unless they're implemented incorrectly. Assuming they are implemented correctly, it again doesn't matter that thread 2 misses thread 1's signal, because thread *3* exploited the information thread 1 was going to signal, by acquiring the lock. It's actually good that thread 2 isn't bothered with it: there's no real info in the signal anymore (at best, if thread 2 got it, it would wake up and go "oops! it's still locked; I'll wait again"). All that matters now is whether thread 2 gets a chance to see thread *3*'s signal, at the time thread 3 releases the lock. And it will, because thread 3 can't release the lock without acquiring the mutex first, and thread 2 holds the mutex at all times except when in its pthread_cond_wait call (so thread 3 can't release the lock except when thread 2 is in pthread_cond_wait). Note that I'm not at all concerned about "fairness" here, only about races. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-19 14:41 Message: Logged In: YES user_id=246388 Oops. 
The problem will be arised if there is a thread 3 called PyThread_acquire_lock after thread 1 set thelock->locked to 0 and before thread 2 calling pthread_mutex_lock. "while (thelock->locked)" will success for thread 2 and it will call pthread_cond_wait and might collide with thread 1's pthread_cond_signal. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 13:52 Message: Logged In: YES user_id=31435 It appears you're concerned that the signal will "get lost" in this scenario. I agree that it may, but it doesn't matter: thread 2's "while (thelock->locked)" test fails because thread 1 already set thelock->locked to 0, so thread 2 doesn't execute its pthread_cond_wait, so it doesn't matter that nobody is *waiting* to see thread 1's pthread_cond_signal. IOW, the condition thread 1 will try to signal has *already* been detected by thread 2, and the signal is useless (because redundant) information. Indeed, it's a beauty of the condition protocol that signals can be sloppy. The linuxthread man page is worded strangely here, but note that it does not say the mutex *must* be locked. They can't, either, because POSIX doesn't require it; while POSIX stds aren't available freely online, some derived specs are, and are much clearer about this; e.g., see the Single Unix Specification, here: I'll tell you why I don't *want* to change this: in Python's use of the global interpreter lock, it's almost always the case that someone is waiting on the lock. By releasing the mutex before signaling, this gives a waiter a chance to run immediately upon calling pthread_cond_signal. Else, because pthread_cond_wait (which the waiters are executing) has to lock the mutex, if the signaler is holding the mutex during the signal, pthread_cond_signal can't finish the job -- it was to go back to the signaler and let it unlock the mutex first before a waiter can proceed. This makes the region of exclusion longer than it needs to be. So if there's not an actual race problem here (& I still don't see one), I don't want to change this. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable will ensure that the PyThread_release_lock will help to protect the condition variable and I was wrong. The linuxthread man page on pthread_cond_signal: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it. which means you can't call pthread_cond_signal & pthread_cond_wait on the same condition variable at the same time. And using a mutex to protect them is a good idea. Here is how thing might go wrong with current implementation: thread 1 thread 2 |int PyThread_acquire_lock _ |/** assume lock was acquired | by thread 1, hence locked=0 | & success would be 0 **/ |{ | ... | status = pthread_mutex_lo | CHECK_STATUS("pthread_mut | success = thelock->locked | if (success) thelock->loc | status = pthread_mutex_un | /** thread 2 suspended **/ void PyThread_release_lock _| { | ... 
| status = pthread_mutex_loc| CHECK_STATUS("pthread_mute| | thelock->locked = 0; | | status = pthread_mutex_unl| /** thread 1 suspend **/ | | CHECK_STATUS("pthread_mut | | if ( !success && waitflag | /* continue trying unti | | /* mut must be locked b | * protocol */ | status = pthread_mutex_ | CHECK_STATUS("pthread_m | while ( thelock->locked | status = pthread_cond |/** thread 2 suspended while | updating shared data ** CHECK_STATUS("pthread_mute| | /* wake up someone (anyone| status = pthread_cond_sign| /** thread 1 update shared | data and corrupt it. **/ | Not sure what the effect would be. It's wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Wed Jun 20 06:14:07 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Jun 2001 22:14:07 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Shih-Hao Liu (shihao) Date: 2001-06-19 22:14 Message: Logged In: YES user_id=246388 > I understand both of those. Are you assuming that, e.g., > thread 3's PyThread_acquire_lock completes in whole during > this gap? 
> I don't know what else you could mean, so let's
> assume that.
Yes, I assume thread 3 completed in this gap, and it is possible. "Collide" means that while thread 2's pthread_cond_wait was modifying the internal data structure, thread 1 calls pthread_cond_signal. The point here is not missing signals; the problem is the possibility that pthread_cond_signal preempts the execution of pthread_cond_wait. I can't find any document that says the pthread_cond_xxx functions must be atomic operations. The pthread_cond_xxx man page only mentions:

    pthread_cond_wait atomically unlocks the mutex (as per
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    pthread_unlock_mutex) and waits for the condition variable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    cond to be signaled.  The thread execution is suspended and
    ^^^^^^^^^^^^^^^^^^^^
    does not consume any CPU time until the condition variable
    is signaled.  The mutex must be locked by the calling thread
    on entrance to pthread_cond_wait.  Before returning to the
    calling thread, pthread_cond_wait re-acquires mutex (as per
    pthread_lock_mutex).

    Unlocking the mutex and suspending on the condition variable
    is done atomically.  Thus, if all threads always acquire the
    mutex before signaling the condition, this guarantees that
    the condition cannot be signaled (and thus ignored) between
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    the time a thread locks the mutex and the time it waits on
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    the condition variable.
    ^^^^^^^^^^^^^^^^^^^^^^^

I take it that between the time the mutex is locked by pthread_mutex_lock and unlocked by pthread_cond_wait, you can't call pthread_cond_signal. I also browsed through the LinuxThreads implementation and can't find that pthread_cond_xxxx is implemented atomically. I found there is another way to fix this without having to call pthread_cond_signal while holding the mutex. If we do:

    if (!thelock->locked)
        status = pthread_cond_signal( &thelock->lock_released );

we guarantee that when pthread_cond_signal is called, the acquire_lock code will not be in the middle of pthread_cond_wait. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 15:06 Message: Logged In: YES user_id=31435
> The problem will be arised if there is a thread 3
> called PyThread_acquire_lock
You never mentioned thread 3 again, so I don't know what it has to do with this.
> after thread 1 set thelock->locked to 0 and before
> thread 2 calling pthread_mutex_lock.
I understand both of those. Are you assuming that, e.g., thread 3's PyThread_acquire_lock completes in whole during this gap? I don't know what else you could mean, so let's assume that.
> "while (thelock->locked)" will success for thread 2
Sure.
> and it will call pthread_cond_wait
Yup.
> and might collide with thread 1's pthread_cond_signal.
What does "collide" mean to you? All the pthread_cond_xxx functions must be implemented as if atomic, so there's no meaningful sense (to me) in which they can collide -- unless they're implemented incorrectly. Assuming they are implemented correctly, it again doesn't matter that thread 2 misses thread 1's signal, because thread *3* exploited the information thread 1 was going to signal, by acquiring the lock. It's actually good that thread 2 isn't bothered with it: there's no real info in the signal anymore (at best, if thread 2 got it, it would wake up and go "oops! it's still locked; I'll wait again").
All that matters now is whether thread 2 gets a chance to see thread *3*'s signal, at the time thread 3 releases the lock. And it will, because thread 3 can't release the lock without acquiring the mutex first, and thread 2 holds the mutex at all times except when in its pthread_cond_wait call (so thread 3 can't release the lock except when thread 2 is in pthread_cond_wait). Note that I'm not at all concerned about "fairness" here, only about races. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-19 14:41 Message: Logged In: YES user_id=246388 Oops. The problem will be arised if there is a thread 3 called PyThread_acquire_lock after thread 1 set thelock->locked to 0 and before thread 2 calling pthread_mutex_lock. "while (thelock->locked)" will success for thread 2 and it will call pthread_cond_wait and might collide with thread 1's pthread_cond_signal. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 13:52 Message: Logged In: YES user_id=31435 It appears you're concerned that the signal will "get lost" in this scenario. I agree that it may, but it doesn't matter: thread 2's "while (thelock->locked)" test fails because thread 1 already set thelock->locked to 0, so thread 2 doesn't execute its pthread_cond_wait, so it doesn't matter that nobody is *waiting* to see thread 1's pthread_cond_signal. IOW, the condition thread 1 will try to signal has *already* been detected by thread 2, and the signal is useless (because redundant) information. Indeed, it's a beauty of the condition protocol that signals can be sloppy. The linuxthread man page is worded strangely here, but note that it does not say the mutex *must* be locked. They can't, either, because POSIX doesn't require it; while POSIX stds aren't available freely online, some derived specs are, and are much clearer about this; e.g., see the Single Unix Specification, here: I'll tell you why I don't *want* to change this: in Python's use of the global interpreter lock, it's almost always the case that someone is waiting on the lock. By releasing the mutex before signaling, this gives a waiter a chance to run immediately upon calling pthread_cond_signal. Else, because pthread_cond_wait (which the waiters are executing) has to lock the mutex, if the signaler is holding the mutex during the signal, pthread_cond_signal can't finish the job -- it was to go back to the signaler and let it unlock the mutex first before a waiter can proceed. This makes the region of exclusion longer than it needs to be. So if there's not an actual race problem here (& I still don't see one), I don't want to change this. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable will ensure that the PyThread_release_lock will help to protect the condition variable and I was wrong. The linuxthread man page on pthread_cond_signal: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it. which means you can't call pthread_cond_signal & pthread_cond_wait on the same condition variable at the same time. And using a mutex to protect them is a good idea. 
Here is how thing might go wrong with current implementation: thread 1 thread 2 |int PyThread_acquire_lock _ |/** assume lock was acquired | by thread 1, hence locked=0 | & success would be 0 **/ |{ | ... | status = pthread_mutex_lo | CHECK_STATUS("pthread_mut | success = thelock->locked | if (success) thelock->loc | status = pthread_mutex_un | /** thread 2 suspended **/ void PyThread_release_lock _| { | ... | status = pthread_mutex_loc| CHECK_STATUS("pthread_mute| | thelock->locked = 0; | | status = pthread_mutex_unl| /** thread 1 suspend **/ | | CHECK_STATUS("pthread_mut | | if ( !success && waitflag | /* continue trying unti | | /* mut must be locked b | * protocol */ | status = pthread_mutex_ | CHECK_STATUS("pthread_m | while ( thelock->locked | status = pthread_cond |/** thread 2 suspended while | updating shared data ** CHECK_STATUS("pthread_mute| | /* wake up someone (anyone| status = pthread_cond_sign| /** thread 1 update shared | data and corrupt it. **/ | Not sure what the effect would be. It's wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Wed Jun 20 08:17:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 00:17:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-433625 ] bug in PyThread_release_lock() Message-ID: Bugs item #433625, was updated on 2001-06-15 19:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 Category: Threads Group: Platform-specific >Status: Closed Resolution: Invalid Priority: 5 Submitted By: Shih-Hao Liu (shihao) Assigned to: Tim Peters (tim_one) Summary: bug in PyThread_release_lock() Initial Comment: Mutex should be hold when calling pthread_cond_signal(). 
This function should look like: PyThread_release_lock(PyThread_type_lock lock) { pthread_lock *thelock = (pthread_lock *)lock; int status, error = 0; dprintf(("PyThread_release_lock(%p) called\n", lock)); status = pthread_mutex_lock( &thelock->mut ); CHECK_STATUS("pthread_mutex_lock[3]"); thelock->locked = 0; /* ***** call pthread_cond_signal before unlock mutex */ status = pthread_cond_signal( &thelock->lock_released ); CHECK_STATUS("pthread_cond_signal"); status = pthread_mutex_unlock( &thelock->mut ); CHECK_STATUS("pthread_mutex_unlock[3]"); /* wake up someone (anyone, if any) waiting on the lock */ } ---------------------------------------------------------------------- >Comment By: Shih-Hao Liu (shihao) Date: 2001-06-20 00:17 Message: Logged In: YES user_id=246388 > I also browse through > LinuxThread implementation and can't find > pthread_cond_xxxx is implemented automically. I found they do call __pthread_lock when updating the priority queue after take a closer look. I guess I have to look at elsewhere to figure out why my Python script is spinning. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-19 22:14 Message: Logged In: YES user_id=246388 > I understand both of those. Are you assuming that, e.g., > thread 3's PyThread_acquire_lock completes in whole during > this gap? I don't know what else you could mean, so let's > assume that. Yes, I assume thread 3 completed in this gap and it is possible. "Collide" means while thread 2's pthread_cond_wait was modifiying the internal data structure, thread 1 calls pthread_cond_signal. The point here is not missing signals, the problem is the possiblity that pthread_cond_signal preempt the execution of pthread_cond_wait. I can't find any document says pthread_cond_xxx functions must be automic operations. In pthread_cond_xxx man page, it only mentioned: pthread_cond_wait atomically unlocks the mutex (as per ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pthread_unlock_mutex) and waits for the condition variable ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cond to be signaled. The thread execution is suspended and ^^^^^^^^^^^^^^^^^^^ does not consume any CPU time until the condition variable is signaled. The mutex must be locked by the calling thread on entrance to pthread_cond_wait. Before returning to the calling thread, pthread_cond_wait re-acquires mutex (as per pthread_lock_mutex). Unlocking the mutex and suspending on the condition vari­ able is done atomically. Thus, if all threads always acquire the mutex before signaling the condition, this guarantees that the condition cannot be signaled (and thus ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ignored) between the time a thread locks the mutex and the ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ time it waits on the condition variable. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ I take it that between the time the mutex is locked by pthread_mutex_lock and unlocked by pthread_cond_wait, you can't call pthread_cond_signal. I also browse through LinuxThread implementation and can't find pthread_cond_xxxx is implemented automically. I found there is another way to fix this without having to call the pthread_cond_signal while holding the mutex. If we do: if (!thelock->locked) status = pthread_cond_signal( &thelock->lock_released ); we guarantee that when pthread_cond_signal is called, the acquire_lock code will not be in the middle of pthread_cond_signal. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 15:06 Message: Logged In: YES user_id=31435 > The problem will be arised if there is a thread 3 > called PyThread_acquire_lock You never mentioned thread 3 again, so I don't know what it has to do with this. > after thread 1 set thelock->locked to 0 and before > thread 2 calling pthread_mutex_lock. I understand both of those. Are you assuming that, e.g., thread 3's PyThread_acquire_lock completes in whole during this gap? I don't know what else you could mean, so let's assume that. > "while (thelock->locked)" will success for thread 2 Sure. > and it will call pthread_cond_wait Yup. > and might collide with thread 1's pthread_cond_signal. What does "collide" mean to you? All the pthread_cond_xxx functions must be implemented as if atomic, so there's no meaningful sense (to me) in which they can collide -- unless they're implemented incorrectly. Assuming they are implemented correctly, it again doesn't matter that thread 2 misses thread 1's signal, because thread *3* exploited the information thread 1 was going to signal, by acquiring the lock. It's actually good that thread 2 isn't bothered with it: there's no real info in the signal anymore (at best, if thread 2 got it, it would wake up and go "oops! it's still locked; I'll wait again"). All that matters now is whether thread 2 gets a chance to see thread *3*'s signal, at the time thread 3 releases the lock. And it will, because thread 3 can't release the lock without acquiring the mutex first, and thread 2 holds the mutex at all times except when in its pthread_cond_wait call (so thread 3 can't release the lock except when thread 2 is in pthread_cond_wait). Note that I'm not at all concerned about "fairness" here, only about races. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-19 14:41 Message: Logged In: YES user_id=246388 Oops. The problem will be arised if there is a thread 3 called PyThread_acquire_lock after thread 1 set thelock->locked to 0 and before thread 2 calling pthread_mutex_lock. "while (thelock->locked)" will success for thread 2 and it will call pthread_cond_wait and might collide with thread 1's pthread_cond_signal. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-19 13:52 Message: Logged In: YES user_id=31435 It appears you're concerned that the signal will "get lost" in this scenario. I agree that it may, but it doesn't matter: thread 2's "while (thelock->locked)" test fails because thread 1 already set thelock->locked to 0, so thread 2 doesn't execute its pthread_cond_wait, so it doesn't matter that nobody is *waiting* to see thread 1's pthread_cond_signal. IOW, the condition thread 1 will try to signal has *already* been detected by thread 2, and the signal is useless (because redundant) information. Indeed, it's a beauty of the condition protocol that signals can be sloppy. The linuxthread man page is worded strangely here, but note that it does not say the mutex *must* be locked. They can't, either, because POSIX doesn't require it; while POSIX stds aren't available freely online, some derived specs are, and are much clearer about this; e.g., see the Single Unix Specification, here: I'll tell you why I don't *want* to change this: in Python's use of the global interpreter lock, it's almost always the case that someone is waiting on the lock. 
By releasing the mutex before signaling, this gives a waiter a chance to run immediately upon calling pthread_cond_signal. Else, because pthread_cond_wait (which the waiters are executing) has to lock the mutex, if the signaler is holding the mutex during the signal, pthread_cond_signal can't finish the job -- it was to go back to the signaler and let it unlock the mutex first before a waiter can proceed. This makes the region of exclusion longer than it needs to be. So if there's not an actual race problem here (& I still don't see one), I don't want to change this. ---------------------------------------------------------------------- Comment By: Shih-Hao Liu (shihao) Date: 2001-06-17 23:01 Message: Logged In: YES user_id=246388 I closed it because I thought the thelock->locked variable will ensure that the PyThread_release_lock will help to protect the condition variable and I was wrong. The linuxthread man page on pthread_cond_signal: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it. which means you can't call pthread_cond_signal & pthread_cond_wait on the same condition variable at the same time. And using a mutex to protect them is a good idea. Here is how thing might go wrong with current implementation: thread 1 thread 2 |int PyThread_acquire_lock _ |/** assume lock was acquired | by thread 1, hence locked=0 | & success would be 0 **/ |{ | ... | status = pthread_mutex_lo | CHECK_STATUS("pthread_mut | success = thelock->locked | if (success) thelock->loc | status = pthread_mutex_un | /** thread 2 suspended **/ void PyThread_release_lock _| { | ... | status = pthread_mutex_loc| CHECK_STATUS("pthread_mute| | thelock->locked = 0; | | status = pthread_mutex_unl| /** thread 1 suspend **/ | | CHECK_STATUS("pthread_mut | | if ( !success && waitflag | /* continue trying unti | | /* mut must be locked b | * protocol */ | status = pthread_mutex_ | CHECK_STATUS("pthread_m | while ( thelock->locked | status = pthread_cond |/** thread 2 suspended while | updating shared data ** CHECK_STATUS("pthread_mute| | /* wake up someone (anyone| status = pthread_cond_sign| /** thread 1 update shared | data and corrupt it. **/ | Not sure what the effect would be. It's wouldn't be nice anyway. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 16:02 Message: Logged In: YES user_id=31435 Closing this again, as it appears the original submitter deleted it. shihao, if you want to pursure this, open it again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 15:01 Message: Logged In: YES user_id=31435 Ack, did I delete this?! I sure didn't intend to -- didn't even intend to close it. Reopened pending more info. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-17 14:51 Message: Logged In: YES user_id=6380 Set status to closed -- no need to delete it. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:16 Message: Logged In: YES user_id=31435 Why? It's allowed to signal the condition whether or not the mutex is held. Since changing this can have visible effects on thread scheduling, I'm reluctant to change it without a good reason. 
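To make the protocol being argued about concrete, here is a minimal condition-variable lock in the same shape as the code above. It is a simplified sketch only, not the actual Python/thread_pthread.h source; error checking, the waitflag path and dprintf are omitted, and the names are illustrative:

    /* Minimal sketch of the locked/mut/lock_released protocol.  The
     * predicate `locked' is only read or written while `mut' is held,
     * and waiters re-test it in a loop, so a signal that arrives when
     * nobody happens to be waiting is simply dropped without harm. */
    #include <pthread.h>

    typedef struct {
        char            locked;        /* 0 = free, 1 = held */
        pthread_mutex_t mut;
        pthread_cond_t  lock_released;
    } sketch_lock;

    static void sketch_acquire(sketch_lock *lk)
    {
        pthread_mutex_lock(&lk->mut);
        while (lk->locked)             /* predicate re-checked on every wakeup */
            pthread_cond_wait(&lk->lock_released, &lk->mut);
        lk->locked = 1;
        pthread_mutex_unlock(&lk->mut);
    }

    static void sketch_release(sketch_lock *lk)
    {
        pthread_mutex_lock(&lk->mut);
        lk->locked = 0;
        pthread_mutex_unlock(&lk->mut);
        /* Signaling after the unlock (as the current code does) lets a
         * waiter run immediately; signaling before the unlock (the
         * proposed change) is also legal POSIX, but the waiter must then
         * block again until the signaler releases the mutex. */
        pthread_cond_signal(&lk->lock_released);
    }

Because a waiter always re-tests `locked' under `mut' before sleeping, a "lost" signal only ever describes a condition the waiter has already observed, which is the property the argument above relies on.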
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433625&group_id=5470 From noreply@sourceforge.net Wed Jun 20 11:25:43 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 03:25:43 -0700 Subject: [Python-bugs-list] [ python-Bugs-434743 ] rexec bug / doc bug? Message-ID: Bugs item #434743, was updated on 2001-06-20 03:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Harri Pasanen (harripasanen) Assigned to: Nobody/Anonymous (nobody) Summary: rexec bug / doc bug? Initial Comment: I would expect to get an ImportError from the estr_env.r_exec("import sys; sys.exit(1)") line below. Even if 'sys' is not in the ok_builtin_modules, it seems to import just fine. import rexec class MySandBox(rexec.RExec): def __init__(self, hooks, verbose): rexec.RExec.__init__(self, hooks, verbose) print self.ok_builtin_modules restr_env = MySandBox(None, 1) restr_env.r_exec("print 'does something'") restr_env.r_exec("import sys; sys.exit(1)") print "Never comes here" Is the documentation incomplete or is this a bug? -Harri ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 From noreply@sourceforge.net Wed Jun 20 19:13:35 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 11:13:35 -0700 Subject: [Python-bugs-list] [ python-Bugs-434743 ] rexec bug / doc bug? Message-ID: Bugs item #434743, was updated on 2001-06-20 03:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 >Category: Documentation >Group: Not a Bug Status: Open Resolution: None Priority: 5 Submitted By: Harri Pasanen (harripasanen) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: rexec bug / doc bug? Initial Comment: I would expect to get an ImportError from the estr_env.r_exec("import sys; sys.exit(1)") line below. Even if 'sys' is not in the ok_builtin_modules, it seems to import just fine. import rexec class MySandBox(rexec.RExec): def __init__(self, hooks, verbose): rexec.RExec.__init__(self, hooks, verbose) print self.ok_builtin_modules restr_env = MySandBox(None, 1) restr_env.r_exec("print 'does something'") restr_env.r_exec("import sys; sys.exit(1)") print "Never comes here" Is the documentation incomplete or is this a bug? -Harri ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-20 11:13 Message: Logged In: YES user_id=6380 Code executing in a rexec sandbox has its own copy of 'sys', complete with a fake sys.path, sys.modules, sys.exit etc. sys.exit() happens to raise the SystemExit exception; the caller should catch that if the sandboxed code is not supposed to cause the program to exit (the caller should be catching all exceptions anyway, right). Assigned to Fred for a small doc update perhaps. 
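As a concrete illustration of the behaviour described above, a caller that does not want a sandboxed sys.exit() to terminate the program can catch SystemExit itself. A minimal sketch (Python 2.x rexec; the variable names are illustrative, not from the report):

    import rexec

    sandbox = rexec.RExec()
    try:
        sandbox.r_exec("import sys; sys.exit(1)")
    except SystemExit, exc:
        # The sandboxed sys.exit() surfaces here as an ordinary exception.
        print "sandboxed code called sys.exit(%s)" % exc
    print "still running"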
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 From noreply@sourceforge.net Wed Jun 20 23:13:36 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 15:13:36 -0700 Subject: [Python-bugs-list] [ python-Bugs-434944 ] setup.py - nonstandard paths Message-ID: Bugs item #434944, was updated on 2001-06-20 15:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434944&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Robert Minsk (rminsk) Assigned to: Nobody/Anonymous (nobody) Summary: setup.py - nonstandard paths Initial Comment: In my build environment I have to ensure that the same version of each software package is available across many different platforms. To do this I compile code into a directory structure when the root path of /usr/tools/fw. So a tools like flex would result in files /usr/tools/fw/bin/flex, /usr/tools/fw/include/FlexLexer.h, /usr/tools/fw/lib/libfl.a, ... In the Python 2.1 build environment it does not seem that you can pass extra search paths too setup.py. I must either hack setup.py to look in /usr/tools/fw or manually add each module to Modules/Setup. It would be nice for setup.py to be able to take extra search paths. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434944&group_id=5470 From noreply@sourceforge.net Thu Jun 21 01:19:19 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 17:19:19 -0700 Subject: [Python-bugs-list] [ python-Bugs-434975 ] Typo on Posix Large File Support page Message-ID: Bugs item #434975, was updated on 2001-06-20 17:19 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434975&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Typo on Posix Large File Support page Initial Comment: On the page http://www.python.org/doc/current/ lib/posix-large-files.html The line CC="-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" should probably be CC="cc -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" or something like that. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434975&group_id=5470 From noreply@sourceforge.net Thu Jun 21 02:38:00 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 18:38:00 -0700 Subject: [Python-bugs-list] [ python-Bugs-434988 ] Possible bug in _cursesmodule.c Message-ID: Bugs item #434988, was updated on 2001-06-20 18:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434988&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Robert Minsk (rminsk) Assigned to: Nobody/Anonymous (nobody) Summary: Possible bug in _cursesmodule.c Initial Comment: When trying to clean up SGI compiler warning messages I ran across the following in Modules/_cursesmodule.c from Python-2.1. Around line 192: } else if(PyString_Check(obj) & (PyString_Size(obj) == 1)) { Should this be "&&" and not "&"? 
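For the record, the "&" form happens to give the right truth value here because both operands are already 0 or 1, but unlike "&&" it does not short-circuit, so the size check is still evaluated for non-string objects. A tiny standalone illustration (hypothetical values, not the _cursesmodule.c code):

    #include <stdio.h>

    static int size_is_one(void)
    {
        printf("size check evaluated\n");
        return 0;
    }

    int main(void)
    {
        int is_string = 0;               /* pretend the type check failed */

        if (is_string & size_is_one())   /* & evaluates both operands */
            printf("bitwise branch taken\n");

        if (is_string && size_is_one())  /* && skips the right operand */
            printf("logical branch taken\n");

        return 0;
    }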
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434988&group_id=5470 From noreply@sourceforge.net Thu Jun 21 02:42:37 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 18:42:37 -0700 Subject: [Python-bugs-list] [ python-Bugs-434989 ] Possible bug in parsermodule.c Message-ID: Bugs item #434989, was updated on 2001-06-20 18:42 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434989&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Robert Minsk (rminsk) Assigned to: Nobody/Anonymous (nobody) Summary: Possible bug in parsermodule.c Initial Comment: When getting rid of warning messages from the SGI compiler I ran across the following in Modules/parsemodule.c in Python-2.1: Line 2527 in Modules/parsermodule.c reads: while (res & (tree != 0)) { should this be a "&&" and not a "&"? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434989&group_id=5470 From noreply@sourceforge.net Thu Jun 21 02:51:12 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 18:51:12 -0700 Subject: [Python-bugs-list] [ python-Bugs-434992 ] Cleanup of warning messages Message-ID: Bugs item #434992, was updated on 2001-06-20 18:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434992&group_id=5470 Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Robert Minsk (rminsk) Assigned to: Nobody/Anonymous (nobody) Summary: Cleanup of warning messages Initial Comment: I just compiled Python-2.1 of the SGI using the latest compilers (7.3.1.2m) with all the warning flags turned on. The following patch will get rid of most of the warning messages. I would like to see this incorporated into the next release. It is easier to spot real problems when you do not have to sort thru other warning messages. The included patch does not include other optional modules and the ones setup.py finds by default. I may have found 2 bugs in the process. Please see bugs 434989 and 434988. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434992&group_id=5470 From noreply@sourceforge.net Thu Jun 21 07:18:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Jun 2001 23:18:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was updated on 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Drew Whitehouse (drw900) Assigned to: Nobody/Anonymous (nobody) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. 
MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Thu Jun 21 10:05:50 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Jun 2001 02:05:50 -0700 Subject: [Python-bugs-list] [ python-Bugs-434743 ] rexec bug / doc bug? Message-ID: Bugs item #434743, was updated on 2001-06-20 03:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 Category: Documentation Group: Not a Bug Status: Open Resolution: None Priority: 5 Submitted By: Harri Pasanen (harripasanen) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: rexec bug / doc bug? Initial Comment: I would expect to get an ImportError from the estr_env.r_exec("import sys; sys.exit(1)") line below. Even if 'sys' is not in the ok_builtin_modules, it seems to import just fine. import rexec class MySandBox(rexec.RExec): def __init__(self, hooks, verbose): rexec.RExec.__init__(self, hooks, verbose) print self.ok_builtin_modules restr_env = MySandBox(None, 1) restr_env.r_exec("print 'does something'") restr_env.r_exec("import sys; sys.exit(1)") print "Never comes here" Is the documentation incomplete or is this a bug? -Harri ---------------------------------------------------------------------- >Comment By: Harri Pasanen (harripasanen) Date: 2001-06-21 02:05 Message: Logged In: YES user_id=77088 Ok, what threw me of is that SystemExit is not putting up a traceback on screen. Why is that? So it looked like sys.exit() was called and working instead. If I do something like: restr_env.r_exec("a = 1/0") then I do get a traceback on screen. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-20 11:13 Message: Logged In: YES user_id=6380 Code executing in a rexec sandbox has its own copy of 'sys', complete with a fake sys.path, sys.modules, sys.exit etc. sys.exit() happens to raise the SystemExit exception; the caller should catch that if the sandboxed code is not supposed to cause the program to exit (the caller should be catching all exceptions anyway, right). Assigned to Fred for a small doc update perhaps. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 From noreply@sourceforge.net Thu Jun 21 10:29:42 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Jun 2001 02:29:42 -0700 Subject: [Python-bugs-list] [ python-Bugs-435066 ] PyObject_ClearWeakRefs misdocumented Message-ID: Bugs item #435066, was updated on 2001-06-21 02:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435066&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Abbott (araneidae) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: PyObject_ClearWeakRefs misdocumented Initial Comment: In section 3.3.3 of release 2.1 of the "Python Library Reference" we are advised to write the following code: ... if(!PyObject_ClearWeakRefs(op)) return; ... However, this routine is now declared to return void, so this is evidently out of date. Also, PyObject_ClearWeakRefs does not appear in release 2.1 of the "Python/C API Reference Manual" (nor do many other routines, alas!) 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435066&group_id=5470 From noreply@sourceforge.net Thu Jun 21 12:22:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Jun 2001 04:22:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-434743 ] rexec bug / doc bug? Message-ID: Bugs item #434743, was updated on 2001-06-20 03:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 Category: Documentation Group: Not a Bug Status: Open Resolution: None Priority: 5 Submitted By: Harri Pasanen (harripasanen) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: rexec bug / doc bug? Initial Comment: I would expect to get an ImportError from the estr_env.r_exec("import sys; sys.exit(1)") line below. Even if 'sys' is not in the ok_builtin_modules, it seems to import just fine. import rexec class MySandBox(rexec.RExec): def __init__(self, hooks, verbose): rexec.RExec.__init__(self, hooks, verbose) print self.ok_builtin_modules restr_env = MySandBox(None, 1) restr_env.r_exec("print 'does something'") restr_env.r_exec("import sys; sys.exit(1)") print "Never comes here" Is the documentation incomplete or is this a bug? -Harri ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-21 04:22 Message: Logged In: YES user_id=6380 SystemExit's semantics are that *if* it is propagated all the way out of the main program, Python exits. This is a nice way to implement sys.exit() (which is supposed to exit the program, of course, like exit() in C) while still honoring try/finally clauses. If this isn't fully documented (it should be both with sys.exit and SystemExit in the library manual) that's an opportunity for Fred to make the documentation even better by explaining this and its rationale. ---------------------------------------------------------------------- Comment By: Harri Pasanen (harripasanen) Date: 2001-06-21 02:05 Message: Logged In: YES user_id=77088 Ok, what threw me of is that SystemExit is not putting up a traceback on screen. Why is that? So it looked like sys.exit() was called and working instead. If I do something like: restr_env.r_exec("a = 1/0") then I do get a traceback on screen. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-20 11:13 Message: Logged In: YES user_id=6380 Code executing in a rexec sandbox has its own copy of 'sys', complete with a fake sys.path, sys.modules, sys.exit etc. sys.exit() happens to raise the SystemExit exception; the caller should catch that if the sandboxed code is not supposed to cause the program to exit (the caller should be catching all exceptions anyway, right). Assigned to Fred for a small doc update perhaps. 
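The current, void-returning API is simply called from the deallocator rather than tested. A sketch of what the corrected example might look like, using a hypothetical SpamObject type with a weakreflist slot (not text from the manual):

    #include "Python.h"

    typedef struct {
        PyObject_HEAD
        PyObject *weakreflist;   /* maintained by the weak-reference machinery */
    } SpamObject;

    static void
    Spam_dealloc(SpamObject *self)
    {
        /* PyObject_ClearWeakRefs() now returns void, so it is called
           unconditionally when weak references may exist, instead of the
           outdated "if (!PyObject_ClearWeakRefs(op)) return;" form. */
        if (self->weakreflist != NULL)
            PyObject_ClearWeakRefs((PyObject *) self);
        PyObject_Del(self);
    }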
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 From noreply@sourceforge.net Fri Jun 22 03:41:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Jun 2001 19:41:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was updated on 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Drew Whitehouse (drw900) Assigned to: Nobody/Anonymous (nobody) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 19:41 Message: Logged In: YES user_id=132786 Python 2.1 core dumps on any using the 7.3.1.2m compilers with -O2 or greater. If the file Objects/floatobject.c is compiled with -O1 everything seems fine. It is core dumping in the macro CONVERT_TO_DOUBLE. It seems the call stack gets corrupted. I'm trying to find a workaround besides -O1. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Fri Jun 22 03:52:49 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Jun 2001 19:52:49 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was updated on 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Drew Whitehouse (drw900) Assigned to: Nobody/Anonymous (nobody) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-21 19:52 Message: Logged In: YES user_id=31435 Robert, let us know if you find it! There's always *some* optimization bug on SGI boxes, but this one is particularly noxious. Someone on c.l.py suggested it may be a problem with Python accessing a double at an unaligned (for the platform) memory address. They didn't follow up, so I don't know whether that's the case, but if it is we would consider it a bug in Python (we try to stick to std C, so if there's an unaligned access it's a bug in our coding -- but I don't see anything like that by eyeball). ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 19:41 Message: Logged In: YES user_id=132786 Python 2.1 core dumps on any using the 7.3.1.2m compilers with -O2 or greater. If the file Objects/floatobject.c is compiled with -O1 everything seems fine. It is core dumping in the macro CONVERT_TO_DOUBLE. It seems the call stack gets corrupted. I'm trying to find a workaround besides -O1. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Fri Jun 22 04:42:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Jun 2001 20:42:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was updated on 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Drew Whitehouse (drw900) Assigned to: Nobody/Anonymous (nobody) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 20:42 Message: Logged In: YES user_id=132786 Thank you tim_one for the hint... I still trying to track it down but changing Include/intobject.h PyIntObject to include memory alignment pragmas seem to be the trick. On the SGI I've changed it to #pragma pack(8) typedef struct { PyObject_HEAD long ob_ival; } PyIntObject; #pragma pack(0) This is only a hack until I find a way to get the proper alignment. I'm begining to wonder if anywhere in the float code something is trying to cast a int object to a float object. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-21 19:52 Message: Logged In: YES user_id=31435 Robert, let us know if you find it! There's always *some* optimization bug on SGI boxes, but this one is particularly noxious. Someone on c.l.py suggested it may be a problem with Python accessing a double at an unaligned (for the platform) memory address. They didn't follow up, so I don't know whether that's the case, but if it is we would consider it a bug in Python (we try to stick to std C, so if there's an unaligned access it's a bug in our coding -- but I don't see anything like that by eyeball). ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 19:41 Message: Logged In: YES user_id=132786 Python 2.1 core dumps on any using the 7.3.1.2m compilers with -O2 or greater. If the file Objects/floatobject.c is compiled with -O1 everything seems fine. It is core dumping in the macro CONVERT_TO_DOUBLE. It seems the call stack gets corrupted. I'm trying to find a workaround besides -O1. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Fri Jun 22 13:40:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 05:40:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-435446 ] Python.h not found while building extent Message-ID: Bugs item #435446, was updated on 2001-06-22 05:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435446&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gilles Civario (gcivario) Assigned to: Nobody/Anonymous (nobody) Summary: Python.h not found while building extent Initial Comment: I've got an extention module which installation was simple whith python1.5.2 With python2.1, an error occure : $ make -f Makefile.pre.in boot $ make static cc -O -I/luna/civario/prodtmp/include/python2.1 -I/luna/civario/prodtmp/include/python2.1 -DHAVE_CONFIG_H -c config.c cc -O -DSunOS -DHAVE_CONFIG_H -I../../SUNTOOL/sources -L../../SUNTOOL/Solaris/bin -lsuntool -lF77 -lsunmath -lM77 -lfui -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfsu -lsunmath -lm -c ././../../SUNTOOL/sources/lcmmodule.c -o ./lcmmodule.o "././../../SUNTOOL/sources/lcmp.h", line 40: cannot find include file: "Python.h" cc: acomp failed for ././../../SUNTOOL/sources/lcmmodule.c *** Error code 2 make: Fatal error: Command failed for target `lcmmodule.o' The Setup file look so : -----8<-------------------------------------------------------------- PLATFORM=Solaris MACH=SunOS F7LIBS= -lF77 -lsunmath -lM77 -lfui -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfsu -lsunmath -lm F9LIBS= -lF77 -lsunmath -lM77 -lfui -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfsu -lsunmath -lm LCM_IMPORT= LIBAUT = bin/libdragon.a MYLIBS= -L../../SUNTOOL/$(PLATFORM)/bin -lsuntool FLAGS= -O $(LCM_IMPORT) -D$(MACH) -DHAVE_CONFIG_H -I../../SUNTOOL/sources lcm ../../SUNTOOL/sources/lcmmodule.c $(FLAGS) $(MYLIBS) $(F7LIBS) sunset ../sources/sunsetmodule.c $(FLAGS) $(LIBAUT) $(MYLIBS) $(F7LIBS) -----8<-------------------------------------------------------------- Finaly, I found a workarround by changing the PY_CFLAGS in CFLAGS in the Makefile : -----8<-------------------------------------------------------------- # Rules appended by makedepend ./lcmmodule.o: $(srcdir)/./../../SUNTOOL/sources/lcmmodule.c; $(CC) $(PY_CFLAGS) $(FLAGS) $(MYLIBS) $(F7LIBS) -c $(srcdir)/./../../SUNTOOL/sources/lcmmodule.c -o ./lcmmodule.o ./lcmmodule$(SO): ./lcmmodule.o; $(LDSHARED) ./lcmmodule.o $(FLAGS) $(MYLIBS) $(F7LIBS) -o ./lcmmodule$(SO) ./sunsetmodule.o: $(srcdir)/./../sources/sunsetmodule.c; $(CC) $(PY_CFLAGS) $(FLAGS) $(LIBAUT) $(MYLIBS) $(F7LIBS) -c $(srcdir)/./../sources/sunsetmodule.c -o ./sunsetmodule.o ./sunsetmodule$(SO): ./sunsetmodule.o; $(LDSHARED) ./sunsetmodule.o $(FLAGS) $(LIBAUT) $(MYLIBS) $(F7LIBS) -o ./sunsetmodule$(SO) -----8<-------------------------------------------------------------- Should I change my Setup file, or is it a bug ? $ uname -a SunOS saturne 5.7 Generic_106541-12 sun4u sparc SUNW,Ultra-60 $ python Python 2.1 (#1, Jun 21 2001, 16:05:32) [C] on sunos5 Type "copyright", "credits" or "license" for more information. Gilles. 
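One alternative to the Makefile.pre.in route on 2.x is to build the module with distutils, which adds the interpreter's include directory (where Python.h lives) automatically. A rough sketch for the module in this report, with paths, macros and a few of the libraries copied from the Setup fragment above purely for illustration (the remaining Fortran libraries are left out here):

    # setup.py -- build lcmmodule with distutils instead of Makefile.pre.in
    from distutils.core import setup, Extension

    setup(name="lcm",
          version="1.0",
          ext_modules=[Extension("lcm",
                                 sources=["../../SUNTOOL/sources/lcmmodule.c"],
                                 include_dirs=["../../SUNTOOL/sources"],
                                 library_dirs=["../../SUNTOOL/Solaris/bin"],
                                 libraries=["suntool", "F77", "sunmath", "M77"],
                                 define_macros=[("SunOS", None),
                                                ("HAVE_CONFIG_H", None)])])

It is then built with "python setup.py build", which passes the proper -I options for the installed Python by itself.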
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435446&group_id=5470 From noreply@sourceforge.net Fri Jun 22 14:58:32 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 06:58:32 -0700 Subject: [Python-bugs-list] [ python-Bugs-435455 ] Python 2.0.1c1 fails to build on RH7.1 Message-ID: Bugs item #435455, was updated on 2001-06-22 06:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ole H. Nielsen (ohnielse) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.0.1c1 fails to build on RH7.1 Initial Comment: Building Python 2.0.1c1 on a RedHat 7.1 (2.4.2-2 on i586) fails at this point: cd Modules; make OPT="-g -O2 -Wall -Wstrict-prototypes -fPIC" VERSION="2.0" \ prefix="/usr/local" exec_prefix="/usr/local" \ sharedmods make[1]: Entering directory `/scratch/ohnielse/Python-2.0.1/Modules' gcc -fpic -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./bsddbmodule.c ./bsddbmodule.c: In function `newdbhashobject': ./bsddbmodule.c:55: `HASHINFO' undeclared (first use in this function) ./bsddbmodule.c:55: (Each undeclared identifier is reported only once ./bsddbmodule.c:55: for each function it appears in.) ./bsddbmodule.c:55: parse error before `info' ./bsddbmodule.c:60: `info' undeclared (first use in this function) ./bsddbmodule.c:71: warning: implicit declaration of function `dbopen' ./bsddbmodule.c:71: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbbtobject': ./bsddbmodule.c:100: `BTREEINFO' undeclared (first use in this function) ./bsddbmodule.c:100: parse error before `info' ./bsddbmodule.c:105: `info' undeclared (first use in this function) ./bsddbmodule.c:118: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbrnobject': ./bsddbmodule.c:147: `RECNOINFO' undeclared (first use in this function) ./bsddbmodule.c:147: parse error before `info' ./bsddbmodule.c:152: `info' undeclared (first use in this function) ./bsddbmodule.c:164: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `bsddb_dealloc': ./bsddbmodule.c:202: too few arguments to function ./bsddbmodule.c: In function `bsddb_length': ./bsddbmodule.c:232: structure has no member named `seq' ./bsddbmodule.c:233: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:235: structure has no member named `seq' ./bsddbmodule.c:236: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:229: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_subscript': ./bsddbmodule.c:265: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:265: too few arguments to function ./bsddbmodule.c: In function `bsddb_ass_sub': ./bsddbmodule.c:307: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:307: too few arguments to function ./bsddbmodule.c:330: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:330: too few arguments to function ./bsddbmodule.c: In function `bsddb_close': ./bsddbmodule.c:357: too few arguments to function ./bsddbmodule.c: In function `bsddb_keys': ./bsddbmodule.c:386: structure has no member 
named `seq' ./bsddbmodule.c:386: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:407: structure has no member named `seq' ./bsddbmodule.c:407: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:376: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_has_key': ./bsddbmodule.c:440: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:440: too few arguments to function ./bsddbmodule.c: In function `bsddb_set_location': ./bsddbmodule.c:466: structure has no member named `seq' ./bsddbmodule.c:466: `R_CURSOR' undeclared (first use in this function) ./bsddbmodule.c:453: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_seq': ./bsddbmodule.c:503: structure has no member named `seq' ./bsddbmodule.c:489: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_next': ./bsddbmodule.c:531: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_previous': ./bsddbmodule.c:536: `R_PREV' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_first': ./bsddbmodule.c:541: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_last': ./bsddbmodule.c:546: `R_LAST' undeclared (first use in this function) make[1]: *** [bsddbmodule.o] Error 1 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 From noreply@sourceforge.net Fri Jun 22 17:03:18 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 09:03:18 -0700 Subject: [Python-bugs-list] [ python-Bugs-434975 ] Typo on Posix Large File Support page Message-ID: Bugs item #434975, was updated on 2001-06-20 17:19 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434975&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Typo on Posix Large File Support page Initial Comment: On the page http://www.python.org/doc/current/ lib/posix-large-files.html The line CC="-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" should probably be CC="cc -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" or something like that. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-22 09:03 Message: Logged In: YES user_id=3066 Fixed in Doc/lib/libposix.tex revisions 1.58 and 1.56.4.2. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434975&group_id=5470 From noreply@sourceforge.net Fri Jun 22 18:20:50 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 10:20:50 -0700 Subject: [Python-bugs-list] [ python-Bugs-435066 ] PyObject_ClearWeakRefs misdocumented Message-ID: Bugs item #435066, was updated on 2001-06-21 02:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435066&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Michael Abbott (araneidae) Assigned to: Fred L. Drake, Jr. 
(fdrake) Summary: PyObject_ClearWeakRefs misdocumented Initial Comment: In section 3.3.3 of release 2.1 of the "Python Library Reference" we are advised to write the following code: ... if(!PyObject_ClearWeakRefs(op)) return; ... However, this routine is now declared to return void, so this is evidently out of date. Also, PyObject_ClearWeakRefs does not appear in release 2.1 of the "Python/C API Reference Manual" (nor do many other routines, alas!) ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-22 10:20 Message: Logged In: YES user_id=3066 Fixed in Doc/lib/libweakref.tex revisions 1.9 and 1.7.2.2. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435066&group_id=5470 From noreply@sourceforge.net Fri Jun 22 19:23:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 11:23:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-434743 ] rexec bug / doc bug? Message-ID: Bugs item #434743, was opened at 2001-06-20 03:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 Category: Documentation Group: Not a Bug >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Harri Pasanen (harripasanen) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: rexec bug / doc bug? Initial Comment: I would expect to get an ImportError from the estr_env.r_exec("import sys; sys.exit(1)") line below. Even if 'sys' is not in the ok_builtin_modules, it seems to import just fine. import rexec class MySandBox(rexec.RExec): def __init__(self, hooks, verbose): rexec.RExec.__init__(self, hooks, verbose) print self.ok_builtin_modules restr_env = MySandBox(None, 1) restr_env.r_exec("print 'does something'") restr_env.r_exec("import sys; sys.exit(1)") print "Never comes here" Is the documentation incomplete or is this a bug? -Harri ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-22 11:23 Message: Logged In: YES user_id=3066 Clarified documentation for the rexec module in Doc/lib/librexec.tex revisions 1.15 and 1.14.2.1. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-21 04:22 Message: Logged In: YES user_id=6380 SystemExit's semantics are that *if* it is propagated all the way out of the main program, Python exits. This is a nice way to implement sys.exit() (which is supposed to exit the program, of course, like exit() in C) while still honoring try/finally clauses. If this isn't fully documented (it should be both with sys.exit and SystemExit in the library manual) that's an opportunity for Fred to make the documentation even better by explaining this and its rationale. ---------------------------------------------------------------------- Comment By: Harri Pasanen (harripasanen) Date: 2001-06-21 02:05 Message: Logged In: YES user_id=77088 Ok, what threw me of is that SystemExit is not putting up a traceback on screen. Why is that? So it looked like sys.exit() was called and working instead. If I do something like: restr_env.r_exec("a = 1/0") then I do get a traceback on screen. 
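On the traceback question: SystemExit is the one exception the interpreter treats specially when it reaches the top level -- it exits with the given status instead of printing a traceback, which is also why try/finally clauses are still honored. A small illustration, independent of rexec:

    import sys

    try:
        sys.exit(3)                     # raises SystemExit(3)
    finally:
        print "finally clause still runs"
    # The interpreter now exits with status 3 and prints no traceback;
    # replacing sys.exit(3) with 1/0 would print a normal traceback instead.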
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-20 11:13 Message: Logged In: YES user_id=6380 Code executing in a rexec sandbox has its own copy of 'sys', complete with a fake sys.path, sys.modules, sys.exit etc. sys.exit() happens to raise the SystemExit exception; the caller should catch that if the sandboxed code is not supposed to cause the program to exit (the caller should be catching all exceptions anyway, right). Assigned to Fred for a small doc update perhaps. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434743&group_id=5470 From noreply@sourceforge.net Fri Jun 22 23:29:48 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 15:29:48 -0700 Subject: [Python-bugs-list] [ python-Bugs-435596 ] Fork/Thread problems on FreeBSD Message-ID: Bugs item #435596, was opened at 2001-06-22 15:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435596&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Fork/Thread problems on FreeBSD Initial Comment: Run this code on both Linux and FreeBSD. On Linux you get a continuous stream of *'s. On FreeBSD you get 1. FreeBSD is wrong. import thread, os, sys, time def run(): while 1: if os.fork() == 0: time.sleep(0.001) sys.stderr.write('*') sys.stderr.flush() sys.exit(0) break os.wait() thread.start_new_thread(run, ()) while 1: time.sleep(0.001) pass I ran into this problem when trying to use Popen3 to run a system call from Zope. The fork in Popen3 never gets to the execvp. It works fine on Linux. I believe the problem in the above code is caused by the same issue. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435596&group_id=5470 From noreply@sourceforge.net Sat Jun 23 01:48:17 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 17:48:17 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was opened at 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Drew Whitehouse (drw900) Assigned to: Nobody/Anonymous (nobody) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-22 17:48 Message: Logged In: YES user_id=132786 After some more investigation it seems to be a CPU pipelining bug in the optimizer. #define CONVERT_TO_DOUBLE(obj, dbl) \ if (PyFloat_Check(obj)) \ dbl = PyFloat_AS_DOUBLE(obj); \ else if (convert_to_double(&(obj), &(dbl)) < 0) \ return obj; PyFloat_Check is a macro as well is PyFloat_AS_DOUBLE. Due to the CPU pipelining PyFloat_AS_DOUBLE (a cast to a double) is always being called. What happens is non-float objects that are not 8-byte aligned are being cast to a double. I am trying to figure out if I can reorder the code to not cause this pipelining issue. 
I will also see if I can somehow force a nop after PyFloat_Check. I will also open a bug with SGI. ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 20:42 Message: Logged In: YES user_id=132786 Thank you tim_one for the hint... I still trying to track it down but changing Include/intobject.h PyIntObject to include memory alignment pragmas seem to be the trick. On the SGI I've changed it to #pragma pack(8) typedef struct { PyObject_HEAD long ob_ival; } PyIntObject; #pragma pack(0) This is only a hack until I find a way to get the proper alignment. I'm begining to wonder if anywhere in the float code something is trying to cast a int object to a float object. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-21 19:52 Message: Logged In: YES user_id=31435 Robert, let us know if you find it! There's always *some* optimization bug on SGI boxes, but this one is particularly noxious. Someone on c.l.py suggested it may be a problem with Python accessing a double at an unaligned (for the platform) memory address. They didn't follow up, so I don't know whether that's the case, but if it is we would consider it a bug in Python (we try to stick to std C, so if there's an unaligned access it's a bug in our coding -- but I don't see anything like that by eyeball). ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 19:41 Message: Logged In: YES user_id=132786 Python 2.1 core dumps on any using the 7.3.1.2m compilers with -O2 or greater. If the file Objects/floatobject.c is compiled with -O1 everything seems fine. It is core dumping in the macro CONVERT_TO_DOUBLE. It seems the call stack gets corrupted. I'm trying to find a workaround besides -O1. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Sat Jun 23 04:00:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Jun 2001 20:00:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was opened at 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core >Group: 3rd Party >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Drew Whitehouse (drw900) >Assigned to: Tim Peters (tim_one) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-22 20:00 Message: Logged In: YES user_id=31435 Nice work, Robert! Now we have a problem: when an optimizer is doing something wrong, then typically (a) there are any number of changes you could make that mask the problem under a specific release of the compiler, but (b) they're accidents, so it will just break again under some other release of the compiler. The PyFloat_AS_DOUBLE () is deliberately under the protection of an "if" test that ensures its legality, so the compiler is insane (not following the rules) in generating code that ignores this: how do you out-think an insane algorithm? 
You can fool it at random, but since it's not playing by the rules there's nothing *reliable* you can do. For that reason, I'm changing this to "3rd Party" and closing with "Won't Fix" -- it's not our doing, and there's nothing principled we can do about it short of slowing the code on all platforms (by, e.g., using an external function form of PyFloat_AS_DOUBLE, thus inhibiting the bad code generation). By the way, ask SGI to add Python to their standard compiler regression suite: *something* is always broken on SGI boxes, and disabling optimization always fixes it. ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-22 17:48 Message: Logged In: YES user_id=132786 After some more investigation it seems to be a CPU pipelining bug in the optimizer. #define CONVERT_TO_DOUBLE(obj, dbl) \ if (PyFloat_Check(obj)) \ dbl = PyFloat_AS_DOUBLE(obj); \ else if (convert_to_double(&(obj), &(dbl)) < 0) \ return obj; PyFloat_Check is a macro as well is PyFloat_AS_DOUBLE. Due to the CPU pipelining PyFloat_AS_DOUBLE (a cast to a double) is always being called. What happens is non-float objects that are not 8-byte aligned are being cast to a double. I am trying to figure out if I can reorder the code to not cause this pipelining issue. I will also see if I can somehow force a nop after PyFloat_Check. I will also open a bug with SGI. ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 20:42 Message: Logged In: YES user_id=132786 Thank you tim_one for the hint... I still trying to track it down but changing Include/intobject.h PyIntObject to include memory alignment pragmas seem to be the trick. On the SGI I've changed it to #pragma pack(8) typedef struct { PyObject_HEAD long ob_ival; } PyIntObject; #pragma pack(0) This is only a hack until I find a way to get the proper alignment. I'm begining to wonder if anywhere in the float code something is trying to cast a int object to a float object. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-21 19:52 Message: Logged In: YES user_id=31435 Robert, let us know if you find it! There's always *some* optimization bug on SGI boxes, but this one is particularly noxious. Someone on c.l.py suggested it may be a problem with Python accessing a double at an unaligned (for the platform) memory address. They didn't follow up, so I don't know whether that's the case, but if it is we would consider it a bug in Python (we try to stick to std C, so if there's an unaligned access it's a bug in our coding -- but I don't see anything like that by eyeball). ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 19:41 Message: Logged In: YES user_id=132786 Python 2.1 core dumps on any using the 7.3.1.2m compilers with -O2 or greater. If the file Objects/floatobject.c is compiled with -O1 everything seems fine. It is core dumping in the macro CONVERT_TO_DOUBLE. It seems the call stack gets corrupted. I'm trying to find a workaround besides -O1. 
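For reference, the expression at issue should simply raise an ordinary exception on a healthy build; a quick check to run on a suspect interpreter (Python 2.x syntax, expected message shown in the comment):

    try:
        1.0 / 0
    except ZeroDivisionError, msg:
        print "ok:", msg      # prints "ok: float division" instead of dumping core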
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Sat Jun 23 20:45:45 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 12:45:45 -0700 Subject: [Python-bugs-list] [ python-Bugs-434479 ] os.listdir loses on linux w/NTFS vols Message-ID: Bugs item #434479, was opened at 2001-06-19 07:48 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434479&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: jeremy bornstein (ukekuma) Assigned to: Nobody/Anonymous (nobody) Summary: os.listdir loses on linux w/NTFS vols Initial Comment: os.listdir() on a directory which is on an NTFS volume omits one entry from the directory listing. Example: planet {188}: grep ntfs /etc/fstab /dev/hda1 /lose ntfs uid=500,gid=500,umask=555 1 2 planet {189}: ls /lose Documents and Settings/ My Music/ Program Files/ PUTTY.RND $Secure unzipped/ WINNT/ planet {190}: python2.1 Python 2.1 (#1, Jun 19 2001, 00:32:28) [GCC 2.96 20000731 (Red Hat Linux 7.1 2.96-81)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import os >>> os.listdir('/lose') ['$Secure', 'Documents and Settings', 'My Music', 'Program Files', 'PUTTY.RND', 'unzipped'] >>> planet {191}: (In the example, note that the directory 'WINNT' is not returned by os.listdir.) I have verified this bug with/1.5.2, 1.6.1, and 2.1 on Linux (RH7.1) only. I have only tested it on this one NTFS volume and this one computer. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 12:45 Message: Logged In: YES user_id=21627 This is likely a bug in the NTFS driver, not in Python. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434479&group_id=5470 From noreply@sourceforge.net Sat Jun 23 20:50:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 12:50:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-434547 ] Problems with C++ ext. on Tru64 Message-ID: Bugs item #434547, was opened at 2001-06-19 11:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434547&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Problems with C++ ext. on Tru64 Initial Comment: { I post this letter to comp.lang.python for discussion, python sourceforge bugtracker to make sure someone reads it and omniorb@uk.research.att.com as important appendix to the letter about compiling omniORB on Tru64 } I am currently trying to compile Python/OmniORB/OmniORBpython suite on Tru64 Unix (the new name for Digital Unix/OSF) with DEC CXX 6.2. For the longer story search for my next post, but I have some important observations about Python. Are they bugs? Anyone skilled to check it further is welcome. In case I should post this somewhere else, please let me know. The tests described below used Python 2.1. The problem which forced me to perform this analysis happened during compilation of omniORB 3.0.3. I start from the less important things going to the more important. 
1) While compiling Python with DEC CXX (below you will find why I did it), I got an error message on Include/structmember.h about incorrect usage of a language extension (apparently the compiler sometimes treats 'readonly' in a way similar to 'const'). I have not diagnosed it in great detail (compiler options and pragmas set by the Python makefiles seem to influence the situation somehow), but changing readonly to - say - read_only should not spoil anything and would help. I worked around the problem by using cxx -Dreadonly=_readonly as the compiler name.

2) Unlike most configure scripts, Python's configure script ignores the environment variable CC. The problem is in the case switch checking whether --with-gcc or --without-gcc is specified:

    if test "${with_gcc+set}" = set; then (....) else case $ac_sys_system in OSF1) CC=cc without_gcc=;; (...)

To compile Python with cxx I manually edited the line above, but I think compiling Python with a compiler other than cc or gcc should be possible in a natural way. In case people dislike CC checking, maybe --with-cc=<...> could be added?

3) So, here is why I needed to compile Python with DEC CXX. While using the 'default' (compiled with cc) python, I was unable to use Python extension modules written in C++ (I hit the problem while trying to compile and use the _omniidl module from omniORB, but it seems it would be the same for others):

- the '_omniidlmodule.so' file links correctly and is correct
- attempts to import it result in

    python -c 'import _omniidl'
    Traceback (innermost last): File "<string>", line 1, in ?
    ImportError: dlopen: Unresolved symbols

The problem is caused by the lack of symbols from libcxx.so (the C++ compiler's shared library). I am no expert on dlopen, but it seems that Python, while loading the module, does not load the shared libraries the module depends on (at least on Tru64). After I recompiled Python with cxx (mainly to get the python executable linked permanently with libcxx.so, so that this library is present while my module is being imported) the problem disappeared and the module imported and worked correctly.

----------------------------------------------------------------------
>Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 12:50 Message: Logged In: YES user_id=21627

It should not be required to link an application with the C++ compiler just because a shared library needs the C++ runtime system. Most likely the mistake is that the extension module is linked with ld; C++ extension modules must always be linked with CC. If that doesn't help, link the extension module with -lcxx explicitly. If that still doesn't help, complain to DEC.
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434547&group_id=5470 From noreply@sourceforge.net Sat Jun 23 20:56:02 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 12:56:02 -0700 Subject: [Python-bugs-list] [ python-Bugs-434989 ] Possible bug in parsermodule.c Message-ID: Bugs item #434989, was opened at 2001-06-20 18:42 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434989&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Robert Minsk (rminsk) Assigned to: Nobody/Anonymous (nobody) Summary: Possible bug in parsermodule.c Initial Comment: When getting rid of warning messages from the SGI compiler I ran across the following in Modules/parsemodule.c in Python-2.1: Line 2527 in Modules/parsermodule.c reads: while (res & (tree != 0)) { should this be a "&&" and not a "&"? ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 12:56 Message: Logged In: YES user_id=21627 Fixed in parsermodule 2.61. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434989&group_id=5470 From noreply@sourceforge.net Sat Jun 23 20:59:02 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 12:59:02 -0700 Subject: [Python-bugs-list] [ python-Bugs-434988 ] Possible bug in _cursesmodule.c Message-ID: Bugs item #434988, was opened at 2001-06-20 18:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434988&group_id=5470 Category: Extension Modules Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Robert Minsk (rminsk) Assigned to: Nobody/Anonymous (nobody) Summary: Possible bug in _cursesmodule.c Initial Comment: When trying to clean up SGI compiler warning messages I ran across the following in Modules/_cursesmodule.c from Python-2.1. Around line 192: } else if(PyString_Check(obj) & (PyString_Size(obj) == 1)) { Should this be "&&" and not "&"? ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 12:59 Message: Logged In: YES user_id=21627 Fixed in _cursesmodule.c 2.52. Thanks for contributing. 
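Why the single ampersand matters in both of these reports: & is a bitwise operation, so it neither short-circuits nor treats arbitrary non-zero values as true. The same pitfall can be shown in a few lines of Python (this only illustrates the C mistake, it is not the fix itself):

    res = 2                        # a "true" status value whose low bit happens to be clear
    tree = []                      # stands in for a non-NULL pointer

    print res & (tree != None)     # bitwise: 2 & 1 == 0, so the test is falsely "false"
    print res and (tree != None)   # logical: both operands are true, prints 1

In the parser and curses code the corrected && also guarantees the right-hand operand is only evaluated when the left-hand check succeeds.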
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=434988&group_id=5470 From noreply@sourceforge.net Sat Jun 23 21:50:27 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 13:50:27 -0700 Subject: [Python-bugs-list] [ python-Bugs-435446 ] Python.h not found while building extent Message-ID: Bugs item #435446, was opened at 2001-06-22 05:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435446&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Gilles Civario (gcivario) Assigned to: Nobody/Anonymous (nobody) Summary: Python.h not found while building extent Initial Comment: I've got an extention module which installation was simple whith python1.5.2 With python2.1, an error occure : $ make -f Makefile.pre.in boot $ make static cc -O -I/luna/civario/prodtmp/include/python2.1 -I/luna/civario/prodtmp/include/python2.1 -DHAVE_CONFIG_H -c config.c cc -O -DSunOS -DHAVE_CONFIG_H -I../../SUNTOOL/sources -L../../SUNTOOL/Solaris/bin -lsuntool -lF77 -lsunmath -lM77 -lfui -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfsu -lsunmath -lm -c ././../../SUNTOOL/sources/lcmmodule.c -o ./lcmmodule.o "././../../SUNTOOL/sources/lcmp.h", line 40: cannot find include file: "Python.h" cc: acomp failed for ././../../SUNTOOL/sources/lcmmodule.c *** Error code 2 make: Fatal error: Command failed for target `lcmmodule.o' The Setup file look so : -----8<-------------------------------------------------------------- PLATFORM=Solaris MACH=SunOS F7LIBS= -lF77 -lsunmath -lM77 -lfui -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfsu -lsunmath -lm F9LIBS= -lF77 -lsunmath -lM77 -lfui -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfsu -lsunmath -lm LCM_IMPORT= LIBAUT = bin/libdragon.a MYLIBS= -L../../SUNTOOL/$(PLATFORM)/bin -lsuntool FLAGS= -O $(LCM_IMPORT) -D$(MACH) -DHAVE_CONFIG_H -I../../SUNTOOL/sources lcm ../../SUNTOOL/sources/lcmmodule.c $(FLAGS) $(MYLIBS) $(F7LIBS) sunset ../sources/sunsetmodule.c $(FLAGS) $(LIBAUT) $(MYLIBS) $(F7LIBS) -----8<-------------------------------------------------------------- Finaly, I found a workarround by changing the PY_CFLAGS in CFLAGS in the Makefile : -----8<-------------------------------------------------------------- # Rules appended by makedepend ./lcmmodule.o: $(srcdir)/./../../SUNTOOL/sources/lcmmodule.c; $(CC) $(PY_CFLAGS) $(FLAGS) $(MYLIBS) $(F7LIBS) -c $(srcdir)/./../../SUNTOOL/sources/lcmmodule.c -o ./lcmmodule.o ./lcmmodule$(SO): ./lcmmodule.o; $(LDSHARED) ./lcmmodule.o $(FLAGS) $(MYLIBS) $(F7LIBS) -o ./lcmmodule$(SO) ./sunsetmodule.o: $(srcdir)/./../sources/sunsetmodule.c; $(CC) $(PY_CFLAGS) $(FLAGS) $(LIBAUT) $(MYLIBS) $(F7LIBS) -c $(srcdir)/./../sources/sunsetmodule.c -o ./sunsetmodule.o ./sunsetmodule$(SO): ./sunsetmodule.o; $(LDSHARED) ./sunsetmodule.o $(FLAGS) $(LIBAUT) $(MYLIBS) $(F7LIBS) -o ./sunsetmodule$(SO) -----8<-------------------------------------------------------------- Should I change my Setup file, or is it a bug ? $ uname -a SunOS saturne 5.7 Generic_106541-12 sun4u sparc SUNW,Ultra-60 $ python Python 2.1 (#1, Jun 21 2001, 16:05:32) [C] on sunos5 Type "copyright", "credits" or "license" for more information. Gilles. ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2001-06-23 13:50 Message: Logged In: YES user_id=21627 That seems to be a bug in Makefile.pre.in. As a work-around, I recommend to put *shared* as the first line of your Setup file; that will give a dynamically-loadable extension module. As for the nature of the bug: It appears that PY_CFLAGS is not set in the generated Makefile. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435446&group_id=5470 From noreply@sourceforge.net Sat Jun 23 21:58:24 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 13:58:24 -0700 Subject: [Python-bugs-list] [ python-Bugs-435455 ] Python 2.0.1c1 fails to build on RH7.1 Message-ID: Bugs item #435455, was opened at 2001-06-22 06:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ole H. Nielsen (ohnielse) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.0.1c1 fails to build on RH7.1 Initial Comment: Building Python 2.0.1c1 on a RedHat 7.1 (2.4.2-2 on i586) fails at this point: cd Modules; make OPT="-g -O2 -Wall -Wstrict-prototypes -fPIC" VERSION="2.0" \ prefix="/usr/local" exec_prefix="/usr/local" \ sharedmods make[1]: Entering directory `/scratch/ohnielse/Python-2.0.1/Modules' gcc -fpic -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./bsddbmodule.c ./bsddbmodule.c: In function `newdbhashobject': ./bsddbmodule.c:55: `HASHINFO' undeclared (first use in this function) ./bsddbmodule.c:55: (Each undeclared identifier is reported only once ./bsddbmodule.c:55: for each function it appears in.) 
./bsddbmodule.c:55: parse error before `info' ./bsddbmodule.c:60: `info' undeclared (first use in this function) ./bsddbmodule.c:71: warning: implicit declaration of function `dbopen' ./bsddbmodule.c:71: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbbtobject': ./bsddbmodule.c:100: `BTREEINFO' undeclared (first use in this function) ./bsddbmodule.c:100: parse error before `info' ./bsddbmodule.c:105: `info' undeclared (first use in this function) ./bsddbmodule.c:118: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbrnobject': ./bsddbmodule.c:147: `RECNOINFO' undeclared (first use in this function) ./bsddbmodule.c:147: parse error before `info' ./bsddbmodule.c:152: `info' undeclared (first use in this function) ./bsddbmodule.c:164: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `bsddb_dealloc': ./bsddbmodule.c:202: too few arguments to function ./bsddbmodule.c: In function `bsddb_length': ./bsddbmodule.c:232: structure has no member named `seq' ./bsddbmodule.c:233: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:235: structure has no member named `seq' ./bsddbmodule.c:236: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:229: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_subscript': ./bsddbmodule.c:265: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:265: too few arguments to function ./bsddbmodule.c: In function `bsddb_ass_sub': ./bsddbmodule.c:307: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:307: too few arguments to function ./bsddbmodule.c:330: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:330: too few arguments to function ./bsddbmodule.c: In function `bsddb_close': ./bsddbmodule.c:357: too few arguments to function ./bsddbmodule.c: In function `bsddb_keys': ./bsddbmodule.c:386: structure has no member named `seq' ./bsddbmodule.c:386: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:407: structure has no member named `seq' ./bsddbmodule.c:407: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:376: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_has_key': ./bsddbmodule.c:440: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:440: too few arguments to function ./bsddbmodule.c: In function `bsddb_set_location': ./bsddbmodule.c:466: structure has no member named `seq' ./bsddbmodule.c:466: `R_CURSOR' undeclared (first use in this function) ./bsddbmodule.c:453: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_seq': ./bsddbmodule.c:503: structure has no member named `seq' ./bsddbmodule.c:489: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_next': ./bsddbmodule.c:531: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_previous': ./bsddbmodule.c:536: `R_PREV' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_first': ./bsddbmodule.c:541: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_last': ./bsddbmodule.c:546: `R_LAST' undeclared (first use in this function) make[1]: *** [bsddbmodule.o] Error 1 
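A note on what the compiler is complaining about: HASHINFO, BTREEINFO, RECNOINFO, dbopen() and the R_FIRST/R_NEXT constants all belong to the old Berkeley DB 1.85 interface, so the db.h being picked up here is presumably a newer DB 2/3 header that only provides those declarations through its db_185.h compatibility header (hence the questions below). Once the module does build, a quick smoke test from Python is (the file name is just an example):

    import bsddb

    db = bsddb.hashopen('/tmp/spam.db', 'c')   # create a hash-format database
    db['key'] = 'value'
    print db.keys()
    db.close()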
---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 13:58 Message: Logged In: YES user_id=21627 Please report the following things: - the line in Setup that you activated to enable compilation of bsddb - the exact version of the bsddb RPM package that provides db.h - whether or not this packages includes a file db_185.h ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 From noreply@sourceforge.net Sat Jun 23 22:02:31 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 14:02:31 -0700 Subject: [Python-bugs-list] [ python-Bugs-435596 ] Fork/Thread problems on FreeBSD Message-ID: Bugs item #435596, was opened at 2001-06-22 15:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435596&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Fork/Thread problems on FreeBSD Initial Comment: Run this code on both Linux and FreeBSD. On Linux you get a continuous stream of *'s. On FreeBSD you get 1. FreeBSD is wrong. import thread, os, sys, time def run(): while 1: if os.fork() == 0: time.sleep(0.001) sys.stderr.write('*') sys.stderr.flush() sys.exit(0) break os.wait() thread.start_new_thread(run, ()) while 1: time.sleep(0.001) pass I ran into this problem when trying to use Popen3 to run a system call from Zope. The fork in Popen3 never gets to the execvp. It works fine on Linux. I believe the problem in the above code is caused by the same issue. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 14:02 Message: Logged In: YES user_id=21627 Why do you think this is a bug in Python? Can you determine whether the thread is started, and whether the fork returns for the parent? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435596&group_id=5470 From noreply@sourceforge.net Sun Jun 24 00:13:59 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Jun 2001 16:13:59 -0700 Subject: [Python-bugs-list] [ python-Bugs-433882 ] UTF-8: unpaired surrogates mishandled Message-ID: Bugs item #433882, was opened at 2001-06-17 04:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) >Assigned to: M.-A. Lemburg (lemburg) Summary: UTF-8: unpaired surrogates mishandled Initial Comment: Two bugs: 1. UTF-8 encoding of unpaired high surrogate produces an invalid UTF-8 byte sequence. 2. UTF-8 decoding of any unpaired surrogate produces an exception ("illegal encoding") instead of the corresponding 16-bit scalar value. See attached file utf8bugs.py for example plus detailed remarks. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-17 19:03 Message: Logged In: YES user_id=21627 I think the codec should reject unpaired surrogates both when encoding and when decoding. 
I don't have a copy of ISO 10646, but Unicode 3.1 points out # ISO/IEC 10646 does not allow mapping of unpaired surrogates, nor U+FFFE and U+FFFF (but it does allow other noncharacters). So apparently, encoding unpaired surrogates as UTF-8 is not allowed according to ISO 10646. I think Python should follow this rule, instead of the Unicode one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 From noreply@sourceforge.net Mon Jun 25 11:03:30 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 03:03:30 -0700 Subject: [Python-bugs-list] [ python-Bugs-436058 ] _PyTrace_Init needs a prototype Message-ID: Bugs item #436058, was opened at 2001-06-25 03:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436058&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: _PyTrace_Init needs a prototype Initial Comment: _PyTrace_Init() needs a declaration in an include file somewhere. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436058&group_id=5470 From noreply@sourceforge.net Mon Jun 25 15:36:07 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 07:36:07 -0700 Subject: [Python-bugs-list] [ python-Bugs-436103 ] Compiling pygtk Message-ID: Bugs item #436103, was opened at 2001-06-25 07:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436103&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Compiling pygtk Initial Comment: Hello, I wanted to install Narval from (www.logilab.org) I install Python 2.1 and i try to install pygtk. And i get this error Like i am a newbie it's perhaps nothing from python narval@tst03cn:~/install/pygtk-0.6.6$ make make all-recursive make[1]: Entering directory `/home/narval/install/pygtk-0.6.6' Making all in generate make[2]: Entering directory `/home/narval/install/pygtk-0.6.6/generate' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/narval/install/pygtk-0.6.6/generate' Making all in pyglade make[2]: Entering directory `/home/narval/install/pygtk-0.6.6/pyglade' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/narval/install/pygtk-0.6.6/pyglade' make[2]: Entering directory `/home/narval/install/pygtk-0.6.6' cd . && /usr/bin/python mkgtk.py 'import site' failed; use -v for traceback Traceback (innermost last): File "mkgtk.py", line 5, in ? import generate File "./generate/generate.py", line 1, in ? 
import os File "/home/narval/lib/python2.1/os.py", line 37 return [n for n in dir(module) if n[0] != '_'] ^ SyntaxError: invalid syntax make[2]: *** [gtkmodule_defs.c] Error 1 make[2]: Leaving directory `/home/narval/install/pygtk-0.6.6' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/narval/install/pygtk-0.6.6' make: *** [all-recursive-am] Error 2 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436103&group_id=5470 From noreply@sourceforge.net Mon Jun 25 17:18:03 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 09:18:03 -0700 Subject: [Python-bugs-list] [ python-Bugs-436058 ] _PyTrace_Init needs a prototype Message-ID: Bugs item #436058, was opened at 2001-06-25 03:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436058&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: _PyTrace_Init needs a prototype Initial Comment: _PyTrace_Init() needs a declaration in an include file somewhere. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-25 09:18 Message: Logged In: YES user_id=3066 _PyTrace_Init() will be removed as a side-effect of the new profiler interface I'm working on, which I only got word that I could talk about this morning. ;) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436058&group_id=5470 From noreply@sourceforge.net Mon Jun 25 18:18:38 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 10:18:38 -0700 Subject: [Python-bugs-list] [ python-Bugs-436130 ] solaris2.6 problems with readline Message-ID: Bugs item #436130, was opened at 2001-06-25 10:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436130&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Stenberg (fredriks) Assigned to: Nobody/Anonymous (nobody) Summary: solaris2.6 problems with readline Initial Comment: having problem with compiling python2.0.1 2.0 (i think i always had this problem after 1.5.2) on solaris 2.6 gcc -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./readline.c ./readline.c: In function `setup_readline': ./readline.c:414: `CPPFunction' undeclared (first use in this function) ./readline.c:414: (Each undeclared identifier is reported only once ./readline.c:414: for each function it appears in.) ./readline.c:414: parse error before `)' *** Error code 1 I have always used to exchange Modules/readline.c with the old file from the 1.5.2 release. I finally got around to checking whats wrong, (or atleast browse around the code). readline.c Line 414 in void setup_readline states, rl_attempted_completion_function = (CPPFunction *)flex_complete; should this not be; rl_attempted_completion_function = (Function *)flex_complete; I have no problems if i change CPPfunction into Function, i'm no readline expert but i think this is the problem. 
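For context, Modules/readline.c is what gives the interactive interpreter line editing and tab completion; the cast in question only exists to hand flex_complete to the library as its completion hook. Once the module builds (with whichever cast your readline headers accept), the standard rlcompleter recipe confirms the hook is working:

    import readline, rlcompleter
    readline.parse_and_bind("tab: complete")   # tab completion at the >>> prompt now
                                               # goes through the C-level completer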
*sysinfo* gcc 2.95.2 solaris 2.6 readline4.1 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436130&group_id=5470 From noreply@sourceforge.net Mon Jun 25 18:26:16 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 10:26:16 -0700 Subject: [Python-bugs-list] [ python-Bugs-436131 ] freeze: global symbols not exported Message-ID: Bugs item #436131, was opened at 2001-06-25 10:26 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436131&group_id=5470 Category: demos and tools Group: None Status: Open Resolution: None Priority: 5 Submitted By: Charles Schwieters (chuckorama) Assigned to: Nobody/Anonymous (nobody) Summary: freeze: global symbols not exported Initial Comment: python-2.1 linux-2.2, others? the freeze tool does not export global symbols. As a result the frozen executable fails with unresolved symbols in shared objects. fix: include the LINKFORSHARED flag in freeze.py: *** freeze.py~ Tue Mar 20 15:43:33 2001 --- freeze.py Fri Jun 22 14:36:23 2001 *************** *** 434,440 **** somevars[key] = makevars[key] somevars['CFLAGS'] = string.join(cflags) # override ! files = ['$(OPT)', '$(LDFLAGS)', base_config_c, base_frozen_c] + \ files + supp_sources + addfiles + libs + \ ['$(MODLIBS)', '$(LIBS)', '$(SYSLIBS)'] --- 434,440 ---- somevars[key] = makevars[key] somevars['CFLAGS'] = string.join(cflags) # override ! files = ['$(OPT)', '$(LDFLAGS)', '$(LINKFORSHARED)',base_config_c, base_frozen_c] + \ files + supp_sources + addfiles + libs + \ ['$(MODLIBS)', '$(LIBS)', '$(SYSLIBS)'] ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436131&group_id=5470 From noreply@sourceforge.net Mon Jun 25 21:12:22 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 13:12:22 -0700 Subject: [Python-bugs-list] [ python-Bugs-433481 ] No way to link python itself with C++ Message-ID: Bugs item #433481, was opened at 2001-06-15 10:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 Category: Build Group: Platform-specific >Status: Closed Resolution: None Priority: 5 Submitted By: Stephan A. Fiedler (sfiedler) Assigned to: Nobody/Anonymous (nobody) Summary: No way to link python itself with C++ Initial Comment: I'm running on Solaris 2.7 with the Sun Workshop compiler, version 4.2. I have built an extension module in C++ as a shared object. When I attempt to import it into Python, I get an error about missing symbols related to C++ exception handling: ImportError: ld.so.1: python: fatal: relocation error: file /home/saf/pymidas/m2k/solaris_debug/comp/m2kapi.so: symbol _ex_keylock: referenced symbol not found This symbol lives in the C++ runtime, libC.so. 'ldd python' shows that this library is not available to the Python executable itself, because the C compiler linked the executable. If I manually edit the makefile for building python so that LINKCC is $(PURIFY) $(CXX) instead of $(PURIFY) $(CC) and then relink just the Python executable, I can see (with ldd) that the C++ runtime libC.so is now linked with Python, and I am able to load my module. (I believe it is actually no problem to build the entire system with LINKCC calling CXX instead of CC.) 
In case it's relevant, my extension module itself is compiled with these flags: -DDEBUG -DSUNCC_ -mt -pto -PIC -xildoff +w2 -D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 and linked with these: -G -z text Bug #413582 may be related to this in some way. So the short of it is that I would like a configure option to link the final python executable using the C++ compiler on Solaris, so that I can get the C++ runtime linked in with python itself. Note that this doesn't seem to matter on Compaq Tru64 Unix systems, where the default Python build works just fine with my extension module. ---------------------------------------------------------------------- >Comment By: Stephan A. Fiedler (sfiedler) Date: 2001-06-25 13:12 Message: Logged In: YES user_id=246063 Using LINKCC=CC on the configure line worked perfectly for us. Thanks for the tip. (I changed the bug state to closed; don't know if I'm the one who was supposed to do that, but I am no longer troubled by this circumstance.) ---------------------------------------------------------------------- Comment By: Stephan A. Fiedler (sfiedler) Date: 2001-06-18 11:18 Message: Logged In: YES user_id=246063 I should have given the full link line like this: CC -G -z text -o pyapi_launch.so $(OTHER_LIBS) -xildoff -ldl -lposix4 -lnsl -lsocket -lfftw_threads -lrfftw_threads -lfftw -lrfftw -lreadline -ltermcap $(OTHER_LIBS) just expands to a bunch of .so's that were themselves linked in the same way. pyapi_launch.so is my extension module. This does not solve the problem. The news about LINKCC is delightful. To make sure I understand, is it merely (csh syntax): setenv LINKCC CC make ? Or would I also/instead need to do /bin/env LINKCC=CC ./configure ... make ? This may well be all I need. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-16 01:03 Message: Logged In: YES user_id=21627 I believe the right fix to your problem would be to link your extension module using CC, not using ld. In theory, that should provide all required libraries to the shared object itself. Please report whether this solves the problem. As for the configure option: This is already configurable. Just set LINKCC when making python. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433481&group_id=5470 From noreply@sourceforge.net Mon Jun 25 21:23:23 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 13:23:23 -0700 Subject: [Python-bugs-list] [ python-Bugs-435026 ] SGI cores on 1.0 / 0 Message-ID: Bugs item #435026, was opened at 2001-06-20 23:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 Category: Python Interpreter Core Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Drew Whitehouse (drw900) Assigned to: Tim Peters (tim_one) Summary: SGI cores on 1.0 / 0 Initial Comment: python21 cores evaluating 1.0 / 0 on SGI. MIPSpro Compilers: Version 7.3.1.1m SGI_ABI = -n32 ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-25 13:23 Message: Logged In: YES user_id=132786 I think we should we change the status of this bug. There is something we can do to keep the same speed on other platforms and work on the SGI. 
We can make condition code on the sgi by using #ifdef __sgi Please see bug/patch 436193 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-22 20:00 Message: Logged In: YES user_id=31435 Nice work, Robert! Now we have a problem: when an optimizer is doing something wrong, then typically (a) there are any number of changes you could make that mask the problem under a specific release of the compiler, but (b) they're accidents, so it will just break again under some other release of the compiler. The PyFloat_AS_DOUBLE () is deliberately under the protection of an "if" test that ensures its legality, so the compiler is insane (not following the rules) in generating code that ignores this: how do you out-think an insane algorithm? You can fool it at random, but since it's not playing by the rules there's nothing *reliable* you can do. For that reason, I'm changing this to "3rd Party" and closing with "Won't Fix" -- it's not our doing, and there's nothing principled we can do about it short of slowing the code on all platforms (by, e.g., using an external function form of PyFloat_AS_DOUBLE, thus inhibiting the bad code generation). By the way, ask SGI to add Python to their standard compiler regression suite: *something* is always broken on SGI boxes, and disabling optimization always fixes it. ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-22 17:48 Message: Logged In: YES user_id=132786 After some more investigation it seems to be a CPU pipelining bug in the optimizer. #define CONVERT_TO_DOUBLE(obj, dbl) \ if (PyFloat_Check(obj)) \ dbl = PyFloat_AS_DOUBLE(obj); \ else if (convert_to_double(&(obj), &(dbl)) < 0) \ return obj; PyFloat_Check is a macro as well is PyFloat_AS_DOUBLE. Due to the CPU pipelining PyFloat_AS_DOUBLE (a cast to a double) is always being called. What happens is non-float objects that are not 8-byte aligned are being cast to a double. I am trying to figure out if I can reorder the code to not cause this pipelining issue. I will also see if I can somehow force a nop after PyFloat_Check. I will also open a bug with SGI. ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 20:42 Message: Logged In: YES user_id=132786 Thank you tim_one for the hint... I still trying to track it down but changing Include/intobject.h PyIntObject to include memory alignment pragmas seem to be the trick. On the SGI I've changed it to #pragma pack(8) typedef struct { PyObject_HEAD long ob_ival; } PyIntObject; #pragma pack(0) This is only a hack until I find a way to get the proper alignment. I'm begining to wonder if anywhere in the float code something is trying to cast a int object to a float object. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-21 19:52 Message: Logged In: YES user_id=31435 Robert, let us know if you find it! There's always *some* optimization bug on SGI boxes, but this one is particularly noxious. Someone on c.l.py suggested it may be a problem with Python accessing a double at an unaligned (for the platform) memory address. They didn't follow up, so I don't know whether that's the case, but if it is we would consider it a bug in Python (we try to stick to std C, so if there's an unaligned access it's a bug in our coding -- but I don't see anything like that by eyeball). 
---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-21 19:41 Message: Logged In: YES user_id=132786 Python 2.1 core dumps on any using the 7.3.1.2m compilers with -O2 or greater. If the file Objects/floatobject.c is compiled with -O1 everything seems fine. It is core dumping in the macro CONVERT_TO_DOUBLE. It seems the call stack gets corrupted. I'm trying to find a workaround besides -O1. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435026&group_id=5470 From noreply@sourceforge.net Mon Jun 25 21:51:59 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 13:51:59 -0700 Subject: [Python-bugs-list] [ python-Bugs-436207 ] "if 0: yield x" is ignored Message-ID: Bugs item #436207, was opened at 2001-06-25 13:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436207&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Jeremy Hylton (jhylton) Summary: "if 0: yield x" is ignored Initial Comment: The parser doesn't descend into blocks that start with "if 0:" and a few other special cases like "if __debug__:". This breaks the semantics of the yield statement, which states that the mere *presence* of a yield in a function makes the function a generator. It's a bug that needs to be fixed before 2.2 is released. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436207&group_id=5470 From noreply@sourceforge.net Mon Jun 25 22:10:05 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 14:10:05 -0700 Subject: [Python-bugs-list] [ python-Bugs-432786 ] Python 2.1 test_locale fails Message-ID: Bugs item #432786, was opened at 2001-06-13 07:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432786&group_id=5470 Category: Extension Modules Group: None Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Paul M. Dubuc (dubuc) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.1 test_locale fails Initial Comment: I'm building Python 2.1 on Solaris 2.6. When I 'make test', the test_locale module is the only one that fails: test test_locale failed -- Writing: "'%f' % 1024 == '1024.000000' != '1,024.000000'", expected: '' ... The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ********************************************************************** test_locale ********************************************************************** Then ... We expected (repr): '' But instead we got: "'%f' % 1024 == '1024.000000' != '1,024.000000'" ---------------------------------------------------------------------- Comment By: Robert Minsk (rminsk) Date: 2001-06-25 14:10 Message: Logged In: YES user_id=132786 The SGI has the same locale problem as Solaris. Is this a bug with the locale defination on linux? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-16 00:53 Message: Logged In: YES user_id=21627 That appears to be a bug in Solaris 2.6. 
To see the problem, please try the following program import locale locale.setlocale(locale.LC_ALL,"en_US") c=locale.localeconv() print c['grouping'],repr(c['thousands_sep']) In the en_US locale, the thousands separator *should* be a comma, but Solaris 2.6 reports that this locale has no thousands separator. For locale information, Python relies on what the operating system reports. As it is an OS bug, I'm closing the report as "won't fix". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=432786&group_id=5470 From noreply@sourceforge.net Mon Jun 25 23:20:53 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 15:20:53 -0700 Subject: [Python-bugs-list] [ python-Bugs-435455 ] Python 2.0.1c1 fails to build on RH7.1 Message-ID: Bugs item #435455, was opened at 2001-06-22 06:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ole H. Nielsen (ohnielse) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.0.1c1 fails to build on RH7.1 Initial Comment: Building Python 2.0.1c1 on a RedHat 7.1 (2.4.2-2 on i586) fails at this point: cd Modules; make OPT="-g -O2 -Wall -Wstrict-prototypes -fPIC" VERSION="2.0" \ prefix="/usr/local" exec_prefix="/usr/local" \ sharedmods make[1]: Entering directory `/scratch/ohnielse/Python-2.0.1/Modules' gcc -fpic -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./bsddbmodule.c ./bsddbmodule.c: In function `newdbhashobject': ./bsddbmodule.c:55: `HASHINFO' undeclared (first use in this function) ./bsddbmodule.c:55: (Each undeclared identifier is reported only once ./bsddbmodule.c:55: for each function it appears in.) 
./bsddbmodule.c:55: parse error before `info' ./bsddbmodule.c:60: `info' undeclared (first use in this function) ./bsddbmodule.c:71: warning: implicit declaration of function `dbopen' ./bsddbmodule.c:71: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbbtobject': ./bsddbmodule.c:100: `BTREEINFO' undeclared (first use in this function) ./bsddbmodule.c:100: parse error before `info' ./bsddbmodule.c:105: `info' undeclared (first use in this function) ./bsddbmodule.c:118: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbrnobject': ./bsddbmodule.c:147: `RECNOINFO' undeclared (first use in this function) ./bsddbmodule.c:147: parse error before `info' ./bsddbmodule.c:152: `info' undeclared (first use in this function) ./bsddbmodule.c:164: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `bsddb_dealloc': ./bsddbmodule.c:202: too few arguments to function ./bsddbmodule.c: In function `bsddb_length': ./bsddbmodule.c:232: structure has no member named `seq' ./bsddbmodule.c:233: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:235: structure has no member named `seq' ./bsddbmodule.c:236: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:229: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_subscript': ./bsddbmodule.c:265: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:265: too few arguments to function ./bsddbmodule.c: In function `bsddb_ass_sub': ./bsddbmodule.c:307: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:307: too few arguments to function ./bsddbmodule.c:330: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:330: too few arguments to function ./bsddbmodule.c: In function `bsddb_close': ./bsddbmodule.c:357: too few arguments to function ./bsddbmodule.c: In function `bsddb_keys': ./bsddbmodule.c:386: structure has no member named `seq' ./bsddbmodule.c:386: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:407: structure has no member named `seq' ./bsddbmodule.c:407: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:376: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_has_key': ./bsddbmodule.c:440: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:440: too few arguments to function ./bsddbmodule.c: In function `bsddb_set_location': ./bsddbmodule.c:466: structure has no member named `seq' ./bsddbmodule.c:466: `R_CURSOR' undeclared (first use in this function) ./bsddbmodule.c:453: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_seq': ./bsddbmodule.c:503: structure has no member named `seq' ./bsddbmodule.c:489: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_next': ./bsddbmodule.c:531: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_previous': ./bsddbmodule.c:536: `R_PREV' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_first': ./bsddbmodule.c:541: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_last': ./bsddbmodule.c:546: `R_LAST' undeclared (first use in this function) make[1]: *** [bsddbmodule.o] Error 1 
---------------------------------------------------------------------- >Comment By: Ole H. Nielsen (ohnielse) Date: 2001-06-25 15:20 Message: Logged In: YES user_id=27232 loewis wrote: > Please report the following things: > - the line in Setup that you activated to enable > compilation of bsddb > - the exact version of the bsddb RPM package that provides > db.h > - whether or not this packages includes a file db_185.h Sorry, I didn't change ANYTHING ! I was trying a vanilla build on RedHat 7.1 ! Should be easy to repeat... Maybe the build scripts make some incorrect choices on RedHat 7.1 ? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 13:58 Message: Logged In: YES user_id=21627 Please report the following things: - the line in Setup that you activated to enable compilation of bsddb - the exact version of the bsddb RPM package that provides db.h - whether or not this packages includes a file db_185.h ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 From noreply@sourceforge.net Tue Jun 26 03:48:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 19:48:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-436207 ] "if 0: yield x" is ignored Message-ID: Bugs item #436207, was opened at 2001-06-25 13:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436207&group_id=5470 Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Guido van Rossum (gvanrossum) Assigned to: Jeremy Hylton (jhylton) >Summary: "if 0: yield x" is ignored Initial Comment: The parser doesn't descend into blocks that start with "if 0:" and a few other special cases like "if __debug__:". This breaks the semantics of the yield statement, which states that the mere *presence* of a yield in a function makes the function a generator. It's a bug that needs to be fixed before 2.2 is released. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-25 19:48 Message: Logged In: YES user_id=31435 Attached is a dirt-simple patch that addresses yield in these cases, but nothing else. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436207&group_id=5470 From noreply@sourceforge.net Tue Jun 26 04:17:09 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 20:17:09 -0700 Subject: [Python-bugs-list] [ python-Bugs-436259 ] exec*/spawn* problem with spaces in args Message-ID: Bugs item #436259, was opened at 2001-06-25 20:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436259&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ben Hutchings (wom-work) Assigned to: Nobody/Anonymous (nobody) Summary: exec*/spawn* problem with spaces in args Initial Comment: DOS and Windows processes are not given an argument vector, as Unix processes are; instead they are given a command line and are expected to perform any necessary argument parsing themselves. 
Each C run-time library must convert command lines into argument vectors for the main() function, and if it includes exec* and spawn* functions then those must convert argument vectors into a command line. Naturally, the various implementations differ in interesting ways.

The Visual C++ run-time library (MSVCRT) implementation of the exec* and spawn* functions is particularly awful in that it simply concatenates the strings with spaces in between (see source file cenvarg.c), which means that arguments with embedded spaces are likely to turn into multiple arguments in the new process. Obviously, when Python is built using Visual C++, its os.exec* and os.spawn* functions behave in this way too. MS prefers to work around this bug (see Knowledge Base article Q145937) rather than to fix it. Therefore I think Python must work around it too when built with Visual C++.

I experimented with MSVCRT and Cygwin (using the attached program print_args.c) and could not find a way to convert an argument vector into a command line that they would both convert back to the same argument vector, but I got close.

MSVCRT's parser requires spaces that are part of an argument to be enclosed in double-quotes. The double-quotes do not have to enclose the whole argument. Literal double-quotes must be escaped by preceding them with a backslash. If an argument contains literal backslashes before a literal or delimiting double-quote, those backslashes must be escaped by doubling them. If there is an unmatched enclosing double-quote then the parser behaves as if there were another double-quote at the end of the line.

Cygwin's parser requires spaces that are part of an argument to be enclosed in double-quotes. The double-quotes do not have to enclose the whole argument. Literal double-quotes may be escaped by preceding them with a backslash, but then they count as enclosing double-quotes as well, which appears to be a bug. They may also be escaped by doubling them, in which case they must be enclosed in double-quotes; since MSVCRT does not accept this, it's useless. As far as I can see, literal backslashes before a literal double-quote must not be escaped, and literal backslashes before an enclosing double-quote *cannot* be escaped. It's really quite hard to understand what its rules are for backslashes and double-quotes, and I think it's broken. If there is an unmatched enclosing double-quote then the parser behaves as if there were another double-quote at the end of the line.

Here's a Python version of a partial fix for use in nt.exec* and nt.spawn*. This function modifies argument strings so that the resulting command line will satisfy programs that use MSVCRT, and programs that use Cygwin where that's possible:

    def escape(arg):
        import re
        # If arg contains no space or double-quote then
        # no escaping is needed.
        if not re.search(r'[ "]', arg):
            return arg
        # Otherwise the argument must be quoted and all
        # double-quotes, preceding backslashes, and
        # trailing backslashes must be escaped.
        def repl(match):
            if match.group(2):
                return match.group(1) * 2 + '\\"'
            else:
                return match.group(1) * 2
        return '"' + re.sub(r'(\\*)("|$)', repl, arg) + '"'

This could perhaps be used as a workaround for the problem. Unfortunately it would conflict with workarounds implemented at the Python level (which I have been using for a while).
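A quick illustration of applying the helper at the Python level until os.spawn* does this itself (the program path and arguments below are made up for the example; escape() is the function above):

    import os

    prog = r'C:\SomeDir\child.exe'
    args = ['child.exe', 'one word', 'say "hi"']
    rc = os.spawnv(os.P_WAIT, prog, map(escape, args))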
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436259&group_id=5470 From noreply@sourceforge.net Tue Jun 26 04:37:22 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Jun 2001 20:37:22 -0700 Subject: [Python-bugs-list] [ python-Bugs-436207 ] "if 0: yield x" is ignored Message-ID: Bugs item #436207, was opened at 2001-06-25 13:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436207&group_id=5470 Category: Parser/Compiler Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Guido van Rossum (gvanrossum) >Assigned to: Tim Peters (tim_one) >Summary: "if 0: yield x" is ignored Initial Comment: The parser doesn't descend into blocks that start with "if 0:" and a few other special cases like "if __debug__:". This breaks the semantics of the yield statement, which states that the mere *presence* of a yield in a function makes the function a generator. It's a bug that needs to be fixed before 2.2 is released. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-25 20:37 Message: Logged In: YES user_id=31435 Reassigned to me, Closed and Fixed, deleted the bogus patch. Lib/test/test_generators.py; new revision: 1.10 Python/compile.c; new revision: 2.205 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-25 19:48 Message: Logged In: YES user_id=31435 Attached is a dirt-simple patch that addresses yield in these cases, but nothing else. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436207&group_id=5470 From noreply@sourceforge.net Tue Jun 26 08:02:04 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 00:02:04 -0700 Subject: [Python-bugs-list] [ python-Bugs-435455 ] Python 2.0.1c1 fails to build on RH7.1 Message-ID: Bugs item #435455, was opened at 2001-06-22 06:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ole H. Nielsen (ohnielse) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.0.1c1 fails to build on RH7.1 Initial Comment: Building Python 2.0.1c1 on a RedHat 7.1 (2.4.2-2 on i586) fails at this point: cd Modules; make OPT="-g -O2 -Wall -Wstrict-prototypes -fPIC" VERSION="2.0" \ prefix="/usr/local" exec_prefix="/usr/local" \ sharedmods make[1]: Entering directory `/scratch/ohnielse/Python-2.0.1/Modules' gcc -fpic -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./bsddbmodule.c ./bsddbmodule.c: In function `newdbhashobject': ./bsddbmodule.c:55: `HASHINFO' undeclared (first use in this function) ./bsddbmodule.c:55: (Each undeclared identifier is reported only once ./bsddbmodule.c:55: for each function it appears in.) 
./bsddbmodule.c:55: parse error before `info' ./bsddbmodule.c:60: `info' undeclared (first use in this function) ./bsddbmodule.c:71: warning: implicit declaration of function `dbopen' ./bsddbmodule.c:71: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbbtobject': ./bsddbmodule.c:100: `BTREEINFO' undeclared (first use in this function) ./bsddbmodule.c:100: parse error before `info' ./bsddbmodule.c:105: `info' undeclared (first use in this function) ./bsddbmodule.c:118: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbrnobject': ./bsddbmodule.c:147: `RECNOINFO' undeclared (first use in this function) ./bsddbmodule.c:147: parse error before `info' ./bsddbmodule.c:152: `info' undeclared (first use in this function) ./bsddbmodule.c:164: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `bsddb_dealloc': ./bsddbmodule.c:202: too few arguments to function ./bsddbmodule.c: In function `bsddb_length': ./bsddbmodule.c:232: structure has no member named `seq' ./bsddbmodule.c:233: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:235: structure has no member named `seq' ./bsddbmodule.c:236: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:229: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_subscript': ./bsddbmodule.c:265: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:265: too few arguments to function ./bsddbmodule.c: In function `bsddb_ass_sub': ./bsddbmodule.c:307: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:307: too few arguments to function ./bsddbmodule.c:330: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:330: too few arguments to function ./bsddbmodule.c: In function `bsddb_close': ./bsddbmodule.c:357: too few arguments to function ./bsddbmodule.c: In function `bsddb_keys': ./bsddbmodule.c:386: structure has no member named `seq' ./bsddbmodule.c:386: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:407: structure has no member named `seq' ./bsddbmodule.c:407: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:376: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_has_key': ./bsddbmodule.c:440: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:440: too few arguments to function ./bsddbmodule.c: In function `bsddb_set_location': ./bsddbmodule.c:466: structure has no member named `seq' ./bsddbmodule.c:466: `R_CURSOR' undeclared (first use in this function) ./bsddbmodule.c:453: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_seq': ./bsddbmodule.c:503: structure has no member named `seq' ./bsddbmodule.c:489: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_next': ./bsddbmodule.c:531: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_previous': ./bsddbmodule.c:536: `R_PREV' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_first': ./bsddbmodule.c:541: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_last': ./bsddbmodule.c:546: `R_LAST' undeclared (first use in this function) make[1]: *** [bsddbmodule.o] Error 1 
---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-26 00:02 Message: Logged In: YES user_id=21627 I have asked you to report things, not to change things. It is not easy to repeat for me, as I don't have Redhat 7.1. ---------------------------------------------------------------------- Comment By: Ole H. Nielsen (ohnielse) Date: 2001-06-25 15:20 Message: Logged In: YES user_id=27232 loewis wrote: > Please report the following things: > - the line in Setup that you activated to enable > compilation of bsddb > - the exact version of the bsddb RPM package that provides > db.h > - whether or not this packages includes a file db_185.h Sorry, I didn't change ANYTHING ! I was trying a vanilla build on RedHat 7.1 ! Should be easy to repeat... Maybe the build scripts make some incorrect choices on RedHat 7.1 ? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 13:58 Message: Logged In: YES user_id=21627 Please report the following things: - the line in Setup that you activated to enable compilation of bsddb - the exact version of the bsddb RPM package that provides db.h - whether or not this packages includes a file db_185.h ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 From noreply@sourceforge.net Tue Jun 26 20:11:08 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 12:11:08 -0700 Subject: [Python-bugs-list] [ python-Bugs-429357 ] non-greedy regexp duplicating match bug Message-ID: Bugs item #429357, was opened at 2001-06-01 09:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 Category: Regular Expressions Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Mueller (donut) >Assigned to: Fredrik Lundh (effbot) Summary: non-greedy regexp duplicating match bug Initial Comment: I found some weird bug, where when a non-greedy match doesn't match anything, it will duplicate the rest of the string instead of being None. #pyrebug.py: import re urlrebug=re.compile(""" (.*?):// #scheme ( (.*?) #user (?: :(.*) #pass )? @)? (.*?) #addr (?::([0-9]+))? #port (/.*)?$ #path """, re.VERBOSE) testbad='foo://bah:81/pth' print urlrebug.match(testbad).groups() Bug Output: >python2.1 pyrebug.py ('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth') >python-cvs pyrebug.py ('foo', None, 'bah:81/pth', None, 'bah', '81', '/pth') Good (expected) Output: >python1.5 pyrebug.py ('foo', None, None, None, 'bah', '81', '/pth') ---------------------------------------------------------------------- Comment By: Matthew Mueller (donut) Date: 2001-06-14 00:59 Message: Logged In: YES user_id=65253 I think I understand what you are saying, and in the context of the test, it doesn't seem too bad. BUT, my original code (and what I'd like to have) did not have the surrounding group. So I'd just get: ('foo', 'bah:81/pth', None, 'bah', '81', '/pth') Knowing the general ease of messing up regexs when writing them, I'm sure you can image the pain I went through before actually realizing it was a python bug :) ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-06-13 10:12 Message: Logged In: NO What's happening makes sense, on one level. 
When the regex engine gets to the user:pass@ part ((.*?)(?::(.*))?@)? which fill groups 2, 3, and 4, the .*? of group 3 has to try at every character in the rest of the string before admitting overall defeat. In doing that, the last time that group 3 successfully completely locally, it has the rest of the string matched. Of course, overall, group three is enclosed within group 2, and when group two couldn't complete successfully, the engine knows it can skip group two (due to the ? modifying it), so it totally bails on groups 2, 3 and 4 to continue with the rest of the expression. What you'd like to happen is when that "bailing" happens for group 2, the enclosing groups 3 and 4 would get zereoed out (since they didn't participate in the *final* overall match). That makes sense, and is what I would expect to happen. However, what *is* happening is that group 3 is keeping the string that *it* last matched (even thought that last match didn't contribute to the final, overall match). I'm not explaining this well -- I hope you can understand it despite that. Sorry. Jeffrey ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=429357&group_id=5470 From noreply@sourceforge.net Tue Jun 26 20:21:14 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 12:21:14 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was opened at 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: Fredrik Lundh (effbot) Date: 2001-06-26 12:21 Message: Logged In: YES user_id=38376 in the current CVS codebase, there's a new (experimental) define in Include/unicodeobject.h: #undef USE_UCS4_STORAGE if this is defined, Py_UNICODE will be set to the same thing as Py_UCS4 (usually unsigned int or unsigned long). currently, basic unicode functions and SRE works just fine with this setting, but some other modules (including the UTF-16 codec) may not work (yet). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 06:17 Message: Logged In: YES user_id=38388 Of course, you could declare Py_UNICODE as "unsigned int" and then store Unicode characters in e.g. 4 bytes each on platforms which don't have a 16-bit integer type. 
The reason for being picky about the 16 bits is that we chose UTF-16 as internal data storage format and that format defines the byte stream in terms of entities which have 2 bytes for each character. This format provides the best low-level integration with other Unicode storage formats such as wchar_t on Windows. That's why I would like to keep this compatibility if at all possible. I am not sure, but I think that sre also makes the 2-byte assumption internally in some places. A simple test for this would be to define Py_UNICODE as unsigned long and then run the regression suite... ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-18 05:38 Message: Logged In: YES user_id=6380 Huh? That depends on how ch is declared, and what kind of data is in the array. If it's an array of Py_UNICODE elements, and ch is declared as "Py_UNICODE *ch;", then ch++ will do the right thing (increment it by one Py_UNICODE unit). Now, the one thing you can NOT assume is that if you read external 16-bit data into a character buffer, that the Unicode characters correspond to Py_UNICODE characters -- perhaps this is what you're after? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 01:19 Message: Logged In: YES user_id=38388 Ok, I agree that the math will probably work in most cases due to the fact that UTF-16 will never produce values outside the 16-bit range, but you still have the problem with iterating over Py_UNICODE arrays: the compiler will assume that ch++ means to move the pointer by sizeof(Py_UNICODE) bytes and this breaks in case you use e.g. a 32-bit integer type for Py_UNICODE. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:05 Message: Logged In: YES user_id=31435 The code snippet there will work fine with any integral type >= 2 bytes if you just add the line ch &= 0xffff; between the computation and the "if". It will actually work fine even if you *don't* put in that mask, but deducing that required analysis of the specific operations (you shift 4 bits left 12, 6 bits left 6 so they don't overlap with the first chunk and so the "+" can't cause a carry, and then add another chunk of non- overlapping 6 bits, so again there's no carry, and therefore the infinite-precision result fits in no more than 16 bits, and so there's no need to mask). About pointers, I don't see a problem there either, unless you're casting a Py_UNICODE* to a char* then adding a hardcoded 2. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. 
Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. 
There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Tue Jun 26 21:05:24 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 13:05:24 -0700 Subject: [Python-bugs-list] [ python-Bugs-405227 ] sizeof(Py_UNICODE)==2 ???? Message-ID: Bugs item #405227, was opened at 2001-03-01 11:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 Category: Unicode Group: Platform-specific Status: Open Resolution: Postponed Priority: 5 Submitted By: Jon Saenz (jsaenz) Assigned to: M.-A. Lemburg (lemburg) Summary: sizeof(Py_UNICODE)==2 ???? Initial Comment: We are trying to install Python 2.0 in a Cray T3E. After a painful process of removing several modules which produce some errors (mmap, sha, md5), we get core dumps when we run python because under this platform, there does not exist a 16-bit numeric type. Unsigned short is 4 bytes long. 
We have finally defined unicode objects as unsigned short, despite they are 4 bytes long, and we have changed a sentence in Objects/unicodeobject.c to: if (sizeof(Py_UNICODE)!=sizeof(unsigned short){ It compiles and runs now, but the test on regular expressions crashes and the whole regression test does, too. Support of Unicode for this platform is not correct in version 2.0 of Python. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-26 13:05 Message: Logged In: YES user_id=31435 Thank you, /F -- excellent news! ---------------------------------------------------------------------- Comment By: Fredrik Lundh (effbot) Date: 2001-06-26 12:21 Message: Logged In: YES user_id=38376 in the current CVS codebase, there's a new (experimental) define in Include/unicodeobject.h: #undef USE_UCS4_STORAGE if this is defined, Py_UNICODE will be set to the same thing as Py_UCS4 (usually unsigned int or unsigned long). currently, basic unicode functions and SRE works just fine with this setting, but some other modules (including the UTF-16 codec) may not work (yet). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 06:17 Message: Logged In: YES user_id=38388 Of course, you could declare Py_UNICODE as "unsigned int" and then store Unicode characters in e.g. 4 bytes each on platforms which don't have a 16-bit integer type. The reason for being picky about the 16 bits is that we chose UTF-16 as internal data storage format and that format defines the byte stream in terms of entities which have 2 bytes for each character. This format provides the best low-level integration with other Unicode storage formats such as wchar_t on Windows. That's why I would like to keep this compatibility if at all possible. I am not sure, but I think that sre also makes the 2-byte assumption internally in some places. A simple test for this would be to define Py_UNICODE as unsigned long and then run the regression suite... ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-18 05:38 Message: Logged In: YES user_id=6380 Huh? That depends on how ch is declared, and what kind of data is in the array. If it's an array of Py_UNICODE elements, and ch is declared as "Py_UNICODE *ch;", then ch++ will do the right thing (increment it by one Py_UNICODE unit). Now, the one thing you can NOT assume is that if you read external 16-bit data into a character buffer, that the Unicode characters correspond to Py_UNICODE characters -- perhaps this is what you're after? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-18 01:19 Message: Logged In: YES user_id=38388 Ok, I agree that the math will probably work in most cases due to the fact that UTF-16 will never produce values outside the 16-bit range, but you still have the problem with iterating over Py_UNICODE arrays: the compiler will assume that ch++ means to move the pointer by sizeof(Py_UNICODE) bytes and this breaks in case you use e.g. a 32-bit integer type for Py_UNICODE. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 14:05 Message: Logged In: YES user_id=31435 The code snippet there will work fine with any integral type >= 2 bytes if you just add the line ch &= 0xffff; between the computation and the "if". 
It will actually work fine even if you *don't* put in that mask, but deducing that required analysis of the specific operations (you shift 4 bits left 12, 6 bits left 6 so they don't overlap with the first chunk and so the "+" can't cause a carry, and then add another chunk of non- overlapping 6 bits, so again there's no carry, and therefore the infinite-precision result fits in no more than 16 bits, and so there's no need to mask). About pointers, I don't see a problem there either, unless you're casting a Py_UNICODE* to a char* then adding a hardcoded 2. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 12:57 Message: Logged In: YES user_id=38388 The codecs are full of things like: ch = ((s[0] & 0x0f) << 12) + ((s[1] & 0x3f) << 6) + (s[2] & 0x3f); if (ch < 0x800 || (ch >= 0xd800 && ch < 0xe000)) { errmsg = "illegal encoding"; goto utf8Error; } where ch is a Py_UNICODE character. The other "problem" is that pointer dereferencing is used a lot in the code (using arrays of Py_UNICODE chars). We could probably shift the calculations to Py_UCS4 integers and then only do the data buffer access with Py_UNICODE which would then be mapped to a a 2-char-array to get the data buffer layout right. Still, I think this is low priority. Patches are welcome of course :-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-17 12:44 Message: Logged In: YES user_id=31435 Point me to one of the calculations that's thought to be a problem, and happy to suggest something (I didn't find one on my own, but I'm not familiar with the details here). BTW, I reopened this because we got another report of T3E woes on c.l.py that day. You certainly need at least 16 bits, but it's hard to see how having more than that could be a genuine problem -- at worst "this kind of thing" usually requires no more than masking with 0xffff at the end. That can be hidden in a macro that's a nop on platforms that don't need it, if micro-efficiency is a concern. Often even that isn't needed. For example, binascii_crc32 absolutely must compute a 32-bit checksum, but works fine on platforms with 8-byte longs. The only "trick" needed to make that work was to compute the complement via crc ^ 0xFFFFFFFFUL instead of via ~crc ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-06-17 11:47 Message: Logged In: YES user_id=38388 It may be a design error, but getting this right for all platforms is hard and by choosing the 16-bit type we managed to handle 95% of all platforms in a fast and reliable way. Any idea how we could "emulate" a 16-bit integer type ? We need the integer type because we do calculcations on the values. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-06-13 22:28 Message: Logged In: YES user_id=31435 I opened this again. It's simply unacceptable to require that the platform have a 2-byte integer type. That doesn't mean it's easy to fix, but it's a design error all the same. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-03-16 11:27 Message: Logged In: YES user_id=38388 The current Unicode implementation needs Py_UNICODE to be a 16-bit entity and so does SRE. 
To get this to work on the Cray, you could try to use a 2-char struct which is then cast to a short in all those places which assume a 16-bit number representation. Simply using a 4-byte entity as basis will not work, since the fact that Py_UNICODE fits into 2 bytes is hard-coded into the implementation in a number of places. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-03-01 15:29 Message: Logged In: YES user_id=31435 Notes: + Python was ported to T3E last year, IIRC by Marc Poinot. May want to track him down. + Python's Unicode support doesn't rely on any platform Unicode support. Whether it's "useless" depends on the user, not the platform. + Face it : Crays are the only platforms that don't have a native 16-bit integer type. + Even so, I believe at least SRE is happy to work with 32- bit Unicode (glibc's wchar_t is 4 bytes, IIRC), so that much was likely a shallow problem. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:09 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Jon Saenz (jsaenz) Date: 2001-03-01 15:08 Message: Logged In: YES user_id=12122 We have finally given up to install Python in the Cray T3E due to its lack of support of shared objects. This causes difficulties in the loading of different external libraries (Numeric, Lapack, and so on) because of the static linking. In any case, we still think that this "bug" should be repaired. There may be other platforms which: 1) Do not support Unicode, so that the Unicode feature of Python is useless in these cases. 2) The users may be interested in using Python in them (for Numeric applications, for instance) 3) May not have a 16-bit native numerical type. Under these circunstances, the current version of Python can not be used. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. 
(fdrake) Date: 2001-03-01 14:05 Message: Logged In: YES user_id=3066 Marc-Andre, can you deal with the general Unicode issues here and then pass this along to Fredrik for SRE updates? (Or better yet, work in parallel?) Thanks! ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=405227&group_id=5470 From noreply@sourceforge.net Tue Jun 26 22:39:21 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 14:39:21 -0700 Subject: [Python-bugs-list] [ python-Bugs-436525 ] Wrong macro name Message-ID: Bugs item #436525, was opened at 2001-06-26 14:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436525&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Greg Kochanski (gpk) Assigned to: Nobody/Anonymous (nobody) Summary: Wrong macro name Initial Comment: 8.1 Thread State and the Global Interpreter Lock ( http://www.python.org/doc/current/api/threads.html ) refers to macros Py_BEGIN_BLOCK_THREADS and Py_BEGIN_UNBLOCK_THREADS . These do not exist. The correct names are Py_BLOCK_THREADS and Py_UNBLOCK_THREADS. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436525&group_id=5470 From noreply@sourceforge.net Tue Jun 26 22:52:18 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 14:52:18 -0700 Subject: [Python-bugs-list] [ python-Bugs-435455 ] Python 2.0.1c1 fails to build on RH7.1 Message-ID: Bugs item #435455, was opened at 2001-06-22 06:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Ole H. Nielsen (ohnielse) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.0.1c1 fails to build on RH7.1 Initial Comment: Building Python 2.0.1c1 on a RedHat 7.1 (2.4.2-2 on i586) fails at this point: cd Modules; make OPT="-g -O2 -Wall -Wstrict-prototypes -fPIC" VERSION="2.0" \ prefix="/usr/local" exec_prefix="/usr/local" \ sharedmods make[1]: Entering directory `/scratch/ohnielse/Python-2.0.1/Modules' gcc -fpic -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./bsddbmodule.c ./bsddbmodule.c: In function `newdbhashobject': ./bsddbmodule.c:55: `HASHINFO' undeclared (first use in this function) ./bsddbmodule.c:55: (Each undeclared identifier is reported only once ./bsddbmodule.c:55: for each function it appears in.) 
./bsddbmodule.c:55: parse error before `info' ./bsddbmodule.c:60: `info' undeclared (first use in this function) ./bsddbmodule.c:71: warning: implicit declaration of function `dbopen' ./bsddbmodule.c:71: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbbtobject': ./bsddbmodule.c:100: `BTREEINFO' undeclared (first use in this function) ./bsddbmodule.c:100: parse error before `info' ./bsddbmodule.c:105: `info' undeclared (first use in this function) ./bsddbmodule.c:118: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `newdbrnobject': ./bsddbmodule.c:147: `RECNOINFO' undeclared (first use in this function) ./bsddbmodule.c:147: parse error before `info' ./bsddbmodule.c:152: `info' undeclared (first use in this function) ./bsddbmodule.c:164: warning: assignment makes pointer from integer without a cast ./bsddbmodule.c: In function `bsddb_dealloc': ./bsddbmodule.c:202: too few arguments to function ./bsddbmodule.c: In function `bsddb_length': ./bsddbmodule.c:232: structure has no member named `seq' ./bsddbmodule.c:233: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:235: structure has no member named `seq' ./bsddbmodule.c:236: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:229: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_subscript': ./bsddbmodule.c:265: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:265: too few arguments to function ./bsddbmodule.c: In function `bsddb_ass_sub': ./bsddbmodule.c:307: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:307: too few arguments to function ./bsddbmodule.c:330: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:330: too few arguments to function ./bsddbmodule.c: In function `bsddb_close': ./bsddbmodule.c:357: too few arguments to function ./bsddbmodule.c: In function `bsddb_keys': ./bsddbmodule.c:386: structure has no member named `seq' ./bsddbmodule.c:386: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c:407: structure has no member named `seq' ./bsddbmodule.c:407: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c:376: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_has_key': ./bsddbmodule.c:440: warning: passing arg 2 of pointer to function from incompatible pointer type ./bsddbmodule.c:440: too few arguments to function ./bsddbmodule.c: In function `bsddb_set_location': ./bsddbmodule.c:466: structure has no member named `seq' ./bsddbmodule.c:466: `R_CURSOR' undeclared (first use in this function) ./bsddbmodule.c:453: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_seq': ./bsddbmodule.c:503: structure has no member named `seq' ./bsddbmodule.c:489: warning: `status' might be used uninitialized in this function ./bsddbmodule.c: In function `bsddb_next': ./bsddbmodule.c:531: `R_NEXT' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_previous': ./bsddbmodule.c:536: `R_PREV' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_first': ./bsddbmodule.c:541: `R_FIRST' undeclared (first use in this function) ./bsddbmodule.c: In function `bsddb_last': ./bsddbmodule.c:546: `R_LAST' undeclared (first use in this function) make[1]: *** [bsddbmodule.o] Error 1 
---------------------------------------------------------------------- >Comment By: Ole H. Nielsen (ohnielse) Date: 2001-06-26 14:52 Message: Logged In: YES user_id=27232 loewis wrote: > I have asked you to report things, not to change things. > It is not easy to repeat for me, as I don't have Redhat 7.1. Sorry about the confusion. My report is for RedHat 7.1: 1. Untar the distribution 2.0.1c1 2. ./configure 3. make The errors reported occur. I have changed nothing. I don't know what bsddb is, nor why it's getting compiled. Hopefully, someone knowing Python 2.0.1 and having access to a RedHat 7.1 box could look into the problem. At this point Python building seems broken on RH7.1 :-( ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-26 00:02 Message: Logged In: YES user_id=21627 I have asked you to report things, not to change things. It is not easy to repeat for me, as I don't have Redhat 7.1. ---------------------------------------------------------------------- Comment By: Ole H. Nielsen (ohnielse) Date: 2001-06-25 15:20 Message: Logged In: YES user_id=27232 loewis wrote: > Please report the following things: > - the line in Setup that you activated to enable > compilation of bsddb > - the exact version of the bsddb RPM package that provides > db.h > - whether or not this packages includes a file db_185.h Sorry, I didn't change ANYTHING ! I was trying a vanilla build on RedHat 7.1 ! Should be easy to repeat... Maybe the build scripts make some incorrect choices on RedHat 7.1 ? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 13:58 Message: Logged In: YES user_id=21627 Please report the following things: - the line in Setup that you activated to enable compilation of bsddb - the exact version of the bsddb RPM package that provides db.h - whether or not this packages includes a file db_185.h ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435455&group_id=5470 From noreply@sourceforge.net Tue Jun 26 22:59:16 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 14:59:16 -0700 Subject: [Python-bugs-list] [ python-Bugs-435596 ] Fork/Thread problems on FreeBSD Message-ID: Bugs item #435596, was opened at 2001-06-22 15:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435596&group_id=5470 Category: Threads Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Fork/Thread problems on FreeBSD Initial Comment: Run this code on both Linux and FreeBSD. On Linux you get a continuous stream of *'s. On FreeBSD you get 1. FreeBSD is wrong. import thread, os, sys, time def run(): while 1: if os.fork() == 0: time.sleep(0.001) sys.stderr.write('*') sys.stderr.flush() sys.exit(0) break os.wait() thread.start_new_thread(run, ()) while 1: time.sleep(0.001) pass I ran into this problem when trying to use Popen3 to run a system call from Zope. The fork in Popen3 never gets to the execvp. It works fine on Linux. I believe the problem in the above code is caused by the same issue. ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2001-06-26 14:59 Message: Logged In: YES user_id=21627 As an additional comment, can you please verify what shared libraries are loaded at the time of the crash? If both the threaded and the non-threaded libc are loaded, problems will occur quite naturally. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-23 14:02 Message: Logged In: YES user_id=21627 Why do you think this is a bug in Python? Can you determine whether the thread is started, and whether the fork returns for the parent? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=435596&group_id=5470 From noreply@sourceforge.net Tue Jun 26 23:01:58 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 15:01:58 -0700 Subject: [Python-bugs-list] [ python-Bugs-436103 ] Compiling pygtk Message-ID: Bugs item #436103, was opened at 2001-06-25 07:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436103&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Compiling pygtk Initial Comment: Hello, I wanted to install Narval from (www.logilab.org) I install Python 2.1 and i try to install pygtk. And i get this error Like i am a newbie it's perhaps nothing from python narval@tst03cn:~/install/pygtk-0.6.6$ make make all-recursive make[1]: Entering directory `/home/narval/install/pygtk-0.6.6' Making all in generate make[2]: Entering directory `/home/narval/install/pygtk-0.6.6/generate' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/narval/install/pygtk-0.6.6/generate' Making all in pyglade make[2]: Entering directory `/home/narval/install/pygtk-0.6.6/pyglade' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/narval/install/pygtk-0.6.6/pyglade' make[2]: Entering directory `/home/narval/install/pygtk-0.6.6' cd . && /usr/bin/python mkgtk.py 'import site' failed; use -v for traceback Traceback (innermost last): File "mkgtk.py", line 5, in ? import generate File "./generate/generate.py", line 1, in ? import os File "/home/narval/lib/python2.1/os.py", line 37 return [n for n in dir(module) if n[0] != '_'] ^ SyntaxError: invalid syntax make[2]: *** [gtkmodule_defs.c] Error 1 make[2]: Leaving directory `/home/narval/install/pygtk-0.6.6' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/narval/install/pygtk-0.6.6' make: *** [all-recursive-am] Error 2 ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-26 15:01 Message: Logged In: YES user_id=21627 You should be using Python 2.1 to execute this code. As you can see, /usr/bin/python mkgtk.py is invoked, which probably is not 2.1. Make sure your Python installation is found before /usr/bin/python is. 
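A quick diagnostic sketch (paths taken from the report above) to see which interpreter the Makefile's /usr/bin/python actually is:

import sys
print sys.version   # list comprehensions need 2.0 or newer; pygtk here wants 2.1
print sys.prefix    # which installation this interpreter belongs to

Run that through /usr/bin/python; if it reports something older, arrange for the 2.1 interpreter installed under /home/narval to be found on PATH before /usr/bin/python, as suggested above.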
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436103&group_id=5470 From noreply@sourceforge.net Tue Jun 26 23:07:47 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 15:07:47 -0700 Subject: [Python-bugs-list] [ python-Bugs-436130 ] solaris2.6 problems with readline Message-ID: Bugs item #436130, was opened at 2001-06-25 10:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436130&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Stenberg (fredriks) Assigned to: Nobody/Anonymous (nobody) Summary: solaris2.6 problems with readline Initial Comment: having problem with compiling python2.0.1 2.0 (i think i always had this problem after 1.5.2) on solaris 2.6 gcc -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./readline.c ./readline.c: In function `setup_readline': ./readline.c:414: `CPPFunction' undeclared (first use in this function) ./readline.c:414: (Each undeclared identifier is reported only once ./readline.c:414: for each function it appears in.) ./readline.c:414: parse error before `)' *** Error code 1 I have always used to exchange Modules/readline.c with the old file from the 1.5.2 release. I finally got around to checking whats wrong, (or atleast browse around the code). readline.c Line 414 in void setup_readline states, rl_attempted_completion_function = (CPPFunction *)flex_complete; should this not be; rl_attempted_completion_function = (Function *)flex_complete; I have no problems if i change CPPfunction into Function, i'm no readline expert but i think this is the problem. *sysinfo* gcc 2.95.2 solaris 2.6 readline4.1 ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2001-06-26 15:07 Message: Logged In: YES user_id=21627 CPPFunction is defined in readline 4.2, so one solution would be to update to 4.2. The real type of this variable is rl_completion_func_t. So if this typedef is already available in 4.1, we should probably change the cast to rl_completion_func_t. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436130&group_id=5470 From noreply@sourceforge.net Wed Jun 27 03:10:57 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 19:10:57 -0700 Subject: [Python-bugs-list] [ python-Bugs-436596 ] re.findall() bad with third argument Message-ID: Bugs item #436596, was opened at 2001-06-26 19:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436596&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: re.findall() bad with third argument Initial Comment: On Wed, 27 Jun 2001, Dan Tropp wrote: > I tried these in my python shell. Why do the last two give what they do? 
> > >>> print re.findall('<.*?>',' ') > ['', '', '', ''] > >>> print re.findall('<.*?>','<1> \n<3> ') > ['<1>', '', '<3>', ''] > >>> print re.findall('<.*?>','<1> \n<3> ', re.I|re.S) > [] > >>> print re.findall('<.*?>','<1> \n<3> ', re.I) > ['', '<3>', ''] Now this is curious, because according to the documentation at: http://python.org/doc/current/lib/Contents_of_Module_re .html re.findall() is only supposed to take in two arguments. In fact, in Python 1.52, Python complains that: ### # in Python 1.52: >> print re.findall('<.*?>','<1> \n<3> ', re.I) Traceback (innermost last): File "", line 1, in ? TypeError: too many arguments; expected 2, got 3 ## Let me check if the same behavior happens in 2.1: ### # in Python 2.1 >>> re.findall('<.*?>','<1> \n<3> ', re.I) ['', '<3>', ''] ### Now that is weird! This looks like it might be a bug. Let's take a look at the source code, to see why it's doing that. ### ## source code in sre.py def findall(pattern, string, maxsplit=0): """Return a list of all non-overlapping matches in the string. If one or more groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group. Empty matches are included in the result.""" return _compile(pattern, 0).findall(string, maxsplit) ### Weird! findall() in its current incarnation does take in a third argument, contrary to the HTML documentation. But this makes no sense to me. Why should findall need a maxsplit parameter, when maxsplit is something that the split()ing operator works with? This really looks like a bug to me. Hmmm... well, the definition to findall() is adjacent to split(), so perhaps someone made a mistake and accidently added maxsplit as an argument. I believe that the corrected code in sre.py should be: ### def findall(pattern, string): """Return a list of all non-overlapping matches in the string. If one or more groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group. Empty matches are included in the result.""" return _compile(pattern, 0).findall(string) ### instead. Ever since June 1, 2000, the findall() code in sre.py has contained this weird behavior: http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Lib/sre.py? rev=1.5&content-type=text/vnd.viewcvs-markup and even in the current development sources, it still has it! http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Lib/sre.py? rev=1.25.2.1&content-type=text/vnd.viewcvs-markup Dan, I think we should report this to the Implementors and see what they think about it. Good catch! *grin* Do you want to submit this to sourceforge? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436596&group_id=5470 From noreply@sourceforge.net Wed Jun 27 07:39:48 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Jun 2001 23:39:48 -0700 Subject: [Python-bugs-list] [ python-Bugs-436621 ] sgmllib tag/attrib regexpr too strict? Message-ID: Bugs item #436621, was opened at 2001-06-26 23:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436621&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dustin Boswell (boswell) Assigned to: Nobody/Anonymous (nobody) Summary: sgmllib tag/attrib regexpr too strict? 
Initial Comment: 1) I've seen tags like blah which the SGMLParser will not find correctly. I'm guessing it has to do with the reg-expr for tagfind: tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9]*') Does the spec allow for _ ? Even if it doesn't, maybe tagfind should be changed... tagfind ?= re.compile('[a-zA-Z][-.a-zA-Z0-9_]*') 2) I've seen attributes with backquotes ` in them. where key has the value val```junk`` Currently, attrfind (the regular expression for such things) is attrfind = re.compile( ... r'\s*([a-zA-Z_][-.a-zA-Z_0-9]*) ... (\s*=\s*'r'(\'[^\']*\'|"[^"]*"| ... [-a-zA-Z0-9./:;+*%?!&$\(\)_#=~]*))?') Would it hurt to add ` to long list of characters that are already there? Netscape seems to allow them. Thoughts? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436621&group_id=5470 From noreply@sourceforge.net Wed Jun 27 12:10:19 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 04:10:19 -0700 Subject: [Python-bugs-list] [ python-Bugs-436130 ] solaris2.6 problems with readline Message-ID: Bugs item #436130, was opened at 2001-06-25 10:18 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436130&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Fredrik Stenberg (fredriks) Assigned to: Nobody/Anonymous (nobody) Summary: solaris2.6 problems with readline Initial Comment: having problem with compiling python2.0.1 2.0 (i think i always had this problem after 1.5.2) on solaris 2.6 gcc -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. -DHAVE_CONFIG_H -c ./readline.c ./readline.c: In function `setup_readline': ./readline.c:414: `CPPFunction' undeclared (first use in this function) ./readline.c:414: (Each undeclared identifier is reported only once ./readline.c:414: for each function it appears in.) ./readline.c:414: parse error before `)' *** Error code 1 I have always used to exchange Modules/readline.c with the old file from the 1.5.2 release. I finally got around to checking whats wrong, (or atleast browse around the code). readline.c Line 414 in void setup_readline states, rl_attempted_completion_function = (CPPFunction *)flex_complete; should this not be; rl_attempted_completion_function = (Function *)flex_complete; I have no problems if i change CPPfunction into Function, i'm no readline expert but i think this is the problem. *sysinfo* gcc 2.95.2 solaris 2.6 readline4.1 ---------------------------------------------------------------------- >Comment By: Fredrik Stenberg (fredriks) Date: 2001-06-27 04:10 Message: Logged In: YES user_id=5299 I tried it on solaris2.8 later that night, I installed readline 4.2 also (no problems with the installetion, all testprograms worked fine) But python Module/readline.c refused to compile once again. I could (as I always have) copy the old readline.c from 1.5.2 and get it to work..... Is it only me? /fredriks gcc -I/tmp/su96-fst/include -g -O2 -Wall -Wstrict-prototypes -fPIC -I./../Include -I.. 
-DHAVE_CONFIG_H -c ./readline.c In file included from /tmp/su96-fst/include/readline/keymaps.h:37, from /tmp/su96-fst/include/readline/readline.h:36, from ./readline.c:28: /tmp/su96-fst/include/readline/rltypedefs.h:35: warning: function declaration isn't a prototype /tmp/su96-fst/include/readline/rltypedefs.h:36: warning: function declaration isn't a prototype /tmp/su96-fst/include/readline/rltypedefs.h:37: warning: function declaration isn't a prototype /tmp/su96-fst/include/readline/rltypedefs.h:38: warning: function declaration isn't a prototype In file included from ./readline.c:28: /tmp/su96-fst/include/readline/readline.h:350: warning: function declaration isn't a prototype ./readline.c:31: conflicting types for `rl_read_init_file' /tmp/su96-fst/include/readline/readline.h:303: previous declaration of `rl_read_init_file' ./readline.c:32: conflicting types for `rl_insert_text' /tmp/su96-fst/include/readline/readline.h:363: previous declaration of `rl_insert_text' ./readline.c: In function `set_completer_delims': ./readline.c:227: warning: passing arg 1 of `free' discards qualifiers from pointer target type ./readline.c: In function `flex_complete': ./readline.c:399: warning: implicit declaration of function `completion_matches' ./readline.c:399: warning: return makes pointer from integer without a cast *** Error code 1 make: Fatal error: Command failed for target `readline.o' Current working directory /tmp/su96-fst/Python-2.0.1/Modules *** Error code 1 make: Fatal error: Command failed for target `Modules' ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-26 15:07 Message: Logged In: YES user_id=21627 CPPFunction is defined in readline 4.2, so one solution would be to update to 4.2. The real type of this variable is rl_completion_func_t. So if this typedef is already available in 4.1, we should probably change the cast to rl_completion_func_t. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436130&group_id=5470 From noreply@sourceforge.net Wed Jun 27 15:03:44 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 07:03:44 -0700 Subject: [Python-bugs-list] [ python-Bugs-436732 ] dinstall.py does not record path file Message-ID: Bugs item #436732, was opened at 2001-06-27 07:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436732&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jon Nelson (jnelson) Assigned to: Nobody/Anonymous (nobody) Summary: dinstall.py does not record path file Initial Comment: install.py does not record in INSTALLED_FILES when it creates the .pth file which is created when extra_path is used. Included is a patch: --- install.py.orig Wed Jun 27 08:55:39 2001 +++ install.py Wed Jun 27 08:56:30 2001 @@ -489,6 +489,9 @@ # write list of installed files, if requested. 
if self.record: outputs = self.get_outputs() + if self.path_file and self.install_path_file: + outputs.append(os.path.join(self.install_libbase, + self.path_file + ".pth")) if self.root: # strip any package prefix root_len = len(self.root) for counter in xrange(len(outputs)): ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436732&group_id=5470 From noreply@sourceforge.net Wed Jun 27 15:05:01 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 07:05:01 -0700 Subject: [Python-bugs-list] [ python-Bugs-436732 ] install.py does not record path file Message-ID: Bugs item #436732, was opened at 2001-06-27 07:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436732&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jon Nelson (jnelson) Assigned to: Nobody/Anonymous (nobody) >Summary: install.py does not record path file Initial Comment: install.py does not record in INSTALLED_FILES when it creates the .pth file which is created when extra_path is used. Included is a patch: --- install.py.orig Wed Jun 27 08:55:39 2001 +++ install.py Wed Jun 27 08:56:30 2001 @@ -489,6 +489,9 @@ # write list of installed files, if requested. if self.record: outputs = self.get_outputs() + if self.path_file and self.install_path_file: + outputs.append(os.path.join(self.install_libbase, + self.path_file + ".pth")) if self.root: # strip any package prefix root_len = len(self.root) for counter in xrange(len(outputs)): ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436732&group_id=5470 From noreply@sourceforge.net Wed Jun 27 16:30:50 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 08:30:50 -0700 Subject: [Python-bugs-list] [ python-Bugs-436596 ] re.findall() bad with third argument Message-ID: Bugs item #436596, was opened at 2001-06-26 19:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436596&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: re.findall() bad with third argument Initial Comment: On Wed, 27 Jun 2001, Dan Tropp wrote: > I tried these in my python shell. Why do the last two give what they do? > > >>> print re.findall('<.*?>',' ') > ['', '', '', ''] > >>> print re.findall('<.*?>','<1> \n<3> ') > ['<1>', '', '<3>', ''] > >>> print re.findall('<.*?>','<1> \n<3> ', re.I|re.S) > [] > >>> print re.findall('<.*?>','<1> \n<3> ', re.I) > ['', '<3>', ''] Now this is curious, because according to the documentation at: http://python.org/doc/current/lib/Contents_of_Module_re .html re.findall() is only supposed to take in two arguments. In fact, in Python 1.52, Python complains that: ### # in Python 1.52: >> print re.findall('<.*?>','<1> \n<3> ', re.I) Traceback (innermost last): File "", line 1, in ? TypeError: too many arguments; expected 2, got 3 ## Let me check if the same behavior happens in 2.1: ### # in Python 2.1 >>> re.findall('<.*?>','<1> \n<3> ', re.I) ['', '<3>', ''] ### Now that is weird! This looks like it might be a bug. Let's take a look at the source code, to see why it's doing that. 
### ## source code in sre.py def findall(pattern, string, maxsplit=0): """Return a list of all non-overlapping matches in the string. If one or more groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group. Empty matches are included in the result.""" return _compile(pattern, 0).findall(string, maxsplit) ### Weird! findall() in its current incarnation does take in a third argument, contrary to the HTML documentation. But this makes no sense to me. Why should findall need a maxsplit parameter, when maxsplit is something that the split()ing operator works with? This really looks like a bug to me. Hmmm... well, the definition to findall() is adjacent to split(), so perhaps someone made a mistake and accidently added maxsplit as an argument. I believe that the corrected code in sre.py should be: ### def findall(pattern, string): """Return a list of all non-overlapping matches in the string. If one or more groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group. Empty matches are included in the result.""" return _compile(pattern, 0).findall(string) ### instead. Ever since June 1, 2000, the findall() code in sre.py has contained this weird behavior: http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Lib/sre.py? rev=1.5&content-type=text/vnd.viewcvs-markup and even in the current development sources, it still has it! http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Lib/sre.py? rev=1.25.2.1&content-type=text/vnd.viewcvs-markup Dan, I think we should report this to the Implementors and see what they think about it. Good catch! *grin* Do you want to submit this to sourceforge? ---------------------------------------------------------------------- Comment By: Danny Yoo (dyoo) Date: 2001-06-27 08:30 Message: Logged In: YES user_id=49843 More details here: http://mail.python.org/pipermail/tutor/2001-June/006891.html ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436596&group_id=5470 From noreply@sourceforge.net Wed Jun 27 17:10:59 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 09:10:59 -0700 Subject: [Python-bugs-list] [ python-Bugs-436757 ] popen parameters backword? Message-ID: Bugs item #436757, was opened at 2001-06-27 09:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436757&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: popen parameters backword? Initial Comment: popen documentation seems to have a backwards parameter: from /python2.1/lib/os-newstreams.html : popen2(cmd[, bufsize[, mode]]) however: >>> import os >>> f = os.popen2('ls /', 1500, 'r') Traceback (most recent call last): File "", line 1, in ? 
File "/usr/local/lib/python2.0/os.py", line 462, in popen2 stdout, stdin = popen2.popen2(cmd, bufsize) File "/usr/local/lib/python2.0/popen2.py", line 141, in popen2 inst = Popen3(cmd, 0, bufsize) File "/usr/local/lib/python2.0/popen2.py", line 46, in __init__ self.tochild = os.fdopen(p2cwrite, 'w', bufsize) TypeError: an integer is required popen2, 3 and 4 all are like this ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436757&group_id=5470 From noreply@sourceforge.net Wed Jun 27 17:24:44 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 09:24:44 -0700 Subject: [Python-bugs-list] [ python-Bugs-436757 ] popen parameters backword? Message-ID: Bugs item #436757, was opened at 2001-06-27 09:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436757&group_id=5470 Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: popen parameters backword? Initial Comment: popen documentation seems to have a backwards parameter: from /python2.1/lib/os-newstreams.html : popen2(cmd[, bufsize[, mode]]) however: >>> import os >>> f = os.popen2('ls /', 1500, 'r') Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.0/os.py", line 462, in popen2 stdout, stdin = popen2.popen2(cmd, bufsize) File "/usr/local/lib/python2.0/popen2.py", line 141, in popen2 inst = Popen3(cmd, 0, bufsize) File "/usr/local/lib/python2.0/popen2.py", line 46, in __init__ self.tochild = os.fdopen(p2cwrite, 'w', bufsize) TypeError: an integer is required popen2, 3 and 4 all are like this ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-27 09:24 Message: Logged In: YES user_id=3066 Already fixed in CVS; the fix will be included in both the Python 2.1.1 bugfix release (expected in the next few weeks) and in Python 2.2. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436757&group_id=5470 From noreply@sourceforge.net Wed Jun 27 20:23:09 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 12:23:09 -0700 Subject: [Python-bugs-list] [ python-Bugs-436058 ] _PyTrace_Init needs a prototype Message-ID: Bugs item #436058, was opened at 2001-06-25 03:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436058&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: _PyTrace_Init needs a prototype Initial Comment: _PyTrace_Init() needs a declaration in an include file somewhere. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-06-27 12:23 Message: Logged In: YES user_id=3066 _PyTrace_Init() was removed in Python/ceval.c revision 2.259. A new function, trace_init(), replaces it, but is static in Python/sysmodule.c (revision 2.88). ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. 
(fdrake) Date: 2001-06-25 09:18 Message: Logged In: YES user_id=3066 _PyTrace_Init() will be removed as a side-effect of the new profiler interface I'm working on, which I only got word that I could talk about this morning. ;) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436058&group_id=5470 From noreply@sourceforge.net Thu Jun 28 05:04:15 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Jun 2001 21:04:15 -0700 Subject: [Python-bugs-list] [ python-Bugs-436948 ] cPickle.loads(): Insecure string pickle Message-ID: Bugs item #436948, was opened at 2001-06-27 21:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436948&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: wah meng wong (r32813) Assigned to: Nobody/Anonymous (nobody) Summary: cPickle.loads(): Insecure string pickle Initial Comment: Python Version = python 1.5.2 on AIX 4.3.3.0 Module used = cPickle I encountered the ValueError: Insecure string pickle problem when I wanted to unpickle a pickled data that I query from database. I guess there is a data corruption problem to the data string but I don't know what it is. Appreciate if someone can tell me what could cause this problem. I am not sure if this is related to the unicode new line character issue where it will break the loads() function as reported by someone else in the buglist. BTW, the data string that I tried to unpickle is 65535 bytes in size. Is that too big? I have attach the file containing the problematic data. With this data I will be able to reproduce the problem. Appreciate your helps! Regards, Wah Meng ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436948&group_id=5470 From noreply@sourceforge.net Thu Jun 28 12:19:29 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 04:19:29 -0700 Subject: [Python-bugs-list] [ python-Bugs-437041 ] strfime %Z isn't an RFC 822 timezone Message-ID: Bugs item #437041, was opened at 2001-06-28 04:19 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437041&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Carey Evans (carey) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: strfime %Z isn't an RFC 822 timezone Initial Comment: The section in the library reference manual for the time module says, under strftime: """Here is an example, a format for dates compatible with that specified in the RFC 822 Internet email standard.""" And goes on to use %Z with localtime(). However, %Z for me returns "NZST" and may return a full description under other OSes. RFC 822 only lists a few abbreviations as valid, and NZST isn't one of them. In addition, RFC 822 has now been obsoleted by RFC 2822, which deprecates the use of abbreviations for time zones. To generate an RFC 2822 date string, you can either use gmtime(): strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime()) or do a bit of math: t = localtime() dst = t[8] offs = (timezone, timezone, altzone)[1 + dst] zstr = "%+.2d%.2d" % (offs / -3600, abs(offs / 60) % 60) print strftime("%a, %d %b %Y %H:%M:%S ", t) + zstr Also note that these only work if the LC_TIME locale category hasn't been set to a non-English locale. 
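(For reference, the reporter's second recipe above can be written as a small self-contained helper. This is only a sketch under the assumptions stated in the comment -- Python 2.x, where time.timezone and time.altzone are seconds west of UTC, and a default English LC_TIME locale; the name rfc2822_now is invented for illustration.)

import time

def rfc2822_now():
    # Format the current local time as an RFC 2822 date, computing the
    # numeric zone offset from time.timezone/time.altzone as described above.
    t = time.localtime()
    dst = t[8]  # 1 if DST is in effect, 0 if not
    offs = (time.timezone, time.timezone, time.altzone)[1 + dst]
    zstr = "%+.2d%.2d" % (offs / -3600, abs(offs / 60) % 60)
    return time.strftime("%a, %d %b %Y %H:%M:%S ", t) + zstr

print rfc2822_now()  # e.g. "Thu, 28 Jun 2001 23:19:29 +1200"
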
Maybe "%Y-%m-%d %H:%M:%S" would be a better example, for an ISO8601 formatted time? On a positive note, RFC 2822 defines a year as four digits, so the footnote could be updated. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437041&group_id=5470 From noreply@sourceforge.net Thu Jun 28 13:28:41 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 05:28:41 -0700 Subject: [Python-bugs-list] [ python-Bugs-436948 ] cPickle.loads(): Insecure string pickle Message-ID: Bugs item #436948, was opened at 2001-06-27 21:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436948&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: wah meng wong (r32813) Assigned to: Nobody/Anonymous (nobody) Summary: cPickle.loads(): Insecure string pickle Initial Comment: Python Version = python 1.5.2 on AIX 4.3.3.0 Module used = cPickle I encountered the ValueError: Insecure string pickle problem when I wanted to unpickle a pickled data that I query from database. I guess there is a data corruption problem to the data string but I don't know what it is. Appreciate if someone can tell me what could cause this problem. I am not sure if this is related to the unicode new line character issue where it will break the loads() function as reported by someone else in the buglist. BTW, the data string that I tried to unpickle is 65535 bytes in size. Is that too big? I have attach the file containing the problematic data. With this data I will be able to reproduce the problem. Appreciate your helps! Regards, Wah Meng ---------------------------------------------------------------------- >Comment By: wah meng wong (r32813) Date: 2001-06-28 05:28 Message: Logged In: YES user_id=216234 I would like to cancel this question as I found out that the data was really 'corrupted' because it wasn't the complete string that I inserted into the table. The problem was due to there was a limitation on the Oracle where only the first 64k bytes of data if returned if one queries a column with long as datatype. My data was exceeding 64k thus the returned value from such SQL wasn't the complete data that I inserted. Sorry for this silly mistake. There is no bug on the cPickle. :). Thanks anyway to whom have at least read my question and intended to reply... Regards, Wah Meng ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436948&group_id=5470 From noreply@sourceforge.net Thu Jun 28 17:31:11 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 09:31:11 -0700 Subject: [Python-bugs-list] [ python-Bugs-436948 ] cPickle.loads(): Insecure string pickle Message-ID: Bugs item #436948, was opened at 2001-06-27 21:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436948&group_id=5470 >Category: Extension Modules >Group: Not a Bug >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: wah meng wong (r32813) Assigned to: Nobody/Anonymous (nobody) Summary: cPickle.loads(): Insecure string pickle Initial Comment: Python Version = python 1.5.2 on AIX 4.3.3.0 Module used = cPickle I encountered the ValueError: Insecure string pickle problem when I wanted to unpickle a pickled data that I query from database. 
I guess there is a data corruption problem to the data string but I don't know what it is. Appreciate if someone can tell me what could cause this problem. I am not sure if this is related to the unicode new line character issue where it will break the loads() function as reported by someone else in the buglist. BTW, the data string that I tried to unpickle is 65535 bytes in size. Is that too big? I have attach the file containing the problematic data. With this data I will be able to reproduce the problem. Appreciate your helps! Regards, Wah Meng ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-28 09:31 Message: Logged In: YES user_id=31435 Thanks for the followup! Closing as requested. You would have gotten a reply "eventually", but looking into problems nobody has seen before is sometimes a low priority. FYI, the "insecure" exception in cPickle is raised for things like strings with unbalanced quotes, implying that they could not possibly have been *created* by cPickle. Truncated data is a thoroughly believable cause for that. ---------------------------------------------------------------------- Comment By: wah meng wong (r32813) Date: 2001-06-28 05:28 Message: Logged In: YES user_id=216234 I would like to cancel this question as I found out that the data was really 'corrupted' because it wasn't the complete string that I inserted into the table. The problem was due to there was a limitation on the Oracle where only the first 64k bytes of data if returned if one queries a column with long as datatype. My data was exceeding 64k thus the returned value from such SQL wasn't the complete data that I inserted. Sorry for this silly mistake. There is no bug on the cPickle. :). Thanks anyway to whom have at least read my question and intended to reply... Regards, Wah Meng ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436948&group_id=5470 From noreply@sourceforge.net Thu Jun 28 19:20:02 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 11:20:02 -0700 Subject: [Python-bugs-list] [ python-Bugs-437152 ] compiling source code fails on aix 4.3.1 Message-ID: Bugs item #437152, was opened at 2001-06-28 11:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437152&group_id=5470 Category: Installation Group: None Status: Open Resolution: None Priority: 5 Submitted By: birgit kellner (birgitk) Assigned to: Nobody/Anonymous (nobody) Summary: compiling source code fails on aix 4.3.1 Initial Comment: python version 2.1, to be installed on an ibm rs/6000 r40, running aix 4.31 (apache 1.3.12). 
configure runs fine, but make stops with the following error: "fatal error in /usr/lpp/xlC/exe/xlCcode: signal 24 received the error code from the last command is 251" ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437152&group_id=5470 From noreply@sourceforge.net Thu Jun 28 19:41:38 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 11:41:38 -0700 Subject: [Python-bugs-list] [ python-Bugs-437158 ] null char in string processing Message-ID: Bugs item #437158, was opened at 2001-06-28 11:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Vadim Suvorov (xxx-bad) Assigned to: Nobody/Anonymous (nobody) Summary: null char in string processing Initial Comment: The following program was excuted with different results in several environments: Windows ME: s = 8 < straaaaaa > Windows NT: expected result s = 8 < str > Sun Solaris 8: s = 8 < str > In all cases, the length and contents of file "s" was as expected, equal to s string. s = "str\0\0\0\0\0" print "s = ", len(s), "<" + s + ">" print "<", str(s), ">" f = open("s", "wb") f.write(s) f.close() ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 From noreply@sourceforge.net Thu Jun 28 19:56:42 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 11:56:42 -0700 Subject: [Python-bugs-list] [ python-Bugs-437158 ] null char in string processing Message-ID: Bugs item #437158, was opened at 2001-06-28 11:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Vadim Suvorov (xxx-bad) Assigned to: Nobody/Anonymous (nobody) Summary: null char in string processing Initial Comment: The following program was excuted with different results in several environments: Windows ME: s = 8 < straaaaaa > Windows NT: expected result s = 8 < str > Sun Solaris 8: s = 8 < str > In all cases, the length and contents of file "s" was as expected, equal to s string. s = "str\0\0\0\0\0" print "s = ", len(s), "<" + s + ">" print "<", str(s), ">" f = open("s", "wb") f.write(s) f.close() ---------------------------------------------------------------------- >Comment By: Vadim Suvorov (xxx-bad) Date: 2001-06-28 11:56 Message: Logged In: YES user_id=85081 Oops. Forgot: Python v. 
2.1 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 From noreply@sourceforge.net Thu Jun 28 20:57:06 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 12:57:06 -0700 Subject: [Python-bugs-list] [ python-Bugs-437158 ] null char in string processing Message-ID: Bugs item #437158, was opened at 2001-06-28 11:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Vadim Suvorov (xxx-bad) Assigned to: Nobody/Anonymous (nobody) Summary: null char in string processing Initial Comment: The following program was excuted with different results in several environments: Windows ME: s = 8 < straaaaaa > Windows NT: expected result s = 8 < str > Sun Solaris 8: s = 8 < str > In all cases, the length and contents of file "s" was as expected, equal to s string. s = "str\0\0\0\0\0" print "s = ", len(s), "<" + s + ">" print "<", str(s), ">" f = open("s", "wb") f.write(s) f.close() ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2001-06-28 12:57 Message: Logged In: YES user_id=31435 What does this have to do with Python? That is, Python has no control over how your terminal displays non-printable characters. ---------------------------------------------------------------------- Comment By: Vadim Suvorov (xxx-bad) Date: 2001-06-28 11:56 Message: Logged In: YES user_id=85081 Oops. Forgot: Python v. 2.1 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 From noreply@sourceforge.net Fri Jun 29 07:47:20 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Jun 2001 23:47:20 -0700 Subject: [Python-bugs-list] [ python-Bugs-231249 ] cgi.py opens too many (temporary) files Message-ID: Bugs item #231249, was opened at 2001-02-06 04:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Richard van de Stadt (stadt) Assigned to: Guido van Rossum (gvanrossum) Summary: cgi.py opens too many (temporary) files Initial Comment: cgi.FieldStorage() is used to get the contents of a webform. It turns out that for each line, a new temporary file is opened. This causes the script that is using cgi.FieldStorage() to reach the webserver's limit of number of opened files, as described by 'ulimit -n'. The standard value for Solaris systems seems to be 64, so webforms with that many fields cannot be dealt with. A solution would seem to use the same temporary filename, since only a maxmimum one file is (temporarily) used at the same time. I did an "ls|wc -l" while the script was running, which showed only zeroes and ones. (I'm using Python for CyberChair, an online paper submission and reviewing system. The webform under discussion has one input field for each reviewer, stating the papers he or she is supposed to be reviewing. One conference that is using CyberChair has almost 140 reviewers. Their system's open file limit is 64. Using the same data on a system with an open file limit of 260 _is_ able to deal with this.) 
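(As an aside to the report above: on Unix systems where the hard descriptor limit is higher than the soft one, a process can raise its own limit before parsing the form. This is only a stopgap sketch of the "ulimit -n" idea, not the fix discussed in the comments below, and the resource module is not available on every platform.)

import cgi

try:
    import resource
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft < hard:
        # Raise this process's open-file limit up to the hard limit.
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
except (ImportError, ValueError):
    pass  # no resource module, or the limit could not be changed

form = cgi.FieldStorage()  # now less likely to run out of file descriptors on large forms
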
---------------------------------------------------------------------- Comment By: Richard Jones (richard) Date: 2001-06-28 23:47 Message: Logged In: YES user_id=6405 Sorry for leaving this so long, but I wanted to say that I tried hacking a solution myself and gave up after it got too coplex. I have ended up adopting the solution in the patch here, and it's all working fine! ---------------------------------------------------------------------- Comment By: douglas bagnall (dbagnall) Date: 2001-06-09 15:06 Message: Logged In: YES user_id=107204 This has been causing me trouble too, on various machines. The patch from 2001-04-12 08:20 fixed the problem, but since then I haven't been able to upload files bigger than about 1k. I will try using 2.1 before I investigate that tho. Guido mentioned another more complicated, less likable, patch on 2001-04-13, which doesn't seem to have been uploaded. Or do I just not know where to look? ---------------------------------------------------------------------- Comment By: Richard Jones (richard) Date: 2001-06-07 22:19 Message: Logged In: YES user_id=6405 I've just encountered this bug myself on Mac OS X. The default number for "ulimit -n" is 256, so you can imagine that it's a little worrying that I ran out :) As has been discussed, the multipart/form-data sumission sends a sub-part for every form name=value pair. I ran into the bug in cgi.py because I have a select list with >256 options - which I selected all entries in. This tips me over the 256 open file limit. I have two half-baked alternative suggestions for a solution: 1. use a single tempfile, opened when the multipart parsing is started. That tempfile may then be passed to the child FieldStorage instances and used by the parse_single calls. The child instances just keep track of their index and length in the tempfile. 2. use StringIO for parts of type "text/plain" and use a tempfile for all the rest. This has the problem that someone could cut-paste a core image into a text field though. I might have a crack at a patch for approach #1 this weekend... ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 21:04 Message: Logged In: YES user_id=149084 The patch posted 11 Apr is a neat and compact solution! The only thing I can imagine would be a problem would be if a form had a large number of (small) fields which set the content-length attribute. I don't have an example of such, though. Text fields perhaps? If that was a realistic problem, a solution might be for make_file() to maintain a pool of temporary files; if the field (binary or not) turned out to be small a StringIO could be created and the temporary file returned to the pool. There are a couple of things I've been thinking about in cgi.py; the patch doesn't seem to change the situation one way or the other: There doesn't seem to be any RFC requirement that a file upload be accompanied by a content-length attribute, regardless of whether it is binary or ascii. In fact, some of the RFC examples I've seen omit it. If content-length is not specified, the upload will be processed by file.readline(). Can this cause problems for arbitrary binary files? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-12 11:59 Message: Logged In: YES user_id=6380 Uploading a new patch, more complicated. I don't like it as much. But it works even if the caller uses item.file.fileno(). 
---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 10:05 Message: Logged In: YES user_id=149084 I have a thought on this, but it will be about 10 hours before I can submit it. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-11 13:20 Message: Logged In: YES user_id=6380 Here's a proposed patch. Can anyone think of a reason why this should not be checked in as part of 2.1? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 11:54 Message: Logged In: YES user_id=6380 I wish I'd heard about this sooner. It does seem a problem and it does make sense to use StringIO unless there's a lot of data. But we can't fix this in time for 2.1... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-04-10 10:54 Message: Logged In: YES user_id=11375 Unassigning so someone else can take a look at it. ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-02-18 23:32 Message: In the particular HTML form referenced it appears that a workaround might be to eliminate the enctype attribute in the tag and take the application/x-www-form-urlencoded default since no files are being uploaded. When make_file creates the temporary files they are immediately unlinked. There is probably a brief period before the unlink is finalized during which the ls process might see a file; that would account for the output of ls | wc. It appears that the current cgi.py implementation leaves all the (hundreds of) files open until the cgi process releases the FieldStorage object or exits. My first thought was, if the filename recovered from the header is None have make_file create a StringIO object instead of a temp file. That way a temp file is only created when a file is uploaded. This is not inconsistent with the cgi.py docs. Unfortunately, RFC2388 4.4 states that a filename is not required to be sent, so it looks like your solution based on the size of the data received is the correct one. Below 1K you could copy the temp file contents to a StringIO and assign it to self.file, then explicitly close the temp file via its descriptor. If only I understood the module better ::-(( and had a way of tunnel testing it I might have had the temerity to offer a patch. (I'm away for a couple of weeks starting tomorrow.) ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-18 14:08 Message: Ah, I see; the traceback makes this much clearer. When you're uploading a file, everything in the form is sent as a MIME document in the body; every field is accompanied by a boundary separator and Content-Disposition header. In multipart mode, cgi.py copies each field into a temporary file. The first idea I had was to only use tempfiles for the actual upload field; unfortunately, that doesn't help because the upload field isn't special, and cgi.py has no way to know which it is ahead of time. Possible second approach: measure the size of the resulting file; if it's less than some threshold (1K? 10K?), read its contents into memory and close the tempfile. This means only the largest fields will require that a file descriptor be kept open. I'll explore this more after beta1. 
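(The "StringIO when no filename was sent" idea floated by kbk above can be sketched as a small subclass. This is only an illustration of that suggestion, not the patch that was eventually checked in, and the class name is invented; as kbk notes, RFC 2388 does not require a filename, so file-like parts sent without one would also end up in memory.)

import cgi
from StringIO import StringIO

class LowFDFieldStorage(cgi.FieldStorage):
    def make_file(self, binary=None):
        if self.filename is None:
            # Ordinary form field: keep it in memory, no file descriptor used.
            return StringIO()
        # Looks like a real upload: fall back to the stock temporary file.
        return cgi.FieldStorage.make_file(self, binary)
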
---------------------------------------------------------------------- Comment By: Richard van de Stadt (stadt) Date: 2001-02-17 18:37 Message: I do *not* mean file upload fields. I stumbled upon this with a webform that contains 141 'simple' input fields like the form you can see here (which 'only' contains 31 of those input fields): http://www.cyberchair.org/cgi-cyb/genAssignPageReviewerPapers.py (use chair/chair to login) When the maximum number of file descriptors used per process was increased to 160 (by the sysadmins), the problem did not occur anymore, and the webform could be processed. This was the error message I got: Traceback (most recent call last): File "/usr/local/etc/httpd/DocumentRoot/ICML2001/cgi-bin/submitAssignRP.py", line 144, in main File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 504, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 593, in read_multi File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 506, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 603, in read_single File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 623, in read_lines File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 713, in make_file File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/tempfile.py", line 144, in TemporaryFile OSError: [Errno 24] Too many open files: '/home/yara/brodley/icml2001/tmp/@26048.61' I understand why you assume that it would concern *file* uploads, but this is not the case. As I reported before, it turns out that for each 'simple' field a temporary file is used in to transfer the contents to the script that uses the cgi.FieldStorage() method, even if no files are being uploaded. The problem is not that too many files are open at the same time (which is 1 at most). It is the *amount* of files that is causing the troubles. If the same temporary file would be used, this problem would probably not have happened. My colleague Fred Gansevles wrote a possible solution, but mentioned that this might introduce the need for protection against a 'symlink attack' (whatever that may be). This solution(?) concentrates on the open file descriptor's problem, while Fred suggests a redesign of FieldStorage() would probably be better. 
import os, tempfile AANTAL = 50 class TemporaryFile: def __init__(self): self.name = tempfile.mktemp("") open(self.name, 'w').close() self.offset = 0 def seek(self, offset): self.offset = offset def read(self): fd = open(self.name, 'w+b', -1) fd.seek(self.offset) data = fd.read() self.offset = fd.tell() fd.close() return data def write(self, data): fd = open(self.name, 'w+b', -1) fd.seek(self.offset) fd.write(data) self.offset = fd.tell() fd.close() def __del__(self): os.unlink(self.name) def add_fd(l, n) : map(lambda x,l=l: l.append(open('/dev/null')), range(n)) def add_tmp(l, n) : map(lambda x,l=l: l.append(TemporaryFile()), range(n)) def main (): import getopt, sys try: import resource soft, hard = resource.getrlimit (resource.RLIMIT_NOFILE) resource.setrlimit (resource.RLIMIT_NOFILE, (hard, hard)) except ImportError: soft, hard = 64, 1024 opts, args = getopt.getopt(sys.argv[1:], 'n:t') aantal = AANTAL tmp = add_fd for o, a in opts: if o == '-n': aantal = int(a) elif o == '-t': tmp = add_tmp print "aantal te gebruiken fd's:", aantal #dutch; English: 'number of fds to be used' print 'tmp:', tmp.func_name tmp_files = [] files=[] tmp(tmp_files, aantal) try: add_fd(files,hard) except IOError: pass print "aantal vrije gebruiken fd's:", len(files) #enlish: 'number of free fds' main() Running the above code: python ulimit.py [-n number] [-t] default number = 50, while using 'real' fd-s for temporary files. When using the '-t' flag 'smart' temporary files are used. Output: $ python ulimit.py aantal te gebruiken fd's: 50 tmp: add_fd aantal vrije gebruiken fd's: 970 $ python ulimit.py -t aantal te gebruiken fd's: 50 tmp: add_tmp aantal vrije gebruiken fd's: 1020 $ python ulimit.py -n 1000 aantal te gebruiken fd's: 1000 tmp: add_fd aantal vrije gebruiken fd's: 20 $ python ulimit.py -n 1000 -t aantal te gebruiken fd's: 1000 tmp: add_tmp aantal vrije gebruiken fd's: 1020 ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-16 21:41 Message: I assume you mean 64 file upload fields, right? Can you provide a small test program that triggers the problem? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 From noreply@sourceforge.net Fri Jun 29 13:39:14 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 29 Jun 2001 05:39:14 -0700 Subject: [Python-bugs-list] [ python-Bugs-233084 ] nis.match('username', 'aliases') does not work under Linux Message-ID: Bugs item #233084, was opened at 2001-02-19 07:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=233084&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: nis.match('username', 'aliases') does not work under Linux Initial Comment: The exception 'nis.error: No such key in map' is thrown when issuing >>> nis.match('username', 'aliases') under SuSE-Linux 6.4 and 7.0 with both Python 2.0 and Python 2.1a2, even if 'username' is valid and $ ypmatch username aliases works. 
Fix: Apply the following patch to Modules/nismodule.c --- nismodule.c.sv Mon Feb 19 16:12:10 2001 +++ nismodule.c Mon Feb 19 16:15:28 2001 @@ -43,7 +43,7 @@ {"hosts", "hosts.byname", 0}, {"protocols", "protocols.bynumber", 0}, {"services", "services.byname", 0}, - {"aliases", "mail.aliases", 1}, /* created with 'makedbm -a' */ + {"aliases", "mail.aliases", 0}, /* created with 'makedbm -a' */ {"ethers", "ethers.byname", 0}, {0L, 0L, 0} }; ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-06-29 05:39 Message: Logged In: NO The fix (or the Python 1.5.2 behavior) is correct for such platforms as RedHat Linux 6.2 (NIS server BSDI 2.1) and an unknown FreeBSD version. The need for the "fix" flag is likely dependent on the server, not the client, and may very well need to be guessed at runtime. A related bug causes a segmentation violation in memcpy() for 'nis.cat("aliases")' when a zero-length key or value appears in the map, because (unsigned)-1 is passed as the length argument. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-02-23 11:52 Message: You are all going down on this one ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-02-19 13:33 Message: Can anyone confirm this bug for other platforms? How about the fix? I don't have any access a network that uses NIS these days. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=233084&group_id=5470 From noreply@sourceforge.net Fri Jun 29 14:06:39 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 29 Jun 2001 06:06:39 -0700 Subject: [Python-bugs-list] [ python-Bugs-231249 ] cgi.py opens too many (temporary) files Message-ID: Bugs item #231249, was opened at 2001-02-06 04:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Richard van de Stadt (stadt) Assigned to: Guido van Rossum (gvanrossum) Summary: cgi.py opens too many (temporary) files Initial Comment: cgi.FieldStorage() is used to get the contents of a webform. It turns out that for each line, a new temporary file is opened. This causes the script that is using cgi.FieldStorage() to reach the webserver's limit of number of opened files, as described by 'ulimit -n'. The standard value for Solaris systems seems to be 64, so webforms with that many fields cannot be dealt with. A solution would seem to use the same temporary filename, since only a maxmimum one file is (temporarily) used at the same time. I did an "ls|wc -l" while the script was running, which showed only zeroes and ones. (I'm using Python for CyberChair, an online paper submission and reviewing system. The webform under discussion has one input field for each reviewer, stating the papers he or she is supposed to be reviewing. One conference that is using CyberChair has almost 140 reviewers. Their system's open file limit is 64. Using the same data on a system with an open file limit of 260 _is_ able to deal with this.) ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2001-06-29 06:06 Message: Logged In: YES user_id=6380 Thanks for reminding me of this patch. 
I've finally checked it in, so I can close the bug report! ---------------------------------------------------------------------- Comment By: Richard Jones (richard) Date: 2001-06-28 23:47 Message: Logged In: YES user_id=6405 Sorry for leaving this so long, but I wanted to say that I tried hacking a solution myself and gave up after it got too coplex. I have ended up adopting the solution in the patch here, and it's all working fine! ---------------------------------------------------------------------- Comment By: douglas bagnall (dbagnall) Date: 2001-06-09 15:06 Message: Logged In: YES user_id=107204 This has been causing me trouble too, on various machines. The patch from 2001-04-12 08:20 fixed the problem, but since then I haven't been able to upload files bigger than about 1k. I will try using 2.1 before I investigate that tho. Guido mentioned another more complicated, less likable, patch on 2001-04-13, which doesn't seem to have been uploaded. Or do I just not know where to look? ---------------------------------------------------------------------- Comment By: Richard Jones (richard) Date: 2001-06-07 22:19 Message: Logged In: YES user_id=6405 I've just encountered this bug myself on Mac OS X. The default number for "ulimit -n" is 256, so you can imagine that it's a little worrying that I ran out :) As has been discussed, the multipart/form-data sumission sends a sub-part for every form name=value pair. I ran into the bug in cgi.py because I have a select list with >256 options - which I selected all entries in. This tips me over the 256 open file limit. I have two half-baked alternative suggestions for a solution: 1. use a single tempfile, opened when the multipart parsing is started. That tempfile may then be passed to the child FieldStorage instances and used by the parse_single calls. The child instances just keep track of their index and length in the tempfile. 2. use StringIO for parts of type "text/plain" and use a tempfile for all the rest. This has the problem that someone could cut-paste a core image into a text field though. I might have a crack at a patch for approach #1 this weekend... ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 21:04 Message: Logged In: YES user_id=149084 The patch posted 11 Apr is a neat and compact solution! The only thing I can imagine would be a problem would be if a form had a large number of (small) fields which set the content-length attribute. I don't have an example of such, though. Text fields perhaps? If that was a realistic problem, a solution might be for make_file() to maintain a pool of temporary files; if the field (binary or not) turned out to be small a StringIO could be created and the temporary file returned to the pool. There are a couple of things I've been thinking about in cgi.py; the patch doesn't seem to change the situation one way or the other: There doesn't seem to be any RFC requirement that a file upload be accompanied by a content-length attribute, regardless of whether it is binary or ascii. In fact, some of the RFC examples I've seen omit it. If content-length is not specified, the upload will be processed by file.readline(). Can this cause problems for arbitrary binary files? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-12 11:59 Message: Logged In: YES user_id=6380 Uploading a new patch, more complicated. I don't like it as much. 
But it works even if the caller uses item.file.fileno(). ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-04-12 10:05 Message: Logged In: YES user_id=149084 I have a thought on this, but it will be about 10 hours before I can submit it. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-11 13:20 Message: Logged In: YES user_id=6380 Here's a proposed patch. Can anyone think of a reason why this should not be checked in as part of 2.1? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 11:54 Message: Logged In: YES user_id=6380 I wish I'd heard about this sooner. It does seem a problem and it does make sense to use StringIO unless there's a lot of data. But we can't fix this in time for 2.1... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-04-10 10:54 Message: Logged In: YES user_id=11375 Unassigning so someone else can take a look at it. ---------------------------------------------------------------------- Comment By: Kurt B. Kaiser (kbk) Date: 2001-02-18 23:32 Message: In the particular HTML form referenced it appears that a workaround might be to eliminate the enctype attribute in the tag and take the application/x-www-form-urlencoded default since no files are being uploaded. When make_file creates the temporary files they are immediately unlinked. There is probably a brief period before the unlink is finalized during which the ls process might see a file; that would account for the output of ls | wc. It appears that the current cgi.py implementation leaves all the (hundreds of) files open until the cgi process releases the FieldStorage object or exits. My first thought was, if the filename recovered from the header is None have make_file create a StringIO object instead of a temp file. That way a temp file is only created when a file is uploaded. This is not inconsistent with the cgi.py docs. Unfortunately, RFC2388 4.4 states that a filename is not required to be sent, so it looks like your solution based on the size of the data received is the correct one. Below 1K you could copy the temp file contents to a StringIO and assign it to self.file, then explicitly close the temp file via its descriptor. If only I understood the module better ::-(( and had a way of tunnel testing it I might have had the temerity to offer a patch. (I'm away for a couple of weeks starting tomorrow.) ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-18 14:08 Message: Ah, I see; the traceback makes this much clearer. When you're uploading a file, everything in the form is sent as a MIME document in the body; every field is accompanied by a boundary separator and Content-Disposition header. In multipart mode, cgi.py copies each field into a temporary file. The first idea I had was to only use tempfiles for the actual upload field; unfortunately, that doesn't help because the upload field isn't special, and cgi.py has no way to know which it is ahead of time. Possible second approach: measure the size of the resulting file; if it's less than some threshold (1K? 10K?), read its contents into memory and close the tempfile. This means only the largest fields will require that a file descriptor be kept open. I'll explore this more after beta1. 
---------------------------------------------------------------------- Comment By: Richard van de Stadt (stadt) Date: 2001-02-17 18:37 Message: I do *not* mean file upload fields. I stumbled upon this with a webform that contains 141 'simple' input fields like the form you can see here (which 'only' contains 31 of those input fields): http://www.cyberchair.org/cgi-cyb/genAssignPageReviewerPapers.py (use chair/chair to login) When the maximum number of file descriptors used per process was increased to 160 (by the sysadmins), the problem did not occur anymore, and the webform could be processed. This was the error message I got: Traceback (most recent call last): File "/usr/local/etc/httpd/DocumentRoot/ICML2001/cgi-bin/submitAssignRP.py", line 144, in main File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 504, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 593, in read_multi File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 506, in __init__ File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 603, in read_single File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 623, in read_lines File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/cgi.py", line 713, in make_file File "/opt/python/2.0/sparc-sunos5.6/lib/python2.0/tempfile.py", line 144, in TemporaryFile OSError: [Errno 24] Too many open files: '/home/yara/brodley/icml2001/tmp/@26048.61' I understand why you assume that it would concern *file* uploads, but this is not the case. As I reported before, it turns out that for each 'simple' field a temporary file is used in to transfer the contents to the script that uses the cgi.FieldStorage() method, even if no files are being uploaded. The problem is not that too many files are open at the same time (which is 1 at most). It is the *amount* of files that is causing the troubles. If the same temporary file would be used, this problem would probably not have happened. My colleague Fred Gansevles wrote a possible solution, but mentioned that this might introduce the need for protection against a 'symlink attack' (whatever that may be). This solution(?) concentrates on the open file descriptor's problem, while Fred suggests a redesign of FieldStorage() would probably be better. 
import os, tempfile AANTAL = 50 class TemporaryFile: def __init__(self): self.name = tempfile.mktemp("") open(self.name, 'w').close() self.offset = 0 def seek(self, offset): self.offset = offset def read(self): fd = open(self.name, 'w+b', -1) fd.seek(self.offset) data = fd.read() self.offset = fd.tell() fd.close() return data def write(self, data): fd = open(self.name, 'w+b', -1) fd.seek(self.offset) fd.write(data) self.offset = fd.tell() fd.close() def __del__(self): os.unlink(self.name) def add_fd(l, n) : map(lambda x,l=l: l.append(open('/dev/null')), range(n)) def add_tmp(l, n) : map(lambda x,l=l: l.append(TemporaryFile()), range(n)) def main (): import getopt, sys try: import resource soft, hard = resource.getrlimit (resource.RLIMIT_NOFILE) resource.setrlimit (resource.RLIMIT_NOFILE, (hard, hard)) except ImportError: soft, hard = 64, 1024 opts, args = getopt.getopt(sys.argv[1:], 'n:t') aantal = AANTAL tmp = add_fd for o, a in opts: if o == '-n': aantal = int(a) elif o == '-t': tmp = add_tmp print "aantal te gebruiken fd's:", aantal #dutch; English: 'number of fds to be used' print 'tmp:', tmp.func_name tmp_files = [] files=[] tmp(tmp_files, aantal) try: add_fd(files,hard) except IOError: pass print "aantal vrije gebruiken fd's:", len(files) #enlish: 'number of free fds' main() Running the above code: python ulimit.py [-n number] [-t] default number = 50, while using 'real' fd-s for temporary files. When using the '-t' flag 'smart' temporary files are used. Output: $ python ulimit.py aantal te gebruiken fd's: 50 tmp: add_fd aantal vrije gebruiken fd's: 970 $ python ulimit.py -t aantal te gebruiken fd's: 50 tmp: add_tmp aantal vrije gebruiken fd's: 1020 $ python ulimit.py -n 1000 aantal te gebruiken fd's: 1000 tmp: add_fd aantal vrije gebruiken fd's: 20 $ python ulimit.py -n 1000 -t aantal te gebruiken fd's: 1000 tmp: add_tmp aantal vrije gebruiken fd's: 1020 ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-02-16 21:41 Message: I assume you mean 64 file upload fields, right? Can you provide a small test program that triggers the problem? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=231249&group_id=5470 From noreply@sourceforge.net Fri Jun 29 15:47:12 2001 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 29 Jun 2001 07:47:12 -0700 Subject: [Python-bugs-list] [ python-Bugs-437158 ] null char in string processing Message-ID: Bugs item #437158, was opened at 2001-06-28 11:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Vadim Suvorov (xxx-bad) Assigned to: Nobody/Anonymous (nobody) Summary: null char in string processing Initial Comment: The following program was excuted with different results in several environments: Windows ME: s = 8 < straaaaaa > Windows NT: expected result s = 8 < str > Sun Solaris 8: s = 8 < str > In all cases, the length and contents of file "s" was as expected, equal to s string. s = "str\0\0\0\0\0" print "s = ", len(s), "<" + s + ">" print "<", str(s), ">" f = open("s", "wb") f.write(s) f.close() ---------------------------------------------------------------------- >Comment By: Vadim Suvorov (xxx-bad) Date: 2001-06-29 07:47 Message: Logged In: YES user_id=85081 I was able to confirm your opinion. 
This is the effect of the terminal - I was thrown off by "a" being substituted for the unprintable characters. Sorry, and thank you.

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2001-06-28 12:57

Message:
Logged In: YES 
user_id=31435

What does this have to do with Python? That is, Python has no control over how your terminal displays non-printable characters.

----------------------------------------------------------------------

Comment By: Vadim Suvorov (xxx-bad)
Date: 2001-06-28 11:56

Message:
Logged In: YES 
user_id=85081

Oops. Forgot: Python v. 2.1

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470

From noreply@sourceforge.net Fri Jun 29 16:43:30 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 08:43:30 -0700
Subject: [Python-bugs-list] [ python-Bugs-437041 ] strfime %Z isn't an RFC 822 timezone
Message-ID: 

Bugs item #437041, was opened at 2001-06-28 04:19
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437041&group_id=5470

Category: Documentation
Group: None
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Carey Evans (carey)
Assigned to: Fred L. Drake, Jr. (fdrake)
Summary: strfime %Z isn't an RFC 822 timezone

Initial Comment:
The section in the library reference manual for the time module says, under strftime:

"""Here is an example, a format for dates compatible with that specified in the RFC 822 Internet email standard."""

and goes on to use %Z with localtime(). However, %Z for me returns "NZST", and may return a full description under other OSes. RFC 822 only lists a few abbreviations as valid, and NZST isn't one of them. In addition, RFC 822 has now been obsoleted by RFC 2822, which deprecates the use of abbreviations for time zones.

To generate an RFC 2822 date string, you can either use gmtime():

strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())

or do a bit of math:

t = localtime()
dst = t[8]
offs = (timezone, timezone, altzone)[1 + dst]
zstr = "%+.2d%.2d" % (offs / -3600, abs(offs / 60) % 60)
print strftime("%a, %d %b %Y %H:%M:%S ", t) + zstr

Also note that these only work if the LC_TIME locale category hasn't been set to a non-English locale.

Maybe "%Y-%m-%d %H:%M:%S" would be a better example, for an ISO 8601 formatted time? On a positive note, RFC 2822 defines a year as four digits, so the footnote could be updated.

----------------------------------------------------------------------

>Comment By: Fred L. Drake, Jr. (fdrake)
Date: 2001-06-29 08:43

Message:
Logged In: YES 
user_id=3066

Fixed in Doc/lib/libtime.tex revisions 1.39 and 1.16.4.2. Thanks!
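The offset arithmetic from the initial comment can be wrapped into a small helper. This is a sketch only, using just time-module names that exist in Python 2.1; the function name rfc2822_now is made up, and %a/%b remain locale-dependent, as noted above:

import time

def rfc2822_now():
    # Current local time as an RFC 2822 date string with a numeric
    # UTC offset, avoiding the ambiguous %Z abbreviation entirely.
    t = time.localtime()
    if t[8] > 0:
        offs = -time.altzone      # DST in effect
    else:
        offs = -time.timezone
    sign = '+'
    if offs < 0:
        sign = '-'
        offs = -offs
    zstr = "%s%02d%02d" % (sign, offs / 3600, (offs / 60) % 60)
    return time.strftime("%a, %d %b %Y %H:%M:%S ", t) + zstr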
----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437041&group_id=5470

From noreply@sourceforge.net Fri Jun 29 16:49:13 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 08:49:13 -0700
Subject: [Python-bugs-list] [ python-Bugs-417845 ] Python 2.1: SocketServer.ThreadingMixIn
Message-ID: 

Bugs item #417845, was opened at 2001-04-21 08:28
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=417845&group_id=5470

Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Guido van Rossum (gvanrossum)
Summary: Python 2.1: SocketServer.ThreadingMixIn

Initial Comment:
SocketServer.ThreadingMixIn does not work properly, since it tries to close the socket of a request two times.

Workaround for using SocketServer.ThreadingMixIn under Python 2.1:

class MyThreadingHTTPServer(SocketServer.ThreadingMixIn, MyHTTPServer):
    def close_request(self, request):
        pass

----------------------------------------------------------------------

Comment By: Greg Chapman (glchapman)
Date: 2001-06-29 08:49

Message:
Logged In: YES 
user_id=86307

Since the request socket object is only a lightweight wrapper around the real socket, why not simply pass request.dup() to the new thread? Then the server's call to close_request affects only its copy of the request, not the copy being used in the thread. For example, the following change to ThreadingMixIn fixed this bug for me in a (very simple) test program:

    def process_request(self, request, client_address):
        """Start a new thread to process the request."""
        import threading
        t = threading.Thread(target = self.finish_request,
                             args = (request.dup(), client_address))
        t.start()

----------------------------------------------------------------------

Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-28 05:20

Message:
Logged In: YES 
user_id=198402

I have now started a project here concerning a SOCKS proxy written in python (PySocks). It is aimed mostly at people who use a windows box to share their internet connection, and it uses the threading server from the SocketServer module. So it becomes VERY important to me to know what will be done about this bug in the next release/patch of the Python library. Could Mr Guido van Rossum tell us about it?

For now I am forced to provide my patched version of SocketServer.py with my releases, which is not quite satisfactory. SocketServer is provided in the python distribution, so I'd rather say "there is a patch for python..." Well... In fact I forgot to put it in my first release, but I'll correct this this evening :)

----------------------------------------------------------------------

Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-20 12:06

Message:
Logged In: YES 
user_id=198402

Well, I was wrong. We do need a "try" block to ensure the request is always correctly closed:

    def finish_request(self, request, client_address):
        """Finish one request by instantiating RequestHandlerClass."""
        try:
            self.RequestHandlerClass(request, client_address, self)
        finally:
            self.close_request(request)

This works better.

----------------------------------------------------------------------

Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-13 09:09

Message:
Logged In: YES 
user_id=198402

I forgot to mention: I cannot test whether it breaks the forking server.
I only have a windows platform available for now, and forking doesn't work in the python/win32 environment as far as I know.

----------------------------------------------------------------------

Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-13 08:51

Message:
Logged In: YES 
user_id=198402

What I propose can be applied without any compatibility issue. I have tried something that seems to work, at least under windows (but it still needs to be tested more fully). Only 2 small modifications are required:

-1- In BaseServer, modify the end of handle_request:
    def handle_request(self):
        """Handle one request, possibly blocking."""
        #import time
        try:
            request, client_address = self.get_request()
        except socket.error:
            return
        if self.verify_request(request, client_address):
            try:
                print 'handle 1'
                self.process_request(request, client_address)
                print 'handle 2'
            except:
                self.handle_error(request, client_address)
                self.close_request(request)

Note that only the indentation of the last line has been modified, so that close_request() is executed only if an exception occurs. We still need to close the request after it has been processed, so here comes the second modification:

-2- Still in BaseServer:
    def finish_request(self, request, client_address):
        """Finish one request by instantiating RequestHandlerClass."""
        self.RequestHandlerClass(request, client_address, self)
        self.close_request(request)
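A throwaway server for exercising these two changes, sketched here for illustration only (it is not part of the proposed patch; the handler class and the port number are made up):

import SocketServer

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # Echo one line back to the client.  With the unpatched 2.1
        # ThreadingMixIn, the server may already have closed the request
        # socket by the time this runs in its own thread.
        line = self.rfile.readline()
        self.wfile.write(line)

class ThreadedEchoServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

server = ThreadedEchoServer(('localhost', 8123), EchoHandler)
while 1:
    server.handle_request()

With an unpatched SocketServer.py the handler typically fails as soon as it touches the request socket, which matches the makefile()/"fd was always -1" symptom reported further down.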
There is already a try/except block in handle_request, so I thought it was not mandatory in finish_request to ensure the request was always closed.

Oh... I don't know how <PRE> tags are supported by the bugtracking system of sourceforge, so the samples I give may not appear as I want. This is quite a problem with python :/

----------------------------------------------------------------------

Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-05-12 15:46

Message:
Logged In: YES 
user_id=80200

hi there mr xlagraula,

yes, it would be a lot simpler... _if_ it wasn't for the fact that this code is likely to already be quite extensively used. a possible 'upgrade' path could be done by providing a... RequestHandler2 class, or some-such.

it would be neater to do this, or similar:

for t in self.thread_list:
    t.join(timeout=0.1)

which would join all threads, or you do: if stopped(), close_request. i looked into threads a bit more: join has a timeout, and there is a stopped-detection function. easy :)

----------------------------------------------------------------------

Comment By: Xavier Lagraula (xlagraula)
Date: 2001-05-12 14:18

Message:
Logged In: YES 
user_id=198402

Another solution could be to modify the behaviour of the server so that it would be the responsibility of the "child" thread/process to close the socket (except for the forking/threading error cases). Wouldn't that be simpler than child-process tracking and thread tracking?

----------------------------------------------------------------------

Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-05-04 04:17

Message:
Logged In: YES 
user_id=80200

okay. the forkingmixin code does a fork, records how many children there are, and waits for one of them to exit before proceeding - in particular, before proceeding to close the request, etc.

... so why isn't something similar done in ThreadingMixIn? this kinda tells me that thread-tracking is really needed, in a similar way to that in forkingmixin.

----------------------------------------------------------------------

Comment By: Gregory P. Smith (greg)
Date: 2001-05-03 16:26

Message:
Logged In: YES 
user_id=413

Just a note of another casualty of this bug: I had to add the mentioned dummy close_request method hack to our own ThreadingMixIn class in mojo nation (in the sourceforge mojonation project's evil module, see the common/MojoNationHTTPServer.py file). Without it, python 2.1 would always raise an exception in the request handler as soon as it tried to call self.connection.makefile(), because self.connection had apparently already been closed! (its fd was always -1)

----------------------------------------------------------------------

Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-05-02 01:41

Message:
Logged In: YES 
user_id=80200

hi there mr jriehl,

thank you very much for the details. what i am having a little difficulty with is: what's the difference between this and python 2.0 SocketServer.py?

more specifically, i'm looking at python 2.0 SocketServer.py and, whilst i'm not a Threads expert, i see a t.start() but no t.join(). i've been looking at the Queue example code in the test method of threads.py, and start() is called on every thread, followed by join() on every thread. join waits for the thread to finish, yes?

so... if that's the case, then python 2.0 SocketServer.py should suffer from exactly the same behaviour, yes? unless python behaves ever-so-slightly differently (timing issues?)
when you have an extra base class like this, with the consequence that close_request() is more likely to be called before ThreadingMixIn.process_request()?

----------------------------------------------------------------------

Comment By: Jon Riehl (jriehl)
Date: 2001-05-01 14:20

Message:
Logged In: YES 
user_id=22448

This is related to bug #419873. The problem is not specifically in the ThreadingMixIn, but in the fact that BaseServer calls close_request() after calling process_request(). In the threading mixin, process_request() spins off the thread and returns, causing the request socket to be invalidated while the thread is still running. The fix given above will keep the socket valid while the thread is running, but may cause the socket to not close properly (my threads generally close the socket when they are done anyway.)

----------------------------------------------------------------------

Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-04-26 05:41

Message:
Logged In: YES 
user_id=80200

follow-up. i took a look at the differences between SocketServer.py in 2.0 and 2.1. there is one small change by guido to the ThreadingMixIn.process_request() function that calls self.server_close() instead of explicitly calling self.socket.close(), where TCPServer.server_close() calls self.socket.close().

if mr anonymous (hi!) has over-ridden server_close() and explicitly closes the *request* socket, then of course the socket will get closed twice.

the rest of the code-mods is a straightforward code-shuffle moving code from TCPServer into BaseServer: from examining the diff, i really don't see how bypassing close_request(), as shown above with the Workaround in the original bug-report, will help: that will in fact cause the request _never_ to be closed!

the rest of this report is part of an email exchange with guido, quoted here:

"the bug-report doesn't state whether python 2.0 worked and 2.1 didn't: it also doesn't give enough info. for all we know, he's calling close_request() himself or request.close() directly somewhere in his code, and hasn't told anybody, which is why he has to over-ride close_request() and tell it to do nothing. or he's closing the socket in the HandlerClass, in finish(), or something. we just don't know. either that, or his HandlerClass creates a socket once and only once, with the result that close_request() closes the one socket, and he's _completely_ stuffed, then :)"

----------------------------------------------------------------------

Comment By: Luke Kenneth Casson Leighton (lkcl)
Date: 2001-04-26 04:11

Message:
Logged In: YES 
user_id=80200

hi there, i'm the person who wrote the BaseServer class. guido contacted me about it: could you please send me, or post here, a working test example that demonstrates the problem. i assume, but you do not state, that you have tested your MyHTTPServer with python 2.0; please let us know, here, if that is a correct assumption. thanks!
luke

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=417845&group_id=5470

From noreply@sourceforge.net Fri Jun 29 17:10:23 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 09:10:23 -0700
Subject: [Python-bugs-list] [ python-Bugs-437395 ] RFC 2822 conformance
Message-ID: 

Bugs item #437395, was opened at 2001-06-29 09:10
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437395&group_id=5470

Category: Python Library
Group: Feature Request
Status: Open
Resolution: None
Priority: 5
Submitted By: Fred L. Drake, Jr. (fdrake)
Assigned to: Barry Warsaw (bwarsaw)
Summary: RFC 2822 conformance

Initial Comment:
The rfc822 and smtplib modules need to be checked for conformance with RFC 2822, which obsoletes RFC 822. (Added this to the tracker so we don't lose track of it.)

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437395&group_id=5470

From noreply@sourceforge.net Fri Jun 29 19:24:57 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 11:24:57 -0700
Subject: [Python-bugs-list] [ python-Bugs-437158 ] null char in string processing
Message-ID: 

Bugs item #437158, was opened at 2001-06-28 11:41
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470

Category: None
>Group: Not a Bug
>Status: Closed
>Resolution: Invalid
Priority: 5
Submitted By: Vadim Suvorov (xxx-bad)
Assigned to: Nobody/Anonymous (nobody)
Summary: null char in string processing

Initial Comment:
The following program was executed with different results in several environments:

Windows ME:    s = 8 < straaaaaa >
Windows NT:    s = 8 < str >  (the expected result)
Sun Solaris 8: s = 8 < str >

In all cases, the length and contents of the file "s" were as expected, equal to the string s.

s = "str\0\0\0\0\0"
print "s = ", len(s), "<" + s + ">"
print "<", str(s), ">"
f = open("s", "wb")
f.write(s)
f.close()

----------------------------------------------------------------------

>Comment By: Tim Peters (tim_one)
Date: 2001-06-29 11:24

Message:
Logged In: YES 
user_id=31435

Thanks for following up! In much of the world, the only characters "safe" to display (or print) across all available platforms are those characters c such that

    32 <= ord(c) < 128

that is, the printable ASCII characters. Go beyond that and it depends on all sorts of platform stuff, like the display drivers, the terminals, available fonts, locale settings, etc.

----------------------------------------------------------------------

Comment By: Vadim Suvorov (xxx-bad)
Date: 2001-06-29 07:47

Message:
Logged In: YES 
user_id=85081

I was able to confirm your opinion. This is the effect of the terminal - I was thrown off by "a" being substituted for the unprintable characters. Sorry, and thank you.

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2001-06-28 12:57

Message:
Logged In: YES 
user_id=31435

What does this have to do with Python? That is, Python has no control over how your terminal displays non-printable characters.

----------------------------------------------------------------------

Comment By: Vadim Suvorov (xxx-bad)
Date: 2001-06-28 11:56

Message:
Logged In: YES 
user_id=85081

Oops. Forgot: Python v.
2.1

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437158&group_id=5470

From noreply@sourceforge.net Fri Jun 29 22:50:41 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 14:50:41 -0700
Subject: [Python-bugs-list] [ python-Bugs-437472 ] MacPy21: sre "recursion limit" bug
Message-ID: 

Bugs item #437472, was opened at 2001-06-29 14:50
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437472&group_id=5470

Category: Regular Expressions
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
Summary: MacPy21: sre "recursion limit" bug

Initial Comment:
As of Python 2.0, the sre module had a bug wherein a "RuntimeError: maximum recursion limit exceeded" would be raised whenever an expression matched something on the order of 16,000+ characters. The bug, nominally fixed in time for Python 2.1, is still present in MacPython 2.1, as evidenced by the following transcript copied from an interactive session with the interpreter. Note the success with the pre module, however. Since the bug appears to be fixed in WinPython 2.1, it makes me curious whether the correct source was used when compiling MacPython 2.1....

==========
Python 2.1 (#92, Apr 24 2001, 23:59:23)  [CW PPC GUSI2 THREADS] on mac
>>> import sre, pre, string
>>> l = ["XXX", "%"*20000, "XXX"]
>>> sre_regex = sre.compile(r"XXX.*?XXX")
>>> match_object = sre_regex.search(string.join(l))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
RuntimeError: maximum recursion limit exceeded
>>> ### Above error first reported upon release of sre with Python 2.0 ###
>>> ### Bug supposedly fixed in Python 2.0.1 release (no Mac version, I know) ###
>>> pre_regex = pre.compile(r"XXX.*XXX")
>>> match_object = pre_regex.search(string.join(l))
>>> match_object
>>> ### Note above success with pre module instead of sre ###
>>> ### Wrong sre module source used when compiling MacPython 2.1? ###

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437472&group_id=5470

From noreply@sourceforge.net Fri Jun 29 22:55:39 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 14:55:39 -0700
Subject: [Python-bugs-list] [ python-Bugs-437475 ] MacPy21: sre "recursion limit" bug
Message-ID: 

Bugs item #437475, was opened at 2001-06-29 14:55
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437475&group_id=5470

Category: Regular Expressions
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
Summary: MacPy21: sre "recursion limit" bug

Initial Comment:
As of Python 2.0, the sre module had a bug wherein a "RuntimeError: maximum recursion limit exceeded" would be raised whenever an expression matched something on the order of 16,000+ characters. The bug, nominally fixed in time for Python 2.1, is still present in MacPython 2.1, as evidenced by the following transcript copied from an interactive session with the interpreter. Note the success with the pre module, however. Since the bug appears to be fixed in WinPython 2.1, it makes me curious whether the correct source was used when compiling MacPython 2.1....
==========
Python 2.1 (#92, Apr 24 2001, 23:59:23)  [CW PPC GUSI2 THREADS] on mac
>>> import sre, pre, string
>>> l = ["XXX", "%"*20000, "XXX"]
>>> sre_regex = sre.compile(r"XXX.*?XXX")
>>> match_object = sre_regex.search(string.join(l))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
RuntimeError: maximum recursion limit exceeded
>>> ### Above error first reported upon release of sre with Python 2.0 ###
>>> ### Bug supposedly fixed in Python 2.0.1 release (no Mac version, I know) ###
>>> pre_regex = pre.compile(r"XXX.*XXX")
>>> match_object = pre_regex.search(string.join(l))
>>> match_object
>>> ### Note above success with pre module instead of sre ###
>>> ### Wrong sre module source used when compiling MacPython 2.1? ###

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437475&group_id=5470

From noreply@sourceforge.net Fri Jun 29 23:42:57 2001
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 29 Jun 2001 15:42:57 -0700
Subject: [Python-bugs-list] [ python-Bugs-437487 ] 2.1 build on Solaris fails if CC is set
Message-ID: 

Bugs item #437487, was opened at 2001-06-29 15:42
You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437487&group_id=5470

Category: Build
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
Summary: 2.1 build on Solaris fails if CC is set

Initial Comment:
If you have CC set to "gcc", then the "setup.py" script compiles the extensions incorrectly (in particular, "-fPIC" is not used on the compile):

...
PYTHONPATH= ./python ./setup.py build
running build
running build_ext
building 'struct' extension
creating build
creating build/temp.solaris-2.7-sun4u-2.1
gcc -I. -I/extra/Python-2.1/./Include -I/usr/local/include -IInclude/ -c /extra/Python-2.1/Modules/structmodule.c -o build/temp.solaris-2.7-sun4u-2.1/structmodule.o
creating build/lib.solaris-2.7-sun4u-2.1
gcc -shared build/temp.solaris-2.7-sun4u-2.1/structmodule.o -L/usr/local/lib -o build/lib.solaris-2.7-sun4u-2.1/struct.so
Text relocation remains                       referenced
    against symbol                offset      in file
                                  0x2094      build/temp.solaris-2.7-sun4u-2.1/structmodule.o
                                  0x200c      build/temp.solaris-2.7-sun4u-2.1/structmodule.o
                                  0x3398      build/temp.solaris-2.7-sun4u-2.1/structmodule.o
                                  0x2098      build/temp.solaris-2.7-sun4u-2.1/structmodule.o
                                  0x2070      build/temp.solaris-2.7-sun4u-2.1/structmodule.o
                                  0x206c      build/temp.solaris-2.7-sun4u-2.1/structmodule.o
....

If you want to compile code configured via "configure" with the "gcc" compiler, and you have both "gcc" and the Sun C compiler installed on your system, then you need to have the environment variable "CC" set to "gcc":

CC=gcc

So people (like myself) have "CC=gcc" in their .profile, or the equivalent in .login, and have had that for years without thinking about it.

If you don't have "CC=gcc" set in your environment, then things work fine:

...
PYTHONPATH= ./python ./setup.py build
running build
running build_ext
building 'struct' extension
creating build
creating build/temp.solaris-2.7-sun4u-2.1
gcc -g -O2 -Wall -Wstrict-prototypes -fPIC -I. -I/home/tflagg/projects/python/2.1sunos/Python-2.1/./Include -I/usr/local/include -IInclude/ -c /home/tflagg/projects/python/2.1sunos/Python-2.1/Modules/structmodule.c -o build/temp.solaris-2.7-sun4u-2.1/structmodule.o

(In particular, the "-fPIC" flag needs to be there as shown above.)
The problem lines in "setup.py" are:

    112     # When you run "make CC=altcc" or something similar, you really want
    113     # those environment variables passed into the setup.py phase.  Here's
    114     # a small set of useful ones.
    115     compiler = os.environ.get('CC')            <----------
    116     linker_so = os.environ.get('LDSHARED')
    117     args = {}
    118     # unfortunately, distutils doesn't let us provide separate C and C++
    119     # compilers
    120     if compiler is not None:                   <---------------
    121         args['compiler_so'] = compiler         <---------

It is true that the user may want to set site-specific or user-specific compiler options. However, if "CC" is only "gcc", then the user is trying to communicate:

- Use "gcc" rather than Sun's C compiler.

They are NOT trying to communicate:

- Do not use any of the compiler flags needed for compiling shared objects.

The reason this bug is Solaris-specific is that Solaris installations are the most likely to have a non-gcc compiler installed, requiring the "CC=gcc" environment variable to be set.

----------------------------------------------------------------------

You can respond by visiting:
http://sourceforge.net/tracker/?func=detail&atid=105470&aid=437487&group_id=5470
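A possible direction for a fix, sketched here only and not a tested patch against setup.py, is to keep the configure-derived flags when honouring $CC, for example by pulling OPT and CCSHARED (the Makefile variable that carries -fPIC for gcc on Solaris) out of distutils.sysconfig:

import os
from distutils import sysconfig

compiler = os.environ.get('CC')
linker_so = os.environ.get('LDSHARED')
args = {}
if compiler is not None:
    # Append the flags configure chose for building shared objects
    # (OPT holds -g -O2 -Wall ..., CCSHARED holds -fPIC for gcc on
    # Solaris) instead of dropping them along with the default compiler.
    opt, ccshared = sysconfig.get_config_vars('OPT', 'CCSHARED')
    args['compiler_so'] = '%s %s %s' % (compiler, opt or '', ccshared or '')

Whether this covers every platform that configure handles is untested here; it only illustrates that the environment override and the configure flags need not be mutually exclusive.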