From techtonik at gmail.com Thu Feb 4 09:57:20 2010
From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 4 Feb 2010 10:57:20 +0200 Subject: [Python-Dev] Fixed URL to 2.6 documentation Message-ID:

Greetings, I'm writing a module for the current Python 2.6 and I would like to reference the documentation for Python 2.6, because I am not sure whether the behavior will change in later releases. So far I can link only to:

http://docs.python.org/ (stable, 2.6)
http://docs.python.org/dev/ (2.7)
http://docs.python.org/dev/py3k/

When the stable docs switch to 2.7, my reference will point to a different version than the one I meant when writing the comment. It is possible to link to the docs for the minor release 2.6.4 at http://www.python.org/doc/2.6.4/ but I would like to link to the latest version of the docs in the 2.6 branch, which may not yet have found its way into an official minor release. -- anatoly t.

From barry at python.org Thu Feb 4 10:13:25 2010
From: barry at python.org (Barry Warsaw) Date: Thu, 4 Feb 2010 01:13:25 -0800 Subject: [Python-Dev] The fate of the -U option In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <20100202225711.16f93e10@freewill.wooz.org> <4B6975C6.1050402@gmail.com> <4B698DBF.7000603@egenix.com> <20100203150435.26099.1887226812.divmod.xquotient.106@localhost.localdomain> <4B699494.3080304@egenix.com> <20100203090931.01d1d198@freewill.wooz.org> Message-ID: <20100204011325.648c4250@freewill.wooz.org>

On Feb 03, 2010, at 09:55 AM, Guido van Rossum wrote:
>On Wed, Feb 3, 2010 at 9:09 AM, Barry Warsaw wrote:
>> -----snip snip-----
>> from __future__ import unicode_literals
>>
>> def func(foo, bar):
>>     print foo, bar
>>
>> kw = {'foo': 7, 'bar': 9}
>> func(**kw)
>> -----snip snip-----
>>
>> That will raise a TypeError in 2.6 but works in 2.7. Is it appropriate and
>> feasible to back port that to Python 2.6? I remember talking about this a
>> while back but I don't remember what we decided and I can't find a bug on the
>> issue.
>
>I don't know about feasible but I think it's (borderline) appropriate.
>There are various other paths that lead to this error and it feels to
>me it's just a long-standing bug that we never took care of until 2.7.
>However, I don't think it needs to support non-ASCII characters in the
>keywords (even though 2.7 does seem to support those).

The back port from trunk of r68805 to fix this was really pretty trivial.

http://bugs.python.org/issue4978
http://bugs.python.org/file16124/py26-backport.patch

Assigned to Benjamin for review. -Barry

From ncoghlan at gmail.com Thu Feb 4 10:47:54 2010
From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 04 Feb 2010 19:47:54 +1000 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <4B69F007.5050101@gmail.com> Message-ID: <4B6A97CA.1050601@gmail.com>

Brett Cannon wrote:
> But what does "expected location" mean? If I am importing foo.bar
> where foo.__path__ has multiple path entries, which one is supposed to
> be used to set the hypothetical location of source for __file__? I
> guess going with the first one would be somewhat reasonable, but it's
> definitely a guess.
No, it wouldn't be a guess, it would be based on where the compiled bytecode was actually found (wherever in foo.__path__ that may have been). So __file__ would be the same regardless of whether or not the source code was actually physically present. __compiled__ would be set only if the code being executed was read from that file rather than from __file__.

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------

From chambon.pascal at gmail.com Thu Feb 4 10:51:54 2010
From: chambon.pascal at gmail.com (Pascal Chambon) Date: Thu, 04 Feb 2010 10:51:54 +0100 Subject: [Python-Dev] Forking and Multithreading - enemy brothers In-Reply-To: References: <4B6402CE.6020902@arcor.de> <4B6734FE.5030204@wanadoo.fr> <4222a8491002011319r31125609hac511b20ec3e1555@mail.gmail.com> <4222a8491002011401r5361cf61id7101bcd099f2340@mail.gmail.com> Message-ID: <4B6A98BA.6040800@wanadoo.fr>

Matt Knox wrote:
> Jesse Noller gmail.com> writes:
>
>> We already have an implementation that spawns a
>> subprocess and then pushes the required state to the child. The
>> fundamental need for things to be pickleable *all the time* kinda
>> makes it annoying to work with.
>
> just a lurker here... but this topic hits home with me so thought I'd chime
> in. I'm a windows user and I would *love* to use multiprocessing a lot more
> because *in theory* it solves a lot of the problems I deal with very nicely
> (lots of financial data number crunching). However, the pickling requirement
> makes it very very difficult to actually get any reasonably complex code to
> work properly with it.
>
> A lot of the time the functions I want to call in the spawned processes are
> actually fairly self contained and don't need most of the environment of the
> parent process shoved into it, so it's annoying that it fails because some data
> I don't even need in the child process can't be pickled.
>
> What about having an option to skip all the parent environment data pickling
> and require the user to manually invoke any imports that are needed in the
> target functions as the first step inside their target function?
>
> for example...
>
> def target_function(object_from_module_xyz):
>     import xyz
>     return object_from_module_xyz.do_something()
>
> and if I forgot to import all the stuff necessary for the arguments being
> passed into my function to work, then it's my own problem.
>
> Although maybe there is some obvious problem with this that I am not seeing.
>
> Anyway, just food for thought.
>
> - Matt

Hello

I don't really get it there... it seems to me that multiprocessing only requires picklability for the objects it needs to transfer, i.e. those given as arguments to the called function, and those put into multiprocessing queues/pipes. Global program data needn't be picklable - on windows it gets wholly recreated by the child process, from python bytecode.

So if you're having pickle errors, it must be because the "object_from_module_xyz" itself is *not* picklable, maybe because it contains references to unpicklable objects. In such a case, properly implementing pickle magic methods inside the object should do it, shouldn't it ?
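For illustration, a minimal sketch of such "pickle magic methods", using a hypothetical WrappedCalc class standing in for a wrapper around an unpicklable extension object (the names are made up, not taken from any real library). For types you cannot edit at all, the copy_reg module offers the same hook externally:

import pickle
import copy_reg

class WrappedCalc(object):
    """Hypothetical wrapper around an unpicklable C++ object."""
    def __init__(self, symbol):
        self.symbol = symbol      # plain, picklable state
        self._impl = None         # stand-in for the unpicklable part

    def __reduce__(self):
        # Ship only the picklable state; the receiving process rebuilds
        # the unpicklable part by calling the constructor again.
        return (WrappedCalc, (self.symbol,))

# Equivalent external registration, for a class you cannot modify:
def reduce_wrapped(obj):
    return (WrappedCalc, (obj.symbol,))
copy_reg.pickle(WrappedCalc, reduce_wrapped)

restored = pickle.loads(pickle.dumps(WrappedCalc("EURUSD")))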
Regards, Pascal

From barry at python.org Thu Feb 4 10:59:40 2010
From: barry at python.org (Barry Warsaw) Date: Thu, 4 Feb 2010 01:59:40 -0800 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <5535359D-2EF2-4B74-88A8-819ECFFB8536@zope.com> <4B69D772.7040904@v.loewis.de> <50B80C22-9738-4F31-9B86-580BF3E4734C@zope.com> Message-ID: <20100204015940.5c0a2ec2@freewill.wooz.org>

On Feb 03, 2010, at 11:50 PM, Ronald Oussoren wrote:
>> Barry's answer was "yes" back in October.
>
>I will backport the patch if Barry says it's fine. Feel free to ping me if
>that doesn't happen before the end of next week.

I still think this should go in 2.6.5. The patch does not apply to the current 2.6 branch because of changes in setup.py. If the patch is ported, reviewed and works with no regressions (whether or not libreadline is installed on OS X 10.5 and 10.6), then I'm okay with this going in.

I've made it a release blocker for 2.6.5 for now. -Barry

From barry at python.org Thu Feb 4 11:28:47 2010
From: barry at python.org (Barry Warsaw) Date: Thu, 4 Feb 2010 02:28:47 -0800 Subject: [Python-Dev] python -U In-Reply-To: <4B69D414.5010106@v.loewis.de> References: <20100130190005.058c8187@freewill.wooz.org> <2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <20100202225711.16f93e10@freewill.wooz.org> <4B69D414.5010106@v.loewis.de> Message-ID: <20100204022847.1deeeaa8@freewill.wooz.org>

On Feb 03, 2010, at 08:52 PM, Martin v. Löwis wrote:
>>> As an aside, I think this should be documented *somewhere* other than
>>> just in import.c! I'd totally forgotten about it until I read the
>>> source and almost missed it. Either it should be documented or it
>>> should be ripped out.
>>
>> The -U option is already gone in 3.x.
>
>Precisely my view.

I took the "or document it" approach.

http://bugs.python.org/file16127/7847.patch
http://bugs.python.org/issue7847

-Barry

From jaedan31 at gmail.com Thu Feb 4 17:58:56 2010
From: jaedan31 at gmail.com (Ben Walker) Date: Thu, 4 Feb 2010 10:58:56 -0600 Subject: [Python-Dev] Forking and Multithreading - enemy brothers Message-ID:

Pascal Chambon writes:
> I don't really get it there... it seems to me that multiprocessing only
> requires picklability for the objects it needs to transfer, i.e. those
> given as arguments to the called function, and those put into
> multiprocessing queues/pipes. Global program data needn't be picklable -
> on windows it gets wholly recreated by the child process, from python
> bytecode.
>
> So if you're having pickle errors, it must be because the
> "object_from_module_xyz" itself is *not* picklable, maybe because it
> contains references to unpicklable objects. In such a case, properly
> implementing pickle magic methods inside the object should do it,
> shouldn't it ?

I'm also a long-time lurker (and in financial software, coincidentally). Pascal is correct here. We use a number of C++ libraries wrapped via Boost.Python to do various calculations. The typical function calls return wrapped C++ types. Boost.Python types are not, unfortunately, pickleable.
For a number of technical reasons, and also unfortunately, we often have to load these libraries in their own process, but we want to hide this from our users. We accomplish this by pickling the instance data, but importing the types fresh when we unpickle, all implemented in the magic pickle methods. We would lose any information that was dynamically added to the type in the remote process, but we simply don't do that. We very often have many unpickleable objects imported somewhere when we spin off our processes using the multiprocess library, and this does not cause any problems. Jesse Noller gmail.com> writes: > We already have an implementation that spawns a > subprocess and then pushes the required state to the child. The > fundamental need for things to be pickleable *all the time* kinda > makes it annoying to work with. This requirement puts a fairly large additional strain on working with unwieldy, wrapped C++ libraries in a multiprocessing environment. I'm not very knowledgeable on the internals of the system, but would it be possible to have some kind of fallback system whereby if an object fails to pickle we instead send information about how to import it? This has all kinds of limitations - it only works for importable things (i.e. not instances), it can potentially lose information dynamically added to the object, etc., but I thought I would throw the idea out there so someone knowledgeable can decide if it has any merit. Ben From exarkun at twistedmatrix.com Thu Feb 4 18:12:11 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Thu, 04 Feb 2010 17:12:11 -0000 Subject: [Python-Dev] Forking and Multithreading - enemy brothers In-Reply-To: References: Message-ID: <20100204171211.26099.1601744879.divmod.xquotient.160@localhost.localdomain> On 04:58 pm, jaedan31 at gmail.com wrote: >Jesse Noller gmail.com> writes: >>We already have an implementation that spawns a >>subprocess and then pushes the required state to the child. The >>fundamental need for things to be pickleable *all the time* kinda >>makes it annoying to work with. > >This requirement puts a fairly large additional strain on working with >unwieldy, wrapped C++ libraries in a multiprocessing environment. >I'm not very knowledgeable on the internals of the system, but would >it be possible to have some kind of fallback system whereby if an >object >fails to pickle we instead send information about how to import it? >This >has all kinds of limitations - it only works for importable things >(i.e. not >instances), it can potentially lose information dynamically added to >the >object, etc., but I thought I would throw the idea out there so someone >knowledgeable can decide if it has any merit. It's already possible to define pickling for arbitrary objects. You should be able to do this for the kinds of importable objects you're talking about, and perhaps even for some of the actual instances (though that depends on how introspectable they are from Python, and whether the results of this introspection can be used to re-instantiate the object somewhere else). Take a look at the copy_reg module. Jean-Paul From guido at python.org Thu Feb 4 20:20:43 2010 From: guido at python.org (Guido van Rossum) Date: Thu, 4 Feb 2010 11:20:43 -0800 Subject: [Python-Dev] PEP 345 and PEP 386 Message-ID: All, I've reviewed PEP 345 and PEP 386 and am satisfied that after some small improvements they will be accepted. Most of the discussion has already taken place. 
I have one comment on PEP 345: Why is author-email mandatory? I'm sure there are plenty of cases where either the author doesn't want their email address published, or their last known email address is no longer valid. (Tarek responded off-line that it isn't all that mandatory; I propose to say so in the PEP.)

I am also looking at PEP 376 but I expect that Tarek will start another round of discussion on it. Hopefully all three PEPs will be accepted in time for inclusion in Python 2.7. -- --Guido van Rossum (python.org/~guido)

From brett at python.org Thu Feb 4 21:30:00 2010
From: brett at python.org (Brett Cannon) Date: Thu, 4 Feb 2010 12:30:00 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B69EBBB.4090304@v.loewis.de> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <4B69EBBB.4090304@v.loewis.de> Message-ID:

On Wed, Feb 3, 2010 at 13:33, "Martin v. Löwis" wrote:
> Guido van Rossum wrote:
> > On Wed, Feb 3, 2010 at 12:47 PM, Nick Coghlan wrote:
> >> On the issue of __file__, I'd suggest not being too hasty in
> >> deprecating that in favour of __source__. While I can see a lot of value
> >> in having it point to the source file more often with a different
> >> attribute that points to the cached file, I don't see a lot of gain to
> >> compensate for the pain of changing the name of __file__ itself.
> >
> > Can you clarify? In Python 3, __file__ always points to the source.
> > Clearly that is the way of the future. For 99.99% of uses of __file__,
> > if it suddenly never pointed to a .pyc file any more (even if one
> > existed) that would be just fine. So what's this talk of switching to
> > __source__?
>
> I originally proposed it, not knowing that Python 3 already changed the
> meaning of __file__ for byte code files.
>
> What I really wanted to suggest is that it should be possible to tell
> what gets really executed, plus what source file had been considered.
>
> So if __file__ is always the source file, a second attribute should tell
> whether a byte code file got read (so that you can delete that in case
> you doubt it's current, for example).
>

What should be done for loaders? Right now we have get_filename() which is what __file__ is to be set to. For importlib there is source_path and bytecode_path, but both of those are specified to return None when source or bytecode, respectively, is not available.

The bare minimum, I think, is we need loaders to have method(s) that return the path to the source -- whether it exists or not, to set __file__ to -- and the path to bytecode if it exists -- to set __compiled__ or whatever attribute we come up with. That suggests to me either two new methods or one that returns a two-item tuple. We could possibly keep get_filename() and say that people need to compare its output to what the source_path()-equivalent method returns, but that seems bad if the source location needs to be based on the bytecode location.

My thinking is we deprecate get_filename() and introduce some new method that returns a two-item tuple (get_paths?). First item is where the source should be, and the second is where the bytecode is if it exists (else it's None). Putting both calculations into a single method seems better than a source_path()/bytecode_path() pair, as the latter would quite possibly need source_path() to call bytecode_path() on its own to calculate where the source should be if it doesn't exist, on top of the direct call to get_bytecode() for setting __compiled__ itself.
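A rough illustration of the shape being proposed here -- get_paths() is only a placeholder name and this is not an actual importlib API, just a sketch of the two-item-tuple idea:

import os

class HypotheticalSourceLoader(object):
    """Sketch only: not a real loader implementation."""

    def __init__(self, base_dir):
        self.base_dir = base_dir

    def get_paths(self, fullname):
        # First item: where the source *should* live, even if it is absent.
        # Second item: the bytecode path if it exists, else None.
        source = os.path.join(self.base_dir, fullname.rpartition('.')[2] + '.py')
        bytecode = source + 'c'
        return source, (bytecode if os.path.exists(bytecode) else None)

# __file__ would then be set from the first item, __compiled__ from the second.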
-Brett

> Regards,
> Martin

From brett at python.org Thu Feb 4 21:35:30 2010
From: brett at python.org (Brett Cannon) Date: Thu, 4 Feb 2010 12:35:30 -0800 Subject: [Python-Dev] Fixed URL to 2.6 documentation In-Reply-To: References: Message-ID:

On Thu, Feb 4, 2010 at 00:57, anatoly techtonik wrote:
> Greetings,
>
> I'm writing a module for the current Python 2.6 and I would like to
> reference the documentation for Python 2.6, because I am not sure whether
> the behavior will change in later releases. So far I can link only
> to:
>
> http://docs.python.org/ (stable, 2.6)
> http://docs.python.org/dev/ (2.7)
> http://docs.python.org/dev/py3k/
>
> When the stable docs switch to 2.7, my reference will point to a different
> version than the one I meant when writing the comment. It is possible to link
> to the docs for the minor release 2.6.4 at http://www.python.org/doc/2.6.4/
> but I would like to link to the latest version of the docs in the 2.6 branch,
> which may not yet have found its way into an official minor release.
>

As you said, something in 2.6 could change, so linking to a generic 2.6.x doc URL instead of a specific version could be misleading in your docs if a change did occur that you were not aware of. But since micro releases rarely have anything change in them you should be able to link to practically any release and feel comfortable in knowing that things should stay the same through all of 2.6.x.

-Brett

From v+python at g.nevcal.com Thu Feb 4 22:38:26 2010
From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 04 Feb 2010 13:38:26 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100130190005.058c8187@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> Message-ID: <4B6B3E52.8040708@g.nevcal.com>

On approximately 1/30/2010 4:00 PM, came the following characters from the keyboard of Barry Warsaw:
> When the Python executable is given a `-R` flag, or the environment
> variable `$PYTHONPYR` is set, then Python will create a `foo.pyr`
> directory and write a `pyc` file to that directory with the hexlified
> magic number as the base name.
>

After the discussion so far, my opinion is that if the source directory contains an appropriate Python repository directory [1], and the version of Python implements PEP 3147, then there should be no need for -R or $PYTHONPYR to exist; such versions of Python would simply and always look in the Python repository directory for binaries. I've reached this conclusion for several reasons/benefits:
1) it makes the rules simpler for people finding the binaries
2) there is no "double lookup" to find a binary at run time
3) if the PEP changes to implement alternatives B or C in [1], then I hear a large consensus of people who like that behavior, to clean up the annoying clutter of .pyc files mixed with source.
4) There is no need to add or document the command line option or environment variable. [1] Alternative A... source-file-root.pyr, as in the PEP, Alt. B... source-file-dir/__pyr__ all versions/files in same lookaside directory, Alt. C... source-file-dir/__pyr_version__, each Python version with different bytecode would have some sort of version string or magic number that identifies it, and would look only in that directory for its .pyc/.pyo files. I prefer C for 4 reasons: 1) easier to blow away one version; 2) easier to see what that version has compiled; 3) most people use only one or two versions, so directory proliferation is limited; 4) even when there are 30 versions of Python, the subdirectories would contain the same order-of-magnitude count of files as the source directory for performance issues, if the file system has a knee in the performance curve as some do. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From zvezdan at zope.com Thu Feb 4 22:50:52 2010 From: zvezdan at zope.com (Zvezdan Petkovic) Date: Thu, 4 Feb 2010 16:50:52 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <20100204015940.5c0a2ec2@freewill.wooz.org> References: <20100202100859.00d34437@heresy.wooz.org> <5535359D-2EF2-4B74-88A8-819ECFFB8536@zope.com> <4B69D772.7040904@v.loewis.de> <50B80C22-9738-4F31-9B86-580BF3E4734C@zope.com> <20100204015940.5c0a2ec2@freewill.wooz.org> Message-ID: <51EB75BD-9095-42F8-A270-E9D73CCAA435@zope.com> On Feb 4, 2010, at 4:59 AM, Barry Warsaw wrote: > I still think this should go in 2.6.5. The patch does not apply to the current 2.6 branch because of changes in setup.py. If the patch is ported, reviewed and works with no regressions (when libreadline is both installed on OS X 10.5 and 10.6 or not), then I'm okay with this going in. > > I've made it a release blocker for 2.6.5 for now. I attached the patch that applies cleanly to 2.6 branch to issue6877. The details are in the comments (msg98858 and msg98859). Zvezdan From ncoghlan at gmail.com Thu Feb 4 22:51:35 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 05 Feb 2010 07:51:35 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <4B69EBBB.4090304@v.loewis.de> Message-ID: <4B6B4167.8040502@gmail.com> Brett Cannon wrote: > My thinking is we deprecate get_filename() and introduce some new method > that returns a two-item tuple (get_paths?). First item is where the > source should be, and the second is where the bytecode is if it exists > (else it's None). Putting both calculations into a single method seems > better than a source_path()/bytecode_path() as the latter would quite > possibly need source_path() to call bytecode_path() on its own to > calculate where the source should be if it doesn't exist on top of the > direct call to get_bytecode() for setting __compiled__ itself. If we add a new method like get_filenames(), I would suggest going with Antoine's suggestion of a tuple for __compiled__ (allowing loaders to indicate that they actually constructed the runtime bytecode from multiple cached files on-disk). 
The runpy logic would then be something like:

try:
    method = loader.get_filenames
except AttributeError:
    __compiled__ = ()
    try:
        method = loader.get_filename
    except:
        __file__ = None
    else:
        __file__ = method()
else:
    __file__, *__compiled__ = method()

For the import machinery itself, setting __compiled__ would be the responsibility of the loaders due to the way load_module is specified. I still sometimes wonder if we would be better off splitting that method into separate "prepare_module" and "exec_module" methods to allow the interpreter a chance to fiddle with the module globals before the module code gets executed.

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------

From ncoghlan at gmail.com Thu Feb 4 22:54:18 2010
From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 05 Feb 2010 07:54:18 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6B3E52.8040708@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B6B3E52.8040708@g.nevcal.com> Message-ID: <4B6B420A.7000002@gmail.com>

Glenn Linderman wrote:
> Alt. C... source-file-dir/__pyr_version__, each Python version with
> different bytecode would have some sort of version string or magic
> number that identifies it, and would look only in that directory for its
> .pyc/.pyo files. I prefer C for 4 reasons: 1) easier to blow away one
> version; 2) easier to see what that version has compiled; 3) most people
> use only one or two versions, so directory proliferation is limited; 4)
> even when there are 30 versions of Python, the subdirectories would
> contain the same order-of-magnitude count of files as the source
> directory for performance issues, if the file system has a knee in the
> performance curve as some do.

I don't think this suggestion had come up before, but I like it. It also reduces the amount of filename adjustment needed in the individual cache directories.

Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------

From brett at python.org Thu Feb 4 23:16:36 2010
From: brett at python.org (Brett Cannon) Date: Thu, 4 Feb 2010 14:16:36 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6B4167.8040502@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <4B69EBBB.4090304@v.loewis.de> <4B6B4167.8040502@gmail.com> Message-ID:

On Thu, Feb 4, 2010 at 13:51, Nick Coghlan wrote:
> Brett Cannon wrote:
> > My thinking is we deprecate get_filename() and introduce some new method
> > that returns a two-item tuple (get_paths?). First item is where the
> > source should be, and the second is where the bytecode is if it exists
> > (else it's None). Putting both calculations into a single method seems
> > better than a source_path()/bytecode_path() as the latter would quite
> > possibly need source_path() to call bytecode_path() on its own to
> > calculate where the source should be if it doesn't exist on top of the
> > direct call to get_bytecode() for setting __compiled__ itself.
> > If we add a new method like get_filenames(), I would suggest going with > Antoine's suggestion of a tuple for __compiled__ (allowing loaders to > indicate that they actually constructed the runtime bytecode from > multiple cached files on-disk). > > Does code exist out there where people are constructing bytecode from multiple files for a single module? > The runpy logic would then be something like: > > try: > method = loader.get_filenames > except AttributeError: > __compiled__ = () > try: > method = loader.get_filename > except: > __file__ = None > else: > __file__ = method() > else: > __file__, *__compiled__ = method() > > Should it really be a flat sequence that get_filenames returns? That first value has a very special meaning compared to the rest which suggests to me keeping the returned sequence to two items, just with the second item being a sequence itself. > > For the import machinery itself, setting __compiled__ would be the > responsibility of the loaders due to the way load_module is specified. Yep. > I > still sometimes wonder if we would be better off splitting that method > into separate "prepare_module" and "exec_module" methods to allow the > interpreter a chance to fiddle with the module globals before the module > code gets executed. > There's a reason why importlib has its ABCs abstracted the way it does; there's a bunch of stuff that can be automated and is common to all loaders that load_module has to cover. We could consider refactoring the API, but I don't know if it is worth the hassle since importlib has decorators that take care of low-level commonality and has ABCs for higher-level stuff. But yes, given a do-over, I would abstract loaders to a finer grain to let import handle more of the details. -Brett > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > --------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Thu Feb 4 23:28:25 2010 From: eric at trueblade.com (Eric Smith) Date: Thu, 04 Feb 2010 17:28:25 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6B3E52.8040708@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B6B3E52.8040708@g.nevcal.com> Message-ID: <4B6B4A09.4070000@trueblade.com> Glenn Linderman wrote: > On approximately 1/30/2010 4:00 PM, came the following characters from > the keyboard of Barry Warsaw: >> When the Python executable is given a `-R` flag, or the environment >> variable `$PYTHONPYR` is set, then Python will create a `foo.pyr` >> directory and write a `pyc` file to that directory with the hexlified >> magic number as the base name. >> > > After the discussion so far, my opinion is that if the source directory > contains an appropriate python repositiory directory [1], and the > version of Python implements PEP 3147, that there should be no need for > -R or $PYTHONPYR to exist, but that such versions of Python would > simply, and always look in the python repository directory for binaries. How would the python repository directory ever get created? From ziade.tarek at gmail.com Thu Feb 4 23:55:22 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Thu, 4 Feb 2010 23:55:22 +0100 Subject: [Python-Dev] PEP 345 and PEP 386 In-Reply-To: References: Message-ID: <94bdd2611002041455w69a50000v74f989d1235d9b2a@mail.gmail.com> On Thu, Feb 4, 2010 at 8:20 PM, Guido van Rossum wrote: [..] 
> I have one comment on PEP 345: Why is author-email mandatory? I'm sure > there are plenty of cases where either the author doesn't want their > email address published, or their last know email address is no longer > valid. (Tarek responded off-line that it isn't all that mandatory; I > propose to say so in the PEP.) Yes, I propose to remove the mandatory flag from that field. > I am also looking at PEP 376 but I expect that Tarek will start > another round of discussion on it. Hopefully all three PEPs will be > accepted in time for inclusion in Python 2.7. We will work on 376 in the two coming weeks in distutils-SIG and try to come up with something ready for Pycon, if feasible. Thanks for the reviews ! Tarek -- Tarek Ziad? | http://ziade.org From glenn at nevcal.com Fri Feb 5 00:00:52 2010 From: glenn at nevcal.com (Glenn Linderman) Date: Thu, 04 Feb 2010 15:00:52 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6B4A09.4070000@trueblade.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B6B3E52.8040708@g.nevcal.com> <4B6B4A09.4070000@trueblade.com> Message-ID: <4B6B51A4.9070404@nevcal.com> On approximately 2/4/2010 2:28 PM, came the following characters from the keyboard of Eric Smith: > Glenn Linderman wrote: >> On approximately 1/30/2010 4:00 PM, came the following characters >> from the keyboard of Barry Warsaw: >>> When the Python executable is given a `-R` flag, or the environment >>> variable `$PYTHONPYR` is set, then Python will create a `foo.pyr` >>> directory and write a `pyc` file to that directory with the hexlified >>> magic number as the base name. >> >> After the discussion so far, my opinion is that if the source >> directory contains an appropriate python repositiory directory [1], >> and the version of Python implements PEP 3147, that there should be >> no need for -R or $PYTHONPYR to exist, but that such versions of >> Python would simply, and always look in the python repository >> directory for binaries. > > How would the python repository directory ever get created? When a PEP 3147 (if modified by my suggestion) version of Python runs, and the directory doesn't exist, and it wants to create a .pyc, it would create the directory, and put the .pyc there. Sort of just like how it creates .pyc files, now, but an extra step of creating the repository directory if it doesn't exist. After the first run, it would exist. It is described in the PEP, and I quoted that section... "Python will create a 'foo.pyr' directory"... I'm just suggesting different semantics for how many directories, and what is contained in them. -- Glenn ------------------------------------------------------------------------ ?Everyone is entitled to their own opinion, but not their own facts. In turn, everyone is entitled to their own opinions of the facts, but not their own facts based on their opinions.? -- Guy Rocha, retiring NV state archivist From g.brandl at gmx.net Fri Feb 5 01:54:10 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 05 Feb 2010 00:54:10 +0000 Subject: [Python-Dev] Fixed URL to 2.6 documentation In-Reply-To: References: Message-ID: Am 04.02.2010 08:57, schrieb anatoly techtonik: > Greetings, > > I'm writing a module for current Python 2.6 and I would like to > reference documentation for Python 2.6, because I am not sure if > behavior won't be changed in further series. 
So far I can link only > to: > > http://docs.python.org/ (stable, 2.6) > http://docs.python.org/dev/ (2.7) > http://docs.python.org/dev/py3k/ > > When stable changes to 2.7 my reference will point to the different > version than I was meant when writing comment. It is possible to link > to docs for minor 2.6.4 http://www.python.org/doc/2.6.4/ but I would > like to link to latest version of docs in 2.6 branch that may not yet > found way into official minor release. You can always use http://docs.python.org/2.6/ as the base for 2.6 docs. It currently redirects to /, but it will redirect to some other place with up-to-date 2.6 docs as soon as 2.7 is released. Georg From ncoghlan at gmail.com Fri Feb 5 10:37:47 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 05 Feb 2010 19:37:47 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <4B69EBBB.4090304@v.loewis.de> <4B6B4167.8040502@gmail.com> Message-ID: <4B6BE6EB.2040000@gmail.com> Brett Cannon wrote: > If we add a new method like get_filenames(), I would suggest going > with Antoine's suggestion of a tuple for __compiled__ (allowing > loaders to indicate that they actually constructed the runtime > bytecode from multiple cached files on-disk). > > > Does code exist out there where people are constructing bytecode from > multiple files for a single module? I'm quite prepared to call YAGNI on that idea and just return a 2-tuple of source filename and compiled filename. The theoretical use case was for a module that was partially compiled to native code in advance, so it's "compiled" version was a combination of a shared library and a bytecode file. It isn't really all that compelling an idea - it would be easy enough for a loader to pick one or the other and stick that in __compiled__. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From pythoniks at gmail.com Fri Feb 5 13:35:26 2010 From: pythoniks at gmail.com (Pascal Chambon) Date: Fri, 05 Feb 2010 13:35:26 +0100 Subject: [Python-Dev] IO module improvements Message-ID: <4B6C108E.3010600@gmail.com> Hello The new modular io system of python is awesome, but I'm running into some of its limits currently, while replacing the raw FileIO with a more advanced stream. So here are a few ideas and questions regarding the mechanisms of this IO system. Note that I'm speaking in python terms, but these ideas should also apply to the C implementation (with more programming hassle of course). - some streams have specific attributes (i.e mode, name...), but since they'll often been wrapped inside buffering or encoding streams, these attributes will not be available to the end user. So wouldn't it be great to implement some "transversal inheritance", simply by delegating to the underlying buffer/raw-stream, attribute retrievals which fail on the current stream ? A little __getattr__ should do it fine, shouldn't it ? By the way, I'm having trouble with the "name" attribute of raw files, which can be string or integer (confusing), ambiguous if containing a relative path, and which isn't able to handle the new case of my library, i.e opening a file from an existing file handle (which is ALSO an integer, like C file descriptors...) 
; I propose we deprecate it for the benefit of more precise attributes, like "path" (absolute path) and "origin" (which can be "path", "fileno", "handle" and can be extended...).

Methods too would deserve some auto-forwarding. If you want to bufferize a raw stream which also offers size(), times(), lock_file() and other methods, how can these be accessed from a top-level buffering/text stream ? So it would be interesting to have a system through which a stream can expose its additional features to top level streams, and at the same time tell these if they must flush() or not before calling these new methods (e.g. asking the inode number of a file doesn't require flushing, but knowing its real size DOES require it.).

- I feel thread-safety locking and stream status checking are currently overly complicated. All methods are filled with locking calls and CheckClosed() calls, which is both a performance loss (most io streams will have 3 such levels of locking, when 1 would suffice) and error-prone (some time ago I've seen in sources several functions in which checks and locks seemed lacking). Since we're anyway in a mood of imbricating streams, why not simply adding a "safety stream" on top of each stream chain returned by open() ? That layer could gracefully handle mutex locking, CheckClosed() calls, and even, maybe, the attribute/method forwarding I evoked above. I know a pure metaprogramming solution would maybe not suffice for performance-seekers, but static implementations should be doable as well.

- some semantic decisions of the current system are somehow dangerous. For example, flushing errors occurring on close are swallowed. It seems to me that it's of the utmost importance that the user be warned if the bytes he wrote disappeared before reaching the kernel ; shouldn't we decidedly enforce a "don't hide errors" everywhere in the io module ?

Regards, Pascal

From solipsis at pitrou.net Fri Feb 5 14:28:27 2010
From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 5 Feb 2010 13:28:27 +0000 (UTC) Subject: [Python-Dev] IO module improvements References: <4B6C108E.3010600@gmail.com> Message-ID:

Pascal Chambon gmail.com> writes:
>
> By the way, I'm having trouble with the "name" attribute of raw files,
> which can be string or integer (confusing), ambiguous if containing a
> relative path, and which isn't able to handle the new case of my
> library, i.e. opening a file from an existing file handle (which is ALSO
> an integer, like C file descriptors...)

What is the difference between "file handle" and a regular C file descriptor? Is it some Windows-specific thing? If so, then perhaps it deserves some Windows-specific attribute ("handle"?).

> Methods too would deserve some auto-forwarding. If you want to bufferize
> a raw stream which also offers size(), times(), lock_file() and other
> methods, how can these be accessed from a top-level buffering/text
> stream ?

I think it's a bad idea. If you forget to implement one of the standard IO methods (e.g. seek()), it will get forwarded to the raw stream, but with the wrong semantics (because it won't take buffering into account).

It's better to require the implementor to do the forwarding explicitly if desired, IMO.

> - I feel thread-safety locking and stream status checking are
> currently overly complicated.
> All methods are filled with locking calls
> and CheckClosed() calls, which is both a performance loss (most io
> streams will have 3 such levels of locking, when 1 would suffice)

FileIO objects don't have a lock, so there are 2 levels of locking at worst, not 3 (and, actually, TextIOWrapper doesn't have a lock either, although perhaps it should). As for the checkClosed() calls, they are probably cheap, especially if they bypass regular attribute lookup.

> Since we're anyway in a mood of imbricating streams, why not simply
> adding a "safety stream" on top of each stream chain returned by open()
> ? That layer could gracefully handle mutex locking, CheckClosed() calls,
> and even, maybe, the attribute/method forwarding I evoked above.

It's an interesting idea, but it could also end up slower than the current situation. First because you are adding a level of indirection (i.e. additional method lookups and method calls). Second because currently the locks aren't always taken. For example, in BufferedIOReader, we needn't take the lock when the requested data is available in our buffer (the GIL already protects us). Having a separate "synchronizing" wrapper would forbid such micro-optimizations.

If you want to experiment with this, you can use iobench (in the Tools directory) to measure file IO performance.

> - some semantic decisions of the current system are somehow dangerous.
> For example, flushing errors occurring on close are swallowed. It seems
> to me that it's of the utmost importance that the user be warned if the
> bytes he wrote disappeared before reaching the kernel ; shouldn't we
> decidedly enforce a "don't hide errors" everywhere in the io module ?

It may be a bug. Can you report it, along with a script or test showcasing it?

Regards

Antoine.

From guido at python.org Fri Feb 5 16:57:10 2010
From: guido at python.org (Guido van Rossum) Date: Fri, 5 Feb 2010 07:57:10 -0800 Subject: [Python-Dev] IO module improvements In-Reply-To: References: <4B6C108E.3010600@gmail.com> Message-ID:

On Fri, Feb 5, 2010 at 5:28 AM, Antoine Pitrou wrote:
> Pascal Chambon gmail.com> writes:
>>
>> By the way, I'm having trouble with the "name" attribute of raw files,
>> which can be string or integer (confusing), ambiguous if containing a
>> relative path,

Why is it ambiguous? It sounds like you're using str() of the name and then can't tell whether the file is named e.g. '1' or whether it refers to file descriptor 1 (i.e. sys.stdout).

>> and which isn't able to handle the new case of my
>> library, i.e. opening a file from an existing file handle (which is ALSO
>> an integer, like C file descriptors...)
>
> What is the difference between "file handle" and a regular C file descriptor?
> Is it some Windows-specific thing?
> If so, then perhaps it deserves some Windows-specific attribute ("handle"?).

Make it mirror the fileno() attribute.

>> Methods too would deserve some auto-forwarding. If you want to bufferize
>> a raw stream which also offers size(), times(), lock_file() and other
>> methods, how can these be accessed from a top-level buffering/text
>> stream ?
>
> I think it's a bad idea. If you forget to implement one of the standard IO
> methods (e.g. seek()), it will get forwarded to the raw stream, but with the
> wrong semantics (because it won't take buffering into account).
>
> It's better to require the implementor to do the forwarding explicitly if
> desired, IMO.

Agreed. If an underlying stream has a certain property that doesn't mean the above stream has the same property.
Calling methods on the underlying stream that move the file position may wreak havoc on the buffer consistency of the above stream. Etc., etc. Please don't do this. Antoine has the right idea. >> - I feel thread-safety locking and stream stream status checking are >> currently overly complicated. All methods are filled with locking calls >> and CheckClosed() calls, which is both a performance loss (most io >> streams will have 3 such levels of locking, when 1 would suffice) > > FileIO objects don't have a lock, so there are 2 levels of locking at worse, not > 3 (and, actually, TextIOWrapper doesn't have a lock either, although perhaps it > should). > As for the checkClosed() calls, they are probably cheap, especially if they > bypass regular attribute lookup. > >> Since we're anyway in a mood of imbricating streams, why not simply >> adding a "safety stream" on top of each stream chain returned by open() >> ? That layer could gracefully handle mutex locking, CheckClosed() calls, >> and even, maybe, the attribute/method forwarding I evocated above. > > It's an interesting idea, but it could also end up slower than the current > situation. > First because you are adding a level of indirection (i.e. additional method > lookups and method calls). > Second because currently the locks aren't always taken. For example, in > BufferedIOReader, we needn't take the lock when the requested data is available > in our buffer (the GIL already protects us). Having a separate "synchronizing" > wrapper would forbid such micro-optimizations. > > If you want to experiment with this, you can use iobench (in the Tools > directory) to measure file IO performance. > >> - some semantic decisions of the current system are somehow dangerous. >> For example, flushing errors occuring on close are swallowed. It seems >> to me that it's of the utmost importance that the user be warned if the >> bytes he wrote disappeared before reaching the kernel ; shouldn't we >> decidedly enforce a "don't hide errors" everywhere in the io module ? > > It may be a bug. Can you report it, along with a script or test showcasing it? > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From exarkun at twistedmatrix.com Fri Feb 5 17:46:43 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Fri, 05 Feb 2010 16:46:43 -0000 Subject: [Python-Dev] IO module improvements In-Reply-To: References: <4B6C108E.3010600@gmail.com> Message-ID: <20100205164643.26099.1939554697.divmod.xquotient.497@localhost.localdomain> On 03:57 pm, guido at python.org wrote: >On Fri, Feb 5, 2010 at 5:28 AM, Antoine Pitrou >wrote: >>Pascal Chambon gmail.com> writes: >>> >>>By the way, I'm having trouble with the "name" attribute of raw >>>files, >>>which can be string or integer (confusing), ambiguous if containing a >>>relative path, > >Why is it ambiguous? It sounds like you're using str() of the name and >then can't tell whether the file is named e.g. '1' or whether it >refers to file descriptor 1 (i.e. sys.stdout). I think string/integer and ambiguity were different points. Here's the ambiguity: exarkun at boson:~$ python Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import os, io >>> f = io.open('.bashrc') >>> os.chdir('/') >>> f.name '.bashrc' >>> os.path.abspath(f.name) '/.bashrc' >>> Jean-Paul From status at bugs.python.org Fri Feb 5 18:07:24 2010 From: status at bugs.python.org (Python tracker) Date: Fri, 5 Feb 2010 18:07:24 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20100205170724.C066F785C6@psf.upfronthosting.co.za> ACTIVITY SUMMARY (01/29/10 - 02/05/10) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 2602 open (+38) / 17079 closed (+15) / 19681 total (+53) Open issues with patches: 1069 Average duration of open issues: 707 days. Median duration of open issues: 461 days. Open Issues Breakdown open 2568 (+38) pending 33 ( +0) Issues Created Or Reopened (57) _______________________________ allow unicode keyword args 02/04/10 http://bugs.python.org/issue4978 reopened barry patch, needs review A selection of spelling errors and typos throughout source 02/04/10 http://bugs.python.org/issue5341 reopened ezio.melotti patch enable compilation of readline module on Mac OS X 10.5 and 10.6 02/04/10 http://bugs.python.org/issue6877 reopened barry patch, 26backport, needs review segfault when deleting from a list using slice with very big `st 02/03/10 http://bugs.python.org/issue7788 reopened mark.dickinson patch test_macostools fails on OS X 10.6: no attribute 'FSSpec' 01/29/10 http://bugs.python.org/issue7807 created mark.dickinson test_bsddb3 leaks references 01/29/10 http://bugs.python.org/issue7808 created flox patch Documentation for random module should indicate that a call to s 01/30/10 CLOSED http://bugs.python.org/issue7809 created Justin.Lebar fix_callable breakage 01/30/10 CLOSED http://bugs.python.org/issue7810 created loewis [decimal] ValueError -> TypeError in from_tuple 01/30/10 http://bugs.python.org/issue7811 created skrah Call to gestalt('sysu') on OSX can lead to freeze in wxPython ap 01/30/10 http://bugs.python.org/issue7812 created phansen Bug in command-line module launcher 01/30/10 http://bugs.python.org/issue7813 created pakal patch SimpleXMLRPCServer Example uses "mul" instead of "div" in client 01/30/10 CLOSED http://bugs.python.org/issue7814 created mnewman Regression in unittest traceback formating extensibility 01/30/10 http://bugs.python.org/issue7815 created gz test_capi crashes when run with "-R" 01/30/10 CLOSED http://bugs.python.org/issue7816 created flox patch Pythonw.exe fails to start 01/31/10 CLOSED http://bugs.python.org/issue7817 created ZDan Improve set().test_c_api(): don't expect a set("abc"), modify th 01/31/10 http://bugs.python.org/issue7818 created haypo patch sys.call_tracing(): check arguments type 01/31/10 CLOSED http://bugs.python.org/issue7819 created haypo patch parser: restores all bytes in the right order if check_bom() fai 01/31/10 http://bugs.python.org/issue7820 created haypo patch Command line option -U not documented 01/31/10 CLOSED http://bugs.python.org/issue7821 created stevenjd 2to3 does not convert output lines in doctests 01/31/10 CLOSED http://bugs.python.org/issue7822 created Merwok multiplying a list of dictionaries 01/31/10 CLOSED http://bugs.python.org/issue7823 created Andrew.Hays assertion error in 2to3 01/31/10 CLOSED http://bugs.python.org/issue7824 created ferringb test_threadsignals leaks references 01/31/10 http://bugs.python.org/issue7825 created haypo support caching for 2to3 02/01/10 http://bugs.python.org/issue7826 created ferringb 
patch, needs review
recv_into() argument 1 must be pinned buffer, not bytearray  02/01/10  http://bugs.python.org/issue7827  created dalke  (patch)
chr() and ord() documentation for wide characters  02/01/10  http://bugs.python.org/issue7828  created nudgenudge
dis module documentation gives no indication of the dangers of b  02/01/10  http://bugs.python.org/issue7829  created exarkun
Flatten nested functools.partial  02/01/10  http://bugs.python.org/issue7830  created Alexander.Belopolsky  (patch)
cmp() is missing in 3.x  02/01/10  CLOSED  http://bugs.python.org/issue7831  created flox
assertSameElements([0, 1, 1], [0, 0, 1]) does not fail  02/01/10  http://bugs.python.org/issue7832  created flox  (patch)
Bdist_wininst installers fail to load extensions built with Issu  02/01/10  http://bugs.python.org/issue7833  created cgohlke  (patch)
socket.connect() no longer works with AF_BLUETOOTH L2CAP sockets  02/02/10  http://bugs.python.org/issue7834  created Mathew.Martineau  (easy)
Minor bug in 2.6.4 related to cleanup at end of program  02/02/10  http://bugs.python.org/issue7835  created Jesse.Aldridge  (patch, easy)
Add /usr/sfw/lib to OpenSSL search path for Solaris.  02/02/10  http://bugs.python.org/issue7836  created drkirkby
assertSameElements doesn't filter enough py3k warnings  02/02/10  http://bugs.python.org/issue7837  created ezio.melotti  (patch, patch, easy, needs review)
Undocumented subprocess functions on Windows  02/02/10  http://bugs.python.org/issue7838  created brian.curtin  (patch, easy, needs review)
Popen should raise ValueError if pass a string when shell=False  02/02/10  http://bugs.python.org/issue7839  created r.david.murray  (easy)
Lib/ctypes/test/test_pep3118.py should not shadow the memoryview  02/02/10  http://bugs.python.org/issue7840  created pitrou
test_capi fails when run more than once  02/02/10  CLOSED  http://bugs.python.org/issue7841  created pitrou
py_compile.compile SyntaxError output  02/02/10  http://bugs.python.org/issue7842  created nheron  (patch)
python-dev archives are not updated  02/03/10  http://bugs.python.org/issue7843  created isandler
Add -3 warning for absolute imports.  02/03/10  http://bugs.python.org/issue7844  created mark.dickinson
complex.__lt__ should return NotImplemented instead of raising T  02/03/10  http://bugs.python.org/issue7845  created mark.dickinson
Fnmatch cache is never cleared during usage  02/03/10  http://bugs.python.org/issue7846  created andrewclegg  (patch, needs review)
Remove 'python -U' or document it  02/03/10  http://bugs.python.org/issue7847  created barry  (patch, needs review)
copy.copy corrupts objects that return false value from __getsta  02/03/10  http://bugs.python.org/issue7848  created alga
Improve "test_support.check_warnings()"  02/03/10  http://bugs.python.org/issue7849  created flox  (patch)
platform.system() should be "macosx" instead of "Darwin" on OSX  02/03/10  http://bugs.python.org/issue7850  created ronaldoussoren
WatchedFileHandler needs to be references as handlers.WatchedFil  02/03/10  CLOSED  http://bugs.python.org/issue7851  created Tom.Aratyn
[PATCH] Drop "Computer" from "Apple Computer" in plistlib  02/04/10  http://bugs.python.org/issue7852  created wangchun  (patch)
on __exit__(), exc_value does not contain the exception.  02/04/10  CLOSED  http://bugs.python.org/issue7853  created flox  (patch)
term paper  02/04/10  CLOSED  http://bugs.python.org/issue7854  created allisonlong
Add test cases for ctypes/winreg for issues found in IronPython  02/04/10  http://bugs.python.org/issue7855  created DinoV  (patch)
cannot decode from or encode to big5 \xf9\xd8  02/05/10  http://bugs.python.org/issue7856  created Xuefer.x
test_logging fails  02/05/10  http://bugs.python.org/issue7857  created pitrou  (buildbot)
os.utime(file, (0,0,)) fails on on vfat, but doesn't fail immedi  02/05/10  http://bugs.python.org/issue7858  created Damien.Elmes
support "with self.assertRaises(SomeException) as exc:" syntax  02/05/10  http://bugs.python.org/issue7859  created flox

Issues Now Closed (37)
______________________

Remove use of the stat module in the stdlib  628 days  http://bugs.python.org/issue2874  brett.cannon  (patch)
Python 2.6rc2: Tix ComboBox error  504 days  http://bugs.python.org/issue3872  matthieu.labbe  (patch)
xmlrpc.client - default 'SlowParser' not defined  439 days  http://bugs.python.org/issue4340  haypo
undesired switch fall-through in socketmodule.c  402 days  http://bugs.python.org/issue4772  pitrou  (patch, needs review)
Can SGMLParser properly handle tags?  325 days  http://bugs.python.org/issue5498  ezio.melotti  (easy)
Cannot build extension in amd64 using msvc9compiler  233 days  http://bugs.python.org/issue6283  tarek
patch to subprocess docs to better explain Popen's 'args' argume  167 days  http://bugs.python.org/issue6760  r.david.murray  (patch, needs review)
shadows around the io truncate() semantics  136 days  http://bugs.python.org/issue6939  pakal  (patch)
xmlrpc.server assumes sys.stdout will have a buffer attribute  105 days  http://bugs.python.org/issue7165  loewis  (patch)
pydoc doesn't work from the command line  77 days  http://bugs.python.org/issue7328  Merwok
MemoryView_FromObject crashes if PyBuffer_GetBuffer fails  71 days  http://bugs.python.org/issue7385  pitrou  (patch)
decimal.py: infinity coefficients in tuples  17 days  http://bugs.python.org/issue7684  rhettinger
test_xmlrpc fails with non-ascii path  15 days  http://bugs.python.org/issue7708  haypo  (patch, buildbot)
pydoc error - "No module named tempfile"  11 days  http://bugs.python.org/issue7749  r.david.murray
newgil backport  12 days  http://bugs.python.org/issue7753  twhitema  (patch, needs review)
Add PyLong_AsLongLongAndOverflow  6 days  http://bugs.python.org/issue7767  mark.dickinson  (patch, needs review)
patch for making list/insert at the top of the list avoid memmov  3 days  http://bugs.python.org/issue7784  Steve Howell  (patch)
xmlrpc.client binary object examples needs to use binary mode  1 days  http://bugs.python.org/issue7801  haypo
socket.gaierror before ProtocolError for xmlrpc.client  2 days  http://bugs.python.org/issue7802  georg.brandl
Documentation for random module should indicate that a call to s  1 days  http://bugs.python.org/issue7809  rhettinger
fix_callable breakage  0 days  http://bugs.python.org/issue7810  benjamin.peterson
SimpleXMLRPCServer Example uses "mul" instead of "div" in client  0 days  http://bugs.python.org/issue7814  georg.brandl
test_capi crashes when run with "-R"  3 days  http://bugs.python.org/issue7816  benjamin.peterson  (patch)
Pythonw.exe fails to start  1 days  http://bugs.python.org/issue7817  georg.brandl
sys.call_tracing(): check arguments type  1 days  http://bugs.python.org/issue7819  haypo  (patch)
Command line option -U not documented  0 days  http://bugs.python.org/issue7821  loewis
2to3 does not convert output lines in doctests  0 days  http://bugs.python.org/issue7822  Merwok
multiplying a list of dictionaries  1 days  http://bugs.python.org/issue7823  r.david.murray
assertion error in 2to3  0 days  http://bugs.python.org/issue7824  loewis
cmp() is missing in 3.x  1 days  http://bugs.python.org/issue7831  ezio.melotti
test_capi fails when run more than once  0 days  http://bugs.python.org/issue7841  flox
WatchedFileHandler needs to be references as handlers.WatchedFil  1 days  http://bugs.python.org/issue7851  vinay.sajip
on __exit__(), exc_value does not contain the exception.  1 days  http://bugs.python.org/issue7853  benjamin.peterson  (patch)
term paper  0 days  http://bugs.python.org/issue7854  brian.curtin
Proposal to implement comment rows in csv module  1687 days  http://bugs.python.org/issue1225769  andrewmcnamara  (patch)
uninitialized memory read in parsetok()  1228 days  http://bugs.python.org/issue1562308  benjamin.peterson
object.__init__ shouldn't allow args/kwds  1051 days  http://bugs.python.org/issue1683368  gregory.p.smith

Top Issues Most Discussed (10)
______________________________

 29  patch to subprocess docs to better explain Popen's 'args' argum  167 days  closed  http://bugs.python.org/issue6760
 15  Serious interpreter crash and/or arbitrary memory leak using .r  309 days  open  http://bugs.python.org/issue5677
 13  Test suite emits many DeprecationWarnings when -3 is enabled  119 days  open  http://bugs.python.org/issue7092
  9  Add test cases for ctypes/winreg for issues found in IronPython  1 days  open  http://bugs.python.org/issue7855
  9  improved allocation of PyUnicode objects  257 days  open  http://bugs.python.org/issue1943
  8  Fix complex type to avoid coercion in 2.7.  360 days  open  http://bugs.python.org/issue5211
  7  support caching for 2to3  5 days  open  http://bugs.python.org/issue7826
  6  Undocumented subprocess functions on Windows  3 days  open  http://bugs.python.org/issue7838
  6  decimal.py: type conversion in context methods  32 days  open  http://bugs.python.org/issue7633
  6  MemoryView_FromObject crashes if PyBuffer_GetBuffer fails  71 days  closed  http://bugs.python.org/issue7385

From jmatejek at suse.cz Fri Feb 5 18:24:58 2010
From: jmatejek at suse.cz (Jan Matějek)
Date: Fri, 05 Feb 2010 18:24:58 +0100
Subject: [Python-Dev] Rational for PEP 3147 (PYC Respository Directories)
In-Reply-To: 
References: <20100203173521.GA4039@arctrix.com>
Message-ID: <4B6C546A.9080806@suse.cz>

Dne 3.2.2010 18:39, Antoine Pitrou napsal(a):
> Neil Schemenauer arctrix.com> writes:
>>
>> Thanks for doing the work of writing a PEP. The rational section
>> could use some strengthing, I think. Who is benefiting from this
>> feature? Is it the distribution package maintainers? Maybe people
>> who use a distribution packaged Python and install packages from
>> PyPI. It's not clear to me, anyhow.
>
> It would also be nice to have other packagers' take on this (Redhat, Mandriva,
> etc.). But of course you aren't responsible if they don't show up.

As the SUSE guy, i don't care either way. This has simply no benefits or
drawbacks for us. This solution can only be beneficial in systems like
Debian's python-support, where you byte-compile when installing. We
byte-compile at build time, so if we wanted to support more than one
python within one package, we would need to distribute a rpm full of
different .pycs for all supported python versions. Yes, that was not
possible before and it is possible with this PEP, but there is no sense
in doing it ;)

That said, i don't particularly care whether the installed pycs are in a
separate __pycache__ directory or next to their sources.
(there were very good arguments in the other thread against subdir clutter, one more from me: each subdirectory has a separate entry in rpm database, so by creating subdir clutter you're also cluttering our packaging system) +0 from me regards m. > > cheers > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jmatejek%40suse.cz From guido at python.org Fri Feb 5 19:38:12 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 5 Feb 2010 10:38:12 -0800 Subject: [Python-Dev] IO module improvements In-Reply-To: <20100205164643.26099.1939554697.divmod.xquotient.497@localhost.localdomain> References: <4B6C108E.3010600@gmail.com> <20100205164643.26099.1939554697.divmod.xquotient.497@localhost.localdomain> Message-ID: On Fri, Feb 5, 2010 at 8:46 AM, wrote: > On 03:57 pm, guido at python.org wrote: >> >> On Fri, Feb 5, 2010 at 5:28 AM, Antoine Pitrou >> wrote: >>> >>> Pascal Chambon gmail.com> writes: >>>> >>>> By the way, I'm having trouble with the "name" attribute of raw files, >>>> which can be string or integer (confusing), ambiguous if containing a >>>> relative path, >> >> Why is it ambiguous? It sounds like you're using str() of the name and >> then can't tell whether the file is named e.g. '1' or whether it >> refers to file descriptor 1 (i.e. sys.stdout). > > I think string/integer and ambiguity were different points. ?Here's the > ambiguity: > > ? exarkun at boson:~$ python > ? Python 2.6.4 (r264:75706, Dec ?7 2009, 18:45:15) ? ?[GCC 4.4.1] on linux2 > ? Type "help", "copyright", "credits" or "license" for more information. > ? >>> import os, io > ? >>> f = io.open('.bashrc') > ? >>> os.chdir('/') > ? >>> f.name > ? '.bashrc' > ? >>> os.path.abspath(f.name) > ? '/.bashrc' > ? >>> > Jean-Paul You're right, I didn't see the OP's comma. :-) I don't think this can be helped though -- I really don't want open() to be slowed down or complicated by an attempt to do path manipulation. If this matters to the app author they should use os.path.abspath() or os.path.realpath() or whatever before calling open(). -- --Guido van Rossum (python.org/~guido) From lists at cheimes.de Sat Feb 6 00:47:15 2010 From: lists at cheimes.de (Christian Heimes) Date: Sat, 06 Feb 2010 00:47:15 +0100 Subject: [Python-Dev] IO module improvements In-Reply-To: References: <4B6C108E.3010600@gmail.com> <20100205164643.26099.1939554697.divmod.xquotient.497@localhost.localdomain> Message-ID: <4B6CAE03.5090603@cheimes.de> Guido van Rossum wrote: > You're right, I didn't see the OP's comma. :-) > > I don't think this can be helped though -- I really don't want open() > to be slowed down or complicated by an attempt to do path > manipulation. If this matters to the app author they should use > os.path.abspath() or os.path.realpath() or whatever before calling > open(). I had the idea to add a property that returns the file name based on the file descriptor. However there isn't a plain way to lookup the file based on the fd on POSIX OSes. fstat() returns only the inode and device. The combination of inode + device references 0 to n files due to anonymous files and hard links. On POSIX OSes with a /proc file systems it's possible to do a reverse lookup by (ab)using /proc/self/fd/, but that's a hack. 
>>> import os
>>> f = open("/etc/passwd")
>>> fd = f.fileno()
>>> os.readlink("/proc/self/fd/%i" % fd)
'/etc/passwd'

On Windows it's possible to get the file name from the handle with
GetFileInformationByHandleEx().

This doesn't strike me as a feasible option ...

Christian

From guido at python.org Sat Feb 6 00:51:51 2010
From: guido at python.org (Guido van Rossum)
Date: Fri, 5 Feb 2010 15:51:51 -0800
Subject: [Python-Dev] IO module improvements
In-Reply-To: <4B6CAE03.5090603@cheimes.de>
References: <4B6C108E.3010600@gmail.com>
	<20100205164643.26099.1939554697.divmod.xquotient.497@localhost.localdomain>
	<4B6CAE03.5090603@cheimes.de>
Message-ID: 

On Fri, Feb 5, 2010 at 3:47 PM, Christian Heimes wrote:
> I had the idea to add a property that returns the file name based on the
> file descriptor. However there isn't a plain way to lookup the file
> based on the fd on POSIX OSes. fstat() returns only the inode and
> device. The combination of inode + device references 0 to n files due to
> anonymous files and hard links. On POSIX OSes with a /proc file systems
> it's possible to do a reverse lookup by (ab)using /proc/self/fd/, but
> that's a hack.
>
>>>> import os
>>>> f = open("/etc/passwd")
>>>> fd = f.fileno()
>>>> os.readlink("/proc/self/fd/%i" % fd)
> '/etc/passwd'
>
> On Windows it's possible to get the file name from the handle with
> GetFileInformationByHandleEx().
>
> This doesn't strike me as a feasible option ...

It's good to know about such options, but I really don't like to add
such brittle APIs to the standard I/O objects. So, agreed, this is not
feasible.

-- 
--Guido van Rossum (python.org/~guido)

From tseaver at palladion.com Sat Feb 6 08:31:17 2010
From: tseaver at palladion.com (Tres Seaver)
Date: Sat, 06 Feb 2010 02:31:17 -0500
Subject: [Python-Dev] IO module improvements
In-Reply-To: 
References: <4B6C108E.3010600@gmail.com>
Message-ID: 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Antoine Pitrou wrote:
> Pascal Chambon gmail.com> writes:
>> By the way, I'm having trouble with the "name" attribute of raw files,
>> which can be string or integer (confusing), ambiguous if containing a
>> relative path, and which isn't able to handle the new case of my
>> library, i.e opening a file from an existing file handle (which is ALSO
>> an integer, like C file descriptors...)
>
> What is the difference between "file handle" and a regular C file descriptor?
> Is it some Windows-specific thing?
> If so, then perhaps it deserves some Windows-specific attribute ("handle"?).

File descriptors are integer indexes into a process-specific table.
File handles are pointers to opaque structs which contain other
information the kernel knows about the file. MS Windows muddies the
distinction, using "file handle" to refer to the integer index.

[1] http://en.wikipedia.org/wiki/File_handle

Tres.
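A minimal sketch of that distinction (an illustration only, not part of the
original message; it assumes Windows and a hypothetical file name): the
msvcrt module exposes the mapping between a C-runtime file descriptor and
the native HANDLE it wraps.

    import os
    import msvcrt  # Windows-only module

    # A CRT-level file descriptor: a small integer index private to this process.
    fd = os.open("example.txt", os.O_RDONLY)

    # The Win32 HANDLE that the descriptor wraps; this is the value the
    # operating system actually tracks.
    handle = msvcrt.get_osfhandle(fd)

    print("fd %d wraps OS handle %d" % (fd, handle))

    # Closing the descriptor also releases the underlying HANDLE.
    os.close(fd)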
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkttGsUACgkQ+gerLs4ltQ733gCgqrkKNryUrWvLLEjoOWL7z5IY PnkAnREQKkY3CbPikOdEq4sYQcUylKxw =Sr71 -----END PGP SIGNATURE----- From cournape at gmail.com Sat Feb 6 08:46:34 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 6 Feb 2010 16:46:34 +0900 Subject: [Python-Dev] IO module improvements In-Reply-To: References: <4B6C108E.3010600@gmail.com> Message-ID: <5b8d13221002052346i4d7f1548j2733a95fd54b89f@mail.gmail.com> On Sat, Feb 6, 2010 at 4:31 PM, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Antoine Pitrou wrote: >> Pascal Chambon gmail.com> writes: >>> By the way, I'm having trouble with the "name" attribute of raw files, >>> which can be string or integer (confusing), ambiguous if containing a >>> relative path, and which isn't able to handle the new case of my >>> library, i.e opening a file from an existing file handle (which is ALSO >>> an integer, like C file descriptors...) >> >> What is the difference between "file handle" and a regular C file descriptor? >> Is it some Windows-specific thing? >> If so, then perhaps it deserves some Windows-specific attribute ("handle"?). > > File descriptors are integer indexes into a process-specific table. AFAIK, they aren't simple indexes in windows, and that's partly why even file descriptors cannot be safely passed between C runtimes on windows (whereas they can in most unices). David From chambon.pascal at gmail.com Sat Feb 6 12:43:08 2010 From: chambon.pascal at gmail.com (Pascal Chambon) Date: Sat, 06 Feb 2010 12:43:08 +0100 Subject: [Python-Dev] IO module improvements In-Reply-To: References: <4B6C108E.3010600@gmail.com> Message-ID: <4B6D55CC.8070904@wanadoo.fr> Antoine Pitrou a ?crit : > > What is the difference between "file handle" and a regular C file descriptor? > Is it some Windows-specific thing? > If so, then perhaps it deserves some Windows-specific attribute ("handle"?). > At the moment it's windows-specific, but it's not impossible that some other OSes also rely on specific file handles (only emulating C file descriptors for compatibility). I've indeed mirrored the fileno concept, with a "handle" argument for constructors, and a handle() getter. > On Fri, Feb 5, 2010 at 5:28 AM, Antoine Pitrou wrote: > >> Pascal Chambon gmail.com> writes: >> >>> By the way, I'm having trouble with the "name" attribute of raw files, >>> which can be string or integer (confusing), ambiguous if containing a >>> relative path, >>> > > Why is it ambiguous? It sounds like you're using str() of the name and > then can't tell whether the file is named e.g. '1' or whether it > refers to file descriptor 1 (i.e. sys.stdout). > > As Jean-Paul mentioned, I find confusing the fact that it can be a relative path, and sometimes not a path at all. I'm pretty sure many programmers haven't even cared in their library code that it could be a non-string, using concatenation etc. on it... However I guess that the history is so high on it, that I'll have to conform to this semantic, putting all paths/fileno/handle in the same "name" property, and adding an "origin" property telling how to interpret the "name"... >> Methods too would deserve some auto-forwarding. 
If you want to bufferize
>> a raw stream which also offers size(), times(), lock_file() and other
>> methods, how can these be accessed from a top-level buffering/text
>> stream ?
>>
>
> I think it's a bad idea. If you forget to implement one of the standard IO
> methods (e.g. seek()), it will get forwarded to the raw stream, but with the
> wrong semantics (because it won't take buffering into account).
>
> It's better to require the implementor to do the forwarding explicitly if
> desired, IMO.
>

The problem is, doing that forwarding is quite complicated. IO is a
collection of "core tools for working with streams", but it's currently not
flexible enough to let people customize them too... For example, if I want
to add a new series of methods to all standard streams, which simply forward
calls to new raw stream features, what do I do? Monkey-patching base classes
(RawFileIO, BufferedIOBase...)? Not a good pattern. Subclassing
FileIO+BufferedWriter+BufferedReader+BufferedRandom+TextIOWrapper? That's
really redundant...

And there are flaws, especially around BufferedRandom. This stream inherits
BufferedWriter and BufferedReader, and overrides some methods. How do I
extend it? I'd want to reuse its methods, but then have it forward calls to
MY buffered classes, not the original BufferedWriter or BufferedReader
classes. Should I modify its __bases__ to edit the inheritance tree? Handy
but not a good pattern... I'm currently getting what I want with a triple
inheritance (praying for the MRO to be as I expect), but it's really not
straightforward. Having BufferedRandom as an additional layer would slow
down the system, but allow its reuse with custom buffered writers and
readers...

>> - I feel thread-safety locking and stream status checking are
>> currently overly complicated. All methods are filled with locking calls
>> and CheckClosed() calls, which is both a performance loss (most io
>> streams will have 3 such levels of locking, when 1 would suffice)
>>
>
> FileIO objects don't have a lock, so there are 2 levels of locking at worst, not
> 3 (and, actually, TextIOWrapper doesn't have a lock either, although perhaps it
> should).
> As for the checkClosed() calls, they are probably cheap, especially if they
> bypass regular attribute lookup.
>

CheckClosed calls are cheap, but they can easily be forgotten in one of the
dozens of methods involved... My own FileIO class alas needs locking,
because for example, on windows truncating a file means seeking + setting
end of file + restoring pointer. And I TextIOWrapper seems to deserve locks.
Maybe excerpts like this one really are thread-safe, but a long study would
be required to ensure it.

        if whence == 2: # seek relative to end of file
            if cookie != 0:
                raise IOError("can't do nonzero end-relative seeks")
            self.flush()
            position = self.buffer.seek(0, 2)
            self._set_decoded_chars('')
            self._snapshot = None
            if self._decoder:
                self._decoder.reset()
            return position

>
>> Since we're anyway in a mood of imbricating streams, why not simply
>> adding a "safety stream" on top of each stream chain returned by open()
>> ? That layer could gracefully handle mutex locking, CheckClosed() calls,
>> and even, maybe, the attribute/method forwarding I evocated above.
>>
>
> It's an interesting idea, but it could also end up slower than the current
> situation.
> First because you are adding a level of indirection (i.e. additional method
> lookups and method calls).
> Second because currently the locks aren't always taken. For example, in
> BufferedIOReader, we needn't take the lock when the requested data is available
> in our buffer (the GIL already protects us). Having a separate "synchronizing"
> wrapper would forbid such micro-optimizations.
>
> If you want to experiment with this, you can use iobench (in the Tools
> directory) to measure file IO performance.
>

There are chances that my approach is slower, but the gains are so high in
terms of maintainability and use of use, that I would definitely advocate
it. Typically, the micro-optimizations you speak about can please heavy
programs, but they make code a mined land (maybe that's why they haven't
been put into _pyio :p). When the order of every instruction matters, when
all is carefully crafted so that the GIL is sufficient, I personally don't
dare touching anything anymore... There is for sure an important trade-off
between speed and robustness here, but I fear speed has won too much so far
(and now that the main implementation is in C, it's getting real hard to
apprehend). Maybe I should take the latest _pyio version, and make a fork
offering high level flexibility and security, for those who don't care
about so high performances?

>> - some semantic decisions of the current system are somehow dangerous.
>> For example, flushing errors occurring on close are swallowed. It seems
>> to me that it's of the utmost importance that the user be warned if the
>> bytes he wrote disappeared before reaching the kernel ; shouldn't we
>> decidedly enforce a "don't hide errors" everywhere in the io module ?
>>
>
> It may be a bug. Can you report it, along with a script or test showcasing it?
>
> Regards
>
> Antoine.
>

It seems a rather decided semantic (with comments like "#If flush() fails,
just give up"), but yep I'll file a bug to be sure.

> I don't think this can be helped though -- I really don't want open()
> to be slowed down or complicated by an attempt to do path
> manipulation. If this matters to the app author they should use
> os.path.abspath() or os.path.realpath() or whatever before calling
> open().
>

On second thought, having more precise "name" or "path" attributes might
give users the impression that they can rely on them, whereas indeed the
filesystem might have been modified a lot during the use of the stream
(even on windows, where files can actually be renamed/deleted while
they're open)...

> AFAIK, they aren't simple indexes in windows, and that's partly why
> even file descriptors cannot be safely passed between C runtimes on
> windows (whereas they can in most unices).
>
> David
>

Yep, windows file descriptors are actually emulated (with bugs...) on top
of native file handles, that's why we can't rely on them for advanced
stream operations.

Regards,
Pascal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cournape at gmail.com Sat Feb 6 13:25:33 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 6 Feb 2010 21:25:33 +0900 Subject: [Python-Dev] IO module improvements In-Reply-To: References: <4B6C108E.3010600@gmail.com> Message-ID: <5b8d13221002060425t423f65c4h854d04086fdfaf53@mail.gmail.com> On Fri, Feb 5, 2010 at 10:28 PM, Antoine Pitrou wrote: > Pascal Chambon gmail.com> writes: >> >> By the way, I'm having trouble with the "name" attribute of raw files, >> which can be string or integer (confusing), ambiguous if containing a >> relative path, and which isn't able to handle the new case of my >> library, i.e opening a file from an existing file handle (which is ALSO >> an integer, like C file descriptors...) > > What is the difference between "file handle" and a regular C file descriptor? > Is it some Windows-specific thing? > If so, then perhaps it deserves some Windows-specific attribute ("handle"?). When wondering about the same issue, I found the following useful: http://www.codeproject.com/KB/files/handles.aspx The C library file descriptor as returned by C open is emulated by win32. Only HANDLE is considered "native" (can be passed freely however you want within one process). cheers, David From benjamin at python.org Sat Feb 6 18:56:40 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 6 Feb 2010 11:56:40 -0600 Subject: [Python-Dev] [RELEASED] Python 2.7 alpha 3 Message-ID: <1afaf6161002060956o4b303053pfd8b37922bea904f@mail.gmail.com> On behalf of the Python development team, I'm cheerful to announce the third alpha release of Python 2.7. Python 2.7 is scheduled (by Guido and Python-dev) to be the last major version in the 2.x series. Though more major releases have not been absolutely ruled out, it's likely that the 2.7 release will an extended period of maintenance for the 2.x series. 2.7 includes many features that were first released in Python 3.1. The faster io module, the new nested with statement syntax, improved float repr, set literals, dictionary views, and the memoryview object have been backported from 3.1. Other features include an ordered dictionary implementation, unittests improvements, a new sysconfig module, and support for ttk Tile in Tkinter. For a more extensive list of changes in 2.7, see http://doc.python.org/dev/whatsnew/2.7.html or Misc/NEWS in the Python distribution. To download Python 2.7 visit: http://www.python.org/download/releases/2.7/ Please note that this is a development release, intended as a preview of new features for the community, and is thus not suitable for production use. The 2.7 documentation can be found at: http://docs.python.org/2.7 Please consider trying Python 2.7 with your code and reporting any bugs you may notice to: http://bugs.python.org Enjoy! -- Benjamin Peterson 2.7 Release Manager benjamin at python.org (on behalf of the entire python-dev team and 2.7's contributors) From barry at python.org Sat Feb 6 21:21:29 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 15:21:29 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> Message-ID: <20100206152129.5aad111b@freewill.wooz.org> On Feb 03, 2010, at 01:17 PM, Guido van Rossum wrote: >Can you clarify? In Python 3, __file__ always points to the source. 
>Clearly that is the way of the future. For 99.99% of uses of __file__, >if it suddenly never pointed to a .pyc file any more (even if one >existed) that would be just fine. So what's this talk of switching to >__source__? Upon further reflection, I agree. __file__ also points to the source in Python 2.7. Do we need an attribute to point to the compiled bytecode file? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 6 21:28:50 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 15:28:50 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6BE6EB.2040000@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <4B69EBBB.4090304@v.loewis.de> <4B6B4167.8040502@gmail.com> <4B6BE6EB.2040000@gmail.com> Message-ID: <20100206152850.0adf6a66@freewill.wooz.org> On Feb 05, 2010, at 07:37 PM, Nick Coghlan wrote: >Brett Cannon wrote: >> Does code exist out there where people are constructing bytecode from >> multiple files for a single module? > >I'm quite prepared to call YAGNI on that idea and just return a 2-tuple >of source filename and compiled filename. Me too. I think a 2-tuple of (source-path, compiled-path) is probably going to be fine for all practical purposes. I'd assign the former to a module's __file__ (as is done today in Python >= 2.7) and the latter to a module's __cached__. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 6 21:33:08 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 15:33:08 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B69571F.80809@egenix.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B6943FC.5080303@voidspace.org.uk> <4B69571F.80809@egenix.com> Message-ID: <20100206153308.7a05db87@freewill.wooz.org> On Feb 03, 2010, at 11:59 AM, M.-A. Lemburg wrote: >How about using an optionally relative cache dir setting to let >the user decide ? Why do we need that level of flexibility? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 6 21:35:43 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 15:35:43 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B697517.5060903@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> Message-ID: <20100206153543.47c2194f@freewill.wooz.org> On Feb 03, 2010, at 11:07 PM, Nick Coghlan wrote: >It's also the case that having to run Python to manage my own filesystem >would very annoying. 
If a dev has a broken .pyc that prevents the
>affected Python build from even starting how are they meant to use the
>nonfunctioning interpreter to find and delete the offending file? How is
>someone meant to find and delete the .pyc files if they prefer to use a
>graphical file manager over (or in conjunction with) the command line?

I agree.  I'd prefer to have a predictable place for the cached files,
independent of having to run Python to tell you where that is.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From barry at python.org Sat Feb 6 21:37:00 2010
From: barry at python.org (Barry Warsaw)
Date: Sat, 6 Feb 2010 15:37:00 -0500
Subject: [Python-Dev] PEP 3147: PYC Repository Directories
In-Reply-To: <20100203092616.GA11385@laurie.devork>
References: <20100130190005.058c8187@freewill.wooz.org>
	<2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com>
	<20100202225407.6a556f26@freewill.wooz.org>
	<87bpg6oknv.fsf@benfinney.id.au>
	<20100203092616.GA11385@laurie.devork>
Message-ID: <20100206153700.5f892e92@freewill.wooz.org>

On Feb 03, 2010, at 09:26 AM, Floris Bruynooghe wrote:

>On Wed, Feb 03, 2010 at 06:14:44PM +1100, Ben Finney wrote:
>> I don't understand the distinction you're making between those two
>> options. Can you explain what you mean by each of 'siblings' and
>> 'folder-per-folder'?
>
>sibilings: the original proposal, i.e.:
>
>foo.py
>foo.pyr/
>    MAGIC1.pyc
>    MAGIC1.pyo
>    ...
>bar.py
>bar.pyr/
>    MAGIC1.pyc
>    MAGIC1.pyo
>    ...
>
>folder-per-folder:
>
>foo.py
>bar.py
>__pyr__/
>    foo.MAGIC1.pyc
>    foo.MAGIC1.pyo
>    foo.MAGIC2.pyc
>    bar.MAGIC1.pyc
>    ...
>
>IIUC

Correct.  If necessary, I'll define those two terms in the PEP.

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From ezio.melotti at gmail.com Sat Feb 6 21:49:05 2010
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Sat, 06 Feb 2010 22:49:05 +0200
Subject: [Python-Dev] __file__ is not always an absolute path
Message-ID: <4B6DD5C1.3080608@gmail.com>

In #7712 I was trying to change regrtest to always run the tests in a
temporary CWD (e.g. /tmp/@test_1234_cwd/).
The patches attached to the issue add a context manager that changes the
CWD, and it works fine when I run ./python -m test.regrtest from trunk/.
However, when I try from trunk/Lib/ it fails with ImportErrors (note that
the latest patch by Florent Xicluna already tries to workaround the
problem).  The traceback points to "the_package = __import__(abstest,
globals(), locals(), [])" in runtest_inner (in regrtest.py), and a "print
__import__('test').__file__" there returns 'test/__init__.pyc'.
This can be reproduced quite easily:

trunk$ ./python
Python 2.7a2+ (trunk:77941M, Feb 3 2010, 06:40:49)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os, sys
>>> os.getcwd()
'/home/wolf/dev/trunk'
>>> import test
>>> test.__file__  # absolute
'/home/wolf/dev/trunk/Lib/test/__init__.pyc'
>>> os.chdir('/tmp')
>>> test.__file__
'/home/wolf/dev/trunk/Lib/test/__init__.pyc'
>>> from test import test_unicode  # works
>>> test_unicode.__file__
'/home/wolf/dev/trunk/Lib/test/test_unicode.pyc'
>>>
[21]+  Stopped                 ./python

trunk$ cd Lib/
trunk/Lib$ ../python
Python 2.7a2+ (trunk:77941M, Feb 3 2010, 06:40:49)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os, sys
>>> os.getcwd()
'/home/wolf/dev/trunk/Lib'
>>> import test
>>> test.__file__  # relative
'test/__init__.pyc'
>>> os.chdir('/tmp')
>>> from test import test_unicode  # fails
Traceback (most recent call last):
  File "", line 1, in
ImportError: cannot import name test_unicode

Is there a reason why in the second case test.__file__ is relative?

From exarkun at twistedmatrix.com Sat Feb 6 22:08:30 2010
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Sat, 06 Feb 2010 21:08:30 -0000
Subject: [Python-Dev] PEP 3147: PYC Repository Directories
In-Reply-To: <20100206152129.5aad111b@freewill.wooz.org>
References: <20100130190005.058c8187@freewill.wooz.org>
	<4B64DE20.9060708@g.nevcal.com>
	<20100202225011.3d018a47@freewill.wooz.org>
	<4B697AAD.2010307@gmail.com>
	<20100203084539.47559c8f@freewill.wooz.org>
	<4B69E0F9.5050103@gmail.com>
	<20100206152129.5aad111b@freewill.wooz.org>
Message-ID: <20100206210830.26099.330344071.divmod.xquotient.543@localhost.localdomain>

On 08:21 pm, barry at python.org wrote:
>On Feb 03, 2010, at 01:17 PM, Guido van Rossum wrote:
>>Can you clarify? In Python 3, __file__ always points to the source.
>>Clearly that is the way of the future. For 99.99% of uses of __file__,
>>if it suddenly never pointed to a .pyc file any more (even if one
>>existed) that would be just fine. So what's this talk of switching to
>>__source__?
>
>Upon further reflection, I agree.  __file__ also points to the source
>in
>Python 2.7. Do we need an attribute to point to the compiled bytecode
>file?

What if, instead of trying to annotate the module object with this
assortment of metadata - metadata which depends on lots of things, and
can vary from interpreter to interpreter, and even from module to module
(depending on how it was loaded) - we just stuck with the __loader__
annotation, and encouraged/allowed/facilitated the use of the loader
object to learn all of this extra information?

Jean-Paul

From solipsis at pitrou.net Sat Feb 6 22:15:15 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 6 Feb 2010 21:15:15 +0000 (UTC)
Subject: [Python-Dev] IO module improvements
References: <4B6C108E.3010600@gmail.com> <4B6D55CC.8070904@wanadoo.fr>
Message-ID: 

Pascal Chambon gmail.com> writes:
>
> The problem is, doing that forwarding is quite complicated.

Hmm, why is it complicated? I agree it can be tedious (especially in C),
but it doesn't seem complicated in itself.

> My own FileIO class alas needs locking, because for example, on windows
> truncating a file means seeking + setting end of file + restoring
> pointer.

That's assuming you need FileIO to be thread-safe at all. If you always
wrap it in a Buffered object, the Buffered object will ensure
thread-safety using its own lock.
(I suppose use cases for unbuffered file IO *in Python* must be pretty
rare, so most of the time you shouldn't use an unwrapped FileIO anyway)

> And I TextIOWrapper seems to deserve locks.
Maybe excerpts like this > one really are thread-safe, but a long study would be required to > ensure it. > [snip] Actually, TextIOWrapper is simply not thread-safe for most of its operations. I think we did the work for simple writing, though, since it's better for multi-threaded use of print(). > There are chances that my approach is slower, but the gains are so high > in terms of maintainability and use of use, that I would definitely > advocate it. I agree that optimizations must always be balanced with maintainability and simplicity. In this case, though, the IO system is really a critical part and I'm not sure users would like us to pessimize the implementation. > Typically, the micro-optimizations you speak about can please heavy > programs, but they make code a mined land (maybe that's why they > haven't been put into _pyio :p). Well, there's no point in micro-optimizing _pyio since it's dramatically slower than the C version :) It's there more as a reference implementation. > Maybe I should take the latest _pyio version, and make a fork offering > high level flexibility and security, for those who don't care about so > high performances ? You can, but be aware that _pyio is *really* slow... I'm not sure it would be a service to many users. cheers Antoine. From brett at python.org Sat Feb 6 22:50:44 2010 From: brett at python.org (Brett Cannon) Date: Sat, 6 Feb 2010 13:50:44 -0800 Subject: [Python-Dev] Absolute imports in Python 2.x? In-Reply-To: <4B6980B5.4070006@gmail.com> References: <5c6f2a5d1002020424u5420664ct12a2563d42abeee7@mail.gmail.com> <4B6826B9.50808@trueblade.com> <5c6f2a5d1002030545j4a403987rd9ebcc24d31f71cf@mail.gmail.com> <4B6980B5.4070006@gmail.com> Message-ID: On Wed, Feb 3, 2010 at 05:57, Nick Coghlan wrote: > Mark Dickinson wrote: >> Agreed on all points. ?Would it be terrible to simply add all relevant >> tags the moment a PEP is accepted? ?E.g., if a PEP pronounces some >> particular behaviour deprecated in Python 3.3 and removed in Python >> 3.4, then corresponding release blockers for 3.3 and 3.4 could be >> opened as part of implementing the PEP. > > That strikes me as a really good idea, since the tracker already serves > as a global "to do" list for the project (heck, I've used it as a > semi-private to do list at times, when I've thought of things I need to > fix when I don't have the roundtuits free to actually fix them). Martin filled me in on how to add new versions, so when we need a new one people can let me know. -Brett > > Cheers, > Nick. > > -- > Nick Coghlan ? | ? ncoghlan at gmail.com ? | ? Brisbane, Australia > --------------------------------------------------------------- > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From g.brandl at gmx.net Sat Feb 6 23:06:35 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 06 Feb 2010 23:06:35 +0100 Subject: [Python-Dev] Making loggerClass an attribute of the logger manager? In-Reply-To: References: Message-ID: <4B6DE7EB.8090100@gmx.net> Am 25.11.2009 11:32, schrieb Vinay Sajip: > Georg Brandl gmx.net> writes: > >> Making the loggerClass configurable per manager would solve the >> problem for me, and AFAICS since most applications don't use >> different managers anyway, there should not be any detrimental >> effects. What do you think? > > Seems reasonable. 
Apart from the API to set/get, _loggerClass is only used by > the manager when instantiating a new logger. I've created a patch for this, see . I'd like to get this into 2.7 before beta1 :) Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From guido at python.org Sat Feb 6 23:20:14 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 14:20:14 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100206152129.5aad111b@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <20100206152129.5aad111b@freewill.wooz.org> Message-ID: On Sat, Feb 6, 2010 at 12:21 PM, Barry Warsaw wrote: > On Feb 03, 2010, at 01:17 PM, Guido van Rossum wrote: >>Can you clarify? In Python 3, __file__ always points to the source. >>Clearly that is the way of the future. For 99.99% of uses of __file__, >>if it suddenly never pointed to a .pyc file any more (even if one >>existed) that would be just fine. So what's this talk of switching to >>__source__? > > Upon further reflection, I agree. ?__file__ also points to the source in > Python 2.7. Not in the 2.7 svn repo I have access to. It still points to the .pyc file if it was used. And I propose not to disturb this in 2.7, at least not by default. I'm fine though with a flag or distro-overridable config setting to change this behavior. > Do we need an attribute to point to the compiled bytecode file? I think we do. Quite unrelated to this discussion I have a use case for knowing easily whether a module was actually loaded from bytecode or not -- but I also have a need for __file__ to point to the source. So having both __file__ and __compiled__ makes sense to me. When there is no source code but only bytecode I am file with both pointing to the bytecode; in that case I presume that the bytecode is not in a __pyr__ subdirectory. For dynamically loaded extension modules I think both should be left unset, and some other __xxx__ variable could point to the .so or .dll file. FWIW the most common use case for __file__ is probably to find data files relative to it. Since the data won't be in the __pyr__ directory we couldn't make __file__ point to the __pyr__/....pyc file without much code breakage. (Yes, I am still in favor of the folder-per-folder model.) -- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Feb 6 23:29:38 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 14:29:38 -0800 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <4B6DD5C1.3080608@gmail.com> References: <4B6DD5C1.3080608@gmail.com> Message-ID: On Sat, Feb 6, 2010 at 12:49 PM, Ezio Melotti wrote: > In #7712 I was trying to change regrtest to always run the tests in a > temporary CWD (e.g. /tmp/@test_1234_cwd/). > The patches attached to the issue add a context manager that changes the > CWD, and it works fine when I run ./python -m test.regrtest from trunk/. > However, when I try from trunk/Lib/ it fails with ImportErrors (note that > the latest patch by Florent Xicluna already tries to workaround the > problem). 
The traceback points to "the_package = __import__(abstest, > globals(), locals(), [])" in runtest_inner (in regrtest.py), and a "print > __import__('test').__file__" there returns 'test/__init__.pyc'. > This can be reproduced quite easily: > > trunk$ ./python > Python 2.7a2+ (trunk:77941M, Feb ?3 2010, 06:40:49) > [GCC 4.4.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import os, sys >>>> os.getcwd() > '/home/wolf/dev/trunk' >>>> import test >>>> test.__file__ ?# absolute > '/home/wolf/dev/trunk/Lib/test/__init__.pyc' >>>> os.chdir('/tmp') >>>> test.__file__ > '/home/wolf/dev/trunk/Lib/test/__init__.pyc' >>>> from test import test_unicode ?# works >>>> test_unicode.__file__ > '/home/wolf/dev/trunk/Lib/test/test_unicode.pyc' >>>> > [21]+ ?Stopped ? ? ? ? ? ? ? ? ./python > > trunk$ cd Lib/ > trunk/Lib$ ../python > Python 2.7a2+ (trunk:77941M, Feb ?3 2010, 06:40:49) > [GCC 4.4.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import os, sys >>>> os.getcwd() > '/home/wolf/dev/trunk/Lib' >>>> import test >>>> test.__file__ ?# relative > 'test/__init__.pyc' >>>> os.chdir('/tmp') >>>> from test import test_unicode ?# fails > Traceback (most recent call last): > ?File "", line 1, in > ImportError: cannot import name test_unicode > > Is there a reason why in the second case test.__file__ is relative? I haven't tried to repro this particular example, but the reason is that we don't want to have to call getpwd() on every import nor do we want to have some kind of in-process variable to cache the current directory. (getpwd() is relatively slow and can sometimes fail outright, and trying to cache it has a certain risk of being wrong.) What we do instead, is code in site.py that walks over the elements of sys.path and turns them into absolute paths. However this code runs before '' is inserted in the front of sys.path, so that the initial value of sys.path is ''. You may want to print the value of sys.path at various points to see for yourself. -- --Guido van Rossum (python.org/~guido) From barry at python.org Sun Feb 7 00:22:50 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 18:22:50 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> Message-ID: <20100206182250.7f7f5f5c@freewill.wooz.org> On Jan 31, 2010, at 11:04 AM, Raymond Hettinger wrote: >> It does this by >> allowing many different byte compilation files (.pyc files) to be >> co-located with the Python source file (.py file). > >It would be nice if all the compilation files could be tucked >into one single zipfile per directory to reduce directory clutter. > >It has several benefits besides tidiness. It hides the implementation >details of when magic numbers get shifted. And it may allow faster >start-up times when the zipfile is in the disk cache. This is closer in spirit to the original (uncirculated) PEP which called for fat pyc files, but without the complicated implementation details. It's still an interesting approach to explore. Writer concurrency can be handled with dot-lock files, but that does incur some extra overhead, such as the remove() of the lock file. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From exarkun at twistedmatrix.com Sun Feb 7 00:22:56 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Sat, 06 Feb 2010 23:22:56 -0000 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: References: <4B6DD5C1.3080608@gmail.com> Message-ID: <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> On 10:29 pm, guido at python.org wrote: >On Sat, Feb 6, 2010 at 12:49 PM, Ezio Melotti >wrote: >>In #7712 I was trying to change regrtest to always run the tests in a >>temporary CWD (e.g. /tmp/@test_1234_cwd/). >>The patches attached to the issue add a context manager that changes >>the >>CWD, and it works fine when I run ./python -m test.regrtest from >>trunk/. >>However, when I try from trunk/Lib/ it fails with ImportErrors (note >>that >>the latest patch by Florent Xicluna already tries to workaround the >>problem). The traceback points to "the_package = __import__(abstest, >>globals(), locals(), [])" in runtest_inner (in regrtest.py), and a >>"print >>__import__('test').__file__" there returns 'test/__init__.pyc'. >>This can be reproduced quite easily: >[snip] > >I haven't tried to repro this particular example, but the reason is >that we don't want to have to call getpwd() on every import nor do we >want to have some kind of in-process variable to cache the current >directory. (getpwd() is relatively slow and can sometimes fail >outright, and trying to cache it has a certain risk of being wrong.) Assuming you mean os.getcwd(): exarkun at boson:~$ python -m timeit -s 'def f(): pass' 'f()' 10000000 loops, best of 3: 0.132 usec per loop exarkun at boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()' 1000000 loops, best of 3: 1.02 usec per loop exarkun at boson:~$ So it's about 7x more expensive than a no-op function call. I'd call this pretty quick. Compared to everything else that happens during an import, I'm not convinced this wouldn't be lost in the noise. I think it's at least worth implementing and measuring. Jean-Paul From barry at python.org Sun Feb 7 00:28:01 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 18:28:01 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <85f6a31f1002011404u16d74e7cjfd0aca6d8c9f7ca3@mail.gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B674555.8060104@v.loewis.de> <85f6a31f1002011404u16d74e7cjfd0aca6d8c9f7ca3@mail.gmail.com> Message-ID: <20100206182801.5a275084@freewill.wooz.org> On Feb 01, 2010, at 02:04 PM, Paul Du Bois wrote: >It's an interesting challenge to write the file in such a way that >it's safe for a reader and writer to co-exist. Like Brett, I >considered an append-only scheme, but one needs to handle the case >where the bytecode for a particular magic number changes. At some >point you'd need to sweep garbage from the file. All solutions seem >unnecessarily complex, and unnecessary since in practice the case >should not come up. I don't think that part's difficult. The byte code's only going to change if the source file has changed, and in that case, /all/ the byte code in the "fat pyc" file will be invalidated, so the whole thing can be deleted by the first writer. I'd worked that out in the original fat pyc version of the PEP. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 00:29:28 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 6 Feb 2010 18:29:28 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B67559F.6050401@v.loewis.de> References: <20100130190005.058c8187@freewill.wooz.org> <4B674555.8060104@v.loewis.de> <85f6a31f1002011404u16d74e7cjfd0aca6d8c9f7ca3@mail.gmail.com> <4B67559F.6050401@v.loewis.de> Message-ID: <20100206182928.720c9f04@freewill.wooz.org> On Feb 01, 2010, at 11:28 PM, Martin v. L?wis wrote: >So what would you do for concurrent writers, then? The current >implementation relies on creat(O_EXCL) to be atomic, so a second >writer would just fail. This is but the only IO operation that is >guaranteed to be atomic (along with mkdir(2)), so reusing the current >approach doesn't work. I believe rename(2) is atomic also, at least on POSIX. I'm not sure if that helps us though. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From guido at python.org Sun Feb 7 00:53:00 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 15:53:00 -0800 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> Message-ID: On Sat, Feb 6, 2010 at 3:22 PM, wrote: > On 10:29 pm, guido at python.org wrote: >> >> On Sat, Feb 6, 2010 at 12:49 PM, Ezio Melotti >> wrote: >>> >>> In #7712 I was trying to change regrtest to always run the tests in a >>> temporary CWD (e.g. /tmp/@test_1234_cwd/). >>> The patches attached to the issue add a context manager that changes the >>> CWD, and it works fine when I run ./python -m test.regrtest from trunk/. >>> However, when I try from trunk/Lib/ it fails with ImportErrors (note that >>> the latest patch by Florent Xicluna already tries to workaround the >>> problem). The traceback points to "the_package = __import__(abstest, >>> globals(), locals(), [])" in runtest_inner (in regrtest.py), and a "print >>> __import__('test').__file__" there returns 'test/__init__.pyc'. >>> This can be reproduced quite easily: >> >> [snip] >> >> I haven't tried to repro this particular example, but the reason is >> that we don't want to have to call getpwd() on every import nor do we >> want to have some kind of in-process variable to cache the current >> directory. (getpwd() is relatively slow and can sometimes fail >> outright, and trying to cache it has a certain risk of being wrong.) > > Assuming you mean os.getcwd(): Yes. > exarkun at boson:~$ python -m timeit -s 'def f(): pass' 'f()' > 10000000 loops, best of 3: 0.132 usec per loop > exarkun at boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()' > 1000000 loops, best of 3: 1.02 usec per loop > exarkun at boson:~$ > So it's about 7x more expensive than a no-op function call. ?I'd call this > pretty quick. ?Compared to everything else that happens during an import, > I'm not convinced this wouldn't be lost in the noise. ?I think it's at least > worth implementing and measuring. But it's a system call, and its speed depends on a lot more than the speed of a simple function call. It depends on the OS kernel, possibly on the filesystem, and so on. 
Also "os.getcwd()" abstracts away various platform details that the C import code would have to replicate. Really, the approach of preprocessing sys.path makes much more sense. If an app wants sys.path[0] to be an absolute path too they can modify it themselves. -- --Guido van Rossum (python.org/~guido) From guido at python.org Sun Feb 7 01:02:27 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 16:02:27 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100206182801.5a275084@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <4B674555.8060104@v.loewis.de> <85f6a31f1002011404u16d74e7cjfd0aca6d8c9f7ca3@mail.gmail.com> <20100206182801.5a275084@freewill.wooz.org> Message-ID: On Sat, Feb 6, 2010 at 3:28 PM, Barry Warsaw wrote: > On Feb 01, 2010, at 02:04 PM, Paul Du Bois wrote: > >>It's an interesting challenge to write the file in such a way that >>it's safe for a reader and writer to co-exist. Like Brett, I >>considered an append-only scheme, but one needs to handle the case >>where the bytecode for a particular magic number changes. At some >>point you'd need to sweep garbage from the file. All solutions seem >>unnecessarily complex, and unnecessary since in practice the case >>should not come up. > > I don't think that part's difficult. ?The byte code's only going to change if > the source file has changed, and in that case, /all/ the byte code in the "fat > pyc" file will be invalidated, so the whole thing can be deleted by the first > writer. ?I'd worked that out in the original fat pyc version of the PEP. I'm sorry, but I'm totally against fat bytecode files. They make things harder for all tools. The beauty of the existing bytecode format is that it's totally trivial: magic number, source mtime, unmarshalled code object. You can't beat the beauty of that. For the traditional "skinny" bytecode files, I believe that the existing algorithm which writes zeros in the place of the magic number first, writes the rest of the file, and then goes back to write the correct magic number, is correct with a single writer and multiple readers (assuming the readers ignore the file if its magic number is invalid). The creat(O_EXCL) option ensures that there won't be multiple writers. No rename() is necessary; POSIX rename() may be atomic, but it's a directory modification which makes it potentially slow. -- --Guido van Rossum (python.org/~guido) From lists at cheimes.de Sun Feb 7 01:04:27 2010 From: lists at cheimes.de (Christian Heimes) Date: Sun, 07 Feb 2010 01:04:27 +0100 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: References: <4B6DD5C1.3080608@gmail.com> Message-ID: <4B6E038B.8010507@cheimes.de> Guido van Rossum wrote: > What we do instead, is code in site.py that walks over the elements of > sys.path and turns them into absolute paths. However this code runs > before '' is inserted in the front of sys.path, so that the initial > value of sys.path is ''. > > You may want to print the value of sys.path at various points to see > for yourself. I ran into the issue on Debian or Ubuntu (can't remember) several years ago. The post-install script of the Python package did something like "cd /usr/lib/pythonX.Y && ./compileall.py", so all pyc files were created relative to the library root of Python. The __file__ attribute of all pre-compiled Python files were relative, too. 
Christian From ziade.tarek at gmail.com Sun Feb 7 01:08:19 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sun, 7 Feb 2010 01:08:19 +0100 Subject: [Python-Dev] Proposal for the getpass module Message-ID: <94bdd2611002061608q109e6502kd65cc5df44c71646@mail.gmail.com> Hello, I would like to propose a small change in the getpass module so it's able to get passwords from keyrings (like KWallet, Keychain, etc) The idea is to provide a getpass.cfg configuration file where people can provide the name of a function to use when getpass is called. Then third-party projects can implement this function. For example the Python Keyring library.[1] could be installed and configured to be used by people that wants getpass calls to be handled by this tool. That's a backward compatible change, and it avoids adding any new module in the stdlib. Plus, it offers a greatly improved getpass module with no risks for the stdlib stability : it becomes a reference implementation with an interface for third-party implementers. A prototype is here : http://bitbucket.org/tarek/getpass/ (work in progress but you can get the idea) [1] http://pypi.python.org/pypi/keyring Regards Tarek -- Tarek Ziad? | http://ziade.org From guido at python.org Sun Feb 7 01:13:38 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 16:13:38 -0800 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <4B6E038B.8010507@cheimes.de> References: <4B6DD5C1.3080608@gmail.com> <4B6E038B.8010507@cheimes.de> Message-ID: On Sat, Feb 6, 2010 at 4:04 PM, Christian Heimes wrote: > Guido van Rossum wrote: >> What we do instead, is code in site.py that walks over the elements of >> sys.path and turns them into absolute paths. However this code runs >> before '' is inserted in the front of sys.path, so that the initial >> value of sys.path is ''. >> >> You may want to print the value of sys.path at various points to see >> for yourself. > > I ran into the issue on Debian or Ubuntu (can't remember) several years > ago. The post-install script of the Python package did something like > "cd /usr/lib/pythonX.Y && ./compileall.py", so all pyc files were > created relative to the library root of Python. The __file__ attribute > of all pre-compiled Python files were relative, too. Are you sure you remember this right? The .co_filename attributes will be unmarshalled straight from the bytecode file which indeed will have the relative path in this case (hopefully we'll finally fix this in 3.2 and 2.7). But if I read the code in import.c correctly, __file__ is set on the basis of the path of the file read, which in turn comes from sys.path which will have been "absolufied" by site.py. Or maybe this was so long ago that site.py didn't yet do that? 
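A minimal sketch of the workaround implied above (an illustration only, not
part of the original message): an application that wants later imports to
record absolute paths, even if it changes its working directory afterwards,
can absolutize sys.path itself at startup, before doing any further imports.

    import os
    import sys

    # Turn every sys.path entry (including the '' entry that means the
    # current directory) into an absolute path; os.path.abspath('') is the
    # current working directory.
    sys.path[:] = [os.path.abspath(p) for p in sys.path]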
-- --Guido van Rossum (python.org/~guido) From ben+python at benfinney.id.au Sun Feb 7 01:27:42 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 07 Feb 2010 11:27:42 +1100 Subject: [Python-Dev] PEP 3147: PYC Repository Directories References: <20100130190005.058c8187@freewill.wooz.org> <2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> Message-ID: <878wb5lwjl.fsf@benfinney.id.au> Barry Warsaw writes: > On Feb 03, 2010, at 11:07 PM, Nick Coghlan wrote: > > >It's also the case that having to run Python to manage my own > >filesystem would very annoying. [?] Files that are problematic wouldn't need Python to manage any more than currently. The suggestion was just that, a suggestion for Python to expose information to assist; it wouldn't be required. > I agree. I'd prefer to have a predictable place for the cached files, > independent of having to run Python to tell you where that is. Right; I don't see who would disagree with that. I don't see any conflict between ?decouple compiled bytecode file locations from source file locations? versus ?predictable location for the compiled bytecode files?. -- \ ?All television is educational television. The question is: | `\ what is it teaching?? ?Nicholas Johnson | _o__) | Ben Finney From lists at cheimes.de Sun Feb 7 01:36:12 2010 From: lists at cheimes.de (Christian Heimes) Date: Sun, 07 Feb 2010 01:36:12 +0100 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: References: <4B6DD5C1.3080608@gmail.com> <4B6E038B.8010507@cheimes.de> Message-ID: <4B6E0AFC.3040204@cheimes.de> Guido van Rossum schrieb: > Are you sure you remember this right? The .co_filename > attributes will be unmarshalled straight from the bytecode file which > indeed will have the relative path in this case (hopefully we'll > finally fix this in 3.2 and 2.7). But if I read the code in import.c > correctly, __file__ is set on the basis of the path of the file read, > which in turn comes from sys.path which will have been "absolufied" by > site.py. Or maybe this was so long ago that site.py didn't yet do > that? I ran into the problem years ago. I can recall the Python version but it must have been 2.2 or 2.3, maybe 2.1. I'm not entirely sure how it happened, too. All I can remember that I traced the cause down to the way compileall was called. I've tried to reproduce the issue with Python 2.6 but failed. It looks like the code does the right thing. Christian From guido at python.org Sun Feb 7 01:30:55 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 16:30:55 -0800 Subject: [Python-Dev] Proposal for the getpass module In-Reply-To: <94bdd2611002061608q109e6502kd65cc5df44c71646@mail.gmail.com> References: <94bdd2611002061608q109e6502kd65cc5df44c71646@mail.gmail.com> Message-ID: [redirecting to python-ideas] On Sat, Feb 6, 2010 at 4:08 PM, Tarek Ziad? wrote: > Hello, > > I would like to propose a small change in the getpass module so it's > able to get passwords from keyrings (like KWallet, Keychain, etc) > > The idea is to provide a getpass.cfg configuration file where people > can provide the name of a function to use when getpass is called. > Then third-party projects can implement this function. 
For example the > Python Keyring library.[1] could be installed and configured to be > used by people that wants getpass calls to be handled by this tool. > > That's a backward compatible change, and it avoids adding any new > module in the stdlib. Plus, it offers a greatly improved getpass > module with no risks for the stdlib stability : it becomes a reference > implementation with an interface for third-party implementers. > > A prototype is here : http://bitbucket.org/tarek/getpass/ (work in > progress but you can get the idea) > > [1] http://pypi.python.org/pypi/keyring Don't you usually have to tell the keyring which password to retrieve? The signature for getpass() doesn't have enough info for that. I'm not sure that not adding a new module to the stdlib is a sufficient reason to foist the new semantics on an old function. -- --Guido van Rossum (python.org/~guido) From guido at python.org Sun Feb 7 01:39:15 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 16:39:15 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <878wb5lwjl.fsf@benfinney.id.au> References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> Message-ID: On Sat, Feb 6, 2010 at 4:27 PM, Ben Finney wrote: > Barry Warsaw writes: >> I agree. I'd prefer to have a predictable place for the cached files, >> independent of having to run Python to tell you where that is. > > Right; I don't see who would disagree with that. I don't see any > conflict between ?decouple compiled bytecode file locations from source > file locations? versus ?predictable location for the compiled bytecode > files?. The conflict is purely that PEP 3147 proposes the new behavior to be optional, and adds a flag (-R) and an environment variable ($PYTHONPYR) to change it. I presume Barry is proposing this out of fear that the new behavior might upset somebody; personally I think it would be better if the behavior weren't optional. At least not in new Python releases -- in backports such as a distribution that wants this feature might make, it may make sense to be more conservative, or at least to have a way to turn it off. -- --Guido van Rossum (python.org/~guido) From guido at python.org Sun Feb 7 01:42:09 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 16:42:09 -0800 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <4B6E0AFC.3040204@cheimes.de> References: <4B6DD5C1.3080608@gmail.com> <4B6E038B.8010507@cheimes.de> <4B6E0AFC.3040204@cheimes.de> Message-ID: On Sat, Feb 6, 2010 at 4:36 PM, Christian Heimes wrote: > Guido van Rossum schrieb: >> Are you sure you remember this right? The .co_filename >> attributes will be unmarshalled straight from the bytecode file which >> indeed will have the relative path in this case (hopefully we'll >> finally fix this in 3.2 and 2.7). But if I read the code in import.c >> correctly, __file__ is set on the basis of the path of the file read, >> which in turn comes from sys.path which will have been "absolufied" by >> site.py. Or maybe this was so long ago that site.py didn't yet do >> that? > > I ran into the problem years ago. I can recall the Python version but it > must have been 2.2 or 2.3, maybe 2.1. 
I'm not entirely sure how it > happened, too. All I can remember that I traced the cause down to the > way compileall was called. I've tried to reproduce the issue with Python > 2.6 but failed. It looks like the code does the right thing. Hm. The timing doesn't match. From the svn logs for site.py looks like this was introduced in r17768 on 2000-09-28, which puts it before 2.0 was released. -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Sun Feb 7 02:10:29 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 07 Feb 2010 11:10:29 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <878wb5lwjl.fsf@benfinney.id.au> References: <20100130190005.058c8187@freewill.wooz.org> <2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> Message-ID: <4B6E1305.2080602@gmail.com> Ben Finney wrote: > Right; I don't see who would disagree with that. I don't see any > conflict between ?decouple compiled bytecode file locations from source > file locations? versus ?predictable location for the compiled bytecode > files?. The more decoupled they are, the harder it is to manually find the bytecode file. With the current .pyc scheme, .pyr folders or an SVN style Python cache directory, finding the bytecode file is pretty easy, since the cached file is either in the same directory as the source file or in a subdirectory. With any form of shadow hierarchy though, it gets trickier because you have to: 1. Find the root of the shadow hierarchy 2. Navigate within the shadow hierarchy down to the point that matches where your source file was It's a fairly significant increase in mental overhead. It gets much worse if the location of the shadow hierarchy root is configurable in any way (e.g. based on sys.path contents or an environment variable). Restricting the caching mechanism to the folder containing the source file keeps things a lot simpler. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From guido at python.org Sun Feb 7 02:32:15 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 6 Feb 2010 17:32:15 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6E1305.2080602@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> <4B6E1305.2080602@gmail.com> Message-ID: On Sat, Feb 6, 2010 at 5:10 PM, Nick Coghlan wrote: > Ben Finney wrote: >> Right; I don't see who would disagree with that. I don't see any >> conflict between ?decouple compiled bytecode file locations from source >> file locations? versus ?predictable location for the compiled bytecode >> files?. > > The more decoupled they are, the harder it is to manually find the > bytecode file. 
> > With the current .pyc scheme, .pyr folders or an SVN style Python cache > directory, finding the bytecode file is pretty easy, since the cached > file is either in the same directory as the source file or in a > subdirectory. > > With any form of shadow hierarchy though, it gets trickier because you > have to: > 1. Find the root of the shadow hierarchy > 2. Navigate within the shadow hierarchy down to the point that matches > where your source file was > > It's a fairly significant increase in mental overhead. It gets much > worse if the location of the shadow hierarchy root is configurable in > any way (e.g. based on sys.path contents or an environment variable). > > Restricting the caching mechanism to the folder containing the source > file keeps things a lot simpler. Great way of explaining why the basic folder-per-folder model wins over the folder-per-sys.path-entry model! The basic folder-per-folder model doesn't need to know what sys.path is. (And I hadn't followed previous messages in the thread with enough care to understand the subtlen implications of Ben's point. Sorry!) -- --Guido van Rossum (python.org/~guido) From ben+python at benfinney.id.au Sun Feb 7 03:04:45 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Sun, 07 Feb 2010 13:04:45 +1100 Subject: [Python-Dev] PEP 3147: PYC Repository Directories References: <20100130190005.058c8187@freewill.wooz.org> <2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> <4B6E1305.2080602@gmail.com> Message-ID: <874oltls1u.fsf@benfinney.id.au> Nick Coghlan writes: > The more decoupled they are, the harder it is to manually find the > bytecode file. Okay. So it's not so much about ?predictable?, but rather about ?predictable by a human without too much cognitive effort?. I can see value in that, though it's best to be explicit that this is a goal (to be clear that ?a program can tell you where they live? isn't a solution). > It's a fairly significant increase in mental overhead. It gets much > worse if the location of the shadow hierarchy root is configurable in > any way (e.g. based on sys.path contents or an environment variable). > > Restricting the caching mechanism to the folder containing the source > file keeps things a lot simpler. Simpler for the human working on the source code; not for the human trying to fit this scheme in with an OS package management system. (Again, I'm just clarifying and making the contrast explicit, not judging relative values.) This makes it clearer to me that there is a glaring incompatibility between this desire for ?keep the compiled bytecode files close to the source files? versus ?decouple the locations so the OS package manager can do its job of managing installed files?. I recognise after earlier discussion in this thread that's not an issue being addressed by PEP 3147. -- \ ?Those are my principles. If you don't like them I have | `\ others.? 
?Groucho Marx | _o__) | Ben Finney From exarkun at twistedmatrix.com Sun Feb 7 05:27:09 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Sun, 07 Feb 2010 04:27:09 -0000 Subject: [Python-Dev] __file__ is not always an absolute path Message-ID: <20100207042709.26099.1983212382.divmod.xquotient.613@localhost.localdomain> On 6 Feb, 11:53 pm, guido at python.org wrote: >On Sat, Feb 6, 2010 at 3:22 PM, wrote: >>On 10:29 pm, guido at python.org wrote: >>> >>>[snip] >>> >>>I haven't tried to repro this particular example, but the reason is >>>that we don't want to have to call getpwd() on every import nor do we >>>want to have some kind of in-process variable to cache the current >>>directory. (getpwd() is relatively slow and can sometimes fail >>>outright, and trying to cache it has a certain risk of being wrong.) >> >>Assuming you mean os.getcwd(): > >Yes. >>exarkun at boson:~$ python -m timeit -s 'def f(): pass' 'f()' >>10000000 loops, best of 3: 0.132 usec per loop >>exarkun at boson:~$ python -m timeit -s 'from os import getcwd' >>'getcwd()' >>1000000 loops, best of 3: 1.02 usec per loop >>exarkun at boson:~$ >>So it's about 7x more expensive than a no-op function call. I'd call >>this >>pretty quick. Compared to everything else that happens during an >>import, >>I'm not convinced this wouldn't be lost in the noise. I think it's at >>least >>worth implementing and measuring. > >But it's a system call, and its speed depends on a lot more than the >speed of a simple function call. It depends on the OS kernel, possibly >on the filesystem, and so on. Do you know of a case where it's actually slow? If not, how convincing should this argument really be? Perhaps we can measure it on a few platforms before passing judgement. For reference, my numbers are from Linux 2.6.31 and my filesystem (though I don't think it really matters) is ext3. I have eglibc 2.10.1 compiled by gcc version 4.4.1. >Also "os.getcwd()" abstracts away >various platform details that the C import code would have to >replicate. That logic can all be hidden behind a C API which os.getcwd() can then be implemented in terms of. There's no reason for it to be any harder to invoke from C than it is from Python. >Really, the approach of preprocessing sys.path makes much >more sense. If an app wants sys.path[0] to be an absolute path too >they can modify it themselves. That may turn out to be the less expensive approach. I'm not sure in what other ways it is the approach that makes much more sense. Quite the opposite: centralizing the responsibility for normalizing this value makes a lot of sense if you consider things like reducing code duplication and, in turn, removing the possibility for bugs. Adding better documentation for __file__ is another task which I think is worth undertaking, regardless of whether any change is made to how its value is computed. At the moment, the two or three sentences about it in PEP 302 are all I've been able to find, and they don't really get the job done. 
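In pure-Python terms, the "centralized" normalization amounts to roughly
this (a hypothetical helper, purely for illustration -- the real change
would live in import.c):

import os

def resolved_module_file(path_entry, relative_filename):
    # Resolve __file__ once, at import time, so the recorded path cannot
    # silently go stale when the process later calls os.chdir().  Only
    # relative entries (such as '') actually pay for the getcwd() call
    # hidden inside abspath().
    return os.path.abspath(os.path.join(path_entry, relative_filename))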
Jean-Paul From ncoghlan at gmail.com Sun Feb 7 07:15:39 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 07 Feb 2010 16:15:39 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100206210830.26099.330344071.divmod.xquotient.543@localhost.localdomain> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <20100206152129.5aad111b@freewill.wooz.org> <20100206210830.26099.330344071.divmod.xquotient.543@localhost.localdomain> Message-ID: <4B6E5A8B.5090705@gmail.com> exarkun at twistedmatrix.com wrote: > On 08:21 pm, barry at python.org wrote: >> On Feb 03, 2010, at 01:17 PM, Guido van Rossum wrote: >>> Can you clarify? In Python 3, __file__ always points to the source. >>> Clearly that is the way of the future. For 99.99% of uses of __file__, >>> if it suddenly never pointed to a .pyc file any more (even if one >>> existed) that would be just fine. So what's this talk of switching to >>> __source__? >> >> Upon further reflection, I agree. __file__ also points to the source in >> Python 2.7. Do we need an attribute to point to the compiled bytecode >> file? > > What if, instead of trying to annotate the module object with this > assortment of metadata - metadata which depends on lots of things, and > can vary from interpreter to interpreter, and even from module to module > (depending on how it was loaded) - we just stuck with the __loader__ > annotation, and encouraged/allowed/facilitated the use of the loader > object to learn all of this extra information? Trickier than it sounds. In the case of answering the question "was this module loaded from bytecode or not?", the loader will need somewhere to store the answer for each file. The easiest per-module store is the module's own global namespace - the loader's own attribute namespace isn't appropriate, since one loader may handle multiple modules. The filesystem can't be used as a reference because even when the file is loaded from source, the bytecode file will usually be created as a side effect. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From solipsis at pitrou.net Sun Feb 7 16:59:39 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 07 Feb 2010 16:59:39 +0100 Subject: [Python-Dev] IO module improvements In-Reply-To: <4B6E7FCD.60701@gmail.com> References: <4B6C108E.3010600@gmail.com> <4B6D55CC.8070904@wanadoo.fr> <4B6E7FCD.60701@gmail.com> Message-ID: <1265558379.3427.7.camel@localhost> Le dimanche 07 f?vrier 2010 ? 09:54 +0100, Pascal Chambon a ?crit : > > Actually, TextIOWrapper is simply not thread-safe for most of its operations. I > > think we did the work for simple writing, though, since it's better for > > multi-threaded use of print(). > > Argh, I had the impression that all io streams were theoretically > thread-safe (although it's not documented so indeed). It needs > clarification maybe. It should first be discussed which classes need to be thread-safe. There is nothing about it in PEP 3116, and the first (pure Python) implementation of the io module had no locks anywhere. We later added locks to the Buffered classes because it seemed an obvious requirement for many use cases (for example object databases such as ZODB or Durus). > > You can, but be aware that _pyio is *really* slow... 
I'm not sure it would be a > > service to many users. > > > Hum... would a pure python module, augmented with cython declarations, > offer a speed similar to c modules ? Maybe I shall investigate that > way, because it would be great to have an implemntation which is both > safer and sufficiently quick... There's no obvious answer. I suspect that it won't be as fast as the current C implementation, because some things simply aren't possible or available in Python. But it could be "fast enough". You have to experiment. Regards Antoine. From barry at python.org Sun Feb 7 18:48:10 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 12:48:10 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <20100206152129.5aad111b@freewill.wooz.org> Message-ID: <20100207124810.1324bced@freewill.wooz.org> On Feb 06, 2010, at 02:20 PM, Guido van Rossum wrote: >> Upon further reflection, I agree. ?__file__ also points to the source in >> Python 2.7. > >Not in the 2.7 svn repo I have access to. It still points to the .pyc >file if it was used. Ah, I was fooled by a missing pyc file. Run it a second time and you're right, it points to the pyc. >And I propose not to disturb this in 2.7, at least not by default. I'm >fine though with a flag or distro-overridable config setting to change >this behavior. Cool. I'm not sure this is absolutely necessary for Debian/Ubuntu, so I'll call YAGNI on it for 2.x (until and unless it isn't ;). >> Do we need an attribute to point to the compiled bytecode file? > >I think we do. Quite unrelated to this discussion I have a use case >for knowing easily whether a module was actually loaded from bytecode >or not -- but I also have a need for __file__ to point to the source. >So having both __file__ and __compiled__ makes sense to me. __compiled__ or __cached__? I like the latter but don't have strong feelings about it either way. >When there is no source code but only bytecode I am file with both >pointing to the bytecode; in that case I presume that the bytecode is >not in a __pyr__ subdirectory. For dynamically loaded extension >modules I think both should be left unset, and some other __xxx__ >variable could point to the .so or .dll file. FWIW the most common use >case for __file__ is probably to find data files relative to it. Since >the data won't be in the __pyr__ directory we couldn't make __file__ >point to the __pyr__/....pyc file without much code breakage. The other main use case for having such an attribute on extension modules is diagnostics. I want to be able to find out where on the file system a .so actually lives: Python 2.7a3+ (trunk:78030, Feb 6 2010, 15:18:29) [GCC 4.4.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import _socket >>> _socket.__file__ '/home/barry/projects/python/trunk/build/lib.linux-x86_64-2.7/_socket.so' >(Yes, I am still in favor of the folder-per-folder model.) Cool. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 18:53:36 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 12:53:36 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6B51A4.9070404@nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B6B3E52.8040708@g.nevcal.com> <4B6B4A09.4070000@trueblade.com> <4B6B51A4.9070404@nevcal.com> Message-ID: <20100207125336.7ce8afde@freewill.wooz.org> On Feb 04, 2010, at 03:00 PM, Glenn Linderman wrote: >When a PEP 3147 (if modified by my suggestion) version of Python runs, >and the directory doesn't exist, and it wants to create a .pyc, it would >create the directory, and put the .pyc there. Sort of just like how it >creates .pyc files, now, but an extra step of creating the repository >directory if it doesn't exist. After the first run, it would exist. It >is described in the PEP, and I quoted that section... "Python will >create a 'foo.pyr' directory"... I'm just suggesting different semantics >for how many directories, and what is contained in them. I've added __pyr_version__ as an open question in the PEP (not yet committed), as is making this default behavior (no -R flag required). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 18:58:24 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 12:58:24 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> Message-ID: <20100207125824.0ff9dc14@freewill.wooz.org> On Jan 31, 2010, at 01:06 PM, Ron Adam wrote: >With a single cache directory, we could have an option to force writing >bytecode to a desired location. That might be useful on it's own for >creating runtime bytecode only installations for installers. One important reason for wanting to keep the bytecode cache files colocated with the source files is that I want to be able to continue to manipulate $PYTHONPATH to control how Python finds its modules. With a single system-wide cache directory that won't be easy. E.g. $PYTHONPATH might be hacked to find the source file you expect, but how would that interact with how Python finds its cache files? I'm strongly in favor of keeping the cache files as close to the source they were generated from as possible. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From fuzzyman at voidspace.org.uk Sun Feb 7 18:59:27 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 07 Feb 2010 17:59:27 +0000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100207124810.1324bced@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <20100206152129.5aad111b@freewill.wooz.org> <20100207124810.1324bced@freewill.wooz.org> Message-ID: <4B6EFF7F.6080208@voidspace.org.uk> On 07/02/2010 17:48, Barry Warsaw wrote: > [snip...] >> And I propose not to disturb this in 2.7, at least not by default. 
I'm >> fine though with a flag or distro-overridable config setting to change >> this behavior. >> > Cool. I'm not sure this is absolutely necessary for Debian/Ubuntu, so I'll > call YAGNI on it for 2.x (until and unless it isn't ;). > > What are the chances of getting this into 2.x at all? For it to get into the 2.7, likely to be the last major version in the 2.x series, the PEP needs to be approved and the implementation needs to be feature complete by April 3rd (first beta release according to the schedule [1]). Michael Foord [1] http://www.python.org/dev/peps/pep-0373/#release-schedule -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From barry at python.org Sun Feb 7 19:04:30 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:04:30 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B65D5B9.6020509@arcor.de> References: <4B652E57.6000602@arcor.de> <4B65437E.5080305@gmail.com> <4B6554D2.2020109@v.loewis.de> <4B65D5B9.6020509@arcor.de> Message-ID: <20100207130430.0849e260@freewill.wooz.org> On Jan 31, 2010, at 08:10 PM, Silke von Bargen wrote: >Martin v. L?wis schrieb: >> There is also the issue of race conditions with multiple simultaneous >> accesses. The original format for the PEP had race conditions for >> multiple simultaneous writers; ZIP will also have race conditions for >> concurrent readers/writers (as any new writer will have to overwrite >> the central directory, making the zip file temporarily unavailable - >> unless they copy it, in which case we are back to writer/writer >> races). >> >> Regards, >> Martin >> >> >Good point. OTOH the probability for this to happen actually is very small. And yet, when it does happen, it's probably a monster to debug and defend against. Unless we have a convincing cross-platform story for preventing these race conditions, I think a single-file (e.g. zipfile) approach is infeasible. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 19:17:29 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:17:29 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6586F8.2060806@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <5d44f72f1001302104n18a4ecd4j7a69bd161e57e627@mail.gmail.com> <5cae42b21001310354x4a27fb7aq119666f53ba99008@mail.gmail.com> <5cae42b21001310413i131d125bh895a270671fbf2f8@mail.gmail.com> <4B6586F8.2060806@gmail.com> Message-ID: <20100207131729.7fcc2808@freewill.wooz.org> On Jan 31, 2010, at 11:34 PM, Nick Coghlan wrote: >I must admit I quite like the __pyr__ directory approach as well. Since >the interpreter knows the suffix it is looking for, names shouldn't >conflict. 
Using a single directory allows the name to be less cryptic, >too (e.g. __pycache__). Something else that occurs to me; the name of the directory (under folder-per-folder approach) probably ought to be the same as the name of the module attribute. There's probably no good reason to make it different, and making it the same makes the association stronger. That still gives us plenty of opportunity to bikeshed, but __pycache__ seems reasonable to me (it's the cache of parsing and compiling the .py file). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 19:22:36 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:22:36 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <98985ab21001311326y753e1babp6877070ddd2d3768@mail.gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <5d44f72f1001302104n18a4ecd4j7a69bd161e57e627@mail.gmail.com> <5cae42b21001310354x4a27fb7aq119666f53ba99008@mail.gmail.com> <5cae42b21001310413i131d125bh895a270671fbf2f8@mail.gmail.com> <4B6586F8.2060806@gmail.com> <98985ab21001311326y753e1babp6877070ddd2d3768@mail.gmail.com> Message-ID: <20100207132236.467c3c6c@freewill.wooz.org> On Feb 01, 2010, at 08:26 AM, Tim Delaney wrote: >The pyc/pyo files are just an optimisation detail, and are essentially >temporary. Given that, if they were to live in a single directory, to me it >seems obvious that the default location for that should be in the system >temporary directory. I an immediately think of the following advantages: > >1. No one really complains too much about putting things in /tmp unless it >starts taking up too much space. In which case they delete it and if it gets >reused, it gets recreated. IIUC the Filesystem Hierarchy Standard correctly, then these files really should go under /var/cache/python. (Don't ask me where that would be on non-FHS compliant systems Windows). I've explained in other followups why I don't particularly like separating the source from the cache files though, but if you wanted a sick approach: Take the full absolutely path to the .py file, plus the magic number, plus the time stamp and hash that. Cache the pyc file under /var/cache/python/. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 19:32:47 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:32:47 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B674555.8060104@v.loewis.de> <85f6a31f1002011404u16d74e7cjfd0aca6d8c9f7ca3@mail.gmail.com> <20100206182801.5a275084@freewill.wooz.org> Message-ID: <20100207133247.60faf0c1@freewill.wooz.org> On Feb 06, 2010, at 04:02 PM, Guido van Rossum wrote: >On Sat, Feb 6, 2010 at 3:28 PM, Barry Warsaw wrote: >> On Feb 01, 2010, at 02:04 PM, Paul Du Bois wrote: >> >>>It's an interesting challenge to write the file in such a way that >>>it's safe for a reader and writer to co-exist. Like Brett, I >>>considered an append-only scheme, but one needs to handle the case >>>where the bytecode for a particular magic number changes. At some >>>point you'd need to sweep garbage from the file. 
All solutions seem >>>unnecessarily complex, and unnecessary since in practice the case >>>should not come up. >> >> I don't think that part's difficult. ?The byte code's only going to change if >> the source file has changed, and in that case, /all/ the byte code in the "fat >> pyc" file will be invalidated, so the whole thing can be deleted by the first >> writer. ?I'd worked that out in the original fat pyc version of the PEP. > >I'm sorry, but I'm totally against fat bytecode files. They make >things harder for all tools. The beauty of the existing bytecode >format is that it's totally trivial: magic number, source mtime, >unmarshalled code object. You can't beat the beauty of that. Just for the record, I totally agree. I was just explaining something I had figured out in the original version of the PEP, which wasn't published but which Martin had seen an early draft of. When Martin made the suggestion of sibling cache directories, I immediately realized that it was much cleaner, better, and easier to implement than fat files (especially because I already had some nasty complex code that implemented the fat files ;). I'm beginning to be convinced that a folder-per-folder approach is the best take on this yet. >For the traditional "skinny" bytecode files, I believe that the >existing algorithm which writes zeros in the place of the magic number >first, writes the rest of the file, and then goes back to write the >correct magic number, is correct with a single writer and multiple >readers (assuming the readers ignore the file if its magic number is >invalid). The creat(O_EXCL) option ensures that there won't be >multiple writers. No rename() is necessary; POSIX rename() may be >atomic, but it's a directory modification which makes it potentially >slow. Agreed, and the current approach is time and battle tested. I don't think we need to be mucking around with it. My current effort on this PEP will be spent on fleshing out the folder-per-folder approach, understanding the implications of that, and integrating all the other great comments in this thread. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 19:44:27 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:44:27 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> Message-ID: <20100207134427.032d772c@freewill.wooz.org> On Feb 06, 2010, at 04:39 PM, Guido van Rossum wrote: >The conflict is purely that PEP 3147 proposes the new behavior to be >optional, and adds a flag (-R) and an environment variable >($PYTHONPYR) to change it. I presume Barry is proposing this out of >fear that the new behavior might upset somebody; personally I think it >would be better if the behavior weren't optional. At least not in new >Python releases Good to know! Yes, that's one reason why I made it option, the other being that I suspect most people don't care about the original use case (making sure pyc files from different Python versions don't conflict). 
However, with a folder-per-folder approach, the side benefit of reducing directory clutter by hiding all the pyc files becomes more compelling. > -- in backports such as a distribution that wants this >feature might make, it may make sense to be more conservative, or at >least to have a way to turn it off. For backports I think the most conservative approach is to require a flag to enable this behavior. If we make this the default for new versions of Python (something I'd support) then tools written for Python >= 3.2 will know this is just how it's done. I worry about existing deployed tools for Python < 2.7 and 3.1. How about this: enable it by default in 3.2 and 2.7. No option to disable it. Allow distro back ports to define a flag or environment variable to enable it. The PEP can even be silent about how that's actually done, and a Debian implementation for Python 2.6 or 3.1 could even use the (now documented :) -X flag. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 19:47:50 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:47:50 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B6EFF7F.6080208@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B697AAD.2010307@gmail.com> <20100203084539.47559c8f@freewill.wooz.org> <4B69E0F9.5050103@gmail.com> <20100206152129.5aad111b@freewill.wooz.org> <20100207124810.1324bced@freewill.wooz.org> <4B6EFF7F.6080208@voidspace.org.uk> Message-ID: <20100207134750.06818a6d@freewill.wooz.org> On Feb 07, 2010, at 05:59 PM, Michael Foord wrote: >On 07/02/2010 17:48, Barry Warsaw wrote: >> [snip...] >>> And I propose not to disturb this in 2.7, at least not by default. I'm >>> fine though with a flag or distro-overridable config setting to change >>> this behavior. >>> >> Cool. I'm not sure this is absolutely necessary for Debian/Ubuntu, so I'll >> call YAGNI on it for 2.x (until and unless it isn't ;). Sorry, I was calling YAGNI on any change in behavior of module.__file__. >What are the chances of getting this into 2.x at all? For it to get into >the 2.7, likely to be the last major version in the 2.x series, the PEP >needs to be approved and the implementation needs to be feature complete >by April 3rd (first beta release according to the schedule [1]). I'd like to consult with my Debian/Ubuntu Python maintainer colleagues to see if it's worth getting into 2.7. If it is, and we can get a BDFL pronouncement on the PEP (after the next rounds of updates), then I think it will be feasible to implement in the time remaining. Heck, that's what Pycon sprints are for, no? :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sun Feb 7 19:54:16 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 7 Feb 2010 13:54:16 -0500 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> Message-ID: <20100207135416.366ad818@freewill.wooz.org> On Feb 06, 2010, at 11:22 PM, exarkun at twistedmatrix.com wrote: >>I haven't tried to repro this particular example, but the reason is >>that we don't want to have to call getpwd() on every import nor do we >>want to have some kind of in-process variable to cache the current >>directory. (getpwd() is relatively slow and can sometimes fail >>outright, and trying to cache it has a certain risk of being wrong.) > >Assuming you mean os.getcwd(): > >exarkun at boson:~$ python -m timeit -s 'def f(): pass' 'f()' >10000000 loops, best of 3: 0.132 usec per loop >exarkun at boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()' >1000000 loops, best of 3: 1.02 usec per loop >exarkun at boson:~$ >So it's about 7x more expensive than a no-op function call. I'd call >this pretty quick. Compared to everything else that happens during an >import, I'm not convinced this wouldn't be lost in the noise. I think >it's at least worth implementing and measuring. I'd like to see the effect on command line scripts that are run often and then exit, e.g. Bazaar or Mercurial. Start up time due to import overhead seems to be a constant battle for those types of projects. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From solipsis at pitrou.net Sun Feb 7 20:05:14 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 7 Feb 2010 19:05:14 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?=5F=5Ffile=5F=5F_is_not_always_an_absolute?= =?utf-8?q?_path?= References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> <20100207135416.366ad818@freewill.wooz.org> Message-ID: Barry Warsaw python.org> writes: > > >exarkun boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()' > >1000000 loops, best of 3: 1.02 usec per loop [...] > > I'd like to see the effect on command line scripts that are run often and then > exit, e.g. Bazaar or Mercurial. Start up time due to import overhead seems to > be a constant battle for those types of projects. If os.getcwd() is only called once when "normalizing" sys.path, and if it just takes one microsecond, I don't really see the point. :-) Antoine. From mal at egenix.com Sun Feb 7 20:26:23 2010 From: mal at egenix.com (M.-A. Lemburg) Date: Sun, 07 Feb 2010 20:26:23 +0100 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100206153308.7a05db87@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <4B64DE20.9060708@g.nevcal.com> <20100202225011.3d018a47@freewill.wooz.org> <4B6943FC.5080303@voidspace.org.uk> <4B69571F.80809@egenix.com> <20100206153308.7a05db87@freewill.wooz.org> Message-ID: <4B6F13DF.6030501@egenix.com> Barry Warsaw wrote: > On Feb 03, 2010, at 11:59 AM, M.-A. Lemburg wrote: > >> How about using an optionally relative cache dir setting to let >> the user decide ? 
> > Why do we need that level of flexibility? It's very easy to implement (see the code I posted) and gives you a lot of control with a single env variable. Some use cases: 1. PYTHONCACHE=. (store the cache files in the same dir as the .py file) This settings mimics what we've had in Python for decades. Users know about this Python behavior and expect it. It's also the only reasonable way of shipping byte-code only packages. 2. PYTHONCACHE=.pycache (store the cache files in a subdir of the dir where the .py file is stored) When using lots of cache files for multiple Python versions or variants, .py source code directory can easily get cluttered with too many such files. Putting them into a subdir solves this problem. This would be useful for developers running and testing the code with different Python versions. 3. PYTHONCACHE=~/.python/cache (store the cache files in a user dir, outside the Python source file dir) This allows easy removal of all cache files and prevents cluttering up the sys.path dirs with cache files or directories altogether. It's also handy if the source code dirs are not writable by the user importing them. OTOH, every user would create a copy of the cache files (this is what currently happens with setuptools eggs and is very annoying). -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Feb 07 2010) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From guido at python.org Sun Feb 7 20:38:30 2010 From: guido at python.org (Guido van Rossum) Date: Sun, 7 Feb 2010 11:38:30 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100207131729.7fcc2808@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <5d44f72f1001302104n18a4ecd4j7a69bd161e57e627@mail.gmail.com> <5cae42b21001310354x4a27fb7aq119666f53ba99008@mail.gmail.com> <5cae42b21001310413i131d125bh895a270671fbf2f8@mail.gmail.com> <4B6586F8.2060806@gmail.com> <20100207131729.7fcc2808@freewill.wooz.org> Message-ID: On Sun, Feb 7, 2010 at 10:17 AM, Barry Warsaw wrote: > On Jan 31, 2010, at 11:34 PM, Nick Coghlan wrote: > >>I must admit I quite like the __pyr__ directory approach as well. Since >>the interpreter knows the suffix it is looking for, names shouldn't >>conflict. Using a single directory allows the name to be less cryptic, >>too (e.g. __pycache__). > > Something else that occurs to me; the name of the directory (under > folder-per-folder approach) probably ought to be the same as the name of the > module attribute. ?There's probably no good reason to make it different, and > making it the same makes the association stronger. I'm not sure I follow. The directory doesn't suddenly become an attribute. Moreover, the directory contains many files (assuming folder-per-folder) and the attribute would point to a single file inside that directory. > That still gives us plenty of opportunity to bikeshed, but __pycache__ seems > reasonable to me (it's the cache of parsing and compiling the .py file). 
While technically it is a cache, I don't think that emphasizing that point is helpful. For 20 years people have thought of it as "compiled bytecode". Also while on the filesystem it makes sense for it to have "py" in the directory name, that does not make sense for the attribute name. After all we don't go around calling things __pyfile__, __pygetattr__, __pysys__... ;-) I'm still for __compiled__ as the attribute; I don't have a particular preference for the directory name or the naming scheme used inside it, as long as neither starts with '.' (and probably the directory should be __something__). -- --Guido van Rossum (python.org/~guido) From brett at python.org Sun Feb 7 21:23:10 2010 From: brett at python.org (Brett Cannon) Date: Sun, 7 Feb 2010 12:23:10 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100207134427.032d772c@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> <20100207134427.032d772c@freewill.wooz.org> Message-ID: On Sun, Feb 7, 2010 at 10:44, Barry Warsaw wrote: > On Feb 06, 2010, at 04:39 PM, Guido van Rossum wrote: > >>The conflict is purely that PEP 3147 proposes the new behavior to be >>optional, and adds a flag (-R) and an environment variable >>($PYTHONPYR) to change it. I presume Barry is proposing this out of >>fear that the new behavior might upset somebody; personally I think it >>would be better if the behavior weren't optional. At least not in new >>Python releases > > Good to know! ?Yes, that's one reason why I made it option, the other being > that I suspect most people don't care about the original use case (making sure > pyc files from different Python versions don't conflict). ?However, with a > folder-per-folder approach, the side benefit of reducing directory clutter by > hiding all the pyc files becomes more compelling. > >> -- in backports such as a distribution that wants this >>feature might make, it may make sense to be more conservative, or at >>least to have a way to turn it off. > > For backports I think the most conservative approach is to require a flag to > enable this behavior. ?If we make this the default for new versions of Python > (something I'd support) then tools written for Python >= 3.2 will know this is > just how it's done. ?I worry about existing deployed tools for Python < 2.7 > and 3.1. > > How about this: enable it by default in 3.2 and 2.7. ?No option to disable it. > Allow distro back ports to define a flag or environment variable to enable it. > The PEP can even be silent about how that's actually done, and a Debian > implementation for Python 2.6 or 3.1 could even use the (now documented :) -X > flag. Would you keep the old behavior around as well, or simply drop it? I personally vote for the latter for simplicity and performance reasons (by not having to look in so many places for bytecode), but I can see tool people who magically calculate the location of the bytecode not loving the idea (another reason why giving loaders a method to return all relevant paths is a good idea; no more guessing). 
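For the sake of discussion, such a loader method might look something
like this (every name here, and the cache layout it assumes, is an
invented placeholder -- nothing like it exists today):

import os

class PathReportingLoader:
    def __init__(self, source_path, cache_dir="__pycache__", tag="cpython-32"):
        self.source_path = source_path
        self.cache_dir = cache_dir
        self.tag = tag

    def get_paths(self):
        # Report every file this loader would consult for the module:
        # source first, cached bytecode second.  Tools ask the loader
        # instead of recomputing the layout themselves.
        head, tail = os.path.split(self.source_path)
        base = os.path.splitext(tail)[0]
        cached = os.path.join(head, self.cache_dir,
                              "%s.%s.pyc" % (base, self.tag))
        return [self.source_path, cached]

With a layout like that, PathReportingLoader('/pkg/foo.py').get_paths()
would report ['/pkg/foo.py', '/pkg/__pycache__/foo.cpython-32.pyc'], and
the "magic calculation" moves into one documented place.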
-Brett > > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > > From dirkjan at ochtman.nl Sun Feb 7 22:35:58 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sun, 7 Feb 2010 22:35:58 +0100 Subject: [Python-Dev] PEP 385 progress report Message-ID: It's been a long time! So for the past few weeks, Mercurial crew member Patrick Mezard has been hunting for the ugly bug in hgsubversion that I'd previously been looking at, and it finally got fixed. A new bug popped up, but then we managed to fix that, too (thanks to the PSF for partially funding our sprint, it was very succesful!). In a joyous moment, I nagged Augie Fackler to actually put a hgsubversion release out there so hopefully more people can start using it, so we now have that, too. Another sponsor for our sprint was Logilab (who provided their brand new office for us to work in), and one of their employees, Andre Espaze, fortunately wanted to help out and managed to write up a patch for the sys.mercurial attribute (now in the pymigr repo). In fact, a few weeks ago I talked to Brett and we figured that we should probably pin down a deadline. We discussed aiming at May 1, and at this time I think that should be feasible. That also seems to coincide with the release of 2.7b2, though, so maybe we need to do it one week later (or sooner?). Anyway, we figured that a weekend would probably be a good time. If we manage to find a good date, I'll put it in the PEP. As for the current state of The Dreaded EOL Issue, there is an extension which seems to be provide all the needed features, but it appears there are some nasty corner cases still to be fixed. Martin Geisler has been working on it over the sprint, but I think there's more work to be done here. Anyone who wants to jump in would be quite welcome (maybe Martin will clarify here what exactly the remaining issues are). The current version of the repository (latest SVN revision is 78055, clone it from hg.python.org) weighs in at about 1.4G, but still needs branch pruning (which will be my primary focus for the coming few weeks). The good part about it now being a year later than, well, last year is that named branches are much more solid than before, and so I feel much better about using those for Python's release branches. Any questions and/or concerns? I will also be at PyCon; I'll be doing a more advanced talk on Mercurial internals on Sunday but I'd also be happy to do some handholding or introductory stuff in an open space. If there's anyone who'd like help converting their SVN repository, I might be able to help there too (during the sprints). For other conversions, I know for a fact that an expert in CVS conversions will be there. Cheers, Dirkjan From benjamin at python.org Sun Feb 7 22:51:49 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 7 Feb 2010 15:51:49 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: Message-ID: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> 2010/2/7 Dirkjan Ochtman : > It's been a long time! Thank you very much for staying on this task! I'm still excited. > > In fact, a few weeks ago I talked to Brett and we figured that we > should probably pin down a deadline. We discussed aiming at May 1, and > at this time I think that should be feasible. 
That also seems to > coincide with the release of 2.7b2, though, so maybe we need to do it > one week later (or sooner?). Anyway, we figured that a weekend would > probably be a good time. If we manage to find a good date, I'll put it > in the PEP. How about a week after, so we have more time to adjust release procedures? > Any questions and/or concerns? Will you do test conversions of the sandbox projects, too? Also I think we should have some document (perhaps the dev FAQ) explaining exactly how to do common tasks in mercurial. For example - A bug fix, which needs to be in 4 branches. - A bug fix, which only belongs in 2.7 and 2.6 or 3.2 and 3.1. - Which way do we merge (What's a subset of what?) -- Regards, Benjamin From skippy.hammond at gmail.com Sun Feb 7 22:58:59 2010 From: skippy.hammond at gmail.com (Mark Hammond) Date: Mon, 08 Feb 2010 08:58:59 +1100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: Message-ID: <4B6F37A3.9020002@gmail.com> Hi Dirkjan, On 8/02/2010 8:35 AM, Dirkjan Ochtman wrote: ... > In fact, a few weeks ago I talked to Brett and we figured that we > should probably pin down a deadline. We discussed aiming at May 1, and > at this time I think that should be feasible. That also seems to > coincide with the release of 2.7b2, though, so maybe we need to do it > one week later (or sooner?). Anyway, we figured that a weekend would > probably be a good time. If we manage to find a good date, I'll put it > in the PEP. Isn't setting a date premature while outstanding issues remain without a timetable for their resolution? > As for the current state of The Dreaded EOL Issue, there is an > extension which seems to be provide all the needed features, but it > appears there are some nasty corner cases still to be fixed. Martin > Geisler has been working on it over the sprint, but I think there's > more work to be done here. Anyone who wants to jump in would be quite > welcome (maybe Martin will clarify here what exactly the remaining > issues are). See http://mercurial.selenic.com/wiki/EOLTranslationPlan#TODO - of particular note: * There are transient errors in the tests which Martin is yet to identify. These tests do not require windows to reproduce or fix. * The mercurial tests do not run on Windows. Given the above, most sane Windows developers would hold off on "live" testing of the extension until at least the first issue is resolved - but the second issue makes it very difficult for them to help resolve that. Cheers, Mark From guido at python.org Mon Feb 8 03:18:40 2010 From: guido at python.org (Guido van Rossum) Date: Sun, 7 Feb 2010 18:18:40 -0800 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <87sk9msysa.fsf@benfinney.id.au> <20100202230302.34fb7906@freewill.wooz.org> <87636eok7b.fsf@benfinney.id.au> <4B697517.5060903@gmail.com> <20100206153543.47c2194f@freewill.wooz.org> <878wb5lwjl.fsf@benfinney.id.au> <20100207134427.032d772c@freewill.wooz.org> Message-ID: On Sun, Feb 7, 2010 at 12:23 PM, Brett Cannon wrote: > On Sun, Feb 7, 2010 at 10:44, Barry Warsaw wrote: >> On Feb 06, 2010, at 04:39 PM, Guido van Rossum wrote: >> >>>The conflict is purely that PEP 3147 proposes the new behavior to be >>>optional, and adds a flag (-R) and an environment variable >>>($PYTHONPYR) to change it. I presume Barry is proposing this out of >>>fear that the new behavior might upset somebody; personally I think it >>>would be better if the behavior weren't optional. 
At least not in new >>>Python releases >> >> Good to know! ?Yes, that's one reason why I made it option, the other being >> that I suspect most people don't care about the original use case (making sure >> pyc files from different Python versions don't conflict). ?However, with a >> folder-per-folder approach, the side benefit of reducing directory clutter by >> hiding all the pyc files becomes more compelling. >> >>> -- in backports such as a distribution that wants this >>>feature might make, it may make sense to be more conservative, or at >>>least to have a way to turn it off. >> >> For backports I think the most conservative approach is to require a flag to >> enable this behavior. ?If we make this the default for new versions of Python >> (something I'd support) then tools written for Python >= 3.2 will know this is >> just how it's done. ?I worry about existing deployed tools for Python < 2.7 >> and 3.1. >> >> How about this: enable it by default in 3.2 and 2.7. ?No option to disable it. >> Allow distro back ports to define a flag or environment variable to enable it. >> The PEP can even be silent about how that's actually done, and a Debian >> implementation for Python 2.6 or 3.1 could even use the (now documented :) -X >> flag. > > Would you keep the old behavior around as well, or simply drop it? I > personally vote for the latter for simplicity and performance reasons > (by not having to look in so many places for bytecode), but I can see > tool people who magically calculate the location of the bytecode not > loving the idea (another reason why giving loaders a method to return > all relevant paths is a good idea; no more guessing). For 3.2 I think it's fine to simply drop the old behavior (as long as a good loader API is added at the same time). But for 2.7 I think we ought to be a lot more conservative and not force tools to upgrade, so I think we should keep the old behavior in 2.7 as the default (though distros can change this if they want to, and backport if they need to). -- --Guido van Rossum (python.org/~guido) From rrr at ronadam.com Mon Feb 8 06:28:46 2010 From: rrr at ronadam.com (Ron Adam) Date: Sun, 07 Feb 2010 23:28:46 -0600 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <20100207125824.0ff9dc14@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <20100207125824.0ff9dc14@freewill.wooz.org> Message-ID: Barry Warsaw wrote: > On Jan 31, 2010, at 01:06 PM, Ron Adam wrote: > >> With a single cache directory, we could have an option to force writing >> bytecode to a desired location. That might be useful on it's own for >> creating runtime bytecode only installations for installers. > > One important reason for wanting to keep the bytecode cache files colocated > with the source files is that I want to be able to continue to manipulate > $PYTHONPATH to control how Python finds its modules. With a single > system-wide cache directory that won't be easy. E.g. $PYTHONPATH might be > hacked to find the source file you expect, but how would that interact with > how Python finds its cache files? I'm strongly in favor of keeping the cache > files as close to the source they were generated from as possible. Yes, I agree, after thinking about it, it does seems like it may be more complex than I first thought. I think the folder-per-folder option sounds like the best default option at this time. 
It reduces folder clutter for the python developer and may loosen the link between source files and byte code files just enough that it will be easier to experiment with more flexible modes later. It seems to me that in the long run, (probably no time soon), it might be nice to even do away with on disk byte code altogether unless it's explicitly asked for. As computers get faster, the time it takes to compile byte code may become a smaller and smaller percent of the total run time. That is unless the size of python programs increase at the same rate or faster. To tell the truth in most cases I hardly notice the extra time the first run takes compared to later runs with the precompiled byte code. Yes it may be a few seconds at start up, but after that it's usually not a big part of the execution time. Hmmm, I wonder if there's a threshold in file size where it really doesn't make a significant difference? Regards, Ron From ncoghlan at gmail.com Mon Feb 8 13:45:43 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 08 Feb 2010 22:45:43 +1000 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> <20100207135416.366ad818@freewill.wooz.org> Message-ID: <4B700777.1000907@gmail.com> Antoine Pitrou wrote: > Barry Warsaw python.org> writes: >>> exarkun boson:~$ python -m timeit -s 'from os import getcwd' 'getcwd()' >>> 1000000 loops, best of 3: 1.02 usec per loop > [...] >> I'd like to see the effect on command line scripts that are run often and then >> exit, e.g. Bazaar or Mercurial. Start up time due to import overhead seems to >> be a constant battle for those types of projects. > > If os.getcwd() is only called once when "normalizing" sys.path, and if it just > takes one microsecond, I don't really see the point. :-) The problem is that having '' as the first entry in sys.path currently means "do the import relative to the current directory". Unless we want to change the language semantics so we stick os.getcwd() at the front instead of '', then __file__ is still going to be relative sometimes. Alternatively, we could special case those specific imports to do os.getcwd() at the time of the import. That won't affect the import speed significantly for imports from locations other than '' (i.e. most of them) and will more accurately reflect the true meaning of __file__ in that case (since we put the module in sys.modules, future imports won't see different versions of that module even if the working directory is changed, so the relative value for __file__ becomes a lie as soon as the working directory changes) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From solipsis at pitrou.net Mon Feb 8 13:51:22 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 8 Feb 2010 12:51:22 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?=5F=5Ffile=5F=5F_is_not_always_an_absolute?= =?utf-8?q?_path?= References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> <20100207135416.366ad818@freewill.wooz.org> <4B700777.1000907@gmail.com> Message-ID: Nick Coghlan gmail.com> writes: > > The problem is that having '' as the first entry in sys.path currently > means "do the import relative to the current directory". 
Unless we want > to change the language semantics so we stick os.getcwd() at the front > instead of '', then __file__ is still going to be relative sometimes. "Changing the language semantics" is actually what I was thinking about :) Do some people actually rely on the fact that changing the current directory will also change the import path? cheers Antoine. From ncoghlan at gmail.com Mon Feb 8 13:54:03 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 08 Feb 2010 22:54:03 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100207125824.0ff9dc14@freewill.wooz.org> Message-ID: <4B70096B.4080508@gmail.com> Ron Adam wrote: > To tell the truth in most cases I hardly notice the extra time the first > run takes compared to later runs with the precompiled byte code. Yes it > may be a few seconds at start up, but after that it's usually not a big > part of the execution time. Hmmm, I wonder if there's a threshold in > file size where it really doesn't make a significant difference? It's relative to runtime for the application itself (long-running applications aren't going to notice as much of a percentage effect on runtime) as well as to how many Python files are actually imported at startup (only importing a limited number of modules, importing primarily extension modules or effective use of a lazy module loading mechanism will all drastically reduce the proportional impact of precompiled bytecode) We struggle enough with startup time that doing anything that makes it slower is rather undesirable though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Mon Feb 8 14:04:25 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 08 Feb 2010 23:04:25 +1000 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> <20100207135416.366ad818@freewill.wooz.org> <4B700777.1000907@gmail.com> Message-ID: <4B700BD9.1000300@gmail.com> Antoine Pitrou wrote: > Nick Coghlan gmail.com> writes: >> The problem is that having '' as the first entry in sys.path currently >> means "do the import relative to the current directory". Unless we want >> to change the language semantics so we stick os.getcwd() at the front >> instead of '', then __file__ is still going to be relative sometimes. > > "Changing the language semantics" is actually what I was thinking about :) > Do some people actually rely on the fact that changing the current directory > will also change the import path? I've learned that no matter how insane our current semantics for something may be, someone, somewhere will be relying on them :) In this case, the current semantics aren't even all that insane. A bit odd maybe, but not insane. I think they're even documented, but I couldn't say exactly where without some digging. I think we also use the trick of checking for an empty string in sys.path[0] in a couple of places before deciding whether or not to remove it (I seem to recall applying a patch to pydoc along those lines so it worked properly with the -m switch). Cheers, Nick. 
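A small illustration of the behaviour under discussion (a sketch of the then-current CPython semantics as described above, using a hypothetical module name; not part of any proposal): with '' at the front of sys.path, which module an import finds depends on the working directory at import time, and the resulting relative __file__ stops pointing anywhere useful once the working directory changes.

    import os
    import sys

    # In an interactive session or a piped script, sys.path[0] is the empty
    # string, meaning "resolve imports against the current working directory".
    print(sys.path[0])

    os.chdir('/tmp/project')      # hypothetical directory containing spam.py
    import spam                   # found through the '' entry, i.e. relative to the cwd
    print(spam.__file__)          # 'spam.py' -- a relative path

    os.chdir('/')                 # the module stays cached in sys.modules...
    print(spam.__file__)          # ...but 'spam.py' no longer names the real file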
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From floris.bruynooghe at gmail.com Mon Feb 8 14:18:46 2010 From: floris.bruynooghe at gmail.com (Floris Bruynooghe) Date: Mon, 8 Feb 2010 13:18:46 +0000 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: References: <4B6DD5C1.3080608@gmail.com> <20100206232256.26099.489521845.divmod.xquotient.566@localhost.localdomain> <20100207135416.366ad818@freewill.wooz.org> <4B700777.1000907@gmail.com> Message-ID: <20100208131846.GA18397@laurie.devork> On Mon, Feb 08, 2010 at 12:51:22PM +0000, Antoine Pitrou wrote: > Do some people actually rely on the fact that changing the current directory > will also change the import path? On the interactive prompt, yes. But I guess that's a habit that could be easily un-learnt. Regards Floris -- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org From tjreedy at udel.edu Mon Feb 8 17:46:14 2010 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 08 Feb 2010 11:46:14 -0500 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: <4B70096B.4080508@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <20100207125824.0ff9dc14@freewill.wooz.org> <4B70096B.4080508@gmail.com> Message-ID: On 2/8/2010 7:54 AM, Nick Coghlan wrote: > Ron Adam wrote: >> To tell the truth in most cases I hardly notice the extra time the first >> run takes compared to later runs with the precompiled byte code. Yes it >> may be a few seconds at start up, but after that it's usually not a big >> part of the execution time. Hmmm, I wonder if there's a threshold in >> file size where it really doesn't make a significant difference? > > It's relative to runtime for the application itself (long-running > applications aren't going to notice as much of a percentage effect on > runtime) as well as to how many Python files are actually imported at > startup (only importing a limited number of modules, importing primarily > extension modules or effective use of a lazy module loading mechanism > will all drastically reduce the proportional impact of precompiled bytecode) > > We struggle enough with startup time that doing anything that makes it > slower is rather undesirable though. Definitely. I have even wondered whether it would be possible to cache not just the bytecode for initializing a module, but also the initialized module itself (perhaps minus the name bindings for other imported modules). Terry Jan Reedy From dirkjan at ochtman.nl Mon Feb 8 18:14:51 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 8 Feb 2010 18:14:51 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> Message-ID: On Sun, Feb 7, 2010 at 22:51, Benjamin Peterson wrote: > How about a week after, so we have more time to adjust release procedures? Sounds fine to me. > Will you do test conversions of the sandbox projects, too? Got any particular projects in mind? > Also I think we should have some document (perhaps the dev FAQ) > explaining exactly how to do common tasks in mercurial. For example > - A bug fix, which needs to be in 4 branches. > - A bug fix, which only belongs in 2.7 and 2.6 or 3.2 and 3.1. > - Which way do we merge (What's a subset of what?) Yes, writing lots of docs is part of the plan. 
Cheers, Dirkjan From dirkjan at ochtman.nl Mon Feb 8 18:24:00 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 8 Feb 2010 18:24:00 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B6F37A3.9020002@gmail.com> References: <4B6F37A3.9020002@gmail.com> Message-ID: On Sun, Feb 7, 2010 at 22:58, Mark Hammond wrote: > Isn't setting a date premature while outstanding issues remain without a > timetable for their resolution? If we set a date, that would imply a timetable for their resolution. > See http://mercurial.selenic.com/wiki/EOLTranslationPlan#TODO - of > particular note: > > * There are transient errors in the tests which Martin is yet to identify. > ?These tests do not require windows to reproduce or fix. > > * The mercurial tests do not run on Windows. > > Given the above, most sane Windows developers would hold off on "live" > testing of the extension until at least the first issue is resolved - but > the second issue makes it very difficult for them to help resolve that. The Mercurial tests can actually run on Windows -- and I've updated the page to that effect. They require something called pysh, though. I've also asked Patrick Mezard to include the eol extension in his nightly test run on Windows. I guess since some of the test errors do not require Windows to reproduce or fix, I'd invite anyone to jump in and help fix these issues. Cheers, Dirkjan From benjamin at python.org Tue Feb 9 02:11:43 2010 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 8 Feb 2010 19:11:43 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> Message-ID: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> 2010/2/8 Dirkjan Ochtman : > On Sun, Feb 7, 2010 at 22:51, Benjamin Peterson wrote: >> Will you do test conversions of the sandbox projects, too? > > Got any particular projects in mind? 2to3. -- Regards, Benjamin From collinwinter at google.com Tue Feb 9 03:47:22 2010 From: collinwinter at google.com (Collin Winter) Date: Mon, 8 Feb 2010 18:47:22 -0800 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61001201756g26212a44m9abe7f5b471e6bb4@mail.gmail.com> <3c8293b61001210932i9c5d31i4bc71b7d9e0611f2@mail.gmail.com> <3c8293b61001211214m4b24c3b9x3738cf9e5375b0f8@mail.gmail.com> <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> Message-ID: <3c8293b61002081847k5b649f66q87e415328a682d3c@mail.gmail.com> Hi Craig, On Tue, Feb 2, 2010 at 4:42 PM, Craig Citro wrote: >> Done. The diff is at >> http://codereview.appspot.com/186247/diff2/5014:8003/7002. I listed >> Cython, Shedskin and a bunch of other alternatives to pure CPython. >> Some of that information is based on conversations I've had with the >> respective developers, and I'd appreciate corrections if I'm out of >> date. >> > > Well, it's a minor nit, but it might be more fair to say something > like "Cython provides the biggest improvements once type annotations > are added to the code." After all, Cython is more than happy to take > arbitrary Python code as input -- it's just much more effective when > it knows something about types. The code to make Cython handle > closures has just been merged ... hopefully support for the full > Python language isn't so far off. (Let me know if you want me to > actually make a comment on Rietveld ...) Indeed, you're quite right. 
I've corrected the description here: http://codereview.appspot.com/186247/diff2/7005:9001/10001 > Now what's more interesting is whether or not U-S and Cython could > play off one another -- take a Python program, run it with some > "generic input data" under Unladen and record info about which > functions are hot, and what types they tend to take, then let > Cython/gcc -O3 have a go at these, and lather, rinse, repeat ... JIT > compilation and static compilation obviously serve different purposes, > but I'm curious if there aren't other interesting ways to take > advantage of both. Definitely! Someone approached me about possibly reusing the profile data for a feedback-enhanced code coverage tool, which has interesting potential, too. I've added a note about this under the "Future Work" section: http://codereview.appspot.com/186247/diff2/9001:10002/9003 Thanks, Collin Winter From martin at v.loewis.de Tue Feb 9 04:39:34 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 09 Feb 2010 04:39:34 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> Message-ID: <4B70D8F6.3010806@v.loewis.de> Benjamin Peterson wrote: > 2010/2/8 Dirkjan Ochtman : >> On Sun, Feb 7, 2010 at 22:51, Benjamin Peterson wrote: >>> Will you do test conversions of the sandbox projects, too? >> Got any particular projects in mind? > > 2to3. Does Mercurial even support merge tracking the way we are doing it for 2to3 right now? Regards, Martin From benjamin at python.org Tue Feb 9 04:47:34 2010 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 8 Feb 2010 21:47:34 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B70D8F6.3010806@v.loewis.de> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> Message-ID: <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> 2010/2/8 "Martin v. L?wis" : > Benjamin Peterson wrote: >> 2010/2/8 Dirkjan Ochtman : >>> On Sun, Feb 7, 2010 at 22:51, Benjamin Peterson wrote: >>>> Will you do test conversions of the sandbox projects, too? >>> Got any particular projects in mind? >> >> 2to3. > > Does Mercurial even support merge tracking the way we are doing it for > 2to3 right now? I don't believe so. My plan was to manually sync updates or use subrepos. -- Regards, Benjamin From ncoghlan at gmail.com Tue Feb 9 09:44:55 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 09 Feb 2010 18:44:55 +1000 Subject: [Python-Dev] PEP 3147: PYC Repository Directories In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100207125824.0ff9dc14@freewill.wooz.org> <4B70096B.4080508@gmail.com> Message-ID: <4B712087.4020308@gmail.com> Terry Reedy wrote: > Definitely. I have even wondered whether it would be possible to cache > not just the bytecode for initializing a module, but also the > initialized module itself (perhaps minus the name bindings for other > imported modules). Not easily, since running the module may have other side effects that can't be cached. Cheers, Nick. 
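To make Nick's point concrete, here is a made-up module whose import-time side effects could not be reproduced by reloading a cached, already-initialized module object (illustrative sketch only; the module contents and the environment variable name are invented):

    # sketch of a module with import-time side effects; caching the initialized
    # module object would silently skip all of the following on later runs
    import atexit
    import logging
    import os
    import tempfile

    logging.basicConfig(level=logging.INFO)        # mutates global logging state
    logging.info("initialized in pid %d", os.getpid())

    SCRATCH_DIR = tempfile.mkdtemp()               # touches the filesystem
    atexit.register(os.rmdir, SCRATCH_DIR)         # registers a process-level hook

    DEBUG = os.environ.get("MYAPP_DEBUG", "") == "1"   # reads the current environment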
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From techtonik at gmail.com Tue Feb 9 11:16:15 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 9 Feb 2010 12:16:15 +0200 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <20100202100859.00d34437@heresy.wooz.org> References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: On Tue, Feb 2, 2010 at 8:08 PM, Barry Warsaw wrote: > I'm thinking about doing a Python 2.6.5 release soon. ?I've added the > following dates to the Python release schedule Google calendar: > > 2009-03-01 Python 2.6.5 rc 1 > 2009-03-15 Python 2.6.5 final > > This allows us to spend some time on 2.6.5 at Pycon if we want. ?Please let me > know if you have any concerns about those dates. I've noticed a couple of issues that 100% crash Python 2.6.4 like this one - http://bugs.python.org/issue6608 Is it ok to release new versions that are known to crash? -- anatoly t. From dirkjan at ochtman.nl Tue Feb 9 11:26:38 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 9 Feb 2010 11:26:38 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 04:47, Benjamin Peterson wrote: > I don't believe so. My plan was to manually sync updates or use subrepos. Using subrepos should work well for this. It turned out that my local copy of the Subversion repository contained the Python dir only, so I'm now syncing a full copy so that I can convert other parts. I believe 2to3 might be a little tricky because it was moved at some point, but I can look at getting that right (and this will help in converting other parts of the larger Python repository). Cheers, Dirkjan From solipsis at pitrou.net Tue Feb 9 11:57:39 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 9 Feb 2010 10:57:39 +0000 (UTC) Subject: [Python-Dev] Python 2.6.5 References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit?: > > I've noticed a couple of issues that 100% crash Python 2.6.4 like this > one - http://bugs.python.org/issue6608 Is it ok to release new versions > that are known to crash? I've changed this issue to release blocker. What are the other issues? From solipsis at pitrou.net Tue Feb 9 13:53:14 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 09 Feb 2010 13:53:14 +0100 Subject: [Python-Dev] crashers In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: <1265719994.3346.0.camel@localhost> > There are 65 entries and among them I can additionally confirm: > http://bugs.python.org/issue3720 > http://bugs.python.org/issue7788 > http://bugs.python.org/issue5765 One of them is fixed and the other two are pathological cases. You can't really trigger them by mistake. Regards Antoine. 
From techtonik at gmail.com Tue Feb 9 13:45:03 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 9 Feb 2010 14:45:03 +0200 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: On Tue, Feb 9, 2010 at 12:57 PM, Antoine Pitrou wrote: > Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit?: >> >> I've noticed a couple of issues that 100% crash Python 2.6.4 like this >> one - http://bugs.python.org/issue6608 ?Is it ok to release new versions >> that are known to crash? > > I've changed this issue to release blocker. What are the other issues? I've basically run a query to get all "crash" type issues for Python 2.6 http://bugs.python.org/issue?@search_text=&title=&@columns=title&id=&@columns=id&stage=&creation=&creator=&activity=&@columns=activity&@sort=activity&actor=&nosy=&type=1&components=&versions=1&dependencies=&assignee=&keywords=&priority=&@group=priority&status=1&@columns=status&resolution=&nosy_count=&message_count=&@pagesize=50&@startwith=0&@queryname=&@old-queryname=&@action=search There are 65 entries and among them I can additionally confirm: http://bugs.python.org/issue3720 http://bugs.python.org/issue7788 http://bugs.python.org/issue5765 -- anatoly t. From michael at voidspace.org.uk Tue Feb 9 17:40:33 2010 From: michael at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 16:40:33 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues Message-ID: <4B719001.7080201@voidspace.org.uk> Hello all, I've been looking at outstanding unittest issues as part of my preparation for my PyCon talk. There are a couple of changes (minor) I'd like to make that I thought I ought to run past Python-Dev first. If I don't get any responses then I'll just do it, so you have been warned. :-) The great google merge into unittest happened at PyCon last year [1]. This included a change to TestCase.shortDescription() so that it would *always* include the test name, whereas previously it would return the test docstring or None. The problem this change solved was that tests with a docstring would not have their name (test class and method name) reported during the test run. Unfortunately the change broke part of twisted test running. Reported as issue 7588: http://bugs.python.org/issue7588 It seems to me that the same effect (always reporting test name) can be achieved in _TextTestResult.getDescription(). I propose to revert the change to TestCase.shortDescription() (which has both a horrible name and a horrible implementation and should probably be renamed getDocstring so that what it does is obvious but never mind) and put the change into _TextTestResult. It annoys me that _TextTestResult is private, as you will almost certainly want to use it or subclass it when implementing custom test systems. I am going to rename it TextTestResult, alias the old name and document the old name as being deprecated. Another issue that I would like to address, but there are various possible approaches, is issue 7559: http://bugs.python.org/issue7559 Currently loadTestsFromName catches ImportError and rethrows as AttributeError. This is horrible (it obscures the original error) but there are backwards compatibility issues with fixing it. There are three possible approaches: 1) Leave it (the default) 2) Only throw an AttributeError if the import fails due to the name being invalid (the module not existing) otherwise allow the error through. (A minor but less serious change in behavior). 
3) A new method that turns failures into pseudo-tests that fail with the original error when run. Possibly deprecating loadTestsFromName I favour option 3, but can't think of a good replacement name. :-) Comments welcomed. Despite deprecating (in the documentation - no actual deprecations warnings I believe) a lot of the duplicate ways of doing things (assert* favoured over fail* and assertEqual over assertEquals) we didn't include deprecating assert_ in favour of assertTrue. I would like to add that to the documentation. After 3.2 is out I would like to clean up the documentation, removing mention of the deprecated methods from the *main* documentation into a separate 'deprecated methods' section. They currently make the documentation very untidy. The unittest page should probably be split into several pages anyway and needs improving. Other outstanding minor issues: Allow dotted names for test discovery http://bugs.python.org/issue7780 - I intend to implement this as described in the last comment A 'check_order' optional argument (defaulting to True) for assertSequenceEqual http://bugs.python.org/issue7832 - needs patch The breaking of __unittest caused by splitting unittesst into a package needs fixing. The fix needs to work when Python is run without frames support (IronPython). http://bugs.python.org/issue7815 - needs patch Allow a __unittest (or similar) decorator for user implemented assert functions http://bugs.python.org/issue1705520 - needs patch Allow modules to define test_suite callable. http://bugs.python.org/issue7501 - I propose to close as rejected. Use load_tests instead. Display time taken by individual tests when in verbose mode. http://bugs.python.org/issue4080 - anyone any opinions? Allow automatic formatting of arguments in assert* failure messages. http://bugs.python.org/issue6966 - I propose to close as rejected removeTest() method on TestSuite http://bugs.python.org/issue1778410 - anyone any opinions? expect methods (delayed fail) http://bugs.python.org/issue3615 - any opinions? Personally I think that the TestCase API is big enough already All the best, Michael Foord [1] Mostly in revision 7-837. http://svn.python.org/view?view=rev&revision=70837 -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From fuzzyman at voidspace.org.uk Tue Feb 9 17:42:50 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 16:42:50 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest Message-ID: <4B71908A.3080306@voidspace.org.uk> Hello all, The next 'big' change to unittest will (may?) be the introduction of class and module level setUp and tearDown. This was discussed on Python-ideas and Guido supported them. They can be useful but are also very easy to abuse (too much shared state, monolithic test classes and modules). 
Several authors of other Python testing frameworks spoke up *against* them, but several *users* of test frameworks spoke up in favour of them. ;-) I'm pretty sure I can introduce setUpClass and setUpModule without breaking compatibility with existing unittest extensions or backwards compatibility issues - with the possible exception of test sorting. Where you have a class level setUp (for example creating a database connection) you don't want the tearDown executed a *long* time after the setUp. In the presence of class or module level setUp / tearDown (but only if they are used) I would expect test sorting to only sort within the class or module [1]. I will introduce the setUp and tearDown as new 'tests' - so failures are reported separately, and all tests in the class / module will have an explicit skip in the event of a setUp failure. A *better* (more general) solution for sharing and managing resources between tests is to use something like TestResources by Robert Collins. http://pypi.python.org/pypi/testresources/ A minimal example of using test resources shows very little boilerplate overhead from what setUpClass (etc) would need, and with the addition of some helper functions could be almost no overhead. I've challenged Robert that if he can provide examples of using Test Resources to meet the class and module level use-cases then I would support bringing Test Resources into the standard library as part of unittest (modulo licensing issues which he is happy to work on). I'm not sure what response I expect from this email, and neither option will be implemented without further discussion - possibly at the PyCon sprints - but I thought I would make it clear what the possible directions are. All the best, Michael Foord [1] I *could* allow sorting of all tests within a module, inserting the setUpClass / tearDownClass in the right place after the sort. It would probably be better to group tests per class anyway and in fact the existing suite sorting support may do this already (in which case it isn't an issue) - I haven't looked into the implementation. -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
From brian.curtin at gmail.com Tue Feb 9 18:25:51 2010 From: brian.curtin at gmail.com (Brian Curtin) Date: Tue, 9 Feb 2010 11:25:51 -0600 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: On Tue, Feb 9, 2010 at 06:45, anatoly techtonik wrote: > On Tue, Feb 9, 2010 at 12:57 PM, Antoine Pitrou > wrote: > > Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : > >> > >> I've noticed a couple of issues that 100% crash Python 2.6.4 like this > >> one - http://bugs.python.org/issue6608 Is it ok to release new > versions > >> that are known to crash? > > > > I've changed this issue to release blocker. What are the other issues? > > I've basically run a query to get all "crash" type issues for Python 2.6 > > http://bugs.python.org/issue?@search_text=&title=&@columns=title&id=&@columns=id&stage=&creation=&creator=&activity=&@columns=activity&@sort=activity&actor=&nosy=&type=1&components=&versions=1&dependencies=&assignee=&keywords=&priority=&@group=priority&status=1&@columns=status&resolution=&nosy_count=&message_count=&@pagesize=50&@startwith=0&@queryname=&@old-queryname=&@action=search > > There are 65 entries and among them I can additionally confirm: > http://bugs.python.org/issue3720 > http://bugs.python.org/issue7788 > http://bugs.python.org/issue5765 > > -- > anatoly t. > > After taking a quick look, at least 14 of them were misreported as crashes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Feb 9 18:57:14 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 9 Feb 2010 17:57:14 +0000 (UTC) Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> Message-ID: Le Tue, 09 Feb 2010 16:42:50 +0000, Michael Foord a ?crit?: > > The next 'big' change to unittest will (may?) be the introduction of > class and module level setUp and tearDown. This was discussed on > Python-ideas and Guido supported them. They can be useful but are also > very easy to abuse (too much shared state, monolithic test classes and > modules). Several authors of other Python testing frameworks spoke up > *against* them, but several *users* of test frameworks spoke up in > favour of them. ;-) One problem is that it is not obvious what happens with inheritance. If I have a class-level setUp for class B, and class C inherits from B, will there be a separate invocation of setUp for C, or not? (I guess both possibilities have use cases) Antoine. From olemis at gmail.com Tue Feb 9 19:29:04 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 13:29:04 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B71908A.3080306@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord wrote: > Hello all, > > Several > authors of other Python testing frameworks spoke up *against* them, but > several *users* of test frameworks spoke up in favour of them. ;-) > +1 for having something like that included in unittest > I'm pretty sure I can introduce setUpClass and setUpModule without breaking > compatibility with existing unittest extensions or backwards compatibility > issues Is it possible to use the names `BeforeClass` and `AfterClass` (just to be make it look similar to JUnit naming conventions ;o) ? > - with the possible exception of test sorting. 
Where you have a class > level setUp (for example creating a database connection) you don't want the > tearDown executed a *long* time after the setUp. In the presence of class or > module level setUp /tearDown (but only if they are used) I would expect test > sorting to only sort within the class or module [1]. I will introduce the > setUp and tearDown as new 'tests' - so failures are reported separately, Perhaps I am missing something, but could you please mention what will happen if a failure is raised inside class-level `tearDown` ? > and > all tests in the class / module will have an explicit skip in the event of a > setUp failure. > +1 > A *better* (more general) solution for sharing and managing resources > between tests is to use something like TestResources by Robert Collins. > http://pypi.python.org/pypi/testresources/ > > A minimal example of using test resources shows very little boilerplate > overhead from what setUpClass (etc) would need, and with the addition of > some helper functions could be almost no overhead. I've challenged Robert > that if he can provide examples of using Test Resources to meet the class > and module level use-cases then I would support bringing Test Resources into > the standard library as part of unittest (modulo licensing issues which he > is happy to work on). > I am not really sure about whether unittest API should grow, and grow, and grow, and ... but if that means that TestResources will not be even imported if testers don't do it explicitly in the code (which is not the case of something like class level setUp/tearDown) then +1, otherwise -0.5 -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: TracGViz plugin downloaded more than 1000 times (> 300 from PyPI) - http://feedproxy.google.com/~r/simelo-en/~3/06Exn-JPLIA/tracgviz-plugin-downloaded-more-than.html From fuzzyman at voidspace.org.uk Tue Feb 9 19:44:16 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 18:44:16 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <4B71AD00.1060900@voidspace.org.uk> On 09/02/2010 17:57, Antoine Pitrou wrote: > Le Tue, 09 Feb 2010 16:42:50 +0000, Michael Foord a ?crit : > >> The next 'big' change to unittest will (may?) be the introduction of >> class and module level setUp and tearDown. This was discussed on >> Python-ideas and Guido supported them. They can be useful but are also >> very easy to abuse (too much shared state, monolithic test classes and >> modules). Several authors of other Python testing frameworks spoke up >> *against* them, but several *users* of test frameworks spoke up in >> favour of them. ;-) >> > One problem is that it is not obvious what happens with inheritance. > If I have a class-level setUp for class B, and class C inherits from B, > will there be a separate invocation of setUp for C, or not? > (I guess both possibilities have use cases) > Well, what I would expect (others may disagree) is that you only have class level setup invoked for classes that have tests (so not for base classes) and that the base-class setUpClass is only called if invoked by the subclass. I haven't thought about *where* the code to do this should go. It *could* go in TestSuite, but that feels like the wrong place. Michael > > Antoine. 
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From olemis at gmail.com Tue Feb 9 19:55:34 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 13:55:34 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> Message-ID: <24ea26601002091055x46f8228dk3f210931434ef61@mail.gmail.com> On Tue, Feb 9, 2010 at 1:29 PM, Olemis Lang wrote: > On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord > wrote: >> Hello all, >> >> Several >> authors of other Python testing frameworks spoke up *against* them, but >> several *users* of test frameworks spoke up in favour of them. ;-) >> > > +1 for having something like that included in unittest > >> I'm pretty sure I can introduce setUpClass and setUpModule without breaking >> compatibility with existing unittest extensions or backwards compatibility >> issues > > Is it possible to use the names `BeforeClass` and `AfterClass` (just > to be make it look similar to JUnit naming conventions ;o) ? > Another Q: - class setup method will be a `classmethod` isn't it ? It should not be a regular instance method because IMO it is not bound to a particular `TestCase` instance. -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Embedding pages? - Trac Users | Google Groups - http://feedproxy.google.com/~r/TracGViz-full/~3/-XtS7h-wjcI/e4cf16474aa3cb87 From olemis at gmail.com Tue Feb 9 20:00:04 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 14:00:04 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002091055x46f8228dk3f210931434ef61@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> <24ea26601002091055x46f8228dk3f210931434ef61@mail.gmail.com> Message-ID: <24ea26601002091100i56b745e0m4c441daca3df8941@mail.gmail.com> Sorry. I had not finished the previous message On Tue, Feb 9, 2010 at 1:55 PM, Olemis Lang wrote: > On Tue, Feb 9, 2010 at 1:29 PM, Olemis Lang wrote: >> On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord >> wrote: >>> Hello all, >>> >>> Several >>> authors of other Python testing frameworks spoke up *against* them, but >>> several *users* of test frameworks spoke up in favour of them. 
;-) >>> >> >> +1 for having something like that included in unittest >> >>> I'm pretty sure I can introduce setUpClass and setUpModule without breaking >>> compatibility with existing unittest extensions or backwards compatibility >>> issues >> >> Is it possible to use the names `BeforeClass` and `AfterClass` (just >> to be make it look similar to JUnit naming conventions ;o) ? >> > > Another Q: > > ?- class setup method will be a `classmethod` isn't it ? It should not be > ? ? a regular instance method because IMO it is not bound to a particular > ? ? `TestCase` instance. > - Is it possible to rely on the fact that all class-level tear down methods will be guaranteed to run even if class-level setup method throws an exception ? -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: PEP 391 - Please Vote! - http://feedproxy.google.com/~r/TracGViz-full/~3/hY2h6ZSAFRE/110617 From fuzzyman at voidspace.org.uk Tue Feb 9 20:04:00 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 19:04:00 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002091100i56b745e0m4c441daca3df8941@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> <24ea26601002091055x46f8228dk3f210931434ef61@mail.gmail.com> <24ea26601002091100i56b745e0m4c441daca3df8941@mail.gmail.com> Message-ID: <4B71B1A0.3070904@voidspace.org.uk> On 09/02/2010 19:00, Olemis Lang wrote: > Sorry. I had not finished the previous message > > On Tue, Feb 9, 2010 at 1:55 PM, Olemis Lang wrote: > >> On Tue, Feb 9, 2010 at 1:29 PM, Olemis Lang wrote: >> >>> On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord >>> wrote: >>> >>>> Hello all, >>>> >>>> Several >>>> authors of other Python testing frameworks spoke up *against* them, but >>>> several *users* of test frameworks spoke up in favour of them. ;-) >>>> >>>> >>> +1 for having something like that included in unittest >>> >>> >>>> I'm pretty sure I can introduce setUpClass and setUpModule without breaking >>>> compatibility with existing unittest extensions or backwards compatibility >>>> issues >>>> >>> Is it possible to use the names `BeforeClass` and `AfterClass` (just >>> to be make it look similar to JUnit naming conventions ;o) ? >>> >>> >> Another Q: >> >> - class setup method will be a `classmethod` isn't it ? It should not be >> a regular instance method because IMO it is not bound to a particular >> `TestCase` instance. >> >> > - Is it possible to rely on the fact that all class-level tear down > methods will be guaranteed to run even if class-level setup > method throws an exception ? > > Yes it will be a classmethod rather than an instance method. I would expect that in common with instance setUp the tearDown would *not* be run if setUp fails. What would be nice would be an extension of addCleanUp so that it can be used by class and module level setUp. Clean-ups largely obsolete the need for tearDown anyway. Michael -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) 
that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From olemis at gmail.com Tue Feb 9 20:13:40 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 14:13:40 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <24ea26601002091113t5ee1f437kd218a602deb8f835@mail.gmail.com> On Tue, Feb 9, 2010 at 12:57 PM, Antoine Pitrou wrote: > Le Tue, 09 Feb 2010 16:42:50 +0000, Michael Foord a ?crit?: >> >> The next 'big' change to unittest will (may?) be the introduction of >> class and module level setUp and tearDown. This was discussed on >> Python-ideas and Guido supported them. They can be useful but are also >> very easy to abuse (too much shared state, monolithic test classes and >> modules). Several authors of other Python testing frameworks spoke up >> *against* them, but several *users* of test frameworks spoke up in >> favour of them. ;-) > > One problem is that it is not obvious what happens with inheritance. > If I have a class-level setUp for class B, and class C inherits from B, > will there be a separate invocation of setUp for C, or not? > (I guess both possibilities have use cases) > Considering JUnit : - The @BeforeClass methods of superclasses will be run before those the current class. - The @AfterClass methods declared in superclasses will be run after those of the current class. However considering that PyUnit is not based on annotations, isn't it possible to specify that explicitly (and assume super-class method not called by default) ? -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: gmane.comp.version-control.subversion.trac.general - http://feedproxy.google.com/~r/TracGViz-full/~3/SLY6s0RazcA/28067 From holger.krekel at gmail.com Tue Feb 9 20:14:13 2010 From: holger.krekel at gmail.com (Holger Krekel) Date: Tue, 9 Feb 2010 20:14:13 +0100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 7:29 PM, Olemis Lang wrote: > On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord > wrote: >> Hello all, >> >> Several >> authors of other Python testing frameworks spoke up *against* them, but >> several *users* of test frameworks spoke up in favour of them. ;-) >> > > +1 for having something like that included in unittest hey Olemis, aren't you a test tool author as well? :) >> I'm pretty sure I can introduce setUpClass and setUpModule without breaking >> compatibility with existing unittest extensions or backwards compatibility >> issues > > Is it possible to use the names `BeforeClass` and `AfterClass` (just > to be make it look similar to JUnit naming conventions ;o) ? > >> - with the possible exception of test sorting. Where you have a class >> level setUp (for example creating a database connection) you don't want the >> tearDown executed a *long* time after the setUp. In the presence of class or >> module level setUp /tearDown (but only if they are used) I would expect test >> sorting to only sort within the class or module [1]. 
I will introduce the >> setUp and tearDown as new 'tests' - so failures are reported separately, > > Perhaps I am missing something, but could you please mention what will > happen if a failure is raised inside class-level `tearDown` ? > >> and >> all tests in the class / module will have an explicit skip in the event of a >> setUp failure. >> I think reporting tests as skipped when the setup failed is a bad idea. Out of several years of practise with skips and large test suites (and talking/experiencing many users :) i recommend to reserve skips for platform/dependency/environment mismatches. A Setup Error should just error or fail all the tests in its scope. cheers, holger From brian.curtin at gmail.com Tue Feb 9 20:16:51 2010 From: brian.curtin at gmail.com (Brian Curtin) Date: Tue, 9 Feb 2010 13:16:51 -0600 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 12:29, Olemis Lang wrote: > On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord > wrote: > > I'm pretty sure I can introduce setUpClass and setUpModule without > breaking > > compatibility with existing unittest extensions or backwards > compatibility > > issues > > Is it possible to use the names `BeforeClass` and `AfterClass` (just > to be make it look similar to JUnit naming conventions ;o) ? > -- > Regards, > > Olemis. > -1. setUp/tearDown is already well established here so I think it should follow the same convention. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at voidspace.org.uk Tue Feb 9 20:17:05 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 19:17:05 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> Message-ID: <4B71B4B1.90607@voidspace.org.uk> On 09/02/2010 19:14, Holger Krekel wrote: > [snip...] >>> and >>> all tests in the class / module will have an explicit skip in the event of a >>> setUp failure. >>> >>> > I think reporting tests as skipped when the setup failed is a bad idea. > Out of several years of practise with skips and large test suites (and > talking/experiencing many users :) i recommend to reserve skips for > platform/dependency/environment mismatches. A Setup Error should > just error or fail all the tests in its scope. > A SetupError instead of a skip sounds good to me. Thanks (although technically the test has been 'skipped' but that's playing with semantics...) Michael > cheers, > holger > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. 
From olemis at gmail.com Tue Feb 9 20:24:33 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 14:24:33 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B71AD00.1060900@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <4B71AD00.1060900@voidspace.org.uk> Message-ID: <24ea26601002091124y30596b4q8779f33fa5f77126@mail.gmail.com> On Tue, Feb 9, 2010 at 1:44 PM, Michael Foord wrote: > On 09/02/2010 17:57, Antoine Pitrou wrote: >> >> Le Tue, 09 Feb 2010 16:42:50 +0000, Michael Foord a ?crit : >> >>> >>> The next 'big' change to unittest will (may?) be the introduction of >>> class and module level setUp and tearDown. This was discussed on >>> Python-ideas and Guido supported them. They can be useful but are also >>> very easy to abuse (too much shared state, monolithic test classes and >>> modules). Several authors of other Python testing frameworks spoke up >>> *against* them, but several *users* of test frameworks spoke up in >>> favour of them. ;-) >>> >> >> One problem is that it is not obvious what happens with inheritance. >> If I have a class-level setUp for class B, and class C inherits from B, >> will there be a separate invocation of setUp for C, or not? >> (I guess both possibilities have use cases) >> > > Well, what I would expect (others may disagree) is that you only have class > level setup invoked for classes that have tests (so not for base classes) > and that the base-class setUpClass is only called if invoked by the > subclass. > > I haven't thought about *where* the code to do this should go. It *could* go > in TestSuite, but that feels like the wrong place. > When I implemented this in `dutest` I did it as follows : - Changed suiteClass (since it was an extension ;o) - I had to override the suite's `run` method PS: Probably it's not the right place, but AFAIK it's the only place ?we? have to do such things ;o) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Embedding pages? - Trac Users | Google Groups - http://feedproxy.google.com/~r/TracGViz-full/~3/-XtS7h-wjcI/e4cf16474aa3cb87 From olemis at gmail.com Tue Feb 9 20:26:53 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 14:26:53 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> Message-ID: <24ea26601002091126w21edb33dt8dde61023173923f@mail.gmail.com> On Tue, Feb 9, 2010 at 2:16 PM, Brian Curtin wrote: > On Tue, Feb 9, 2010 at 12:29, Olemis Lang wrote: >> >> On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord >> wrote: >> > I'm pretty sure I can introduce setUpClass and setUpModule without >> > breaking >> > compatibility with existing unittest extensions or backwards >> > compatibility >> > issues >> >> Is it possible to use the names `BeforeClass` and `AfterClass` (just >> to be make it look similar to JUnit naming conventions ;o) ? >> -- >> Regards, >> >> Olemis. > > > -1. setUp/tearDown is already well established here so I think it should > follow the same convention. > ok no big deal ;o) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Nabble - Trac Users - Embedding pages? 
- http://feedproxy.google.com/~r/TracGViz-full/~3/MWT7MJBi08w/Embedding-pages--td27358804.html From chambon.pascal at gmail.com Tue Feb 9 20:42:50 2010 From: chambon.pascal at gmail.com (Pascal Chambon) Date: Tue, 09 Feb 2010 20:42:50 +0100 Subject: [Python-Dev] Forking and Multithreading - enemy brothers In-Reply-To: <20100204171211.26099.1601744879.divmod.xquotient.160@localhost.localdomain> References: <20100204171211.26099.1601744879.divmod.xquotient.160@localhost.localdomain> Message-ID: <4B71BABA.30904@wanadoo.fr> Hello Some update about the spawnl() thingy ; I've adapted the win32 code to have a new unix Popen object, which works with a spawn() semantic. It's quite straightforward, and the mutiprocessing call of a python functions works OK. But I've run into some trouble : synchronization primitives. Win32 semaphore can be "teleported" to another process via the DuplicateHandle() call. But unix named semaphores don't work that way - instead, they must be opened with the same name by each spawned subprocess. The problem here, the current semaphore C code is optimized to forbid semaphore sharing (other than via fork) : use of (O_EXL|O_CREAT) on opening, immediate unlinking of new semaphores.... So if we want to benefit from sync primitives with this spawn() implementation, we need a working named semaphore implementation, too... What's the best in your opinion ? Editing the current multiprocessing semaphore's behaviour to allow (with specific options, attributes and methods) its use in this case ? Or adding a new NamedSemaphore type like this one ? http://semanchuk.com/philip/posix_ipc/ Regards, Pascal > > > From mfoord at python.org Tue Feb 9 19:49:40 2010 From: mfoord at python.org (Michael Foord) Date: Tue, 09 Feb 2010 18:49:40 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <4B719001.7080201@voidspace.org.uk> References: <4B719001.7080201@voidspace.org.uk> Message-ID: <4B71AE44.2030009@python.org> I missed another minor issue. In the interests of completeness... You currently have to subclass TextTestRunner (and override _makeResult) for it to use a custom TestResult. Implementing a custom test result is one of extensibility points of unittest, so I propose adding an optional argument to TextTestRunner allowing you to pass in a result class (or other callable) that _makeResult will use to create the result object. Should be uncontroversial. :-) http://bugs.python.org/issue7893 Michael On 09/02/2010 16:40, Michael Foord wrote: > Hello all, > > I've been looking at outstanding unittest issues as part of my > preparation for my PyCon talk. There are a couple of changes (minor) > I'd like to make that I thought I ought to run past Python-Dev first. > If I don't get any responses then I'll just do it, so you have been > warned. :-) > > The great google merge into unittest happened at PyCon last year [1]. > This included a change to TestCase.shortDescription() so that it would > *always* include the test name, whereas previously it would return the > test docstring or None. > > The problem this change solved was that tests with a docstring would > not have their name (test class and method name) reported during the > test run. Unfortunately the change broke part of twisted test running. > Reported as issue 7588: > > http://bugs.python.org/issue7588 > > It seems to me that the same effect (always reporting test name) can > be achieved in _TextTestResult.getDescription(). 
I propose to revert > the change to TestCase.shortDescription() (which has both a horrible > name and a horrible implementation and should probably be renamed > getDocstring so that what it does is obvious but never mind) and put > the change into _TextTestResult. > > It annoys me that _TextTestResult is private, as you will almost > certainly want to use it or subclass it when implementing custom test > systems. I am going to rename it TextTestResult, alias the old name > and document the old name as being deprecated. > > Another issue that I would like to address, but there are various > possible approaches, is issue 7559: http://bugs.python.org/issue7559 > Currently loadTestsFromName catches ImportError and rethrows as > AttributeError. This is horrible (it obscures the original error) but > there are backwards compatibility issues with fixing it. There are > three possible approaches: > > 1) Leave it (the default) > 2) Only throw an AttributeError if the import fails due to the name > being invalid (the module not existing) otherwise allow the error > through. (A minor but less serious change in behavior). > 3) A new method that turns failures into pseudo-tests that fail with > the original error when run. Possibly deprecating loadTestsFromName > > I favour option 3, but can't think of a good replacement name. :-) > > Comments welcomed. > > Despite deprecating (in the documentation - no actual deprecations > warnings I believe) a lot of the duplicate ways of doing things > (assert* favoured over fail* and assertEqual over assertEquals) we > didn't include deprecating assert_ in favour of assertTrue. I would > like to add that to the documentation. After 3.2 is out I would like > to clean up the documentation, removing mention of the deprecated > methods from the *main* documentation into a separate 'deprecated > methods' section. They currently make the documentation very untidy. > The unittest page should probably be split into several pages anyway > and needs improving. > > Other outstanding minor issues: > > Allow dotted names for test discovery > http://bugs.python.org/issue7780 - I intend to implement this as > described in the last comment > > A 'check_order' optional argument (defaulting to True) for > assertSequenceEqual > http://bugs.python.org/issue7832 - needs patch > > The breaking of __unittest caused by splitting unittesst into a > package needs fixing. The fix needs to work when Python is run without > frames support (IronPython). > http://bugs.python.org/issue7815 - needs patch > > Allow a __unittest (or similar) decorator for user implemented assert > functions > http://bugs.python.org/issue1705520 - needs patch > > Allow modules to define test_suite callable. > http://bugs.python.org/issue7501 - I propose to close as rejected. Use > load_tests instead. > > Display time taken by individual tests when in verbose mode. > http://bugs.python.org/issue4080 - anyone any opinions? > > Allow automatic formatting of arguments in assert* failure messages. > http://bugs.python.org/issue6966 - I propose to close as rejected > > removeTest() method on TestSuite > http://bugs.python.org/issue1778410 - anyone any opinions? > > expect methods (delayed fail) > http://bugs.python.org/issue3615 - any opinions? Personally I think > that the TestCase API is big enough already > > > All the best, > > Michael Foord > > [1] Mostly in revision 7-837. 
> http://svn.python.org/view?view=rev&revision=70837 > > > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From martin at v.loewis.de Tue Feb 9 22:47:53 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 09 Feb 2010 22:47:53 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: <4B71D809.8050708@v.loewis.de> > I've noticed a couple of issues that 100% crash Python 2.6.4 like this > one - http://bugs.python.org/issue6608 Is it ok to release new > versions that are known to crash? As a general principle: yes, that's ok. We even distribute known crashers with every release. Regards, Martin From ben+python at benfinney.id.au Tue Feb 9 22:50:07 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 10 Feb 2010 08:50:07 +1100 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues References: <4B719001.7080201@voidspace.org.uk> Message-ID: <87sk9ahyeo.fsf@benfinney.id.au> Michael Foord writes: > It seems to me that the same effect (always reporting test name) can > be achieved in _TextTestResult.getDescription(). I propose to revert > the change to TestCase.shortDescription() (which has both a horrible > name and a horrible implementation and should probably be renamed > getDocstring so that what it does is obvious but never mind) and put > the change into _TextTestResult. I understood the point of ?TestCase.shortDescription?, and indeed the point of that particular name, was to be clear that some *other* text could be the short description for the test case. Indeed, this is what you've come up with: a different implementation for generating a short description. The default implementation uses *part of* the docstring (the PEP 257 specified single-line summary), but that's just one possible way to make a short test case description. Calling it ?getDocstring? would not only be disruptive, but clearly false even in the default implementation. I've overridden that method to provide better, more specific, test case short descriptions, and the name works fine since I'm providing an overridden implementation of ?the short description of this test case?. I've even presented a patch to the third-party ?testscenarios? library to decorate the short description with the scenario name. I'd suggest this method, with its existing name, is the correct way to embellish test case descriptions for report output. -- \ ?I do not believe in forgiveness as it is preached by the | `\ church. We do not need the forgiveness of God, but of each | _o__) other and of ourselves.? ?Robert G. 
Ingersoll | Ben Finney From martin at v.loewis.de Tue Feb 9 22:55:46 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 09 Feb 2010 22:55:46 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> Message-ID: <4B71D9E2.5020201@v.loewis.de> > Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : >> I've noticed a couple of issues that 100% crash Python 2.6.4 like this >> one - http://bugs.python.org/issue6608 Is it ok to release new versions >> that are known to crash? > > I've changed this issue to release blocker. What are the other issues? For a bug fix release, it should (IMO) be a release blocker *only* if this is a regression in the branch or some recent bug fix release over some earlier bug fix release. E.g. if 2.6.2 had broken something that worked in 2.6.1, it would be ok to delay 2.6.5. If 2.6.2 breaks in a case where all prior releases also broke, it would NOT be ok, IMO, to block 2.6.5 for that. There can always be a 2.6.6 release. Of course, if this gets fixed before the scheduled release of 2.6.5, anyway, that would be nice. Regards, Martin From ben+python at benfinney.id.au Tue Feb 9 22:57:15 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 10 Feb 2010 08:57:15 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <87ocjyhy2s.fsf@benfinney.id.au> Michael Foord writes: > The next 'big' change to unittest will (may?) be the introduction of > class and module level setUp and tearDown. This was discussed on > Python-ideas and Guido supported them. They can be useful but are also > very easy to abuse (too much shared state, monolithic test classes and > modules). Several authors of other Python testing frameworks spoke up > *against* them, but several *users* of test frameworks spoke up in > favour of them. ;-) I think the perceived need for these is from people trying to use the ?unittest? API for test that are *not* unit tests. That is, people have a need for integration tests (test this module's interaction with some other module) or system tests (test the behaviour of the whole running system). They then try to crowbar those tests into ?unittest? and finding it lacking, since ?unittest? is designed for tests of function-level units, without persistent state between those test cases. Is there a better third-party framework for use in these cases? As Olemis points out later in this thread, I don't think it's good for the ?unittest? module to keep growing for uses that aren't focussed on unit tests (as contrasted with other kinds of tests). -- \ ?The industrial system is profoundly dependent on commercial | `\ television and could not exist in its present form without it.? | _o__) ?John Kenneth Galbraith, _The New Industrial State_, 1967 | Ben Finney From solipsis at pitrou.net Tue Feb 9 23:02:45 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 09 Feb 2010 23:02:45 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B71D9E2.5020201@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: <1265752965.3367.1.camel@localhost> Le mardi 09 f?vrier 2010 ? 22:55 +0100, "Martin v. 
L?wis" a ?crit : > > Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : > >> I've noticed a couple of issues that 100% crash Python 2.6.4 like this > >> one - http://bugs.python.org/issue6608 Is it ok to release new versions > >> that are known to crash? > > > > I've changed this issue to release blocker. What are the other issues? > > For a bug fix release, it should (IMO) be a release blocker *only* if > this is a regression in the branch or some recent bug fix release over > some earlier bug fix release. As far as I remember, I think we have had release blockers which weren't regressions. Not that I strongly want to argue in favour of this one anyway. cheers Antoine. From martin at v.loewis.de Tue Feb 9 23:20:59 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 09 Feb 2010 23:20:59 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <1265752965.3367.1.camel@localhost> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> Message-ID: <4B71DFCB.9080304@v.loewis.de> >>> I've changed this issue to release blocker. What are the other issues? >> For a bug fix release, it should (IMO) be a release blocker *only* if >> this is a regression in the branch or some recent bug fix release over >> some earlier bug fix release. > > As far as I remember, I think we have had release blockers which weren't > regressions. Of course, the release manager can always declare anything a release blocker, so that may have been the reason (I don't recall the details). Also, on a feature release, many more kinds of blockers may exist (e.g. for features that aren't complete yet). It simply may also have been that nobody argued in favor of process. I know that I have, from time to time, unblocked release blockers because I thought that they shouldn't have blocked the release out of principle. IOW, I feel that release blockers should only be used if something really bad would happen that can be prevented by not releasing. If nothing actually gets worse by the release, the release shouldn't be blocked. Regards, Martin From olemis at gmail.com Tue Feb 9 23:22:04 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 17:22:04 -0500 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <87sk9ahyeo.fsf@benfinney.id.au> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> Message-ID: <24ea26601002091422u4b9915fbw17d1755ae552cc9c@mail.gmail.com> On Tue, Feb 9, 2010 at 4:50 PM, Ben Finney wrote: > Michael Foord writes: > >> It seems to me that the same effect (always reporting test name) can >> be achieved in _TextTestResult.getDescription(). I propose to revert >> the change to TestCase.shortDescription() (which has both a horrible >> name and a horrible implementation and should probably be renamed >> getDocstring so that what it does is obvious but never mind) and put >> the change into _TextTestResult. > [...] > > I've overridden that method to provide better, more specific, test case > short descriptions, and the name works fine since I'm providing an > overridden implementation of ?the short description of this test case?. Oh yes ! Thnx for mentioning that ! Very much ! If you move or remove shortDescription then I think dutest will be broken. In that case there is an automatically generated short description comprising the doctest name or id (e.g. 
class name + method name ;o) and example index (just remember that every interactive example is considered to be a test case ;o) In that case there is no other way to get this done unless an all-mighty & heavy test result be implemented . So I am *VERY* -1 for removing `shortDescription` (and I also think that TC should be the one to provide the short desc rather than the test result, just like what Ben Finney said before ;o) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: PEP 391 - Please Vote! - http://feedproxy.google.com/~r/TracGViz-full/~3/hY2h6ZSAFRE/110617 From olemis at gmail.com Tue Feb 9 23:25:48 2010 From: olemis at gmail.com (Olemis Lang) Date: Tue, 9 Feb 2010 17:25:48 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <87ocjyhy2s.fsf@benfinney.id.au> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> Message-ID: <24ea26601002091425u65997983rcb8c2bdff7e565b2@mail.gmail.com> On Tue, Feb 9, 2010 at 4:57 PM, Ben Finney wrote: > Michael Foord writes: > >> The next 'big' change to unittest will (may?) be the introduction of >> class and module level setUp and tearDown. This was discussed on >> Python-ideas and Guido supported them. They can be useful but are also >> very easy to abuse (too much shared state, monolithic test classes and >> modules). Several authors of other Python testing frameworks spoke up >> *against* them, but several *users* of test frameworks spoke up in >> favour of them. ;-) > > I think the perceived need for these is from people trying to use the > ?unittest? API for test that are *not* unit tests. > I dont't think so. I'll try to explain what I consider is a real use case tomorrow ... -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Free milestone ranch Download - mac software - http://feedproxy.google.com/~r/TracGViz-full/~3/rX6_RmRWThE/ From holger.krekel at gmail.com Tue Feb 9 23:34:00 2010 From: holger.krekel at gmail.com (Holger Krekel) Date: Tue, 9 Feb 2010 23:34:00 +0100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <87ocjyhy2s.fsf@benfinney.id.au> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> Message-ID: On Tue, Feb 9, 2010 at 10:57 PM, Ben Finney wrote: > Michael Foord writes: > >> The next 'big' change to unittest will (may?) be the introduction of >> class and module level setUp and tearDown. This was discussed on >> Python-ideas and Guido supported them. They can be useful but are also >> very easy to abuse (too much shared state, monolithic test classes and >> modules). Several authors of other Python testing frameworks spoke up >> *against* them, but several *users* of test frameworks spoke up in >> favour of them. ;-) > > I think the perceived need for these is from people trying to use the > ?unittest? API for test that are *not* unit tests. > > That is, people have a need for integration tests (test this module's > interaction with some other module) or system tests (test the behaviour > of the whole running system). They then try to crowbar those tests into > ?unittest? and finding it lacking, since ?unittest? is designed for > tests of function-level units, without persistent state between those > test cases. > > Is there a better third-party framework for use in these cases? As > Olemis points out later in this thread, I don't think it's good for the > ?unittest? 
module to keep growing for uses that aren't focussed on unit > tests (as contrasted with other kinds of tests). My general view these days is that for unit tests there is practically not much of a a difference in using unittest, nose or py.test (give or take reporting niceness and flexibility). However, Functional and integration tests involve more complex fixture management and i came to find the setup/teardown on classes and modules lacking. Which is why there is testresources from Rob and funcargs in py.test. The latter allow to setup and teardown resources from a fixture factory which can determine the setup/teardown scope and perform whole-session caching without changing test code. In my Pycon testing tutorial (http://tinyurl.com/ya6b3vr ) i am going to exercise it in depth with beginners and here are docs: http://pytest.org/funcargs.html One nice bit is that you can for a given test module issue "py.test --funcargs" and get a list of resources you can use in your test function - by simply specifying them in the test function. In principle it's possible to port this approach to the stdlib - actually i consider to do it for the std-unittest- running part of py.test because people asked for it - if that proves useful i can imagine to refine it and offer it for inclusion. cheers, holger From fuzzyman at voidspace.org.uk Tue Feb 9 23:36:23 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 22:36:23 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <24ea26601002091422u4b9915fbw17d1755ae552cc9c@mail.gmail.com> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <24ea26601002091422u4b9915fbw17d1755ae552cc9c@mail.gmail.com> Message-ID: <4B71E367.9050700@voidspace.org.uk> On 09/02/2010 22:22, Olemis Lang wrote: > On Tue, Feb 9, 2010 at 4:50 PM, Ben Finney wrote: > >> Michael Foord writes: >> >> >>> It seems to me that the same effect (always reporting test name) can >>> be achieved in _TextTestResult.getDescription(). I propose to revert >>> the change to TestCase.shortDescription() (which has both a horrible >>> name and a horrible implementation and should probably be renamed >>> getDocstring so that what it does is obvious but never mind) and put >>> the change into _TextTestResult. >>> >> > [...] > >> I've overridden that method to provide better, more specific, test case >> short descriptions, and the name works fine since I'm providing an >> overridden implementation of ?the short description of this test case?. >> > > Oh yes ! Thnx for mentioning that ! Very much ! > > If you move or remove shortDescription then I think dutest will be > broken. In that case there is an automatically generated short > description comprising the doctest name or id (e.g. class name + > method name ;o) and example index (just remember that every > interactive example is considered to be a test case ;o) > I am *not* suggesting removing shortDescription I am suggesting reverting to its behavior in Python 2.6. That would not affect your or Ben's use case (obviously). Given the name 'short description' it is being argued that making the description longer by adding the test name is inappropriate and that if this needs to be reported (which it should be) then this rightly belongs in the TestResult. Michael Foord > In that case there is no other way to get this done unless an > all-mighty& heavy test result be implemented . 
> > So I am *VERY* -1 for removing `shortDescription` (and I also think > that TC should be the one to provide the short desc rather than the > test result, just like what Ben Finney said before ;o) > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From robert.kern at gmail.com Tue Feb 9 23:36:38 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Feb 2010 16:36:38 -0600 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <87ocjyhy2s.fsf@benfinney.id.au> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> Message-ID: On 2010-02-09 15:57 PM, Ben Finney wrote: > Is there a better third-party framework for use in these cases? As > Olemis points out later in this thread, I don't think it's good for the > ?unittest? module to keep growing for uses that aren't focussed on unit > tests (as contrasted with other kinds of tests). nosetests allows you to write such module-level and class-level setup and teardown functions. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fuzzyman at voidspace.org.uk Tue Feb 9 23:39:30 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 22:39:30 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <87sk9ahyeo.fsf@benfinney.id.au> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> Message-ID: <4B71E422.8000402@voidspace.org.uk> On 09/02/2010 21:50, Ben Finney wrote: > Michael Foord writes: > > >> It seems to me that the same effect (always reporting test name) can >> be achieved in _TextTestResult.getDescription(). I propose to revert >> the change to TestCase.shortDescription() (which has both a horrible >> name and a horrible implementation and should probably be renamed >> getDocstring so that what it does is obvious but never mind) and put >> the change into _TextTestResult. >> > I understood the point of ?TestCase.shortDescription?, and indeed the > point of that particular name, was to be clear that some *other* text > could be the short description for the test case. Indeed, this is what > you've come up with: a different implementation for generating a short > description. > > The default implementation uses *part of* the docstring (the PEP 257 > specified single-line summary), but that's just one possible way to make > a short test case description. Calling it ?getDocstring? would not only > be disruptive, but clearly false even in the default implementation. > > I'm not actually suggesting doing that - I was just musing out loud. 
So do you think the *new* implementation would be better, given that it breaks part of the twisted test suite, or would you be fine with me putting the current change into TestResult instead? Given that the change broke something, and the desired effect can be gained with a different change, I don't really see a downside to the change I'm proposing (reverting shortDescription and moving the code that adds the test name to TestResult). Michael > I've overridden that method to provide better, more specific, test case > short descriptions, and the name works fine since I'm providing an > overridden implementation of ?the short description of this test case?. > I've even presented a patch to the third-party ?testscenarios? library > to decorate the short description with the scenario name. > > I'd suggest this method, with its existing name, is the correct way to > embellish test case descriptions for report output. > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From fuzzyman at voidspace.org.uk Tue Feb 9 23:42:39 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 09 Feb 2010 22:42:39 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <87ocjyhy2s.fsf@benfinney.id.au> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> Message-ID: <4B71E4DF.6060309@voidspace.org.uk> On 09/02/2010 21:57, Ben Finney wrote: > Michael Foord writes: > > >> The next 'big' change to unittest will (may?) be the introduction of >> class and module level setUp and tearDown. This was discussed on >> Python-ideas and Guido supported them. They can be useful but are also >> very easy to abuse (too much shared state, monolithic test classes and >> modules). Several authors of other Python testing frameworks spoke up >> *against* them, but several *users* of test frameworks spoke up in >> favour of them. ;-) >> > I think the perceived need for these is from people trying to use the > ?unittest? API for test that are *not* unit tests. > > That is, people have a need for integration tests (test this module's > interaction with some other module) or system tests (test the behaviour > of the whole running system). They then try to crowbar those tests into > ?unittest? and finding it lacking, since ?unittest? is designed for > tests of function-level units, without persistent state between those > test cases. > I've used unittest for long running functional and integration tests (in both desktop and web applications). The infrastructure it provides is great for this. Don't get hung up on the fact that it is called unittest. In fact for many users the biggest reason it isn't suitable for tests like these is the lack of shared fixture support - which is why the other Python test frameworks provide them and we are going to bring it into unittest. Michael > Is there a better third-party framework for use in these cases? 
As > Olemis points out later in this thread, I don't think it's good for the > ?unittest? module to keep growing for uses that aren't focussed on unit > tests (as contrasted with other kinds of tests). > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From collinwinter at google.com Tue Feb 9 23:47:26 2010 From: collinwinter at google.com (Collin Winter) Date: Tue, 9 Feb 2010 14:47:26 -0800 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> Message-ID: <3c8293b61002091447o42d207a1g84fbecff8b62e070@mail.gmail.com> To follow up on some of the open issues: On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter wrote: [snip] > Open Issues > =========== > > - *Code review policy for the ``py3k-jit`` branch.* How does the CPython > ?community want us to procede with respect to checkins on the ``py3k-jit`` > ?branch? Pre-commit reviews? Post-commit reviews? > > ?Unladen Swallow has enforced pre-commit reviews in our trunk, but we realize > ?this may lead to long review/checkin cycles in a purely-volunteer > ?organization. We would like a non-Google-affiliated member of the CPython > ?development team to review our work for correctness and compatibility, but we > ?realize this may not be possible for every commit. The feedback we've gotten so far is that at most, only larger, more critical commits should be sent for review, while most commits can just go into the branch. Is that broadly agreeable to python-dev? > - *How to link LLVM.* Should we change LLVM to better support shared linking, > ?and then use shared linking to link the parts of it we need into CPython? The consensus has been that we should link shared against LLVM. Jeffrey Yasskin is now working on this in upstream LLVM. We are tracking this at http://code.google.com/p/unladen-swallow/issues/detail?id=130 and http://llvm.org/PR3201. > - *Prioritization of remaining issues.* We would like input from the CPython > ?development team on how to prioritize the remaining issues in the Unladen > ?Swallow codebase. Some issues like memory usage are obviously critical before > ?merger with ``py3k``, but others may fall into a "nice to have" category that > ?could be kept for resolution into a future CPython 3.x release. The big-ticket items here are what we expected: reducing memory usage and startup time. We also need to improve profiling options, both for oProfile and cProfile. > - *Create a C++ style guide.* Should PEP 7 be extended to include C++, or > ?should a separate C++ style PEP be created? Unladen Swallow maintains its own > ?style guide [#us-styleguide]_, which may serve as a starting point; the > ?Unladen Swallow style guide is based on both LLVM's [#llvm-styleguide]_ and > ?Google's [#google-styleguide]_ C++ style guides. 
Any thoughts on a CPython C++ style guide? My personal preference would be to extend PEP 7 to cover C++ by taking elements from http://code.google.com/p/unladen-swallow/wiki/StyleGuide and the LLVM and Google style guides (which is how we've been developing Unladen Swallow). If that's broadly agreeable, Jeffrey and I will work on a patch to PEP 7. Thanks, Collin Winter From exarkun at twistedmatrix.com Wed Feb 10 00:15:05 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Tue, 09 Feb 2010 23:15:05 -0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B71E4DF.6060309@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> Message-ID: <20100209231505.26099.1257895177.divmod.xquotient.852@localhost.localdomain> On 10:42 pm, fuzzyman at voidspace.org.uk wrote: >On 09/02/2010 21:57, Ben Finney wrote: >>Michael Foord writes: >>>The next 'big' change to unittest will (may?) be the introduction of >>>class and module level setUp and tearDown. This was discussed on >>>Python-ideas and Guido supported them. They can be useful but are >>>also >>>very easy to abuse (too much shared state, monolithic test classes >>>and >>>modules). Several authors of other Python testing frameworks spoke up >>>*against* them, but several *users* of test frameworks spoke up in >>>favour of them. ;-) >>I think the perceived need for these is from people trying to use the >> 18unittest 19 API for test that are *not* unit tests. >> >>That is, people have a need for integration tests (test this module's >>interaction with some other module) or system tests (test the >>behaviour >>of the whole running system). They then try to crowbar those tests >>into >> 18unittest 19 and finding it lacking, since 18unittest 19 is designed for >>tests of function-level units, without persistent state between those >>test cases. > >I've used unittest for long running functional and integration tests >(in both desktop and web applications). The infrastructure it provides >is great for this. Don't get hung up on the fact that it is called >unittest. In fact for many users the biggest reason it isn't suitable >for tests like these is the lack of shared fixture support - which is >why the other Python test frameworks provide them and we are going to >bring it into unittest. For what it's worth, we just finished *removing* support for setUpClass and tearDownClass from Trial. Jean-Paul From solipsis at pitrou.net Wed Feb 10 00:16:25 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 9 Feb 2010 23:16:25 +0000 (UTC) Subject: [Python-Dev] Python 2.6.5 References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > IOW, I feel that release blockers should only be used if something > really bad would happen that can be prevented by not releasing. If > nothing actually gets worse by the release, the release shouldn't be > blocked. I think most blocking bugs we've had had been existing for a long time, but had only been discovered or at least reported quite recently. Other blockers were also about features not yet implemented, or missing backports. So whether something would have "gone bad" is really in the eye of the beholder :) Regards Antoine. 
From guido at python.org Wed Feb 10 00:29:33 2010 From: guido at python.org (Guido van Rossum) Date: Tue, 9 Feb 2010 15:29:33 -0800 Subject: [Python-Dev] PEP 345 and PEP 386 In-Reply-To: <94bdd2611002041455w69a50000v74f989d1235d9b2a@mail.gmail.com> References: <94bdd2611002041455w69a50000v74f989d1235d9b2a@mail.gmail.com> Message-ID: On Thu, Feb 4, 2010 at 2:55 PM, Tarek Ziad? wrote: > On Thu, Feb 4, 2010 at 8:20 PM, Guido van Rossum wrote: > [..] >> I have one comment on PEP 345: Why is author-email mandatory? I'm sure >> there are plenty of cases where either the author doesn't want their >> email address published, or their last know email address is no longer >> valid. (Tarek responded off-line that it isn't all that mandatory; I >> propose to say so in the PEP.) > > Yes, I propose to remove the mandatory flag from that field. Since this is done I now approve both PEP 345 and PEP 386 (which is not to say that small editorial changes to the text couldn't be made). -- --Guido van Rossum (python.org/~guido) From ziade.tarek at gmail.com Wed Feb 10 00:32:45 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Wed, 10 Feb 2010 00:32:45 +0100 Subject: [Python-Dev] PEP 345 and PEP 386 In-Reply-To: References: <94bdd2611002041455w69a50000v74f989d1235d9b2a@mail.gmail.com> Message-ID: <94bdd2611002091532w1aa4b720gbe7887fb1500236c@mail.gmail.com> On Wed, Feb 10, 2010 at 12:29 AM, Guido van Rossum wrote: [..] > > Since this is done I now approve both PEP 345 and PEP 386 (which is > not to say that small editorial changes to the text couldn't be made). Thanks ! Thanks to all the people that helped in those PEPs Tarek From steve at pearwood.info Wed Feb 10 00:32:04 2010 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 10 Feb 2010 10:32:04 +1100 Subject: [Python-Dev] Request for review of issue 4037 Message-ID: <201002101032.05084.steve@pearwood.info> Hello, I have submitted a patch and a test script for issue 4037 on the bug tracker, "doctest.py should include method descriptors when looking inside a class __dict__" http://bugs.python.org/issue4037 I would be grateful if somebody could review it please, and if suitable, commit it. Thank you. -- Steven D'Aprano From van.lindberg at gmail.com Wed Feb 10 01:24:53 2010 From: van.lindberg at gmail.com (VanL) Date: Tue, 09 Feb 2010 18:24:53 -0600 Subject: [Python-Dev] PyCon is coming! Tomorrow, Feb. 10th is the last day for pre-conference rates Message-ID: PyCon is coming! Tomorrow (February 10th) is the last day for pre-conference rates. You can register for PyCon online at: Register while it is still Feb. 10th somewhere in the world and rest easy in the knowledge that within 10 days you will enjoying the company of some of the finest Python hackers in the world. As an additional bonus, PyCon this year will be in Atlanta, making it an ideal location for those looking for a way to escape the late winter blizzards in the northeastern United States, or the dreary fog of the Bay area. See you at PyCon 2010! 
From benjamin at python.org Wed Feb 10 02:03:01 2010 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 9 Feb 2010 19:03:01 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> Message-ID: <1afaf6161002091703u38768eb9m5e900227e8df2b14@mail.gmail.com> 2010/2/9 Dirkjan Ochtman : > On Tue, Feb 9, 2010 at 04:47, Benjamin Peterson wrote: >> I don't believe so. My plan was to manually sync updates or use subrepos. > > Using subrepos should work well for this. Excellent. > > It turned out that my local copy of the Subversion repository > contained the Python dir only, so I'm now syncing a full copy so that > I can convert other parts. I believe 2to3 might be a little tricky > because it was moved at some point, but I can look at getting that > right (and this will help in converting other parts of the larger > Python repository). What do you mean by moved? I don't it has ever moved around in the sandbox. -- Regards, Benjamin From ben+python at benfinney.id.au Wed Feb 10 02:07:11 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 10 Feb 2010 12:07:11 +1100 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> Message-ID: <87aavhj3uo.fsf@benfinney.id.au> Michael Foord writes: > On 09/02/2010 21:50, Ben Finney wrote: > > I understood the point of ?TestCase.shortDescription?, and indeed > > the point of that particular name, was to be clear that some *other* > > text could be the short description for the test case. Indeed, this > > is what you've come up with: a different implementation for > > generating a short description. > > Given that the change broke something, and the desired effect can be > gained with a different change, I don't really see a downside to the > change I'm proposing (reverting shortDescription and moving the code > that adds the test name to TestResult). What you describe (adding the class and method name when reporting the test) sounds like it belongs in the TestRunner, since it's more a case of ?give me more information about the test result?. That is, a TestRunner that reports each result *with* the extra information would be useful, for some cases, but should not modify the TestResult instance to do that. Am I right that this approach would avoid breakage in the case of frameworks that don't expect their TestRunner to behave that way? e.g. Twisted could simply use the TestRunner that doesn't behave this way, and (since the TestResult instances aren't any different) continue to get the expected behaviour. -- \ ?If nothing changes, everything will remain the same.? ?Barne's | `\ Law | _o__) | Ben Finney From ben+python at benfinney.id.au Wed Feb 10 02:10:30 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 10 Feb 2010 12:10:30 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> Message-ID: <876365j3p5.fsf@benfinney.id.au> Michael Foord writes: > I've used unittest for long running functional and integration tests > (in both desktop and web applications). The infrastructure it provides > is great for this. 
Don't get hung up on the fact that it is called > unittest. In fact for many users the biggest reason it isn't suitable > for tests like these is the lack of shared fixture support - which is > why the other Python test frameworks provide them and we are going to > bring it into unittest. I would argue that one of the things that makes ?unittest? good is that it makes it difficult to do the wrong thing ? or at least *this* wrong thing. Fixtures persist for the lifetime of a single test case, and no more; that's the way unit tests should work. Making the distinction clearer by using a different API (and *not* extending the ?unittest? API) seems to be the right way to go. -- \ ?I object to doing things that computers can do.? ?Olin Shivers | `\ | _o__) | Ben Finney From martin at v.loewis.de Wed Feb 10 05:26:31 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 10 Feb 2010 05:26:31 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> Message-ID: <4B723577.8000206@v.loewis.de> Antoine Pitrou wrote: > Martin v. L?wis v.loewis.de> writes: >> IOW, I feel that release blockers should only be used if something >> really bad would happen that can be prevented by not releasing. If >> nothing actually gets worse by the release, the release shouldn't be >> blocked. > > I think most blocking bugs we've had had been existing for a long time, but had > only been discovered or at least reported quite recently. > Other blockers were also about features not yet implemented, or missing > backports. So whether something would have "gone bad" is really in the eye of > the beholder :) Maybe I'm being pedantic, but I really think there should be more objective criteria for such things. We could set a policy that we don't want to release Python if there are known ways of crashing it, but I think that would be useless as it would mean that we can't make any releases for the next five years or so (because we all know of ways of crashing the VM that aren't fixed yet; when I run out of ideas, I just ask Armin Rigo :-). So the policy that I would suggest to follow instead is that known crashes (and other "serious" bugs, like incompatibilities, or failures to build) can block releases only if they are regressions, or if the release manager decides to make them release blockers. In particular, I think that requests for blocking a release should be accompanies with a promise from a committer to resolve the issue by some point in time. When that point has passed with the issue unresolved, the release manager should be free to unblock the release. Regards, Martin From barry at python.org Wed Feb 10 06:24:03 2010 From: barry at python.org (Barry Warsaw) Date: Wed, 10 Feb 2010 00:24:03 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B71D9E2.5020201@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: On Feb 9, 2010, at 4:55 PM, Martin v. L?wis wrote: >> Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : >>> I've noticed a couple of issues that 100% crash Python 2.6.4 like this >>> one - http://bugs.python.org/issue6608 Is it ok to release new versions >>> that are known to crash? >> >> I've changed this issue to release blocker. What are the other issues? 
> > For a bug fix release, it should (IMO) be a release blocker *only* if > this is a regression in the branch or some recent bug fix release over > some earlier bug fix release. > > E.g. if 2.6.2 had broken something that worked in 2.6.1, it would be ok > to delay 2.6.5. If 2.6.2 breaks in a case where all prior releases also > broke, it would NOT be ok, IMO, to block 2.6.5 for that. There can > always be a 2.6.6 release. > > Of course, if this gets fixed before the scheduled release of 2.6.5, > anyway, that would be nice. I completely agree. Besides, unless we have volunteers to step up, create, review, and apply patches, it makes no sense to hold up releases. In the case of the first posted bug, we need a Windows core developer to test, bless and apply the patch. -Barry From barry at python.org Wed Feb 10 06:28:33 2010 From: barry at python.org (Barry Warsaw) Date: Wed, 10 Feb 2010 00:28:33 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B71DFCB.9080304@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> Message-ID: <0F501E79-505A-4190-A845-27E200BDBD25@python.org> On Feb 9, 2010, at 5:20 PM, Martin v. L?wis wrote: > Of course, the release manager can always declare anything a release > blocker, so that may have been the reason (I don't recall the details). I should probably clarify my last statement. I will sometimes mark an issue "release blocker" because I'd really like it to be fixed for the next point release, or because we're very close to having an applicable patch. Think of it more as a reminder to address the issue before the next release is created. However, I have also knocked issues down from blocker if it's clear that we won't have a fix in time, and it meets the other criteria that Martin has laid out. So feel free to mark issues as release blockers for 2.6.5. That doesn't mean it will actually block the release. -Barry From techtonik at gmail.com Wed Feb 10 06:54:13 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Wed, 10 Feb 2010 07:54:13 +0200 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B71D9E2.5020201@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: On Tue, Feb 9, 2010 at 11:55 PM, "Martin v. L?wis" wrote: >> Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : >>> I've noticed a couple of issues that 100% crash Python 2.6.4 like this >>> one - http://bugs.python.org/issue6608 ?Is it ok to release new versions >>> that are known to crash? >> >> I've changed this issue to release blocker. What are the other issues? > > For a bug fix release, it should (IMO) be a release blocker *only* if > this is a regression in the branch or some recent bug fix release over > some earlier bug fix release. Is it possible to make exploits out of crashers? -- anatoly t. From raymond.hettinger at gmail.com Wed Feb 10 06:59:12 2010 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 9 Feb 2010 21:59:12 -0800 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: <838180B6-5A1C-404D-A3DA-553011A65DA6@gmail.com> On Feb 9, 2010, at 9:54 PM, anatoly techtonik wrote: > > Is it possible to make exploits out of crashers? 
The crashers involve creating convoluted python code, but then if you're in a position to execute arbitrary Python code, then you don't have to resort to any tricks to do something nasty within the scope of your user permissions. Raymond From martin at v.loewis.de Wed Feb 10 07:22:16 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 10 Feb 2010 07:22:16 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: <4B725098.5070908@v.loewis.de> > On Tue, Feb 9, 2010 at 11:55 PM, "Martin v. L?wis" wrote: >>> Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : >>>> I've noticed a couple of issues that 100% crash Python 2.6.4 like this >>>> one - http://bugs.python.org/issue6608 Is it ok to release new versions >>>> that are known to crash? >>> I've changed this issue to release blocker. What are the other issues? >> For a bug fix release, it should (IMO) be a release blocker *only* if >> this is a regression in the branch or some recent bug fix release over >> some earlier bug fix release. > > Is it possible to make exploits out of crashers? It depends on the specific crasher. In Python, it depends on the application as well. In the specific issue you mentioned, it doesn't crash because of a memory overwrite, but because of a deliberate process shutdown in the C runtime. So you can't construct arbitrary code execution out of that. Regards, Martin From solipsis at pitrou.net Wed Feb 10 07:22:05 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 10 Feb 2010 06:22:05 +0000 (UTC) Subject: [Python-Dev] Python 2.6.5 References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: anatoly techtonik gmail.com> writes: > > Is it possible to make exploits out of crashers? It depends which ones. If it's something like a buffer overflow or a memory management problem, it may be possible to exploit it through carefully crafted input (in order to make the interpreter execute arbitrary machine code). A security expert would be able to shed more light on this. Regards Antoine. From solipsis at pitrou.net Wed Feb 10 07:25:13 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 10 Feb 2010 07:25:13 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B723577.8000206@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> Message-ID: <1265783113.3344.11.camel@localhost> Le mercredi 10 f?vrier 2010 ? 05:26 +0100, "Martin v. L?wis" a ?crit : > > Maybe I'm being pedantic, but I really think there should be more > objective criteria for such things. Well we could try to find objective criteria but I'm not sure we'll find agreement on them. > We could set a policy that we don't > want to release Python if there are known ways of crashing it, but I > think that would be useless as it would mean that we can't make any > releases for the next five years or so It really boils down to what kind of crasher it is. When it is triggered by giving 24 instead of 23 as an argument to time.asctime() (i.e. a simple off-by-one error), it is more severe than a crasher which can only be triggered through artificially convoluted code. 
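(Concretely, if I read issue 6608 right, something as trivial as the
following is enough to take a 2.6.4 interpreter down on Windows, because
hour 24 is one past the valid 0-23 range and the C runtime aborts the
process instead of surfacing a Python-level exception:

    import time
    # hypothetical reproducer: the tuple fields are (year, month, day,
    # hour, minute, second, weekday, yearday, isdst); hour=24 is out of range
    time.asctime((2010, 2, 10, 24, 0, 0, 2, 41, 0))

No convoluted code needed there.)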
> So the policy that I would suggest to follow instead is that known > crashes (and other "serious" bugs, like incompatibilities, or failures > to build) can block releases only if they are regressions, or if the > release manager decides to make them release blockers. I disagree with the idea that the severity of a bug is correlated with it being a regression. If e.g. a critical security problem is found, it has to be corrected even though it may have been present for 5 years in the source. Besides, as Barry said, classifying a bug as blocker is also a good way to attract some attention on it. Other classifications, even "critical", don't have the same effect. Regards Antoine. From dirkjan at ochtman.nl Wed Feb 10 09:08:18 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Wed, 10 Feb 2010 09:08:18 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002091703u38768eb9m5e900227e8df2b14@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <1afaf6161002091703u38768eb9m5e900227e8df2b14@mail.gmail.com> Message-ID: On Wed, Feb 10, 2010 at 02:03, Benjamin Peterson wrote: > What do you mean by moved? I don't it has ever moved around in the sandbox. IIRC it was moved into the sandbox from some other location at some point? Cheers, Dirkjan From fuzzyman at voidspace.org.uk Wed Feb 10 12:11:53 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 10 Feb 2010 11:11:53 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <87aavhj3uo.fsf@benfinney.id.au> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> <87aavhj3uo.fsf@benfinney.id.au> Message-ID: <4B729479.7080609@voidspace.org.uk> On 10/02/2010 01:07, Ben Finney wrote: > Michael Foord writes: > > >> On 09/02/2010 21:50, Ben Finney wrote: >> >>> I understood the point of ?TestCase.shortDescription?, and indeed >>> the point of that particular name, was to be clear that some *other* >>> text could be the short description for the test case. Indeed, this >>> is what you've come up with: a different implementation for >>> generating a short description. >>> >> Given that the change broke something, and the desired effect can be >> gained with a different change, I don't really see a downside to the >> change I'm proposing (reverting shortDescription and moving the code >> that adds the test name to TestResult). >> > What you describe (adding the class and method name when reporting > the test) sounds like it belongs in the TestRunner, since it's more a > case of ?give me more information about the test result?. > The code for giving information about individual test results is the TestResult. The TestRunner knows nothing about each individual result (or even about each individual test as it happens). The TestRunner is responsible for the whole test run, the TestCase runs individual tests and the TestResult reports (or holds) individual test results (at the behest of the TestCase). Given this structure it is not possible for test descriptions to be the responsibility of the TestRunner and I don't feel like re-structuring unittest today. 
:-) Michael > That is, a TestRunner that reports each result *with* the extra > information would be useful, for some cases, but should not modify the > TestResult instance to do that. > > Am I right that this approach would avoid breakage in the case of > frameworks that don't expect their TestRunner to behave that way? e.g. > Twisted could simply use the TestRunner that doesn't behave this way, > and (since the TestResult instances aren't any different) continue to > get the expected behaviour. > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From techtonik at gmail.com Wed Feb 10 12:57:31 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Wed, 10 Feb 2010 13:57:31 +0200 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <1265783113.3344.11.camel@localhost> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> Message-ID: On Wed, Feb 10, 2010 at 8:25 AM, Antoine Pitrou wrote: > > Besides, as Barry said, classifying a bug as blocker is also a good way > to attract some attention on it. Other classifications, even "critical", > don't have the same effect. Unfortunately, not many people have privilege to change bug properties to attract attention to the issues. For example, this patch - http://bugs.python.org/issue7582 is ready to be committed, it is trivial, not a release blocker, but would be nice be released. How to make it evident if nobody except committers is able to add any keywords to the issue? I suspect that even committers do not receive this privilege automatically. -- anatoly t. From solipsis at pitrou.net Wed Feb 10 13:14:50 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 10 Feb 2010 12:14:50 +0000 (UTC) Subject: [Python-Dev] Python 2.6.5 References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> Message-ID: anatoly techtonik gmail.com> writes: > > Unfortunately, not many people have privilege to change bug properties > to attract attention to the issues. For example, this patch - > http://bugs.python.org/issue7582 is ready to be committed, it is > trivial, not a release blocker, but would be nice be released. Well not every bug deserves special attention. The patch above is IMO low priority, since it's a minor addition to a script in the Tools directory... Not something which will make a big difference, and I'm being kind. :) As for setting keywords, there doesn't seem to be much you could have an authority to decide as a non-committer. You might think (and perhaps with good reason) that the patch is ready for commit into the SVN, but it's precisely a committer's job to decide that. 
(if you want to apply for commit rights, you can do so on this mailing-list; I cannot say if it could be accepted or not, since I haven't followed your contributions very closely. But given you don't even seem to be mentioned in the ACKS file the answer would probably be no at this point) Regards Antoine. From benjamin at python.org Wed Feb 10 13:59:38 2010 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 10 Feb 2010 06:59:38 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <1afaf6161002091703u38768eb9m5e900227e8df2b14@mail.gmail.com> Message-ID: <1afaf6161002100459x2e15375dgd12b5e7d2ce8fd07@mail.gmail.com> 2010/2/10 Dirkjan Ochtman : > On Wed, Feb 10, 2010 at 02:03, Benjamin Peterson wrote: >> What do you mean by moved? I don't it has ever moved around in the sandbox. > > IIRC it was moved into the sandbox from some other location at some point? r52858 | guido.van.rossum | 2006-11-29 11:38:40 -0600 (Wed, 29 Nov 2006) | 4 lines Changed paths: A /sandbox/trunk/2to3 A /sandbox/trunk/2to3/Grammar.pickle A /sandbox/trunk/2to3/Grammar.txt A /sandbox/trunk/2to3/pgen2 A /sandbox/trunk/2to3/pgen2/__init__.py A /sandbox/trunk/2to3/pgen2/__init__.pyc A /sandbox/trunk/2to3/pgen2/astnode.py A /sandbox/trunk/2to3/pgen2/conv.py A /sandbox/trunk/2to3/pgen2/driver.py A /sandbox/trunk/2to3/pgen2/grammar.py A /sandbox/trunk/2to3/pgen2/literals.py A /sandbox/trunk/2to3/pgen2/parse.py A /sandbox/trunk/2to3/pgen2/pgen.py A /sandbox/trunk/2to3/pgen2/python.py A /sandbox/trunk/2to3/pgen2/test.py A /sandbox/trunk/2to3/play.py A /sandbox/trunk/2to3/pynode.py Checkpoint of alternative Python 2.x-to-3.0 conversion tool. This contains a modified copy of pgen2 which was open-sourced by Elemental Security through a contributor's agreement with the PSF. ------------------------------------------------------------------------ The only moving was moving a lot of the files into a lib2to3 directory. It would be nice if the hg history could be preserved for those files. -- Regards, Benjamin From stephen at xemacs.org Wed Feb 10 15:10:31 2010 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 10 Feb 2010 23:10:31 +0900 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <1265783113.3344.11.camel@localhost> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> Message-ID: <87fx59xju0.fsf@uwakimon.sk.tsukuba.ac.jp> Antoine Pitrou writes: > Besides, as Barry said, classifying a bug as blocker is also a good way > to attract some attention on it. Other classifications, even "critical", > don't have the same effect. If done for the sole purpose of attracting attention, it's no different from spam. Opinions will differ about what is and is not a blocker, and I'm sure your sense is as conservative as the next guy's. But really, let's at least be in the grey zone; "attracting attention" is not a consideration. From stephen at xemacs.org Wed Feb 10 15:14:55 2010 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Wed, 10 Feb 2010 23:14:55 +0900 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: <87eiktxjmo.fsf@uwakimon.sk.tsukuba.ac.jp> anatoly techtonik writes: > Is it possible to make exploits out of crashers? Depends on how you define "exploit". If your definition includes denial of service, yes, crashing a server application would count. Privilege escalation is harder to achieve. The general answer is "yes", but each case is different, and requires expert analysis. From barry at python.org Wed Feb 10 15:09:53 2010 From: barry at python.org (Barry Warsaw) Date: Wed, 10 Feb 2010 09:09:53 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> Message-ID: <20100210090953.22b02263@freewill.wooz.org> On Feb 10, 2010, at 01:57 PM, anatoly techtonik wrote: >Unfortunately, not many people have privilege to change bug properties >to attract attention to the issues. For example, this patch - >http://bugs.python.org/issue7582 is ready to be committed, it is >trivial, not a release blocker, but would be nice be released. How to >make it evident if nobody except committers is able to add any >keywords to the issue? I suspect that even committers do not receive >this privilege automatically. You do exactly what you've done here: email python-dev and plead your case. This particular issue seems like a new feature so it's not appropriate for Python 2.6.5. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From olemis at gmail.com Wed Feb 10 15:45:41 2010 From: olemis at gmail.com (Olemis Lang) Date: Wed, 10 Feb 2010 09:45:41 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> Message-ID: <24ea26601002100645l5ea6f727j657debc10c5ef8b1@mail.gmail.com> On Tue, Feb 9, 2010 at 5:34 PM, Holger Krekel wrote: > On Tue, Feb 9, 2010 at 10:57 PM, Ben Finney wrote: >> Michael Foord writes: >> >>> The next 'big' change to unittest will (may?) be the introduction of >>> class and module level setUp and tearDown. This was discussed on >>> Python-ideas and Guido supported them. They can be useful but are also >>> very easy to abuse (too much shared state, monolithic test classes and >>> modules). Several authors of other Python testing frameworks spoke up >>> *against* them, but several *users* of test frameworks spoke up in >>> favour of them. ;-) >> >> I think the perceived need for these is from people trying to use the >> ?unittest? API for test that are *not* unit tests. >> Well the example I was talking about before is when some (critical) resource needed for unittesting requires a very, very heavy initialization process. I'll employ the most recent example (hope it doesn't look like too much biased, it's just to illustrate the whole picture ;o) which is unittests for a framework like Trac . 
In that case it is critical to have a Trac environment, a ready-to-use DB and backend, initialize the plugins cache by loading relevant plugins, run the actions specified by each IEnvironmentSetup participant, sometimes a ready to use repository (if testing code depending on Trac VCS API) and more ... Just considering these cases someone could : - Create a fake environment used as a stub - But having a single global environment is not a good idea because it would be very difficult to run multiple (independent) tests concurrently (e.g. test multiple Trac plugins concurrently in a dedicated CI environment). So an environment has to be started for every test run and be as isolated as possible from other similar stub environments - The DB and backend can be replaced by using in-memory SQLite connection - Plugins cache and loading is mandatory as well running the actions specified by each IEnvironmentSetup participant - VCS can be mocked, but if it's needed it has to be initialized as well And all this is needed to run *ANY* test of *ANY* kind (that includes unittests ;o) . I hope that, up to this point, you all are convinced of the fact that all this cannot be done for each TestCase instance. That's why something like class-level setup | teardown might be useful to get all this done just once ... but it's not enough Something I consider a limitation of that approach is that it is a little hard to control the scope of setup and teardown. For instance, if I was trying to run Trac test suite I'd like to create the environment stub just once, and not once for every (module | class) containing tests. The current approach does not fit very well scenarios like this (i.e. setup | teardown actions span even beyond single modules ;o) So that's why it seems that the approach included in Trac testing code (i.e. a global shared fixture ) will still be needed, but AFAICR it breaks a little the interface of TC class and setup and tear down has to be performed from the outside. OTOH another minimalistic framework I've been building on top of `dutest` to cope with such scenarios (aka TracDuTest but not oficially released yet :-/ ) seems to handle all those features well enough by using doctest extraglobs or by modifying the global namespace at any given time inside setUp and tearDown (thus hiding all this code from doctests ;o). > One nice bit is that you can for a given test module issue "py.test --funcargs" > and get a list of resources you can use in your test function - by simply > specifying them in the test function. > > In principle it's possible to port this approach to the stdlib - actually i > consider to do it for the std-unittest- running part of py.test because > people asked for it - if that proves useful i can imagine to refine it > and offer it for inclusion. > Considering part of what I've mentioned above: Q: - How could py.test help in cases like this ? - Considering the similitudes with unittest style (at least IMO) I think I'd prefer something like PeckCheck to generate and run parameterized TCs. What d'u think ? (I confess that I don't use py.test , nose ... because I see they use too much magic & ..., but that's just my *VERY* biased opinion, so I won't start a war or alike ;o) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Nabble - Trac Users - Embedding pages? 
- http://feedproxy.google.com/~r/TracGViz-full/~3/MWT7MJBi08w/Embedding-pages--td27358804.html From olemis at gmail.com Wed Feb 10 15:47:59 2010 From: olemis at gmail.com (Olemis Lang) Date: Wed, 10 Feb 2010 09:47:59 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100209231505.26099.1257895177.divmod.xquotient.852@localhost.localdomain> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <20100209231505.26099.1257895177.divmod.xquotient.852@localhost.localdomain> Message-ID: <24ea26601002100647w177f576dgf3deb1771263c58a@mail.gmail.com> On Tue, Feb 9, 2010 at 6:15 PM, wrote: > On 10:42 pm, fuzzyman at voidspace.org.uk wrote: >> >> On 09/02/2010 21:57, Ben Finney wrote: >>> >>> Michael Foord ?writes: >>>> >>>> The next 'big' change to unittest will (may?) be the introduction of >>>> class and module level setUp and tearDown. This was discussed on >>>> Python-ideas and Guido supported them. They can be useful but are also >>>> very easy to abuse (too much shared state, monolithic test classes and >>>> modules). Several authors of other Python testing frameworks spoke up >>>> *against* them, but several *users* of test frameworks spoke up in >>>> favour of them. ;-) >>> >>> I think the perceived need for these is from people trying to use the >>> 18unittest 19 API for test that are *not* unit tests. >>> >>> That is, people have a need for integration tests (test this module's >>> interaction with some other module) or system tests (test the behaviour >>> of the whole running system). They then try to crowbar those tests into >>> 18unittest 19 and finding it lacking, since ?18unittest 19 is designed >>> for >>> tests of function-level units, without persistent state between those >>> test cases. >> >> I've used unittest for long running functional and integration tests (in >> both desktop and web applications). The infrastructure it provides is great >> for this. Don't get hung up on the fact that it is called unittest. In fact >> for many users the biggest reason it isn't suitable for tests like these is >> the lack of shared fixture support - which is why the other Python test >> frameworks provide them and we are going to bring it into unittest. > > For what it's worth, we just finished *removing* support for setUpClass and > tearDownClass from Trial. > Ok ... but why ? Are they considered dangerous for modern societies ? -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: PEP 391 - Please Vote! - http://feedproxy.google.com/~r/TracGViz-full/~3/hY2h6ZSAFRE/110617 From olemis at gmail.com Wed Feb 10 15:56:30 2010 From: olemis at gmail.com (Olemis Lang) Date: Wed, 10 Feb 2010 09:56:30 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <876365j3p5.fsf@benfinney.id.au> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <876365j3p5.fsf@benfinney.id.au> Message-ID: <24ea26601002100656l12897ce1jd64705df968c4139@mail.gmail.com> On Tue, Feb 9, 2010 at 8:10 PM, Ben Finney wrote: > Michael Foord writes: > >> I've used unittest for long running functional and integration tests >> (in both desktop and web applications). The infrastructure it provides >> is great for this. Don't get hung up on the fact that it is called >> unittest. 
In fact for many users the biggest reason it isn't suitable >> for tests like these is the lack of shared fixture support - which is >> why the other Python test frameworks provide them and we are going to >> bring it into unittest. > > I would argue that one of the things that makes ?unittest? good is that > it makes it difficult to do the wrong thing ? or at least *this* wrong > thing. Fixtures persist for the lifetime of a single test case, and no > more; that's the way unit tests should work. > > Making the distinction clearer by using a different API (and *not* > extending the ?unittest? API) seems to be the right way to go. > If that means that development should be focused on including mechanisms to make unittest more extensible instead of complicating the current ?relatively simple? API , then I agree . I think about unittest as a framework for writing test cases; but OTOH as a meta-framework to be used as the basic building blocks to build or integrate third-party testing infrastructures (and that includes third-party packages ;o) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Free milestone ranch Download - mac software - http://feedproxy.google.com/~r/TracGViz-full/~3/rX6_RmRWThE/ From olemis at gmail.com Wed Feb 10 16:04:43 2010 From: olemis at gmail.com (Olemis Lang) Date: Wed, 10 Feb 2010 10:04:43 -0500 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <4B729479.7080609@voidspace.org.uk> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> <87aavhj3uo.fsf@benfinney.id.au> <4B729479.7080609@voidspace.org.uk> Message-ID: <24ea26601002100704v71bb7e9ei2c69996467e051ac@mail.gmail.com> On Wed, Feb 10, 2010 at 6:11 AM, Michael Foord wrote: > On 10/02/2010 01:07, Ben Finney wrote: >> Michael Foord ?writes: >>> On 09/02/2010 21:50, Ben Finney wrote: >>>> >>>> I understood the point of ?TestCase.shortDescription?, and indeed >>>> the point of that particular name, was to be clear that some *other* >>>> text could be the short description for the test case. Indeed, this >>>> is what you've come up with: a different implementation for >>>> generating a short description. >>>> >>> >>> Given that the change broke something, and the desired effect can be >>> gained with a different change, I don't really see a downside to the >>> change I'm proposing (reverting shortDescription and moving the code >>> that adds the test name to TestResult). >>> >> >> What you describe (adding the class and method name when reporting >> the test) sounds like it belongs in the TestRunner, since it's more a >> case of ?give me more information about the test result?. > > The code for giving information about individual test results is the > TestResult. The TestRunner knows nothing about each individual result (or > even about each individual test as it happens). The TestRunner is > responsible for the whole test run, the TestCase runs individual tests and > the TestResult reports (or holds) individual test results (at the behest of > the TestCase). > > Given this structure it is not possible for test descriptions to be the > responsibility of the TestRunner and I don't feel like re-structuring > unittest today. :-) > FWIW +1 -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Nabble - Trac Users - Embedding pages? 
- http://feedproxy.google.com/~r/TracGViz-full/~3/MWT7MJBi08w/Embedding-pages--td27358804.html From dirkjan at ochtman.nl Wed Feb 10 19:03:05 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Wed, 10 Feb 2010 19:03:05 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002100459x2e15375dgd12b5e7d2ce8fd07@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <1afaf6161002091703u38768eb9m5e900227e8df2b14@mail.gmail.com> <1afaf6161002100459x2e15375dgd12b5e7d2ce8fd07@mail.gmail.com> Message-ID: On Wed, Feb 10, 2010 at 13:59, Benjamin Peterson wrote: > The only moving was moving a lot of the files into a lib2to3 > directory. It would be nice if the hg history could be preserved for > those files. Please see if hg.python.org/2to3 would satisfy your needs. Cheers, Dirkjan From brett at python.org Wed Feb 10 21:52:32 2010 From: brett at python.org (Brett Cannon) Date: Wed, 10 Feb 2010 12:52:32 -0800 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <3c8293b61002091447o42d207a1g84fbecff8b62e070@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61002091447o42d207a1g84fbecff8b62e070@mail.gmail.com> Message-ID: On Tue, Feb 9, 2010 at 14:47, Collin Winter wrote: > To follow up on some of the open issues: > > On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter wrote: > [snip] >> Open Issues >> =========== >> >> - *Code review policy for the ``py3k-jit`` branch.* How does the CPython >> ?community want us to procede with respect to checkins on the ``py3k-jit`` >> ?branch? Pre-commit reviews? Post-commit reviews? >> >> ?Unladen Swallow has enforced pre-commit reviews in our trunk, but we realize >> ?this may lead to long review/checkin cycles in a purely-volunteer >> ?organization. We would like a non-Google-affiliated member of the CPython >> ?development team to review our work for correctness and compatibility, but we >> ?realize this may not be possible for every commit. > > The feedback we've gotten so far is that at most, only larger, more > critical commits should be sent for review, while most commits can > just go into the branch. Is that broadly agreeable to python-dev? > >> - *How to link LLVM.* Should we change LLVM to better support shared linking, >> ?and then use shared linking to link the parts of it we need into CPython? > > The consensus has been that we should link shared against LLVM. > Jeffrey Yasskin is now working on this in upstream LLVM. We are > tracking this at > http://code.google.com/p/unladen-swallow/issues/detail?id=130 and > http://llvm.org/PR3201. > >> - *Prioritization of remaining issues.* We would like input from the CPython >> ?development team on how to prioritize the remaining issues in the Unladen >> ?Swallow codebase. Some issues like memory usage are obviously critical before >> ?merger with ``py3k``, but others may fall into a "nice to have" category that >> ?could be kept for resolution into a future CPython 3.x release. > > The big-ticket items here are what we expected: reducing memory usage > and startup time. We also need to improve profiling options, both for > oProfile and cProfile. > >> - *Create a C++ style guide.* Should PEP 7 be extended to include C++, or >> ?should a separate C++ style PEP be created? 
Unladen Swallow maintains its own >> ?style guide [#us-styleguide]_, which may serve as a starting point; the >> ?Unladen Swallow style guide is based on both LLVM's [#llvm-styleguide]_ and >> ?Google's [#google-styleguide]_ C++ style guides. > > Any thoughts on a CPython C++ style guide? My personal preference > would be to extend PEP 7 to cover C++ by taking elements from > http://code.google.com/p/unladen-swallow/wiki/StyleGuide and the LLVM > and Google style guides (which is how we've been developing Unladen > Swallow). If that's broadly agreeable, Jeffrey and I will work on a > patch to PEP 7. > I have found the Google C++ style guide good so I am fine with taking ideas from that and adding them to PEP 7. -Brett > Thanks, > Collin Winter > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From brett at python.org Wed Feb 10 21:56:22 2010 From: brett at python.org (Brett Cannon) Date: Wed, 10 Feb 2010 12:56:22 -0800 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> Message-ID: On Tue, Feb 9, 2010 at 21:24, Barry Warsaw wrote: > On Feb 9, 2010, at 4:55 PM, Martin v. L?wis wrote: > >>> Le Tue, 09 Feb 2010 12:16:15 +0200, anatoly techtonik a ?crit : >>>> I've noticed a couple of issues that 100% crash Python 2.6.4 like this >>>> one - http://bugs.python.org/issue6608 ?Is it ok to release new versions >>>> that are known to crash? >>> >>> I've changed this issue to release blocker. What are the other issues? >> >> For a bug fix release, it should (IMO) be a release blocker *only* if >> this is a regression in the branch or some recent bug fix release over >> some earlier bug fix release. >> >> E.g. if 2.6.2 had broken something that worked in 2.6.1, it would be ok >> to delay 2.6.5. If 2.6.2 breaks in a case where all prior releases also >> broke, it would NOT be ok, IMO, to block 2.6.5 for that. There can >> always be a 2.6.6 release. >> >> Of course, if this gets fixed before the scheduled release of 2.6.5, >> anyway, that would be nice. > > I completely agree. > Ditto from me. -Brett > Besides, unless we have volunteers to step up, create, review, and apply patches, it makes no sense to hold up releases. ?In the case of the first posted bug, we need a Windows core developer to test, bless and apply the patch. > > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From rdmurray at bitdance.com Wed Feb 10 21:56:44 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 10 Feb 2010 15:56:44 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002100645l5ea6f727j657debc10c5ef8b1@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <24ea26601002100645l5ea6f727j657debc10c5ef8b1@mail.gmail.com> Message-ID: <20100210205644.C11B11D6C2C@kimball.webabinitio.net> On Wed, 10 Feb 2010 09:45:41 -0500, Olemis Lang wrote: > On Tue, Feb 9, 2010 at 5:34 PM, Holger Krekel wrote: > > On Tue, Feb 9, 2010 at 10:57 PM, Ben Finney wrote: > >> Michael Foord writes: > >> > >>> The next 'big' change to unittest will (may?) 
be the introduction of > >>> class and module level setUp and tearDown. This was discussed on > >>> Python-ideas and Guido supported them. They can be useful but are also > >>> very easy to abuse (too much shared state, monolithic test classes and > >>> modules). Several authors of other Python testing frameworks spoke up > >>> *against* them, but several *users* of test frameworks spoke up in > >>> favour of them. ;-) > >> > >> I think the perceived need for these is from people trying to use the > >> unittest API for test that are *not* unit tests. > >> > > Well the example I was talking about before is when some (critical) > resource needed for unittesting requires a very, very heavy > initialization process. I'll employ the most recent example (hope it > doesn't look like too much biased, it's just to illustrate the whole > picture ;o) which is unittests for a framework like Trac . In that > case it is critical to have a Trac environment, a ready-to-use DB and > backend, initialize the plugins cache by loading relevant plugins, run > the actions specified by each IEnvironmentSetup participant, sometimes > a ready to use repository (if testing code depending on Trac VCS API) > and more ... Just considering these cases someone could : > > - Create a fake environment used as a stub > - But having a single global environment is not a good idea because > it would be very difficult to run multiple (independent) tests > concurrently (e.g. test multiple Trac plugins concurrently in a dedica= > ted > CI environment). So an environment has to be started for every > test run and be as isolated as possible from other similar > stub environments > - The DB and backend can be replaced by using in-memory SQLite > connection > - Plugins cache and loading is mandatory as well running the actions > specified by each IEnvironmentSetup participant > - VCS can be mocked, but if it's needed it has to be initialized as well > > And all this is needed to run *ANY* test of *ANY* kind (that includes > unittests ;o) . I hope that, up to this point, you all are convinced This doesn't sound very unit-testy, really. It sounds like you are operating at a rather high level (closer to integration testing). As someone else said, I don't see anything wrong with using unittest as a basis for doing that myself, but I don't think your example is a clear example of wanting a setup and teardown that is executed once per TestCase for tests that are more obviously what everyone would consider "unit" tests. I do have an example of that, though. I have an external database containing test data. My unittests are generated on the fly so that each generated test method pulls one set of test data from the database and runs the appropriate checks that the package processes the data correctly. (If you are curious, I'm testing email header parsing, and there are a lot of different possible quirky headers that the parser needs to be checked against) Putting the test data in a database makes managing the test data easier, and makes it available to other test frameworks to reuse the data. So, having the connection to the database set up once at TestCase start, and closed at TestCase end, would make the most sense. Currently there's no way I know of to do that, so I open and close the database for every unittest. Fortunately it's an sqlite database, so the run time penalty for doing that isn't prohibitive. -- R. 
David Murray www.bitdance.com From olemis at gmail.com Wed Feb 10 22:27:43 2010 From: olemis at gmail.com (Olemis Lang) Date: Wed, 10 Feb 2010 16:27:43 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100210205644.C11B11D6C2C@kimball.webabinitio.net> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <24ea26601002100645l5ea6f727j657debc10c5ef8b1@mail.gmail.com> <20100210205644.C11B11D6C2C@kimball.webabinitio.net> Message-ID: <24ea26601002101327l2b568b8crc8444ed927f35771@mail.gmail.com> On Wed, Feb 10, 2010 at 3:56 PM, R. David Murray wrote: > On Wed, 10 Feb 2010 09:45:41 -0500, Olemis Lang wrote: >> On Tue, Feb 9, 2010 at 5:34 PM, Holger Krekel wrote: >> > On Tue, Feb 9, 2010 at 10:57 PM, Ben Finney wrote: >> >> Michael Foord writes: >> >> >> >>> The next 'big' change to unittest will (may?) be the introduction of >> >>> class and module level setUp and tearDown. This was discussed on >> >>> Python-ideas and Guido supported them. They can be useful but are also >> >>> very easy to abuse (too much shared state, monolithic test classes and >> >>> modules). Several authors of other Python testing frameworks spoke up >> >>> *against* them, but several *users* of test frameworks spoke up in >> >>> favour of them. ;-) >> >> >> >> I think the perceived need for these is from people trying to use the >> >> unittest API for test that are *not* unit tests. >> >> >> >> Well the example I was talking about before is when some (critical) >> resource needed for unittesting requires a very, very heavy >> initialization process. I'll employ the most recent example (hope it >> doesn't look like too much biased, it's just to illustrate the whole >> picture ;o) which is unittests for a framework like Trac . In that >> case it is critical to have a Trac environment, a ready-to-use DB and >> backend, initialize the plugins cache by loading relevant plugins, run >> the actions specified by each IEnvironmentSetup participant, sometimes >> a ready to use repository (if testing code depending on Trac VCS API) >> and more ... Just considering these cases someone could : >> >> ? - Create a fake environment used as a stub >> ? - But having a single global environment is not a good idea because >> ? ? ?it would be very difficult to run multiple (independent) tests >> ? ? ?concurrently (e.g. test multiple Trac plugins concurrently in a dedica= >> ted >> ? ? ?CI environment). So an environment has to be started for every >> ? ? ?test run and be as isolated as possible from other similar >> ? ? ?stub environments >> ? - The DB and backend can be replaced by using in-memory SQLite >> ? ? ?connection >> ? - Plugins cache and loading is mandatory as well running the actions >> ? ? ?specified by each IEnvironmentSetup participant >> ? - VCS can be mocked, but if it's needed it has to be initialized as well >> >> And all this is needed to run *ANY* test of *ANY* kind (that includes >> unittests ;o) . I hope that, up to this point, you all are convinced > > This doesn't sound very unit-testy, really. ?It sounds like you are > operating at a rather high level (closer to integration testing). > As someone else said, I don't see anything wrong with using unittest > as a basis for doing that myself, but I don't think your example is a > clear example of wanting a setup and teardown that is executed once per > TestCase for tests that are more obviously what everyone would consider > "unit" tests. 
> Well, probably this is OT here but I follow in order to clarify what I am saying. I am not integrating talking about integration tests, but in general, yes they are unittests, but for Trac plugins (i.e. it is possible that others tests won't need all this ;o) . For example let's consider TracRpc plugin. Let's say you are gonna implement an RPC handler that retrieves the ticket summary provided it's ID (pretty simple method indeed) . In that case you need - Implement IRPCHandler interface (in order to extend RPC system ;o) - Query ticket data Let's say you will only test that second part (which is the functional part without any objections ;o). In that case you'll still need a Trac environment, you'll need to setup the DB connection inside of it , and all that just to perform the query . In general, in such cases (e.g. DB access, but there are others ;o), almost everything needs a Trac environment and therefore, at least part of what I mentioned before ;o) > So, having the connection to the database set up once at TestCase start, > and closed at TestCase end, would make the most sense. ?Currently there's > no way I know of to do that, so I open and close the database for every > unittest. ?Fortunately it's an sqlite database, so the run time penalty > for doing that isn't prohibitive. > I really cannot see the difference between this and what I mentioned before since one of the things that's needed is to create a connexion just once for each test run, but (guess-what !) the connection needs to be set for the environment itself (i.e. trac.env.db ) so first the chicken, then the egg ;o) PS: BTW, The situation you mention is almost the classic example ;o) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: PEP 391 - Please Vote! - http://feedproxy.google.com/~r/TracGViz-full/~3/hY2h6ZSAFRE/110617 From rdmurray at bitdance.com Wed Feb 10 22:47:05 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 10 Feb 2010 16:47:05 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> Message-ID: <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> On Wed, 10 Feb 2010 13:57:31 +0200, anatoly techtonik wrote: > On Wed, Feb 10, 2010 at 8:25 AM, Antoine Pitrou wrote: > > > > Besides, as Barry said, classifying a bug as blocker is also a good way > > to attract some attention on it. Other classifications, even "critical", > > don't have the same effect. > > Unfortunately, not many people have privilege to change bug properties > to attract attention to the issues. For example, this patch - > http://bugs.python.org/issue7582 is ready to be committed, it is > trivial, not a release blocker, but would be nice be released. How to > make it evident if nobody except committers is able to add any > keywords to the issue? I suspect that even committers do not receive > this privilege automatically. FYI, committers do (or at least should) have full privileges on the tracker. Other people can also get full privileges on the tracker without being committers, generally by participating helpfully in issue review and issue triage. We give out tracker privileges more easily than commit privileges, but we don't give them out willy nilly. 
So the concern someone expressed about issues getting set to release blocker "just" to attract attention isn't an issue in practice, it seems to me. If a committer or triage person sets an issue to release blocker it should mean that they think the release manager should make a decision about that issue before the next release. That decision may well be that it shouldn't be a blocker. I think that the logic here is that it is all well and good if the release manager has the time to review all critical issues pre-release, but since they may not, those with tracker privs can help sort through the clutter by marking as release blockers those issues that the release manager (and others who are helping out) really *should* think about before the release goes out the door. I think that's what Barry was asking for when he said "feel free to mark things as release blockers". Of course there should be far fewer things getting set to release blocker for a maintenance release than for a new release even under this approach, and Martin's criteria are the ones that should be used by the release manager when deciding whether to *leave* an issue marked as a release blocker. But this is just my perception of the process, and I'm willing to work with whatever framework the community and release manager wants :) Anatoly, if you want particular issues to get attention, start reviewing issues on the tracker and helping move them along by commenting, and if your work is helpful you'll get noticed and offered tracker privs and be able to help even more. Related to this is the offer that Martin made and I have seconded: if someone wants attention paid to a particular issue, review five others and let Martin and/or I know and we'll review the issue you care about. -- R. David Murray www.bitdance.com From martin at v.loewis.de Wed Feb 10 23:46:18 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 10 Feb 2010 23:46:18 +0100 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> Message-ID: <4B73373A.9050707@v.loewis.de> > If a committer or triage > person sets an issue to release blocker it should mean that they think > the release manager should make a decision about that issue before the > next release. That decision may well be that it shouldn't be a blocker. I think it's (slightly) worse. For the release manager to override the triage, he has to study and understand the issue and then make the decision. In the past, that *did* cause delays in releases (though not in bug fix releases). So committers should be *fairly* conservative in declaring stuff release-critical. The release manager's time is too precious. > I think that the logic here is that it is all well and good if the release > manager has the time to review all critical issues pre-release, but since > they may not, those with tracker privs can help sort through the clutter > by marking as release blockers those issues that the release manager (and > others who are helping out) really *should* think about before the release > goes out the door. I think that's what Barry was asking for when he said > "feel free to mark things as release blockers". 
That would require that Barry actually *can* judge the issue at hand. In the specific case, I would expect that Barry would defer the specifics of the Windows issue to Windows experts, and then listen to what they say. I'm personally split whether the proposed patch is correct (i.e. whether asctime really *can* be implemented in a cross-platform manner; any definite ruling on that would be welcome). In the past, we had rather taken approaches like disabling runtime assertions "locally"; not sure whether such approaches would work for asctime as well. In any case, I feel that the issue is not security-critical at all. People just don't pass out-of-range values to asctime, but instead typically pass the result of gmtime/localtime, which will not cause any problems. Regards, Martin From v+python at g.nevcal.com Thu Feb 11 10:37:42 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 11 Feb 2010 01:37:42 -0800 Subject: [Python-Dev] Executing zipfiles and directories (was Re: PyCon Keynote) In-Reply-To: <4B6002A9.8000000@g.nevcal.com> References: <4B5E4C59.2090709@gmail.com> <55981.115.128.40.33.1264479303.squirrel@syd-srv02.ezyreg.com> <20100126045124.CED4B3A4075@sparrow.telecommunity.com> <60564.115.128.40.33.1264483634.squirrel@syd-srv02.ezyreg.com> <4B5EA38D.4000206@g.nevcal.com> <1654.218.214.45.58.1264563357.squirrel@syd-srv02.ezyreg.com> <4B6002A9.8000000@g.nevcal.com> Message-ID: <4B73CFE6.1020307@g.nevcal.com> On approximately 1/27/2010 1:08 AM, came the following characters from the keyboard of Glenn Linderman: > Without reference to distutils, it seems the pieces are: > > 1) a way to decide what to include in the package > 2) code that knows where to put what is included, on one or more > platforms > 3) the process to create the ZIP file that includes 1 & 2, and call it > an appropriate name.py > > 3 looks easy, once 1 & 2 are figured out. distutils might provide the > foundation for 1. 2 sounds like something a distutils application > might create. I'm not sure that distutils is in the business of > building installer programs, my understanding is that it is in the > business of providing a standard way of recording and interpreting > bills of materials and maybe dependencies, but that is based only on > reading discussions here, not reading documentation. I haven't had a > chance to read all the module documentation since coming to python. 3 was rather easy, and has come in handy for me, as it turned out: see http://code.activestate.com/recipes/577042/ -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From ncoghlan at gmail.com Thu Feb 11 13:02:22 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2010 22:02:22 +1000 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> Message-ID: <4B73F1CE.70408@gmail.com> Antoine Pitrou wrote: > As for setting keywords, there doesn't seem to be much you could have an > authority to decide as a non-committer. You might think (and perhaps with good > reason) that the patch is ready for commit into the SVN, but it's precisely a > committer's job to decide that. There are actually a few folks with dev privileges on the tracker that don't have commit rights. 
They do a good job helping to kick things in the right direction (there are a few bugs on my list that wouldn't be there if the triage people hadn't added me to the nosy list... now I just need to actually do something about them all...) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Feb 11 13:05:46 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2010 22:05:46 +1000 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B73373A.9050707@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> <4B73373A.9050707@v.loewis.de> Message-ID: <4B73F29A.6@gmail.com> Martin v. L?wis wrote: >> If a committer or triage >> person sets an issue to release blocker it should mean that they think >> the release manager should make a decision about that issue before the >> next release. That decision may well be that it shouldn't be a blocker. > > I think it's (slightly) worse. For the release manager to override the > triage, he has to study and understand the issue and then make the > decision. In the past, that *did* cause delays in releases (though not > in bug fix releases). So committers should be *fairly* conservative in > declaring stuff release-critical. The release manager's time is too > precious. When I've kicked issues in the RM's direction for a decision, I've generally tried to make sure my last comment makes it clear exactly what decision I'm asking them to make. If I didn't want their opinion on some aspect of the issue I would just reject it, postpone it or commit it myself :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Feb 11 13:13:11 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2010 22:13:11 +1000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <4B71E422.8000402@voidspace.org.uk> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> Message-ID: <4B73F457.4050709@gmail.com> Michael Foord wrote: > Given that the change broke something, and the desired effect can be > gained with a different change, I don't really see a downside to the > change I'm proposing (reverting shortDescription and moving the code > that adds the test name to TestResult). +1 on fixing this in a way that doesn't break third-party tests :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Feb 11 13:30:59 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Feb 2010 22:30:59 +1000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B71908A.3080306@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <4B73F883.6050506@gmail.com> Michael Foord wrote: > I'm not sure what response I expect from this email, and neither option > will be implemented without further discussion - possibly at the PyCon > sprints - but I thought I would make it clear what the possible > directions are. 
I'll repeat what I said in the python-ideas thread [1]: with the advent of PEP 343 and context managers, I see any further extension of the JUnit inspired setUp/tearDown nomenclature as an undesirable direction for Python to take. Instead, I believe unittest should be adjusted to allow appropriate definition of context managers that take effect at the level of the test module, test class and each individual test. For example, given the following method definitions in unittest.TestCase for backwards compatibility: def __enter__(self): self.setUp() def __exit__(self, *args): self.tearDown() The test framework might promise to do the following for each test: with get_module_cm(test_instance): # However identified with get_class_cm(test_instance): # However identified with test_instance: # ** test_instance.test_method() It would then be up to the design of the module and class context manager instances to cache any desired common state. Further design work would also be needed on the underlying API for identifying the module and class context managers given only the test instance to work with. The get_*_cm mechanisms would return a no-op CM if there was no specific CM defined for the supplied TestCase. Cheers, Nick. [1] http://mail.python.org/pipermail/python-ideas/2010-January/006758.html -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From fuzzyman at voidspace.org.uk Thu Feb 11 13:41:37 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 11 Feb 2010 12:41:37 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B73F883.6050506@gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> Message-ID: <4B73FB01.7040403@voidspace.org.uk> On 11/02/2010 12:30, Nick Coghlan wrote: > Michael Foord wrote: > >> I'm not sure what response I expect from this email, and neither option >> will be implemented without further discussion - possibly at the PyCon >> sprints - but I thought I would make it clear what the possible >> directions are. >> > I'll repeat what I said in the python-ideas thread [1]: with the advent > of PEP 343 and context managers, I see any further extension of the > JUnit inspired setUp/tearDown nomenclature as an undesirable direction > for Python to take. > > Instead, I believe unittest should be adjusted to allow appropriate > definition of context managers that take effect at the level of the test > module, test class and each individual test. > > For example, given the following method definitions in unittest.TestCase > for backwards compatibility: > > def __enter__(self): > self.setUp() > > def __exit__(self, *args): > self.tearDown() > > The test framework might promise to do the following for each test: > > with get_module_cm(test_instance): # However identified > with get_class_cm(test_instance): # However identified > with test_instance: # ** > test_instance.test_method() > Well that is *effectively* how they would work (the semantics) but I don't see how that would fit with the design of unittest to make them work *specifically* like that - especially not if we are to remain compatible with existing unittest extensions. If you can come up with a concrete proposal of how to do this then I'm happy to listen. I'm not saying it is impossible, but it isn't immediately obvious. I don't see any advantage of just using context managers for the sake of it and definitely not at the cost of backwards incompatibility. 
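Just so the semantics are concrete, this is more or less what any of the spellings boils down to. Everything below is a sketch - none of these hooks exist in unittest today - with an in-memory SQLite connection standing in for an expensive shared resource:

    import sqlite3

    class SharedDatabase(object):
        # hypothetical class-level fixture: entered once before the first
        # test in the class runs, exited once after the last one finishes
        def __enter__(self):
            self.connection = sqlite3.connect(':memory:')
            return self.connection

        def __exit__(self, *exc_info):
            self.connection.close()

Whether an object like that gets picked up through something like get_class_cm() or through a setUpClass/tearDownClass pair is precisely the API question that needs a concrete answer.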
Michael > It would then be up to the design of the module and class context > manager instances to cache any desired common state. Further design work > would also be needed on the underlying API for identifying the module > and class context managers given only the test instance to work with. > > The get_*_cm mechanisms would return a no-op CM if there was no specific > CM defined for the supplied TestCase. > > Cheers, > Nick. > > [1] > http://mail.python.org/pipermail/python-ideas/2010-January/006758.html > > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From fuzzyman at voidspace.org.uk Thu Feb 11 13:43:55 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 11 Feb 2010 12:43:55 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <4B73F457.4050709@gmail.com> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> <4B73F457.4050709@gmail.com> Message-ID: <4B73FB8B.2070807@voidspace.org.uk> On 11/02/2010 12:13, Nick Coghlan wrote: > Michael Foord wrote: > >> Given that the change broke something, and the desired effect can be >> gained with a different change, I don't really see a downside to the >> change I'm proposing (reverting shortDescription and moving the code >> that adds the test name to TestResult). >> > +1 on fixing this in a way that doesn't break third-party tests :) > > It is done. The slight disadvantage is that overriding shortDescription on your own TestCase no longer removes the test name from being added to the short description. On the other hand if you do override shortDescription you don't have to add the test name yourself, and using a custom TestResult (overriding getDescription) is much easier now that the TextTestRunner takes a resultclass argument in the constructor. All the best, Michael > Cheers, > Nick. > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. 
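P.S. With the resultclass argument mentioned above, plugging a custom result class into the text runner is a one-liner (the class name here is only illustrative - use whatever getDescription-overriding subclass you have):

    import unittest
    runner = unittest.TextTestRunner(resultclass=NamedDescriptionResult, verbosity=2)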
From fijall at gmail.com Thu Feb 11 15:39:45 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 11 Feb 2010 09:39:45 -0500 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61001201756g26212a44m9abe7f5b471e6bb4@mail.gmail.com> <3c8293b61001210932i9c5d31i4bc71b7d9e0611f2@mail.gmail.com> <3c8293b61001211214m4b24c3b9x3738cf9e5375b0f8@mail.gmail.com> <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> Message-ID: <693bc9ab1002110639r5ca143b1t281fe0135effc493@mail.gmail.com> Snippet from: http://codereview.appspot.com/186247/diff2/5014:8003/7002 *PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is slower than Unladen Swallow on non-numerical workloads. PyPy only supports 32-bit x86 code generation. It has poor support for CPython extension modules, making migration for large applications prohibitively expensive. That part at the very least has some sort of personal opinion "prohibitively", while the other part is not completely true "slower than US on non-numerical workloads". Fancy providing a proof for that? I'm well aware that there are benchmarks on which PyPy is slower than CPython or US, however, I would like a bit more weighted opinion in the PEP. Cheers, fijal From olemis at gmail.com Thu Feb 11 15:41:39 2010 From: olemis at gmail.com (Olemis Lang) Date: Thu, 11 Feb 2010 09:41:39 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B73FB01.7040403@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> Message-ID: <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> On Thu, Feb 11, 2010 at 7:41 AM, Michael Foord wrote: > On 11/02/2010 12:30, Nick Coghlan wrote: >> >> Michael Foord wrote: >> >>> >>> I'm not sure what response I expect from this email, and neither option >>> will be implemented without further discussion - possibly at the PyCon >>> sprints - but I thought I would make it clear what the possible >>> directions are. >>> >> >> I'll repeat what I said in the python-ideas thread [1]: with the advent >> of PEP 343 and context managers, I see any further extension of the >> JUnit inspired setUp/tearDown nomenclature as an undesirable direction >> for Python to take. >> >> Instead, I believe unittest should be adjusted to allow appropriate >> definition of context managers that take effect at the level of the test >> module, test class and each individual test. >> >> For example, given the following method definitions in unittest.TestCase >> for backwards compatibility: >> >> ? def __enter__(self): >> ? ? self.setUp() >> >> ? def __exit__(self, *args): >> ? ? self.tearDown() >> >> The test framework might promise to do the following for each test: >> >> ? with get_module_cm(test_instance): # However identified >> ? ? with get_class_cm(test_instance): # However identified >> ? ? ? with test_instance: # ** >> ? ? ? ? test_instance.test_method() >> > What Nick pointed out is the right direction (IMHO), and the one I had in mind since I realized that unittest extensibility is the key feature that needs to be implemented . I even wanted to start a project using this particular architecture to make PyUnit extensible. It's too bad (for me) that I don't have time at all, to move forward an just do it . :( I need days with 38 hrs !!! 
(at least) :$ > Well that is *effectively* how they would work (the semantics) but I don't > see how that would fit with the design of unittest to make them work > *specifically* like that - especially not if we are to remain compatible > with existing unittest extensions. > AFAICS (so not sure especially since there's nothing done to criticize ;o) is that backwards compatibility is not the main stopper ... > If you can come up with a concrete proposal of how to do this then I'm happy > to listen. I'm not saying it is impossible, but it isn't immediately > obvious. I don't see any advantage of just using context managers for the > sake of it and definitely not at the cost of backwards incompatibility. > ... but since I have nothing I can show you , everything is still in my mind ... -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Free milestone ranch Download - mac software - http://feedproxy.google.com/~r/TracGViz-full/~3/rX6_RmRWThE/ From exarkun at twistedmatrix.com Thu Feb 11 16:10:34 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Thu, 11 Feb 2010 15:10:34 -0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> Message-ID: <20100211151034.26099.1357187229.divmod.xquotient.1049@localhost.localdomain> On 02:41 pm, olemis at gmail.com wrote: >On Thu, Feb 11, 2010 at 7:41 AM, Michael Foord > wrote: >>On 11/02/2010 12:30, Nick Coghlan wrote: >>> >>>Michael Foord wrote: >>>> >>>>I'm not sure what response I expect from this email, and neither >>>>option >>>>will be implemented without further discussion - possibly at the >>>>PyCon >>>>sprints - but I thought I would make it clear what the possible >>>>directions are. >>> >>>I'll repeat what I said in the python-ideas thread [1]: with the >>>advent >>>of PEP 343 and context managers, I see any further extension of the >>>JUnit inspired setUp/tearDown nomenclature as an undesirable >>>direction >>>for Python to take. >>> >>>Instead, I believe unittest should be adjusted to allow appropriate >>>definition of context managers that take effect at the level of the >>>test >>>module, test class and each individual test. >>> >>>For example, given the following method definitions in >>>unittest.TestCase >>>for backwards compatibility: >>> >>>? def __enter__(self): >>>? ? self.setUp() >>> >>>? def __exit__(self, *args): >>>? ? self.tearDown() >>> >>>The test framework might promise to do the following for each test: >>> >>>? with get_module_cm(test_instance): # However identified >>>? ? with get_class_cm(test_instance): # However identified >>>? ? ? with test_instance: # ** >>>? ? ? ? test_instance.test_method() >> > >What Nick pointed out is the right direction (IMHO), and the one I had Why? Change for the sake of change is not a good thing. What are the advantages of switching to context managers for this? Perhaps the idea was more strongly justified in the python-ideas thread. Anyone have a link to that? >in mind since I realized that unittest extensibility is the key >feature that needs to be implemented . I even wanted to start a >project using this particular architecture to make PyUnit extensible. What makes you think it isn't extensible now? Lots of people are extending it in lots of ways. 
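To pick one concrete case: sharing an expensive resource across all the tests in a class needs no new hooks at all, a plain class attribute does it today (just a sketch, names invented):

    import sqlite3
    import unittest

    class SharedConnectionTests(unittest.TestCase):
        _connection = None

        def setUp(self):
            cls = type(self)
            if cls._connection is None:
                # created lazily the first time any test in the class runs,
                # then shared by every later test; cleanup is left out here
                cls._connection = sqlite3.connect(':memory:')
            self.db = cls._connection

Not elegant, but it works with unittest as it stands.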
Jean-Paul From olemis at gmail.com Thu Feb 11 16:11:12 2010 From: olemis at gmail.com (Olemis Lang) Date: Thu, 11 Feb 2010 10:11:12 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> Message-ID: <24ea26601002110711k2b8c531cuda3630dd2cb3c5e5@mail.gmail.com> On Thu, Feb 11, 2010 at 9:41 AM, Olemis Lang wrote: > On Thu, Feb 11, 2010 at 7:41 AM, Michael Foord > wrote: >> On 11/02/2010 12:30, Nick Coghlan wrote: >>> >>> Michael Foord wrote: >>> >>>> >>>> I'm not sure what response I expect from this email, and neither option >>>> will be implemented without further discussion - possibly at the PyCon >>>> sprints - but I thought I would make it clear what the possible >>>> directions are. >>>> >>> >>> I'll repeat what I said in the python-ideas thread [1]: with the advent >>> of PEP 343 and context managers, I see any further extension of the >>> JUnit inspired setUp/tearDown nomenclature as an undesirable direction >>> for Python to take. >>> >>> Instead, I believe unittest should be adjusted to allow appropriate >>> definition of context managers that take effect at the level of the test >>> module, test class and each individual test. >>> >>> For example, given the following method definitions in unittest.TestCase >>> for backwards compatibility: >>> >>> ? def __enter__(self): >>> ? ? self.setUp() >>> >>> ? def __exit__(self, *args): >>> ? ? self.tearDown() >>> >>> The test framework might promise to do the following for each test: >>> >>> ? with get_module_cm(test_instance): # However identified >>> ? ? with get_class_cm(test_instance): # However identified >>> ? ? ? with test_instance: # ** >>> ? ? ? ? test_instance.test_method() >>> >> > > What Nick pointed out is the right direction (IMHO), and the one I had > in mind since I realized that unittest extensibility is the key > feature that needs to be implemented . I even wanted to start a > project using this particular architecture to make PyUnit extensible. > It's too bad (for me) that I don't have time at all, to move forward > an just do it . > > :( > > I need days with 38 hrs !!! (at least) > > :$ > >> Well that is *effectively* how they would work (the semantics) but I don't >> see how that would fit with the design of unittest to make them work >> *specifically* like that - especially not if we are to remain compatible >> with existing unittest extensions. >> > > AFAICS (so not sure especially since there's nothing done to criticize > ;o) is that backwards compatibility ?is not the main stopper ... > >> If you can come up with a concrete proposal of how to do this then I'm happy >> to listen. I'm not saying it is impossible, but it isn't immediately >> obvious. I don't see any advantage of just using context managers for the >> sake of it and definitely not at the cost of backwards incompatibility. >> > > ... but since I have nothing I can show you , everything is still in my mind ... > The idea (at least the one in my head ;o) is based on the features recently introduced in JUnit 4.7, especially the @Rule ;o) -- Regards, Olemis. 
Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Free milestone ranch Download - mac software - http://feedproxy.google.com/~r/TracGViz-full/~3/rX6_RmRWThE/ From olemis at gmail.com Thu Feb 11 16:12:50 2010 From: olemis at gmail.com (Olemis Lang) Date: Thu, 11 Feb 2010 10:12:50 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002110711k2b8c531cuda3630dd2cb3c5e5@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> <24ea26601002110711k2b8c531cuda3630dd2cb3c5e5@mail.gmail.com> Message-ID: <24ea26601002110712g34d2b536mf2ce95e2b15bd6e1@mail.gmail.com> On Thu, Feb 11, 2010 at 10:11 AM, Olemis Lang wrote: > On Thu, Feb 11, 2010 at 9:41 AM, Olemis Lang wrote: >> On Thu, Feb 11, 2010 at 7:41 AM, Michael Foord >> wrote: >>> On 11/02/2010 12:30, Nick Coghlan wrote: >>>> >>>> Michael Foord wrote: >>>> >>>>> >>>>> I'm not sure what response I expect from this email, and neither option >>>>> will be implemented without further discussion - possibly at the PyCon >>>>> sprints - but I thought I would make it clear what the possible >>>>> directions are. >>>>> >>>> >>>> I'll repeat what I said in the python-ideas thread [1]: with the advent >>>> of PEP 343 and context managers, I see any further extension of the >>>> JUnit inspired setUp/tearDown nomenclature as an undesirable direction >>>> for Python to take. >>>> >>>> Instead, I believe unittest should be adjusted to allow appropriate >>>> definition of context managers that take effect at the level of the test >>>> module, test class and each individual test. >>>> >>>> For example, given the following method definitions in unittest.TestCase >>>> for backwards compatibility: >>>> >>>> ? def __enter__(self): >>>> ? ? self.setUp() >>>> >>>> ? def __exit__(self, *args): >>>> ? ? self.tearDown() >>>> >>>> The test framework might promise to do the following for each test: >>>> >>>> ? with get_module_cm(test_instance): # However identified >>>> ? ? with get_class_cm(test_instance): # However identified >>>> ? ? ? with test_instance: # ** >>>> ? ? ? ? test_instance.test_method() >>>> >>> >> >> What Nick pointed out is the right direction (IMHO), and the one I had >> in mind since I realized that unittest extensibility is the key >> feature that needs to be implemented . I even wanted to start a >> project using this particular architecture to make PyUnit extensible. >> It's too bad (for me) that I don't have time at all, to move forward >> an just do it . >> >> :( >> >> I need days with 38 hrs !!! (at least) >> >> :$ >> >>> Well that is *effectively* how they would work (the semantics) but I don't >>> see how that would fit with the design of unittest to make them work >>> *specifically* like that - especially not if we are to remain compatible >>> with existing unittest extensions. >>> >> >> AFAICS (so not sure especially since there's nothing done to criticize >> ;o) is that backwards compatibility ?is not the main stopper ... >> >>> If you can come up with a concrete proposal of how to do this then I'm happy >>> to listen. I'm not saying it is impossible, but it isn't immediately >>> obvious. I don't see any advantage of just using context managers for the >>> sake of it and definitely not at the cost of backwards incompatibility. >>> >> >> ... but since I have nothing I can show you , everything is still in my mind ... 
>> > > The idea (at least the one in my head ;o) is based on the features > recently introduced in JUnit 4.7, especially the @Rule > > ;o) > .. [1] Writing your own JUnit extensions using @Rule | JUnit.org (http://www.junit.org/node/580) -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: setUpClass and setUpModule in unittest | Python | Dev - http://feedproxy.google.com/~r/TracGViz-full/~3/x18-60vceqg/806136 From olemis at gmail.com Thu Feb 11 16:16:04 2010 From: olemis at gmail.com (Olemis Lang) Date: Thu, 11 Feb 2010 10:16:04 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100211151034.26099.1357187229.divmod.xquotient.1049@localhost.localdomain> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <24ea26601002110641r5a6c6138ie4a59aaa5e934e41@mail.gmail.com> <20100211151034.26099.1357187229.divmod.xquotient.1049@localhost.localdomain> Message-ID: <24ea26601002110716p7d630513m35dc51278a289d49@mail.gmail.com> On Thu, Feb 11, 2010 at 10:10 AM, wrote: > On 02:41 pm, olemis at gmail.com wrote: >> >> On Thu, Feb 11, 2010 at 7:41 AM, Michael Foord >> wrote: >>> >>> On 11/02/2010 12:30, Nick Coghlan wrote: >>>> >>>> Michael Foord wrote: >>>>> >>>>> I'm not sure what response I expect from this email, and neither option >>>>> will be implemented without further discussion - possibly at the PyCon >>>>> sprints - but I thought I would make it clear what the possible >>>>> directions are. >>>> >>>> I'll repeat what I said in the python-ideas thread [1]: with the advent >>>> of PEP 343 and context managers, I see any further extension of the >>>> JUnit inspired setUp/tearDown nomenclature as an undesirable direction >>>> for Python to take. >>>> >>>> Instead, I believe unittest should be adjusted to allow appropriate >>>> definition of context managers that take effect at the level of the test >>>> module, test class and each individual test. >>>> >>>> For example, given the following method definitions in unittest.TestCase >>>> for backwards compatibility: >>>> >>>> ? def __enter__(self): >>>> ? ? self.setUp() >>>> >>>> ? def __exit__(self, *args): >>>> ? ? self.tearDown() >>>> >>>> The test framework might promise to do the following for each test: >>>> >>>> ? with get_module_cm(test_instance): # However identified >>>> ? ? with get_class_cm(test_instance): # However identified >>>> ? ? ? with test_instance: # ** >>>> ? ? ? ? test_instance.test_method() >>> >> >> What Nick pointed out is the right direction (IMHO), and the one I had > > Why? ?Change for the sake of change is not a good thing. ?What are the > advantages of switching to context managers for this? > > Perhaps the idea was more strongly justified in the python-ideas thread. > Anyone have a link to that? >> >> in mind since I realized that unittest extensibility is the key >> feature that needs to be implemented . I even wanted to start a >> project using this particular architecture to make PyUnit extensible. > > What makes you think it isn't extensible now? ?Lots of people are extending > it in lots of ways. > Nothing I want to spend my time on. Just consider what the authors of JUnit (and XUnit too) thought about JUnit<4.7, what they did in JUnit 4.7, and you'll save me a lot of time I don't have to explain it to you (/me not being rude /me have no time :-/ ) -- Regards, Olemis. 
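For readers unfamiliar with JUnit 4.7, a @Rule is essentially an object whose before/after hooks the framework wraps around each test. A rough Python analogue, written with a context manager and a decorator, might look like the sketch below; TempDirRule and apply_rule are invented names for illustration only, not an existing unittest (or JUnit-for-Python) API.

    import functools
    import os
    import shutil
    import tempfile
    import unittest

    class TempDirRule(object):
        # Rule-like fixture: its enter/exit hooks bracket each decorated test.
        def __enter__(self):
            self.path = tempfile.mkdtemp()
            return self

        def __exit__(self, *exc_info):
            shutil.rmtree(self.path, ignore_errors=True)
            return False

    def apply_rule(rule_factory):
        # Wrap a test method so a fresh rule runs around every invocation.
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self):
                with rule_factory() as rule:
                    self.rule = rule
                    return test_method(self)
            return wrapper
        return decorator

    class ExampleTest(unittest.TestCase):
        @apply_rule(TempDirRule)
        def test_writes_a_file(self):
            open(os.path.join(self.rule.path, 'x.txt'), 'w').close()
            self.assertTrue(os.path.exists(self.rule.path))

The point of the rule style is that the fixture is an ordinary object that can be shared, composed and tested on its own, rather than behaviour baked into the setUp/tearDown of a particular class.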
Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Nabble - Trac Users - Embedding pages? - http://feedproxy.google.com/~r/TracGViz-full/~3/MWT7MJBi08w/Embedding-pages--td27358804.html From exarkun at twistedmatrix.com Thu Feb 11 16:25:40 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Thu, 11 Feb 2010 15:25:40 -0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002100647w177f576dgf3deb1771263c58a@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <20100209231505.26099.1257895177.divmod.xquotient.852@localhost.localdomain> <24ea26601002100647w177f576dgf3deb1771263c58a@mail.gmail.com> Message-ID: <20100211152540.26099.780628741.divmod.xquotient.1074@localhost.localdomain> On 10 Feb, 02:47 pm, olemis at gmail.com wrote: >On Tue, Feb 9, 2010 at 6:15 PM, wrote: >> >>For what it's worth, we just finished *removing* support for >>setUpClass and >>tearDownClass from Trial. > >Ok ... but why ? Are they considered dangerous for modern societies ? Several reasons: - Over the many years the feature was available, we never found anyone actually benefiting significantly from it. It was mostly used where setUp/tearDown would have worked just as well. - There are many confusing corner cases related to ordering and error handling (particularly in the face of inheritance). Different users invariably have different expectations about how these things work, and there's no way to satisfy them all. One might say that this could apply to any feature, but... - People are exploring other solutions (such as testresources) which may provide better functionality more simply and don't need support deep in the loader/runner/reporter implementations. Jean-Paul From barry at python.org Thu Feb 11 16:36:22 2010 From: barry at python.org (Barry Warsaw) Date: Thu, 11 Feb 2010 10:36:22 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B73373A.9050707@v.loewis.de> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> <4B73373A.9050707@v.loewis.de> Message-ID: <20100211103622.5a1a8ef6@freewill.wooz.org> On Feb 10, 2010, at 11:46 PM, Martin v. L?wis wrote: >That would require that Barry actually *can* judge the issue at hand. In >the specific case, I would expect that Barry would defer the specifics >of the Windows issue to Windows experts, and then listen to what they >say. Yep, absolutely. >I'm personally split whether the proposed patch is correct (i.e. whether >asctime really *can* be implemented in a cross-platform manner; any >definite ruling on that would be welcome). In the past, we had rather >taken approaches like disabling runtime assertions "locally"; not sure >whether such approaches would work for asctime as well. > >In any case, I feel that the issue is not security-critical at all. >People just don't pass out-of-range values to asctime, but instead >typically pass the result of gmtime/localtime, which will not cause any >problems. Unless other details come to light, I agree. This one isn't worth holding up the release for. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Thu Feb 11 16:37:03 2010 From: barry at python.org (Barry Warsaw) Date: Thu, 11 Feb 2010 10:37:03 -0500 Subject: [Python-Dev] Python 2.6.5 In-Reply-To: <4B73F29A.6@gmail.com> References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> <4B73373A.9050707@v.loewis.de> <4B73F29A.6@gmail.com> Message-ID: <20100211103703.0798eeff@freewill.wooz.org> On Feb 11, 2010, at 10:05 PM, Nick Coghlan wrote: >When I've kicked issues in the RM's direction for a decision, I've >generally tried to make sure my last comment makes it clear exactly what >decision I'm asking them to make. Yes, this is an *excellent* point! -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From rdmurray at bitdance.com Thu Feb 11 16:56:32 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 11 Feb 2010 10:56:32 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B73FB01.7040403@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> Message-ID: <20100211155632.D54EE1FCC71@kimball.webabinitio.net> On Thu, 11 Feb 2010 12:41:37 +0000, Michael Foord wrote: > On 11/02/2010 12:30, Nick Coghlan wrote: > > The test framework might promise to do the following for each test: > > > > with get_module_cm(test_instance): # However identified > > with get_class_cm(test_instance): # However identified > > with test_instance: # ** > > test_instance.test_method() > > Well that is *effectively* how they would work (the semantics) but I > don't see how that would fit with the design of unittest to make them > work *specifically* like that - especially not if we are to remain > compatible with existing unittest extensions. > > If you can come up with a concrete proposal of how to do this then I'm > happy to listen. I'm not saying it is impossible, but it isn't > immediately obvious. I don't see any advantage of just using context > managers for the sake of it and definitely not at the cost of backwards > incompatibility. I suspect that Nick is saying that it is worth doing for the sake of it, as being more "Pythonic" in some sense. That is, it seems to me that in a modern Python writing something like: @contextlib.contextmanager def foo_cm(testcase): testcase.bar = some_costly_setup_function() yield testcase.bar.close() @contextlib.contextmanager def foo_test_cm(testcase): testcase.baz = Mock(testcase.bar) yield @unittest.case_context(foo_cm) @unittest.test_context(foo_test_cm) class TestFoo(unittest.TestCase): def test_bar: foo = Foo(self.baz, testing=True) self.assertTrue("Context managers are cool") would be easier to write, be more maintainable, and be easier to understand when reading the code than the equivalent setUp and tearDown methods would be. I'm not saying it would be easy to implement, and as you say backward compatibility is a key concern. -- R. 
David Murray www.bitdance.com From fuzzyman at voidspace.org.uk Thu Feb 11 17:08:54 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 11 Feb 2010 16:08:54 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100211155632.D54EE1FCC71@kimball.webabinitio.net> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <20100211155632.D54EE1FCC71@kimball.webabinitio.net> Message-ID: <4B742B96.1090508@voidspace.org.uk> On 11/02/2010 15:56, R. David Murray wrote: > On Thu, 11 Feb 2010 12:41:37 +0000, Michael Foord wrote: > >> On 11/02/2010 12:30, Nick Coghlan wrote: >> >>> The test framework might promise to do the following for each test: >>> >>> with get_module_cm(test_instance): # However identified >>> with get_class_cm(test_instance): # However identified >>> with test_instance: # ** >>> test_instance.test_method() >>> >> Well that is *effectively* how they would work (the semantics) but I >> don't see how that would fit with the design of unittest to make them >> work *specifically* like that - especially not if we are to remain >> compatible with existing unittest extensions. >> >> If you can come up with a concrete proposal of how to do this then I'm >> happy to listen. I'm not saying it is impossible, but it isn't >> immediately obvious. I don't see any advantage of just using context >> managers for the sake of it and definitely not at the cost of backwards >> incompatibility. >> > I suspect that Nick is saying that it is worth doing for the sake of it, > as being more "Pythonic" in some sense. > > That is, it seems to me that in a modern Python writing something like: > > > @contextlib.contextmanager > def foo_cm(testcase): > testcase.bar = some_costly_setup_function() > yield > testcase.bar.close() > > @contextlib.contextmanager > def foo_test_cm(testcase): > testcase.baz = Mock(testcase.bar) > yield > > > @unittest.case_context(foo_cm) > @unittest.test_context(foo_test_cm) > class TestFoo(unittest.TestCase): > > def test_bar: > foo = Foo(self.baz, testing=True) > self.assertTrue("Context managers are cool") > > > would be easier to write, be more maintainable, and be easier to > understand when reading the code than the equivalent setUp and tearDown > methods would be. > > I'm not saying it would be easy to implement, and as you say backward > compatibility is a key concern. > This is quite different to what Nick *specifically* suggested. It also doesn't suggest a general approach that would easily allow for setUpModule as well. *However*, I am *hoping* to be able to incorporate some or all of Test Resources as a general solution (with simple recipes for the setUpClass and setUpModule cases) - at which point this particular discussion will become moot. All the best, Michael Foord > -- > R. David Murray www.bitdance.com > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. 
You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From tseaver at palladion.com Thu Feb 11 17:18:51 2010 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 11 Feb 2010 11:18:51 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002100656l12897ce1jd64705df968c4139@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <876365j3p5.fsf@benfinney.id.au> <24ea26601002100656l12897ce1jd64705df968c4139@mail.gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Olemis Lang wrote: > On Tue, Feb 9, 2010 at 8:10 PM, Ben Finney wrote: >> Michael Foord writes: >> >>> I've used unittest for long running functional and integration tests >>> (in both desktop and web applications). The infrastructure it provides >>> is great for this. Don't get hung up on the fact that it is called >>> unittest. In fact for many users the biggest reason it isn't suitable >>> for tests like these is the lack of shared fixture support - which is >>> why the other Python test frameworks provide them and we are going to >>> bring it into unittest. >> I would argue that one of the things that makes ?unittest? good is that >> it makes it difficult to do the wrong thing ? or at least *this* wrong >> thing. Fixtures persist for the lifetime of a single test case, and no >> more; that's the way unit tests should work. >> >> Making the distinction clearer by using a different API (and *not* >> extending the ?unittest? API) seems to be the right way to go. >> > > If that means that development should be focused on including > mechanisms to make unittest more extensible instead of complicating > the current ?relatively simple? API , then I agree . I think about > unittest as a framework for writing test cases; but OTOH as a > meta-framework to be used as the basic building blocks to build or > integrate third-party testing infrastructures (and that includes > third-party packages ;o) Just as a point of reference: zope.testing[1] has a "layer" feature which is used to support this usecase: a layer is a class namedd as an attribute of a testcase, e.g.: class FunctionalLayer: @classmethod def setUp(klass): """ Do some expesnive shared setup. """ @classmethod def tearDown(klass): """ Undo the expensive setup. """ class MyTest(unittest.TestCase): layer = FunctionalLayer The zope.testing testrunner groups testcase classes together by layer: each layer's setUp is called, then the testcases for that layer are run, then the layer's tearDown is called. Other features: - - Layer classes can define per-testcase-method 'testSetUp' and 'testTearDown' methods. - - Layers can be composed via inheritance, and don't need to call base layers' methods directly: the testrunner does that for them. These features has been in heavy use for about 3 1/2 years with a lot of success. [1] http://pypi.python.org/pypi/zope.testing/ Tres. 
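A much simplified sketch of the grouping behaviour described above, assuming each TestCase class carries an optional 'layer' attribute as in the example; the real zope.testing runner also handles layer inheritance, testSetUp/testTearDown and ordering, so this illustrates the idea rather than the actual implementation.

    import unittest
    from itertools import groupby

    def run_grouped_by_layer(test_classes):
        result = unittest.TestResult()
        loader = unittest.TestLoader()
        # Sort so classes sharing a layer end up adjacent, then group them.
        keyed = sorted(test_classes,
                       key=lambda cls: id(getattr(cls, 'layer', None)))
        for layer, classes in groupby(keyed,
                                      lambda cls: getattr(cls, 'layer', None)):
            if layer is not None and hasattr(layer, 'setUp'):
                layer.setUp()          # once per layer, before its test cases
            try:
                for cls in classes:
                    loader.loadTestsFromTestCase(cls).run(result)
            finally:
                if layer is not None and hasattr(layer, 'tearDown'):
                    layer.tearDown()   # once per layer, after its test cases
        return result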
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkt0LeYACgkQ+gerLs4ltQ57WgCdFTzc1OHocXj/WTLShP62Q1bx vSAAnAqE/9+o1tZAaSLzlXfxaoRGTiuf =O/b2 -----END PGP SIGNATURE----- From exarkun at twistedmatrix.com Thu Feb 11 17:33:42 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Thu, 11 Feb 2010 16:33:42 -0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <876365j3p5.fsf@benfinney.id.au> <24ea26601002100656l12897ce1jd64705df968c4139@mail.gmail.com> Message-ID: <20100211163342.26099.1099964461.divmod.xquotient.1080@localhost.localdomain> On 04:18 pm, tseaver at palladion.com wrote: > >Just as a point of reference: zope.testing[1] has a "layer" feature >which is used to support this usecase: a layer is a class namedd as an >attribute of a testcase, e.g.: > > class FunctionalLayer: > @classmethod > def setUp(klass): > """ Do some expesnive shared setup. > """ > @classmethod > def tearDown(klass): > """ Undo the expensive setup. > """ > > class MyTest(unittest.TestCase): > layer = FunctionalLayer > >The zope.testing testrunner groups testcase classes together by layer: >each layer's setUp is called, then the testcases for that layer are >run, >then the layer's tearDown is called. > >Other features: > >- - Layer classes can define per-testcase-method 'testSetUp' and >'testTearDown' methods. > >- - Layers can be composed via inheritance, and don't need to call base > layers' methods directly: the testrunner does that for them. > >These features has been in heavy use for about 3 1/2 years with a lot >of >success. > > >[1] http://pypi.python.org/pypi/zope.testing/ On the other hand: http://code.mumak.net/2009/09/layers-are-terrible.html I've never used layers myself, so I won't personally weigh in for or against. Jean-Paul From tseaver at palladion.com Thu Feb 11 18:12:37 2010 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 11 Feb 2010 12:12:37 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100211163342.26099.1099964461.divmod.xquotient.1080@localhost.localdomain> References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <876365j3p5.fsf@benfinney.id.au> <24ea26601002100656l12897ce1jd64705df968c4139@mail.gmail.com> <20100211163342.26099.1099964461.divmod.xquotient.1080@localhost.localdomain> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 exarkun at twistedmatrix.com wrote: > On 04:18 pm, tseaver at palladion.com wrote: >> Just as a point of reference: zope.testing[1] has a "layer" feature >> which is used to support this usecase: a layer is a class namedd as an >> attribute of a testcase, e.g.: >> >> class FunctionalLayer: >> @classmethod >> def setUp(klass): >> """ Do some expesnive shared setup. >> """ >> @classmethod >> def tearDown(klass): >> """ Undo the expensive setup. >> """ >> >> class MyTest(unittest.TestCase): >> layer = FunctionalLayer >> >> The zope.testing testrunner groups testcase classes together by layer: >> each layer's setUp is called, then the testcases for that layer are >> run, >> then the layer's tearDown is called. 
>> >> Other features: >> >> - - Layer classes can define per-testcase-method 'testSetUp' and >> 'testTearDown' methods. >> >> - - Layers can be composed via inheritance, and don't need to call base >> layers' methods directly: the testrunner does that for them. >> >> These features has been in heavy use for about 3 1/2 years with a lot >> of >> success. >> >> >> [1] http://pypi.python.org/pypi/zope.testing/ > > On the other hand: > > http://code.mumak.net/2009/09/layers-are-terrible.html > > I've never used layers myself, so I won't personally weigh in for or > against. I don't know the author of that post as a core Zope developer: the fact is that using inheritance to manage the layers works just fine for Zope's thousands of functional tests. As for his objections: if you don't want the superclass methods called, then don't make your layer inherit from it (why else would you?). Sharing setup across test methods is the whole point of layers, or of the other mechanisms being discussed here: while I agree that such tests aren't "unit tests" in the classic sense, they do have their place. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkt0OoAACgkQ+gerLs4ltQ5YSACeLzR+LfkafGB3GLWMgMPvdiPc 8nEAoKuudwJMznZiyrmJD1SHcOkYw3cr =6VG8 -----END PGP SIGNATURE----- From olemis at gmail.com Thu Feb 11 18:30:36 2010 From: olemis at gmail.com (Olemis Lang) Date: Thu, 11 Feb 2010 12:30:36 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <87ocjyhy2s.fsf@benfinney.id.au> <4B71E4DF.6060309@voidspace.org.uk> <876365j3p5.fsf@benfinney.id.au> <24ea26601002100656l12897ce1jd64705df968c4139@mail.gmail.com> Message-ID: <24ea26601002110930k1eebd6f1v44bb6162027e75e@mail.gmail.com> On Thu, Feb 11, 2010 at 11:18 AM, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Olemis Lang wrote: >> On Tue, Feb 9, 2010 at 8:10 PM, Ben Finney wrote: >>> Michael Foord writes: >>> >>>> I've used unittest for long running functional and integration tests >>>> (in both desktop and web applications). The infrastructure it provides >>>> is great for this. Don't get hung up on the fact that it is called >>>> unittest. In fact for many users the biggest reason it isn't suitable >>>> for tests like these is the lack of shared fixture support - which is >>>> why the other Python test frameworks provide them and we are going to >>>> bring it into unittest. >>> I would argue that one of the things that makes ?unittest? good is that >>> it makes it difficult to do the wrong thing ? or at least *this* wrong >>> thing. Fixtures persist for the lifetime of a single test case, and no >>> more; that's the way unit tests should work. >>> >>> Making the distinction clearer by using a different API (and *not* >>> extending the ?unittest? API) seems to be the right way to go. >>> >> >> If that means that development should be focused on including >> mechanisms to make unittest more extensible instead of complicating >> the current ?relatively simple? API , then I agree . 
I think about >> unittest as a framework for writing test cases; but OTOH as a >> meta-framework to be used as the basic building blocks to build or >> integrate third-party testing infrastructures (and that includes >> third-party packages ;o) > > Just as a point of reference: ?zope.testing[1] has a "layer" feature > which is used to support this usecase: ?a layer is a class namedd as an > attribute of a testcase, e.g.: > > ?class FunctionalLayer: > ? ? @classmethod > ? ? def setUp(klass): > ? ? ? ? """ Do some expesnive shared setup. > ? ? ? ? """ > ? ? @classmethod > ? ? def tearDown(klass): > ? ? ? ? """ Undo the expensive setup. > ? ? ? ? """ > > ?class MyTest(unittest.TestCase): > ? ? ?layer = FunctionalLayer > > The zope.testing testrunner groups testcase classes together by layer: > each layer's setUp is called, then the testcases for that layer are run, > then the layer's tearDown is called. > > Other features: > > - - Layer classes can define per-testcase-method 'testSetUp' and > ?'testTearDown' methods. > > - - Layers can be composed via inheritance, and don't need to call base > ?layers' methods directly: ?the testrunner does that for them. > > These features has been in heavy use for about 3 1/2 years with a lot of > success. > I really like the style and the possibility to control the scope of ( setUp | tearDown ) . That's something I'd really consider to be included in the API ... and if it was accompanied or integrated to something like the @Rule in the backend to make it look like an extension and thus provide ?standar mechanism(s)? to get other similar features done outside stdlib too, well, much better ;o) I have to start using Zope ! Damn, I'm wasting my few most happy years ! PS: I confess that I didn't follow the thread @ Py-Ideas. I associated Nick comment to the @Rule because, in JUnit, this is implemented using something similar to Aspect Oriented Programming (something like before and after hooks ;o), and in that case the Pythonic (and IMHO more ?explicit?) translation could be context managers . Perhaps I misunderstood something in previous messages . -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: PEP 391 - Please Vote! - http://feedproxy.google.com/~r/TracGViz-full/~3/hY2h6ZSAFRE/110617 From guido at python.org Thu Feb 11 19:11:18 2010 From: guido at python.org (Guido van Rossum) Date: Thu, 11 Feb 2010 10:11:18 -0800 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B71908A.3080306@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> Message-ID: On Tue, Feb 9, 2010 at 8:42 AM, Michael Foord wrote: > The next 'big' change to unittest will (may?) be the introduction of class > and module level setUp and tearDown. This was discussed on Python-ideas and > Guido supported them. They can be useful but are also very easy to abuse > (too much shared state, monolithic test classes and modules). Several > authors of other Python testing frameworks spoke up *against* them, but > several *users* of test frameworks spoke up in favour of them. ;-) Hi Michael, I have skimmed this thread (hence this reply to the first rather than the last message), but in general I am baffled by the hostility of testing framework developers towards their users. The arguments against class- and module-level seUp/tearDown functions seems to be inspired by religion or ideology more than by the zen of Python. What happened to Practicality Beats Purity? 
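For concreteness, the kind of shape under discussion looks roughly like the snippet below. The setUpModule/tearDownModule and setUpClass/tearDownClass names follow the proposal in this thread and were not yet part of unittest when this was written, so treat it as a sketch rather than documented API.

    import unittest

    _connection = None

    def setUpModule():
        # Run once, before any test in this module.
        global _connection
        _connection = object()   # stand-in for an expensive shared resource

    def tearDownModule():
        # Run once, after the last test in this module.
        global _connection
        _connection = None

    class BaseTest(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            cls.shared = []      # run once per class, before its tests

        @classmethod
        def tearDownClass(cls):
            del cls.shared

    class DerivedTest(BaseTest):
        @classmethod
        def setUpClass(cls):
            # An overriding subclass stays responsible for invoking the
            # base class hook, per the subclassing semantics argued below.
            super(DerivedTest, cls).setUpClass()
            cls.extra = object()

        def test_shared_state_is_available(self):
            self.assertEqual(self.shared, [])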
The potential for abuse in and of itself should not be an argument against a feature; it must always be weighed against the advantages. The argument that a unittest framework shouldn't be "abused" for regression tests (or integration tests, or whatever) is also bizarre to my mind. Surely if a testing framework applies to multiple kinds of testing that's a good thing, not something to be frowned upon? There are several alternative testing frameworks available outside the standard library. The provide useful competition with the stlib's unittest and doctest modules, and useful inspiration for potential new features. They also, by and large, evolve much faster than a stdlib module ever could, and including anyone of these in the stdlib might well be the death of it (just as unittest has evolved much slower since it was included). But unittest *is* still evolving, and there is no reason not to keep adding features along the lines of your module/class setUp/tearDown proposal (or extra assertions like assertListEqual, which I am happy to see has been added). On the other hand, I think we should be careful to extend unittest in a consistent way. I shuddered at earlier proposals (on python-ideas) to name the new functions (variations of) set_up and tear_down "to conform with PEP 8" (this would actually have violated that PEP, which explicitly prefers local consistency over global consistency). I also think that using a with-statement or a decorator to indicate the scope of setUp/tearDown operations flies in the face of the existing "style" of the unittest module (um, package, I know :-), which is based on defining setUp and tearDown methods with specific semantics. Regarding the objection that setUp/tearDown for classes would run into issues with subclassing, I propose to let the standard semantics of subclasses do their job. Thus a subclass that overrides setUpClass or tearDownClass is responsible for calling the base class's setUpClass and tearDownClass (and the TestCase base class should provide empty versions of both). The testrunner should only call setUpClass and tearDownClass for classes that have at least one test that is selected. Yes, this would mean that if a base class has a test method and a setUpClass (and tearDownClass) method and a subclass also has a test method and overrides setUpClass (and/or tearDown), the base class's setUpClass and tearDown may be called twice. What's the big deal? If setUpClass and tearDownClass are written properly they should support this. If this behavior is undesired in a particular case, maybe what was really meant were module-level setUp and tearDown, or the class structure should be rearranged. Anyway, Michael, thanks for getting this started -- I support your attempts to improve the unittest package and am writing in the hope that the discussion will soon converge and patches whipped up. -- --Guido van Rossum (python.org/~guido) From olemis at gmail.com Thu Feb 11 19:46:21 2010 From: olemis at gmail.com (Olemis Lang) Date: Thu, 11 Feb 2010 13:46:21 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <24ea26601002111046v1fd5f4d6tea35f5f05117382d@mail.gmail.com> On Thu, Feb 11, 2010 at 1:11 PM, Guido van Rossum wrote: > On Tue, Feb 9, 2010 at 8:42 AM, Michael Foord wrote: >> The next 'big' change to unittest will (may?) be the introduction of class >> and module level setUp and tearDown. This was discussed on Python-ideas and >> Guido supported them. 
They can be useful but are also very easy to abuse >> (too much shared state, monolithic test classes and modules). Several >> authors of other Python testing frameworks spoke up *against* them, but >> several *users* of test frameworks spoke up in favour of them. ;-) > > But unittest *is* still evolving, as well as the XUnit paradigm as a whole, especially considering the recent work committed to and released by JUnit ;o) . > > On the other hand, I think we should be careful to extend unittest in > a consistent way. +1 . IMO that's a key indicator of the success of anything related to its evolution . > Regarding the objection that setUp/tearDown for classes would run into > issues with subclassing, I propose to let the standard semantics of > subclasses do their job. Thus a subclass that overrides setUpClass or > tearDownClass is responsible for calling the base class's setUpClass > and tearDownClass (and the TestCase base class should provide empty > versions of both). The testrunner should only call setUpClass and > tearDownClass for classes that have at least one test that is > selected. > +1 Considering zope.testing layers proposal, it seems that subclassing of layers works different, isn't it ? -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: Nabble - Trac Users - Embedding pages? - http://feedproxy.google.com/~r/TracGViz-full/~3/MWT7MJBi08w/Embedding-pages--td27358804.html From solipsis at pitrou.net Thu Feb 11 19:58:11 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 11 Feb 2010 18:58:11 +0000 (UTC) Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <20100211155632.D54EE1FCC71@kimball.webabinitio.net> Message-ID: Le Thu, 11 Feb 2010 10:56:32 -0500, R. David Murray a ?crit?: > > @unittest.case_context(foo_cm) > @unittest.test_context(foo_test_cm) > class TestFoo(unittest.TestCase): > > def test_bar: > foo = Foo(self.baz, testing=True) > self.assertTrue("Context managers are cool") > > would be easier to write, be more maintainable, and be easier to > understand when reading the code than the equivalent setUp and tearDown > methods would be. I don't think it would be seriously easier to write, more maintainable or easier to understand. There's nothing complicated or obscure in setUp and tearDown methods (the only annoying thing being PEP8 non-compliance). As a matter of fact, nose has a "with_setup()" decorator which allows to avoid writing setUp/tearDown methods. But in my experience it's more annoying to use because: - you have to add the decorator explicitly (setUp/tearDown is always invoked) - you have to create your own recipient for local state (setUp/tearDown can simply use the TestCase instance), or use global variables which is ugly. Regards Antoine. 
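For reference, using nose's with_setup() looks roughly like this (it requires nose to be installed and only applies to function-style tests); the snippet is meant only to illustrate the two annoyances listed above -- the per-test decorator and the module-level state.

    from nose.tools import with_setup

    _resource = {}

    def setup_resource():
        _resource['db'] = object()   # state has to live at module level

    def teardown_resource():
        _resource.clear()

    @with_setup(setup_resource, teardown_resource)   # applied per test
    def test_uses_resource():
        assert 'db' in _resource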
From solipsis at pitrou.net Thu Feb 11 20:02:11 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 11 Feb 2010 19:02:11 +0000 (UTC) Subject: [Python-Dev] Python 2.6.5 References: <20100202100859.00d34437@heresy.wooz.org> <4B71D9E2.5020201@v.loewis.de> <1265752965.3367.1.camel@localhost> <4B71DFCB.9080304@v.loewis.de> <4B723577.8000206@v.loewis.de> <1265783113.3344.11.camel@localhost> <20100210214705.83CCD1F9AE0@kimball.webabinitio.net> <4B73373A.9050707@v.loewis.de> <20100211103622.5a1a8ef6@freewill.wooz.org> Message-ID: Le Thu, 11 Feb 2010 10:36:22 -0500, Barry Warsaw a ?crit?: > > Unless other details come to light, I agree. This one isn't worth > holding up the release for. Ok, since everyone seems to agree on this, I've downgraded the priority of the issue. Thanks for an insightful discussion :-) cheers Antoine. From tseaver at palladion.com Thu Feb 11 20:25:58 2010 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 11 Feb 2010 14:25:58 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <24ea26601002111046v1fd5f4d6tea35f5f05117382d@mail.gmail.com> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002111046v1fd5f4d6tea35f5f05117382d@mail.gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Olemis Lang wrote: > On Thu, Feb 11, 2010 at 1:11 PM, Guido van Rossum wrote: >> Regarding the objection that setUp/tearDown for classes would run into >> issues with subclassing, I propose to let the standard semantics of >> subclasses do their job. Thus a subclass that overrides setUpClass or >> tearDownClass is responsible for calling the base class's setUpClass >> and tearDownClass (and the TestCase base class should provide empty >> versions of both). The testrunner should only call setUpClass and >> tearDownClass for classes that have at least one test that is >> selected. >> > > +1 > > Considering zope.testing layers proposal, it seems that subclassing of > layers works different, isn't it ? Hmm, I wasn't making a proposal that the unittest module adopt zope.testing's layers model: I was just trying to point out that another model did exist and was being used successfully. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkt0WcUACgkQ+gerLs4ltQ6CgACfb9kQ6vpu6BwOJLBOLDnHnTil dZMAnjdkdT/5RQXGIWFXGuUgnV8rQSuI =ExUu -----END PGP SIGNATURE----- From holger.krekel at gmail.com Thu Feb 11 20:26:29 2010 From: holger.krekel at gmail.com (Holger Krekel) Date: Thu, 11 Feb 2010 20:26:29 +0100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: Hi Guido, On Thu, Feb 11, 2010 at 7:11 PM, Guido van Rossum wrote: > On Tue, Feb 9, 2010 at 8:42 AM, Michael Foord wrote: >> The next 'big' change to unittest will (may?) be the introduction of class >> and module level setUp and tearDown. This was discussed on Python-ideas and >> Guido supported them. They can be useful but are also very easy to abuse >> (too much shared state, monolithic test classes and modules). Several >> authors of other Python testing frameworks spoke up *against* them, but >> several *users* of test frameworks spoke up in favour of them. 
;-) > > Hi Michael, > > I have skimmed this thread (hence this reply to the first rather than > the last message), but in general I am baffled by the hostility of > testing framework developers towards their users. The arguments > against class- and module-level seUp/tearDown functions seems to be > inspired by religion or ideology more than by the zen of Python. What > happened to Practicality Beats Purity? Hostility against users? I have not heart that feedback from my users yet - or am i missing some meaning of your words? > The potential for abuse in and of itself should not be an argument > against a feature; it must always be weighed against the advantages. sure. > The argument that a unittest framework shouldn't be "abused" for > regression tests (or integration tests, or whatever) is also bizarre > to my mind. Surely if a testing framework applies to multiple kinds of > testing that's a good thing, not something to be frowned upon? If an approach has known limitations it's also good to point them out. Also ok to disregard them and still consider something useful enough. > There are several alternative testing frameworks available outside the > standard library. The provide useful competition with the stlib's > unittest and doctest modules, and useful inspiration for potential new > features. They also, by and large, evolve much faster than a stdlib > module ever could, and including anyone of these in the stdlib might > well be the death of it (just as unittest has evolved much slower > since it was included). Fully agreed :) > But unittest *is* still evolving, and there is no reason not to keep > adding features along the lines of your module/class setUp/tearDown > proposal (or extra assertions like assertListEqual, which I am happy > to see has been added). > > On the other hand, I think we should be careful to extend unittest in > a consistent way. I shuddered at earlier proposals (on python-ideas) > to name the new functions (variations of) set_up and tear_down "to > conform with PEP 8" (this would actually have violated that PEP, which > explicitly prefers local consistency over global consistency). If that was me you refer to - i followed PEP8 5 years ago when introducing setup_class/module and i still stand by it, it was supposed to be a more pythonic alternative and i consider PEP8 as part of that. But i agree - introducing it to std-unittest now makes not much sense due to local consistency reasons. I appreciate Michael's effort to help advance testing - we have a good private discussion currently btw - and i am happy to collaborate with him on future issues, setupClass or not :) cheers, holger From rdmurray at bitdance.com Thu Feb 11 20:28:51 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 11 Feb 2010 14:28:51 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B742B96.1090508@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <20100211155632.D54EE1FCC71@kimball.webabinitio.net> <4B742B96.1090508@voidspace.org.uk> Message-ID: <20100211192851.241771FCCBC@kimball.webabinitio.net> On Thu, 11 Feb 2010 16:08:54 +0000, Michael Foord wrote: > On 11/02/2010 15:56, R. 
David Murray wrote: > > On Thu, 11 Feb 2010 12:41:37 +0000, Michael Foord wrote: > >> On 11/02/2010 12:30, Nick Coghlan wrote: > >>> The test framework might promise to do the following for each test: > >>> > >>> with get_module_cm(test_instance): # However identified > >>> with get_class_cm(test_instance): # However identified > >>> with test_instance: # ** > >>> test_instance.test_method() > >>> > > > > @contextlib.contextmanager > > def foo_cm(testcase): > > testcase.bar = some_costly_setup_function() > > yield > > testcase.bar.close() > > > > @contextlib.contextmanager > > def foo_test_cm(testcase): > > testcase.baz = Mock(testcase.bar) > > yield > > > > > > @unittest.case_context(foo_cm) > > @unittest.test_context(foo_test_cm) > > class TestFoo(unittest.TestCase): > > > > def test_bar: > > foo = Foo(self.baz, testing=True) > > self.assertTrue("Context managers are cool") > > > > > This is quite different to what Nick *specifically* suggested. It also > doesn't suggest a general approach that would easily allow for > setUpModule as well. I'm not sure how it is different. I thought I was indicating how to do the context manager "discovery" that Nick punted on. (Except for module level, which I didn't have a good idea for). > *However*, I am *hoping* to be able to incorporate some or all of Test > Resources as a general solution (with simple recipes for the setUpClass > and setUpModule cases) - at which point this particular discussion will > become moot. Which pretty much makes my noddling above moot right now, because having taken a quick look at testresources I think that's a much closer fit for my use cases than class level setup/teardown. So I'm +1 for going the testresources route rather than the setup/teardown route. -- R. David Murray www.bitdance.com From guido at python.org Thu Feb 11 20:40:39 2010 From: guido at python.org (Guido van Rossum) Date: Thu, 11 Feb 2010 11:40:39 -0800 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: On Thu, Feb 11, 2010 at 11:26 AM, Holger Krekel wrote: > Hi Guido, > > On Thu, Feb 11, 2010 at 7:11 PM, Guido van Rossum wrote: >> On Tue, Feb 9, 2010 at 8:42 AM, Michael Foord wrote: >>> The next 'big' change to unittest will (may?) be the introduction of class >>> and module level setUp and tearDown. This was discussed on Python-ideas and >>> Guido supported them. They can be useful but are also very easy to abuse >>> (too much shared state, monolithic test classes and modules). Several >>> authors of other Python testing frameworks spoke up *against* them, but >>> several *users* of test frameworks spoke up in favour of them. ;-) >> >> Hi Michael, >> >> I have skimmed this thread (hence this reply to the first rather than >> the last message), but in general I am baffled by the hostility of >> testing framework developers towards their users. The arguments >> against class- and module-level seUp/tearDown functions seems to be >> inspired by religion or ideology more than by the zen of Python. What >> happened to Practicality Beats Purity? > > Hostility against users? ?I have not heart that feedback from my users > yet - or am i missing some meaning of your words? Sorry for the sweeping generality. I was referring to one or more posts (I don't recall by whom) arguing against including class/module setup/teardown functionality based on it being against the notion of unittesting or something like that. 
I'm sorry, but the thread is too long for me to find the specific post. But I'm pretty sure I saw something like that. >> The potential for abuse in and of itself should not be an argument >> against a feature; it must always be weighed against the advantages. > > sure. > >> The argument that a unittest framework shouldn't be "abused" for >> regression tests (or integration tests, or whatever) is also bizarre >> to my mind. Surely if a testing framework applies to multiple kinds of >> testing that's a good thing, not something to be frowned upon? > > If an approach has known limitations it's also good to point them out. > Also ok to disregard them and still consider something useful enough. > >> There are several alternative testing frameworks available outside the >> standard library. The provide useful competition with the stlib's >> unittest and doctest modules, and useful inspiration for potential new >> features. They also, by and large, evolve much faster than a stdlib >> module ever could, and including anyone of these in the stdlib might >> well be the death of it (just as unittest has evolved much slower >> since it was included). > > Fully agreed :) > >> But unittest *is* still evolving, and there is no reason not to keep >> adding features along the lines of your module/class setUp/tearDown >> proposal (or extra assertions like assertListEqual, which I am happy >> to see has been added). > >> >> On the other hand, I think we should be careful to extend unittest in >> a consistent way. I shuddered at earlier proposals (on python-ideas) >> to name the new functions (variations of) set_up and tear_down "to >> conform with PEP 8" (this would actually have violated that PEP, which >> explicitly prefers local consistency over global consistency). > > If that was me you refer to - i followed PEP8 5 years ago when > introducing setup_class/module and i still stand by it, it was > supposed to be a more pythonic alternative and i consider PEP8 as part > of that. ?But i agree - introducing it to std-unittest now makes not > much sense due to local consistency reasons. Ok let's drop it then. > I appreciate Michael's effort to help advance testing - we have a good > private discussion currently btw - and i am happy to collaborate with > him on future issues, setupClass or not :) > > cheers, > holger > -- --Guido van Rossum (python.org/~guido) From ben+python at benfinney.id.au Thu Feb 11 23:03:11 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 12 Feb 2010 09:03:11 +1100 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> <4B73F457.4050709@gmail.com> <4B73FB8B.2070807@voidspace.org.uk> Message-ID: <87eikrh1ls.fsf@benfinney.id.au> Michael Foord writes: > It is done. The slight disadvantage is that overriding > shortDescription on your own TestCase no longer removes the test name > from being added to the short description. That's a significant disadvantage; it can easily double the length of the reported description for a test case. Before: The Wodget should spangulate with the specified doohickey... ok After: test_zwickyblatt.MechakuchaWidget.test_spangulates_with_specified_doohickey: The Wodget should spangulate with the specified doohickey... ok (if I have the new description incorrect feel free to correct me, but I think the point is clear about adding the test name to the description). 
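For what it is worth, the "custom TestResult" route mentioned elsewhere in this thread would look something like the sketch below, assuming the in-development unittest exposes TextTestResult and the resultclass argument as described; the exact names and signatures may differ.

    import unittest

    class DocstringOnlyResult(unittest.TextTestResult):
        def getDescription(self, test):
            # Report only the docstring line (or the test id), instead of
            # the combined "test name: docstring" form discussed above.
            return test.shortDescription() or str(test)

    if __name__ == '__main__':
        unittest.main(testRunner=unittest.TextTestRunner(
            resultclass=DocstringOnlyResult, verbosity=2))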
Reports that before would mostly stay within a standard 80-column terminal will now almost always be line-wrapping, making the output much harder to read. > On the other hand if you do override shortDescription you don't have > to add the test name yourself The problem isn't only with overridden shortDescription. The problem is the breakage in the existing behaviour of shortDescription, even in cases that never needed to override shortDescription. > and using a custom TestResult (overriding getDescription) is much > easier now that the TextTestRunner takes a resultclass argument in the > constructor. Again, it seems that adding this to the output is the job of the thing which does the reporting, *if* wanted. The (long!) name isn't part of the TestCase description, so shouldn't be bolted onto the TestResult description. -- \ Moriarty: ?Forty thousand million billion dollars? That money | `\ must be worth a fortune!? ?The Goon Show, _The Sale of | _o__) Manhattan_ | Ben Finney From ben+python at benfinney.id.au Thu Feb 11 23:20:52 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 12 Feb 2010 09:20:52 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <87aavfh0sb.fsf@benfinney.id.au> Guido van Rossum writes: > The potential for abuse in and of itself should not be an argument > against a feature; it must always be weighed against the advantages. It's both, surely? The potential for abuse of something is an argument against it; *and* that argument should be weighed against other arguments. Or, in other words: the potential for abuse of a feature is an argument that should not be discarded solely because there are advantages to that feature. > The argument that a unittest framework shouldn't be "abused" for > regression tests (or integration tests, or whatever) is also bizarre > to my mind. Surely if a testing framework applies to multiple kinds of > testing that's a good thing, not something to be frowned upon? To my mind, an API should take a stand on the ?right? way to use it, rather than being a kitchen-sink of whatever ideas had any support. Doing something the right way should be easy, and doing something the wrong way should be awkward. This must be balanced, of course, with the principle that easy things should be easy and difficult things should be possible. But it doesn't necessarily conflict; we just need to take care that the easy and the right align well :-) > There are several alternative testing frameworks available outside the > standard library. The provide useful competition with the stlib's > unittest and doctest modules, and useful inspiration for potential new > features. They also, by and large, evolve much faster than a stdlib > module ever could, and including anyone of these in the stdlib might > well be the death of it (just as unittest has evolved much slower > since it was included). Right. This is an argument in favour of being assertive and parsimonious in the design of the standard-library ?unittest? API: this is the clear and obvious way to use this API, and if someone wants to do it a different way there are alternatives available. > But unittest *is* still evolving, and there is no reason not to keep > adding features along the lines of your module/class setUp/tearDown > proposal (or extra assertions like assertListEqual, which I am happy > to see has been added). That's a dismissal of the reasons that have been presented, without actually countering those reasons. 
> Anyway, Michael, thanks for getting this started -- I support your > attempts to improve the unittest package and am writing in the hope > that the discussion will soon converge and patches whipped up. Ditto. -- \ ?I have had a perfectly wonderful evening, but this wasn't it.? | `\ ?Groucho Marx | _o__) | Ben Finney From robert.kern at gmail.com Fri Feb 12 00:00:51 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 17:00:51 -0600 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <87aavfh0sb.fsf@benfinney.id.au> References: <4B71908A.3080306@voidspace.org.uk> <87aavfh0sb.fsf@benfinney.id.au> Message-ID: On 2010-02-11 16:20 PM, Ben Finney wrote: > Guido van Rossum writes: >> The argument that a unittest framework shouldn't be "abused" for >> regression tests (or integration tests, or whatever) is also bizarre >> to my mind. Surely if a testing framework applies to multiple kinds of >> testing that's a good thing, not something to be frowned upon? > > To my mind, an API should take a stand on the ?right? way to use it, > rather than being a kitchen-sink of whatever ideas had any support. > Doing something the right way should be easy, and doing something the > wrong way should be awkward. setUpClass and setUpModule are the "right" way to do many types of integration and functional tests. Integration and functional tests are vital tasks to perform, and unittest provides a good framework otherwise for implementing such tests. >> There are several alternative testing frameworks available outside the >> standard library. The provide useful competition with the stlib's >> unittest and doctest modules, and useful inspiration for potential new >> features. They also, by and large, evolve much faster than a stdlib >> module ever could, and including anyone of these in the stdlib might >> well be the death of it (just as unittest has evolved much slower >> since it was included). > > Right. This is an argument in favour of being assertive and parsimonious > in the design of the standard-library ?unittest? API: this is the clear > and obvious way to use this API, and if someone wants to do it a > different way there are alternatives available. I would agree if the requirements for unit testing and integration/functional tests were so different. However, unittest provides most of the necessary infrastructure that is common to all of those kinds of testing. It's just that the latter kinds of tests also could use setUpClass/setUpModule. It would be a waste (and honestly kind of ridiculous) to force people to use a whole new framework (which would duplicate unittest in almost its entirety) for want of those two methods. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From holger.krekel at gmail.com Fri Feb 12 00:57:58 2010 From: holger.krekel at gmail.com (Holger Krekel) Date: Fri, 12 Feb 2010 00:57:58 +0100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <87aavfh0sb.fsf@benfinney.id.au> Message-ID: On Fri, Feb 12, 2010 at 12:00 AM, Robert Kern wrote: > On 2010-02-11 16:20 PM, Ben Finney wrote: >> >> Guido van Rossum ?writes: > >>> The argument that a unittest framework shouldn't be "abused" for >>> regression tests (or integration tests, or whatever) is also bizarre >>> to my mind. 
Surely if a testing framework applies to multiple kinds of >>> testing that's a good thing, not something to be frowned upon? >> >> To my mind, an API should take a stand on the ?right? way to use it, >> rather than being a kitchen-sink of whatever ideas had any support. >> Doing something the right way should be easy, and doing something the >> wrong way should be awkward. > > setUpClass and setUpModule are the "right" way to do many types of > integration and functional tests. Integration and functional tests are vital > tasks to perform, and unittest provides a good framework otherwise for > implementing such tests. Ben just expressed his opinion about API design and you claim some truth about testing in general. In my experience, integration and functional testing is a complex and evolving topic, usually requiring more from the tool or framework than classic unit-testing. To name a few issues: * creating tempdirs and files * setting up base environments * starting and stopping servers * mocking components * replaying individual tests * reacting to timeouts * handling asynchronicity * web-javascript integration support * configuring fixtures from config files * CI tool integration and multi-platform deployment * running variations of the same tests across different base configs * ... much much more It's true that you can go and extend unittest for that but a) unittest is just a tiny bit of what is involved for satisfying the needs b) what you are doing then is mostly using the fact that a setup function (or chain) is invoked and a test function is invoked and that python has some builtin modules for handling the above issues. And you are using Python - and Python is nice and (potentially) concise for writing tests, sure. That's not wholly the fault of the unittest module, though :) So. Doing fixtures via static encoding in class and module setup functions is a way to provide a generic framing for writing tests. The "right" way? In many cases and for the about 6 different people i interacted with (and on actual RL code) in the last 2 weeks it does not help incredibly much. There is experiences from other test tool authors indicating similar experiences. I will say that module/class can be helpful and you can do some useful things with it and thus it makes some sense to add it for std-unittest but claiming this is great and most of what you need for "many types" of functional testing is misleading and plays down the many good things you can do with Testing and Python. best, holger From robert.kern at gmail.com Fri Feb 12 01:15:12 2010 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Feb 2010 18:15:12 -0600 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <87aavfh0sb.fsf@benfinney.id.au> Message-ID: On 2010-02-11 17:57 PM, Holger Krekel wrote: > On Fri, Feb 12, 2010 at 12:00 AM, Robert Kern wrote: >> On 2010-02-11 16:20 PM, Ben Finney wrote: >>> >>> Guido van Rossum writes: >> >>>> The argument that a unittest framework shouldn't be "abused" for >>>> regression tests (or integration tests, or whatever) is also bizarre >>>> to my mind. Surely if a testing framework applies to multiple kinds of >>>> testing that's a good thing, not something to be frowned upon? >>> >>> To my mind, an API should take a stand on the ?right? way to use it, >>> rather than being a kitchen-sink of whatever ideas had any support. 
>>> Doing something the right way should be easy, and doing something the >>> wrong way should be awkward. >> >> setUpClass and setUpModule are the "right" way to do many types of >> integration and functional tests. Integration and functional tests are vital >> tasks to perform, and unittest provides a good framework otherwise for >> implementing such tests. > > Ben just expressed his opinion about API design and you claim some > truth about testing in general. My first sentence was about API design. My second was justification that the use case is worth designing and API for. You can add implicit "in my opinion"s to just about anything I say, if you wish. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fuzzyman at voidspace.org.uk Fri Feb 12 01:43:29 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 12 Feb 2010 00:43:29 +0000 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues In-Reply-To: <87eikrh1ls.fsf@benfinney.id.au> References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> <4B73F457.4050709@gmail.com> <4B73FB8B.2070807@voidspace.org.uk> <87eikrh1ls.fsf@benfinney.id.au> Message-ID: <4B74A431.90001@voidspace.org.uk> On 11/02/2010 22:03, Ben Finney wrote: > Michael Foord writes: > > >> It is done. The slight disadvantage is that overriding >> shortDescription on your own TestCase no longer removes the test name >> from being added to the short description. >> > That's a significant disadvantage; it can easily double the length of > the reported description for a test case. > > Before: > > The Wodget should spangulate with the specified doohickey... ok > > After: > > test_zwickyblatt.MechakuchaWidget.test_spangulates_with_specified_doohickey: The Wodget should spangulate with the specified doohickey... ok > There is a newline between the testname and the first line of the docstring. If there is no docstring behaviour is completely unchanged. This is how it was in the 2.7 codebase before I made the change and is unchanged. The only difference is that you don't lose this behaviour by overriding TestCase.shortDescription(). > (if I have the new description incorrect feel free to correct me, but I > think the point is clear about adding the test name to the description). > > Reports that before would mostly stay within a standard 80-column > terminal will now almost always be line-wrapping, making the output much > harder to read. > > >> On the other hand if you do override shortDescription you don't have >> to add the test name yourself >> > The problem isn't only with overridden shortDescription. The problem is > the breakage in the existing behaviour of shortDescription, even in > cases that never needed to override shortDescription. > shortDescription itself is now unchanged from Python 2.6. > >> and using a custom TestResult (overriding getDescription) is much >> easier now that the TextTestRunner takes a resultclass argument in the >> constructor. >> > Again, it seems that adding this to the output is the job of the thing > which does the reporting, *if* wanted. The (long!) name isn't part of > the TestCase description, so shouldn't be bolted onto the TestResult > description. > > Well, it *is* the TextTestResult that does the reporting. Don't believe me - look at the code. 
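A rough sketch of the mechanism being described, assuming the in-development 2.7 API where TextTestResult is public and TextTestRunner accepts a resultclass argument (SomeTestCase is a placeholder):

    import unittest

    class PlainDescriptionResult(unittest.TextTestResult):
        def getDescription(self, test):
            # Report only the docstring summary, falling back to the test name.
            return test.shortDescription() or str(test)

    class SomeTestCase(unittest.TestCase):
        def test_example(self):
            """An example test with a one-line description."""
            self.assertEqual(1 + 1, 2)

    if __name__ == '__main__':
        suite = unittest.TestLoader().loadTestsFromTestCase(SomeTestCase)
        unittest.TextTestRunner(resultclass=PlainDescriptionResult,
                                verbosity=2).run(suite)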
Test results are reported (written to the output stream) by the TextTestResult. Actually my description above was slightly incorrect - it is only TextTestResult that has a getDescription method, so custom TestResult implementations that inherit directly from TestResult will now also have unchanged behavior from 2.6 in this regard. Michael -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From ben+python at benfinney.id.au Fri Feb 12 02:01:39 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 12 Feb 2010 12:01:39 +1100 Subject: [Python-Dev] unittest: shortDescription, _TextTestResult and other issues References: <4B719001.7080201@voidspace.org.uk> <87sk9ahyeo.fsf@benfinney.id.au> <4B71E422.8000402@voidspace.org.uk> <4B73F457.4050709@gmail.com> <4B73FB8B.2070807@voidspace.org.uk> <87eikrh1ls.fsf@benfinney.id.au> <4B74A431.90001@voidspace.org.uk> Message-ID: <876363gtcc.fsf@benfinney.id.au> Michael Foord writes: > There is a newline between the testname and the first line of the > docstring. If there is no docstring behaviour is completely unchanged. [?] > shortDescription itself is now unchanged from Python 2.6. Thanks, that completely addresses and satisfies my concerns about the test case reporting. Great work, Michael; not only in the coding, but especially in the communication about these changes. -- \ ?Science is a way of trying not to fool yourself. The first | `\ principle is that you must not fool yourself, and you are the | _o__) easiest person to fool.? ?Richard P. Feynman, 1964 | Ben Finney From g.brandl at gmx.net Fri Feb 12 09:34:10 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 12 Feb 2010 09:34:10 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> Message-ID: Am 09.02.2010 04:47, schrieb Benjamin Peterson: > 2010/2/8 "Martin v. L?wis" : >> Benjamin Peterson wrote: >>> 2010/2/8 Dirkjan Ochtman : >>>> On Sun, Feb 7, 2010 at 22:51, Benjamin Peterson wrote: >>>>> Will you do test conversions of the sandbox projects, too? >>>> Got any particular projects in mind? >>> >>> 2to3. >> >> Does Mercurial even support merge tracking the way we are doing it for >> 2to3 right now? > > I don't believe so. My plan was to manually sync updates or use subrepos. Why even keep 2to3 in the sandbox? It should be mature enough now to be maintained directly in the tree. Also, using a subrepo is fine for the Python 2 version, but what about the Python 3 version? Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. 
Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From g.brandl at gmx.net Fri Feb 12 09:39:07 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 12 Feb 2010 09:39:07 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B70D8F6.3010806@v.loewis.de> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> Message-ID: Am 09.02.2010 04:39, schrieb "Martin v. L?wis": > Benjamin Peterson wrote: >> 2010/2/8 Dirkjan Ochtman : >>> On Sun, Feb 7, 2010 at 22:51, Benjamin Peterson wrote: >>>> Will you do test conversions of the sandbox projects, too? >>> Got any particular projects in mind? >> >> 2to3. > > Does Mercurial even support merge tracking the way we are doing it for > 2to3 right now? No, it does not. This is also a concern for the Python 2 -> Python 3 merging, where (I think) we decided not to have shared history. Transplant already does most of the job (including recording the source hash of transplanted changesets), but it lacks blocking and consistent rejection of already-merged changesets (it does not read the source hashes it records, but keeps a local cache of such hashes instead, which obviously doesn't do anything across repositories.) I think it should be possible to have transplant regenerate and update that cache automatically on clone/pull/etc. I guess this is a relatively simple task for a Mercurial hacker, and if it's decided to use this workflow "someone" ;) could address it at the PyCon sprint. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From dirkjan at ochtman.nl Fri Feb 12 10:36:25 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Fri, 12 Feb 2010 10:36:25 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> Message-ID: On Fri, Feb 12, 2010 at 09:39, Georg Brandl wrote: > No, it does not. ?This is also a concern for the Python 2 -> Python 3 merging, > where (I think) we decided not to have shared history. ?Transplant already I don't think this is similar to 2 vs. 3, because 2 vs. 3 are full branching (so you could still use "normal" hg merge tracking there). Since hg doesn't do merge tracking on the directory level, you couldn't use Mercurial merges (or transplant, AFAICS) to do what you want here. > does most of the job (including recording the source hash of transplanted > changesets), but it lacks blocking and consistent rejection of already-merged > changesets (it does not read the source hashes it records, but keeps a local > cache of such hashes instead, which obviously doesn't do anything across > repositories.) ?I think it should be possible to have transplant regenerate > and update that cache automatically on clone/pull/etc. > > I guess this is a relatively simple task for a Mercurial hacker, and if it's > decided to use this workflow "someone" ;) could address it at the PyCon sprint. Yes, we should figure out some workflow issues soon. 
Cheers, Dirkjan From ncoghlan at gmail.com Fri Feb 12 11:10:20 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Feb 2010 20:10:20 +1000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100211155632.D54EE1FCC71@kimball.webabinitio.net> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <20100211155632.D54EE1FCC71@kimball.webabinitio.net> Message-ID: <4B75290C.4060101@gmail.com> R. David Murray wrote: > would be easier to write, be more maintainable, and be easier to > understand when reading the code than the equivalent setUp and tearDown > methods would be. > > I'm not saying it would be easy to implement, and as you say backward > compatibility is a key concern. That's the gist of my thinking, yeah. However, a couple of problems of note that occurred to me after an extra day or so of musing on the topic: - the semantics I listed in my original post are broken, since a naive context manager couldn't be used (it would setup and tear down the resources for each test instance, which is what we're trying to avoid). Supporting naive context managers would require iterating over the test classes within the module CM and iterating over the instances within the class CM. - context managers fit best in more procedural code. They're tricky to invoke correctly from code that is split across several methods in different classes (as I believe unittest is), since you can't use a with statement directly to do the invocation for you So I think new setup*/teardown* methods and functions are likely to be a better fit for the unittest architecture. At a later date, it may be worth adding some mixins or other mechanisms that adapt from the unittest setup/teardown model to a CM based model, but to be honest, if I want to use a CM when testing, I'll generally create a more complex test method that iterates through a bunch of test inputs itself. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Fri Feb 12 11:18:26 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Feb 2010 20:18:26 +1000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <87aavfh0sb.fsf@benfinney.id.au> Message-ID: <4B752AF2.3060703@gmail.com> Holger Krekel wrote: > In my experience, integration and > functional testing is a complex and evolving topic, usually requiring > more from the tool or framework than classic unit-testing. Assignment for the reader: compare and contrast unittest and test.regrtest (including test.support and friends) :) Yes, you need extra stuff to do higher level testing, but in many instances unittest still makes a nice framework to hang it on. Cheers, Nick. 
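A rough sketch of the kind of adapter mentioned above - driving an ordinary context manager from class-level hooks (the setUpClass/tearDownClass spelling follows the proposal being discussed, and temp_workdir is just an illustrative fixture):

    import contextlib
    import shutil
    import tempfile
    import unittest

    @contextlib.contextmanager
    def temp_workdir():
        # illustrative fixture: a temporary directory shared by a class's tests
        path = tempfile.mkdtemp()
        try:
            yield path
        finally:
            shutil.rmtree(path)

    class WorkdirTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            cls._workdir_cm = temp_workdir()
            cls.workdir = cls._workdir_cm.__enter__()

        @classmethod
        def tearDownClass(cls):
            cls._workdir_cm.__exit__(None, None, None)

        def test_workdir_is_set(self):
            self.assertTrue(self.workdir)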
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From olemis at gmail.com Fri Feb 12 15:16:36 2010 From: olemis at gmail.com (Olemis Lang) Date: Fri, 12 Feb 2010 09:16:36 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B71B1A0.3070904@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <24ea26601002091029j6f70efd3g539589eda9c8c873@mail.gmail.com> <24ea26601002091055x46f8228dk3f210931434ef61@mail.gmail.com> <24ea26601002091100i56b745e0m4c441daca3df8941@mail.gmail.com> <4B71B1A0.3070904@voidspace.org.uk> Message-ID: <24ea26601002120616o2b60d28ctd0486ec17adbfc2c@mail.gmail.com> On Tue, Feb 9, 2010 at 2:04 PM, Michael Foord wrote: > On 09/02/2010 19:00, Olemis Lang wrote: >> >> Sorry. I had not finished the previous message >> >> On Tue, Feb 9, 2010 at 1:55 PM, Olemis Lang ?wrote: >>> On Tue, Feb 9, 2010 at 1:29 PM, Olemis Lang ?wrote: >>>> On Tue, Feb 9, 2010 at 11:42 AM, Michael Foord >>>> ?wrote: >>>> >>>>> Hello all, >>>>> >>>>> Several >>>>> authors of other Python testing frameworks spoke up *against* them, but >>>>> several *users* of test frameworks spoke up in favour of them. ;-) >>>>> >>>> >>>> +1 for having something like that included in unittest >>>> >>>>> I'm pretty sure I can introduce setUpClass and setUpModule without >>>>> breaking >>>>> compatibility with existing unittest extensions or backwards >>>>> compatibility >>>>> issues >>>> >>>> Is it possible to use the names `BeforeClass` and `AfterClass` (just >>>> to be make it look similar to JUnit naming conventions ;o) ? >>> >>> Another Q: >>> >>> ?- class setup method will be a `classmethod` isn't it ? It should not be >>> ? ? a regular instance method because IMO it is not bound to a particular >>> ? ? `TestCase` instance. >> >> ? - Is it possible to rely on the fact that all class-level tear down >> ? ? methods will be guaranteed to run even if class-level setup >> ? ? method throws an exception ? > > Yes it will be a classmethod rather than an instance method. +1 > I would expect > that in common with instance setUp the tearDown would *not* be run if setUp > fails. What would be nice would be an extension of addCleanUp so that it can > be used by class and module level setUp. Clean-ups largely obsolete the need > for tearDown anyway. > I really disagree. IMO I am -1 for having `addCleanUp` and so on added to the core API (i.e. `TestCase` class). The same goes for test resources (especially if that means to merge it with the API rather than including it as a separate independent module). The use cases for that feature are not, in general, basic use cases ('cause if they were *simple*, setUp/tearDown would be a *simple* alternative to do the same thing ;o). I repeat that my opinion is that I am -1 for including each and every feature needed for testing purposes jus because it's very (super) useful to solve even many use cases (e.g. context managers, by themselves, are an empty and abstract construct that solve a set of problems *once they are implemented*, but the top-level abstractions are not directly useful by themselves). It's an API. In JUnit there are a lot of useful extensions implemented in `junit-ext` package & Co. (and AFAICR that also includes integration testing & test resources) and besides there are some other important features in JUnit>=4.7 itself, *but not hard coded in TestCase* (e.g. `org.junit.rules.ExternalResource`, ...) and also allowing extensions (e.g. 
`org.junit.rules.TemporaryFolder`) using well established mechanisms (e.g. inheritance) . This also has the benefit that the responsibilities are distributed to a set of relevant objects following well-known interaction patterns, rather than cluttering a class with all sort of functionalities . PS: I say this and I know that it's quite unlikely that you will reconsider it in order to revert what's being done there . But, if we take a look to JUnit>=4.7, just notice that resource management is not an integral part of `TestCase` at all, and is performed in a more structured way, consistent with the ?standard? or officially supported mechanism used to add any other extension to JUnit . Honestly I can obviously see the differences with respect to `addCleanUp` implementation as we know it today. -- Regards, Olemis. Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: TracGViz plugin downloaded more than 1000 times (> 300 from PyPI) - http://feedproxy.google.com/~r/simelo-en/~3/06Exn-JPLIA/tracgviz-plugin-downloaded-more-than.html From fuzzyman at voidspace.org.uk Fri Feb 12 16:49:02 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 12 Feb 2010 15:49:02 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <4B75786E.4090800@voidspace.org.uk> On 11/02/2010 18:11, Guido van Rossum wrote: > On Tue, Feb 9, 2010 at 8:42 AM, Michael Foord wrote: > >> The next 'big' change to unittest will (may?) be the introduction of class >> and module level setUp and tearDown. This was discussed on Python-ideas and >> Guido supported them. They can be useful but are also very easy to abuse >> (too much shared state, monolithic test classes and modules). > [snip...] > > The potential for abuse in and of itself should not be an argument > against a feature; it must always be weighed against the advantages. > The advantage of setUpClass and setUpModule is that they allow you to have shared fixtures shared between tests, essential for certain kinds of testing. A practical difficulty with setUpClass and setUpModule is that because the scope of the shared fixtures is fixed it makes it much harder to later refactor your tests - either into several classes or into several modules - when the tests grow. My *hope* is that we provide a general solution, possibly based on all or part of Test Resources, with an easy mechanism for the setUpClass and setUpModule but also solves the more general case of sharing fixtures between tests. If that doesn't turn out to be possible then we'll go for a straight implementation of setUpClass / setUpModule. I'm hoping I can get this together in time for the PyCon sprints... Here's a current minimal example of using Test Resources. It could be simplified further with helper functions and by some of the functionality moving into unittest itself. OptimisingTestSuite here ensures that the resource is created before first use (MyTempDir.make is called) and disposed of when finished with (MyTempDir.clean is called). 
import shutil
import tempfile
import testresources

def load_tests(loader, tests, pattern):
    # this step could be built into the standard loader
    return testresources.OptimisingTestSuite(tests)

class MyTempDir(testresources.TestResource):
    def make(self, dependency_resources):
        return tempfile.mkdtemp()

    def clean(self, resource):
        shutil.rmtree(resource)

class MyTest(testresources.ResourcedTestCase):
    resources = [('workdir', MyTempDir())]
    def test_foo(self):
        print self.workdir
    def test_bar(self):
        print self.workdir

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog

READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

From status at bugs.python.org  Fri Feb 12 18:07:25 2010
From: status at bugs.python.org (Python tracker)
Date: Fri, 12 Feb 2010 18:07:25 +0100 (CET)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20100212170725.68C26785F4@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (02/05/10 - 02/12/10)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message.

2602 open (+33) / 17137 closed (+25) / 19739 total (+58)
Open issues with patches: 1069
Average duration of open issues: 712 days.
Median duration of open issues: 466 days.

Open Issues Breakdown
open 2567 (+33)
pending 34 ( +0)

Issues Created Or Reopened (59)
_______________________________

support "with self.assertRaises(SomeException) as exc:" syntax 02/07/10 CLOSED http://bugs.python.org/issue7859 reopened flox
32-bit Python on 64-bit Windows reports incorrect architecture 02/05/10 http://bugs.python.org/issue7860 created brian.curtin patch, patch, needs review
2to3: "import foo" -> "from . import foo" 02/05/10 CLOSED http://bugs.python.org/issue7861 created tomspur
fileio.c: ValueError vs. IOError with impossible operations 02/05/10 http://bugs.python.org/issue7862 created skrah
platform module doesn't detect Windows 7 02/05/10 http://bugs.python.org/issue7863 created adal patch, needs review
Deprecation markers in unittest docs are unclear 02/06/10 CLOSED http://bugs.python.org/issue7864 created Justin.Lebar
io close() swallowing exceptions 02/06/10 http://bugs.python.org/issue7865 created pakal
it looks like a typo in unittest 02/06/10 CLOSED http://bugs.python.org/issue7866 created artech
Proposed FAQ entry on pass-by-? semantics and the meaning of 'va 02/06/10 http://bugs.python.org/issue7867 created r.david.murray
add a loggerClass attribute to Manager 02/06/10 CLOSED http://bugs.python.org/issue7868 created georg.brandl patch, patch, easy
traceback from logging is unusable.
02/07/10 CLOSED http://bugs.python.org/issue7869 created naoki patch Duplicate test methods in test_memoryio 02/07/10 CLOSED http://bugs.python.org/issue7870 created georg.brandl Duplicate test method in test_heapq 02/07/10 http://bugs.python.org/issue7871 created georg.brandl Incorrect error raised on importing invalid module via unittest 02/07/10 CLOSED http://bugs.python.org/issue7872 created Daniel.Waterworth Remove precision restriction for integer formatting. 02/07/10 http://bugs.python.org/issue7873 created mark.dickinson easy logging.basicConfig should raise warning/exception on second cal 02/07/10 CLOSED http://bugs.python.org/issue7874 created tocomo test_multiprocessing / importlib failure 02/07/10 http://bugs.python.org/issue7875 created pitrou buildbot unittest docs use deprecated method in code example 02/07/10 CLOSED http://bugs.python.org/issue7876 created Bernt.R??skar.Brenna Iterators over _winreg EnumKey and EnumValue results 02/07/10 http://bugs.python.org/issue7877 created brian.curtin patch, patch, needs review regrtest should check for changes in import machinery 02/07/10 http://bugs.python.org/issue7878 created brett.cannon easy Too narrow platform check in test_datetime 02/07/10 http://bugs.python.org/issue7879 created akrpic77 patch sysconfig does not like symlinks 02/07/10 http://bugs.python.org/issue7880 created flox patch Hardcoded path, unsafe tempfile in test_logging 02/08/10 CLOSED http://bugs.python.org/issue7881 created nascheme Compiling on MOX 10.6 "Snow Leopard" --#warning Building for Int 02/08/10 CLOSED http://bugs.python.org/issue7882 created global667 CallTips.py _find_constructor does not work 02/08/10 http://bugs.python.org/issue7883 created Bernt.R??skar.Brenna IDLE 3.1.1 crashes with UnicodeDecodeError when I press Ctrl-Spa 02/08/10 CLOSED http://bugs.python.org/issue7884 created Bernt.R??skar.Brenna test_distutils fails if Python built in separate directory 02/08/10 http://bugs.python.org/issue7885 created nascheme reverse on an empty list returns None 02/08/10 CLOSED http://bugs.python.org/issue7886 created tormen errno 107 socket.recv issure 02/08/10 CLOSED http://bugs.python.org/issue7887 created twistedphrame turtle "settiltangle" should be marked deprecated, not "tiltangl 02/09/10 http://bugs.python.org/issue7888 created mnewman random produces different output on different architectures 02/09/10 http://bugs.python.org/issue7889 created terrence equal unicode/str objects can have unequal hash 02/09/10 CLOSED http://bugs.python.org/issue7890 created ldeller add links to SVN for documentation developers 02/09/10 http://bugs.python.org/issue7891 created techtonik refactor "test_dict.py" using new assertRaises context manager. 
02/09/10 http://bugs.python.org/issue7892 created flox patch unittest: have to subclass TextTestRunner to use alternative Tes 02/09/10 CLOSED http://bugs.python.org/issue7893 created michael.foord easy too aggressive dependency tracking in distutils 02/09/10 http://bugs.python.org/issue7894 created ronaldoussoren Mac 10.6 mac_ver() crashes with USING_FORK_WITHOUT_EXEC_IS _NOT_ 02/09/10 http://bugs.python.org/issue7895 created aahz IDLE.app crashes when attempting to open a .py file 02/09/10 http://bugs.python.org/issue7896 created phyreman Support parametrized tests in unittest 02/10/10 http://bugs.python.org/issue7897 created fperez rlcompleter add "real tab" when text is empty feature 02/10/10 http://bugs.python.org/issue7898 created lilaboc patch MemoryError While Executing Python Code 02/10/10 http://bugs.python.org/issue7899 created p_noblebose posix.getgroups() failure on Mac OS X 02/10/10 http://bugs.python.org/issue7900 created michael.foord Add Vista/7 symlink support 02/10/10 CLOSED http://bugs.python.org/issue7901 created ipatrol relative import broken 02/10/10 http://bugs.python.org/issue7902 created gangesmaster Configure script incorrect for reasonably recent OpenBSD 02/10/10 http://bugs.python.org/issue7903 created johns urlparse.urlsplit mishandles novel schemes 02/10/10 http://bugs.python.org/issue7904 created mbloore Shelf 'keyencoding' keyword argument is undocumented and does no 02/11/10 http://bugs.python.org/issue7905 created r.david.murray patch, patch, needs review float("INFI") returns inf on certain platforms 02/11/10 CLOSED http://bugs.python.org/issue7906 created csernazs winreg docs: CreateKeyEx should be CreateKey (minor) 02/11/10 CLOSED http://bugs.python.org/issue7907 created patraulea patch, needs review remove leftover macos9 support code 02/11/10 http://bugs.python.org/issue7908 created ronaldoussoren patch, needs review os.path.abspath(os.devnull) returns \\\\nul should be nul? 02/11/10 http://bugs.python.org/issue7909 created abakker immutability w/r to tuple.__add__ 02/11/10 CLOSED http://bugs.python.org/issue7910 created mattrussell unittest.TestCase.longMessage should default to True in Python 3 02/11/10 http://bugs.python.org/issue7911 created michael.foord easy Error in additon of decimal numbers 02/11/10 CLOSED http://bugs.python.org/issue7912 created James.Sparenberg Enhance Cmd support for docstrings and document it. 02/11/10 http://bugs.python.org/issue7913 created r.david.murray patch IntVar() - AttributeError: 'NoneType' object has no attribute 't 02/11/10 CLOSED http://bugs.python.org/issue7914 created Plazma A lists which list.sort seems to leave out of order. 
02/12/10 CLOSED http://bugs.python.org/issue7915 created throwaway zipfile.ZipExtFile passes long to fileobj.read() 02/12/10 CLOSED http://bugs.python.org/issue7916 created anacrolix list of list created by * 02/12/10 CLOSED http://bugs.python.org/issue7917 created sledge76 Issues Now Closed (60) ______________________ Intermitent failure in test_multiprocessing.test_number_of_objec 543 days http://bugs.python.org/issue3562 flox allow unicode keyword args 2 days http://bugs.python.org/issue4978 benjamin.peterson patch, needs review A selection of spelling errors and typos throughout source 3 days http://bugs.python.org/issue5341 merwok patch [PATCH]Add FastDbfilenameShelf: shelf nerver sync cache even whe 335 days http://bugs.python.org/issue5483 r.david.murray patch memory leaks in py3k 317 days http://bugs.python.org/issue5596 flox patch Serious interpreter crash and/or arbitrary memory leak using .re 309 days http://bugs.python.org/issue5677 pitrou patch Shelve module writeback parameter does not act as advertised 302 days http://bugs.python.org/issue5754 r.david.murray patch, easy ZipFile.writestr "compression_type" argument 271 days http://bugs.python.org/issue6003 ronaldoussoren patch ElementTree (py3k) doesn't properly encode characters that can't 247 days http://bugs.python.org/issue6233 pitrou patch rare assertion failure in test_multiprocessing 224 days http://bugs.python.org/issue6366 flox OS X: python3 from python-3.1.dmg crashes at startup 224 days http://bugs.python.org/issue6393 srid patch, needs review multiprocessing logging support test 77 days http://bugs.python.org/issue6615 vinay.sajip patch test test_multiprocessing failed 172 days http://bugs.python.org/issue6747 flox enable compilation of readline module on Mac OS X 10.5 and 10.6 7 days http://bugs.python.org/issue6877 minge patch, 26backport, needs review Update version{added,changed} entries in py3k unittest docs 127 days http://bugs.python.org/issue7030 ezio.melotti Make assertMultilineEqual default for unicode string comparison 130 days http://bugs.python.org/issue7032 michael.foord low performance of zipfile readline() 105 days http://bugs.python.org/issue7216 amaury.forgeotdarc OverflowError: signed integer is greater than maximum on mips64 91 days http://bugs.python.org/issue7296 mark.dickinson OS X 2.6.4 installer fails on 10.3 with two corrupted file names 64 days http://bugs.python.org/issue7437 ronaldoussoren python -m unittest path_to_suite_function errors 59 days http://bugs.python.org/issue7501 michael.foord unittest.TestCase.shortDescription isn't short anymore 44 days http://bugs.python.org/issue7588 michael.foord OS X pythonw.c compile error with 10.4 or earlier deployment tar 30 days http://bugs.python.org/issue7658 ronaldoussoren patch configure GCC version detection fix for Darwin 22 days http://bugs.python.org/issue7714 ronaldoussoren patch, needs review Allow use of GNU arch on Darwin 26 days http://bugs.python.org/issue7715 ronaldoussoren patch, needs review test_timeout should use "find_unused_port" helper 19 days http://bugs.python.org/issue7728 r.david.murray patch, easy, buildbot unittest returning standard_tests from load_tests in module fail 8 days http://bugs.python.org/issue7799 michael.foord easy test_macostools fails on OS X 10.6: no attribute 'FSSpec' 9 days http://bugs.python.org/issue7807 ronaldoussoren Call to gestalt('sysu') on OSX can lead to freeze in wxPython ap 8 days http://bugs.python.org/issue7812 ronaldoussoren Minor bug in 2.6.4 related to cleanup at end of program 9 
days http://bugs.python.org/issue7835 r.david.murray patch, easy python-dev archives are not updated 5 days http://bugs.python.org/issue7843 isandler Remove 'python -U' or document it 2 days http://bugs.python.org/issue7847 barry patch, needs review copy.copy corrupts objects that return false value from __getsta 4 days http://bugs.python.org/issue7848 georg.brandl on __exit__(), exc_value does not contain the exception. 2 days http://bugs.python.org/issue7853 benjamin.peterson patch test_logging fails 3 days http://bugs.python.org/issue7857 vinay.sajip buildbot support "with self.assertRaises(SomeException) as exc:" syntax 0 days http://bugs.python.org/issue7859 flox 2to3: "import foo" -> "from . import foo" 3 days http://bugs.python.org/issue7861 tomspur Deprecation markers in unittest docs are unclear 0 days http://bugs.python.org/issue7864 georg.brandl it looks like a typo in unittest 0 days http://bugs.python.org/issue7866 merwok add a loggerClass attribute to Manager 0 days http://bugs.python.org/issue7868 georg.brandl patch, patch, easy traceback from logging is unusable. 0 days http://bugs.python.org/issue7869 vinay.sajip patch Duplicate test methods in test_memoryio 0 days http://bugs.python.org/issue7870 pitrou Incorrect error raised on importing invalid module via unittest 0 days http://bugs.python.org/issue7872 michael.foord logging.basicConfig should raise warning/exception on second cal 0 days http://bugs.python.org/issue7874 vinay.sajip unittest docs use deprecated method in code example 1 days http://bugs.python.org/issue7876 ezio.melotti Hardcoded path, unsafe tempfile in test_logging 0 days http://bugs.python.org/issue7881 vinay.sajip Compiling on MOX 10.6 "Snow Leopard" --#warning Building for Int 1 days http://bugs.python.org/issue7882 ronaldoussoren IDLE 3.1.1 crashes with UnicodeDecodeError when I press Ctrl-Spa 0 days http://bugs.python.org/issue7884 ezio.melotti reverse on an empty list returns None 0 days http://bugs.python.org/issue7886 tormen errno 107 socket.recv issure 1 days http://bugs.python.org/issue7887 twistedphrame equal unicode/str objects can have unequal hash 0 days http://bugs.python.org/issue7890 lemburg unittest: have to subclass TextTestRunner to use alternative Tes 1 days http://bugs.python.org/issue7893 michael.foord easy Add Vista/7 symlink support 0 days http://bugs.python.org/issue7901 tim.golden float("INFI") returns inf on certain platforms 1 days http://bugs.python.org/issue7906 r.david.murray winreg docs: CreateKeyEx should be CreateKey (minor) 0 days http://bugs.python.org/issue7907 ezio.melotti patch, needs review immutability w/r to tuple.__add__ 0 days http://bugs.python.org/issue7910 rhettinger Error in additon of decimal numbers 0 days http://bugs.python.org/issue7912 r.david.murray IntVar() - AttributeError: 'NoneType' object has no attribute 't 0 days http://bugs.python.org/issue7914 brian.curtin A lists which list.sort seems to leave out of order. 
0 days http://bugs.python.org/issue7915 throwaway zipfile.ZipExtFile passes long to fileobj.read() 0 days http://bugs.python.org/issue7916 amaury.forgeotdarc list of list created by * 0 days http://bugs.python.org/issue7917 flox Top Issues Most Discussed (10) ______________________________ 17 Add a context manager to change cwd in test.test_support and ru 27 days open http://bugs.python.org/issue7712 17 enable compilation of readline module on Mac OS X 10.5 and 10.6 7 days closed http://bugs.python.org/issue6877 10 random produces different output on different architectures 4 days open http://bugs.python.org/issue7889 9 Support parametrized tests in unittest 3 days open http://bugs.python.org/issue7897 9 support "with self.assertRaises(SomeException) as exc:" syntax 0 days closed http://bugs.python.org/issue7859 8 test_multiprocessing / importlib failure 5 days open http://bugs.python.org/issue7875 8 Add os.link() and os.symlink() and os.path.islink() support for 1215 days open http://bugs.python.org/issue1578269 7 float("INFI") returns inf on certain platforms 1 days closed http://bugs.python.org/issue7906 7 Proposed FAQ entry on pass-by-? semantics and the meaning of 'v 6 days open http://bugs.python.org/issue7867 7 platform module doesn't detect Windows 7 7 days open http://bugs.python.org/issue7863 From martin at v.loewis.de Fri Feb 12 20:17:53 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 12 Feb 2010 20:17:53 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> Message-ID: <4B75A961.3000309@v.loewis.de> > Why even keep 2to3 in the sandbox? It should be mature enough now to be > maintained directly in the tree. I think the original plan was to make standalone releases, so that people could upgrade their installation from a newer release of 2to3. IMO, it is realistic to predict that this will not actually happen. If we can agree to give up the 2to3 sandbox, we should incorporate find_pattern into the tree, and perhaps test.py as well. Regards, Martin From brett at python.org Fri Feb 12 20:34:58 2010 From: brett at python.org (Brett Cannon) Date: Fri, 12 Feb 2010 11:34:58 -0800 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B75A961.3000309@v.loewis.de> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> Message-ID: On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >> Why even keep 2to3 in the sandbox? ?It should be mature enough now to be >> maintained directly in the tree. > > I think the original plan was to make standalone releases, so that > people could upgrade their installation from a newer release of 2to3. > That's what I remember as well. > IMO, it is realistic to predict that this will not actually happen. If > we can agree to give up the 2to3 sandbox, we should incorporate > find_pattern into the tree, and perhaps test.py as well. I vote on giving up the 2to3 sandbox. 
-Brett From guido at python.org Fri Feb 12 20:48:37 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 12 Feb 2010 11:48:37 -0800 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B75786E.4090800@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> Message-ID: On Fri, Feb 12, 2010 at 7:49 AM, Michael Foord wrote: > My *hope* is that we provide a general solution, possibly based on all or > part of Test Resources, with an easy mechanism for the setUpClass and > setUpModule but also solves the more general case of sharing fixtures > between tests. If that doesn't turn out to be possible then we'll go for a > straight implementation of setUpClass / setUpModule. I'm hoping I can get > this together in time for the PyCon sprints... Do you have a reference for Test Resources? > Here's a current minimal example of using Test Resources. It could be > simplified further with helper functions and by some of the functionality > moving into unittest itself. OptimisingTestSuite here ensures that the > resource is created before first use (MyTempDir.make is called) and disposed > of when finished with (MyTempDir.clean is called). > > import shutil > import tempfile > import testresources > > def load_tests(loader, tests, pattern): > # this step could be built into the standard loader > return testresources.OptimisingTestSuite(tests) > > class MyTempDir(testresources.TestResource): > def make(self, dependency_resources): > return tempfile.mkdtemp() > > def clean(self, resource): > shutil.rmtree(resource) > > class MyTest(testresources.ResourcedTestCase): > resources = [('workdir', MyTempDir())] > def test_foo(self): > print self.workdir > def test_bar(self): > print self.workdir This came out with all leading indentation removed, but I think I can guess what you meant to write. However from this example I *cannot* guess whether those resources are set up and torn down per test or per test class. Also the notation resources = [('workdir', MyTempDir())] looks pretty ugly -- if 'workdir' ends up being an instance attribute, why not make it a dict instead of a list of tuples? Or even better, a could each resource become a class variable? -- --Guido van Rossum (python.org/~guido) From exarkun at twistedmatrix.com Fri Feb 12 21:20:51 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Fri, 12 Feb 2010 20:20:51 -0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> Message-ID: <20100212202051.26099.568349857.divmod.xquotient.1225@localhost.localdomain> On 07:48 pm, guido at python.org wrote: >On Fri, Feb 12, 2010 at 7:49 AM, Michael Foord > wrote: >>My *hope* is that we provide a general solution, possibly based on all >>or >>part of Test Resources, with an easy mechanism for the setUpClass and >>setUpModule but also solves the more general case of sharing fixtures >>between tests. If that doesn't turn out to be possible then we'll go >>for a >>straight implementation of setUpClass / setUpModule. I'm hoping I can >>get >>this together in time for the PyCon sprints... > >Do you have a reference for Test Resources? http://pypi.python.org/pypi/testresources/0.2.2 >[snip] > >However from this example I *cannot* guess whether those resources are >set up and torn down per test or per test class. Also the notation The idea is that you're declaring what the tests need in order to work. 
You're not explicitly defining the order in which things are set up and torn down. That is left up to another part of the library to determine. One such other part, OptimisingTestSuite, will look at *all* of your tests and find an order which involves the least redundant effort. You might have something else that breaks up the test run across multiple processes and uses the resource declarations to run all tests requiring one thing in one process and all tests requiring another thing somewhere else. You might have still something else that wants to completely randomize the order of tests, and sets up all the resources at the beginning and tears them down at the end. Or you might need to be more memory/whatever conscious than that, and do each set up and tear down around each test. The really nice thing here is that you're not constrained in how you group your tests into classes and modules by what resources you want to use in them. You're free to group them by what they're logically testing, or in whatever other way you wish. Jean-Paul From guido at python.org Fri Feb 12 21:27:29 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 12 Feb 2010 12:27:29 -0800 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100212202051.26099.568349857.divmod.xquotient.1225@localhost.localdomain> References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <20100212202051.26099.568349857.divmod.xquotient.1225@localhost.localdomain> Message-ID: On Fri, Feb 12, 2010 at 12:20 PM, wrote: > The idea is that you're declaring what the tests need in order to work. > You're not explicitly defining the order in which things are set up and torn > down. ?That is left up to another part of the library to determine. > > One such other part, OptimisingTestSuite, will look at *all* of your tests > and find an order which involves the least redundant effort. So is there a way to associate a "cost" with a resource? I may have one resource which is simply a /tmp subdirectory (very cheap) and another that requires starting a database service (very expensive). > You might have something else that breaks up the test run across multiple > processes and uses the resource declarations to run all tests requiring one > thing in one process and all tests requiring another thing somewhere else. I admire the approach, though I am skeptical. We have a thing to split up tests at Google which looks at past running times for tests to make an informed opinion. Have you thought of that? > You might have still something else that wants to completely randomize the > order of tests, and sets up all the resources at the beginning and tears > them down at the end. ?Or you might need to be more memory/whatever > conscious than that, and do each set up and tear down around each test. How does your code know the constraints? > The really nice thing here is that you're not constrained in how you group > your tests into classes and modules by what resources you want to use in > them. ?You're free to group them by what they're logically testing, or in > whatever other way you wish. I guess this requires some trust in the system. 
:-) -- --Guido van Rossum (python.org/~guido) From ben+python at benfinney.id.au Fri Feb 12 21:51:36 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Sat, 13 Feb 2010 07:51:36 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> Message-ID: <87k4uifa93.fsf@benfinney.id.au> Michael Foord writes: > The advantage of setUpClass and setUpModule is that they allow you to > have shared fixtures shared between tests, essential for certain kinds > of testing. [?] Yes, this would be very useful for non-unit tests. > My *hope* is that we provide a general solution, possibly based on all > or part of Test Resources, with an easy mechanism for the setUpClass > and setUpModule but also solves the more general case of sharing > fixtures between tests. +1 for having these in a more general ?testing API?, with the ?unittest? API a special case that does *not* share fixtures between tests. -- \ ?Following fashion and the status quo is easy. Thinking about | `\ your users' lives and creating something practical is much | _o__) harder.? ?Ryan Singer, 2008-07-09 | Ben Finney From exarkun at twistedmatrix.com Fri Feb 12 21:59:37 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Fri, 12 Feb 2010 20:59:37 -0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <20100212202051.26099.568349857.divmod.xquotient.1225@localhost.localdomain> Message-ID: <20100212205937.26099.582452672.divmod.xquotient.1247@localhost.localdomain> On 08:27 pm, guido at python.org wrote: >On Fri, Feb 12, 2010 at 12:20 PM, wrote: >>The idea is that you're declaring what the tests need in order to >>work. >>You're not explicitly defining the order in which things are set up >>and torn >>down. ?That is left up to another part of the library to determine. >> >>One such other part, OptimisingTestSuite, will look at *all* of your >>tests >>and find an order which involves the least redundant effort. > >So is there a way to associate a "cost" with a resource? I may have >one resource which is simply a /tmp subdirectory (very cheap) and >another that requires starting a database service (very expensive). I don't think so. From the docs, "This TestSuite will introspect all the test cases it holds directly and if they declare needed resources, will run the tests in an order that attempts to minimise the number of setup and tear downs required.". > >>You might have something else that breaks up the test run across >>multiple >>processes and uses the resource declarations to run all tests >>requiring one >>thing in one process and all tests requiring another thing somewhere >>else. > >I admire the approach, though I am skeptical. We have a thing to split >up tests at Google which looks at past running times for tests to make >an informed opinion. Have you thought of that? >>You might have still something else that wants to completely randomize >>the >>order of tests, and sets up all the resources at the beginning and >>tears >>them down at the end. Or you might need to be more memory/whatever >>conscious than that, and do each set up and tear down around each >>test. > >How does your code know the constraints? To be clear, aside from OptimisingTestSuite, I don't think testresources implements any of the features I talked about. 
They're just things one might want to and be able to implement, given a test suite which uses testresources. > >>The really nice thing here is that you're not constrained in how you >>group >>your tests into classes and modules by what resources you want to use >>in >>them. You're free to group them by what they're logically testing, or >>in >>whatever other way you wish. > >I guess this requires some trust in the system. :-) Jean-Paul From collinwinter at google.com Sat Feb 13 01:04:07 2010 From: collinwinter at google.com (Collin Winter) Date: Fri, 12 Feb 2010 16:04:07 -0800 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <693bc9ab1002110639r5ca143b1t281fe0135effc493@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61001201756g26212a44m9abe7f5b471e6bb4@mail.gmail.com> <3c8293b61001210932i9c5d31i4bc71b7d9e0611f2@mail.gmail.com> <3c8293b61001211214m4b24c3b9x3738cf9e5375b0f8@mail.gmail.com> <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> <693bc9ab1002110639r5ca143b1t281fe0135effc493@mail.gmail.com> Message-ID: <3c8293b61002121604i204cf579nafa26e53b75e1cc@mail.gmail.com> Hey Maciej, On Thu, Feb 11, 2010 at 6:39 AM, Maciej Fijalkowski wrote: > Snippet from: > > http://codereview.appspot.com/186247/diff2/5014:8003/7002 > > *PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is > slower than Unladen Swallow on non-numerical workloads. PyPy only > supports 32-bit x86 code generation. It has poor support for CPython > extension modules, making migration for large applications > prohibitively expensive. > > That part at the very least has some sort of personal opinion > "prohibitively", Of course; difficulty is always in the eye of the person doing the work. Simply put, PyPy is not a drop-in replacement for CPython: there is no embedding API, much less the same one exported by CPython; important libraries, such as MySQLdb and pycrypto, do not build against PyPy; PyPy is 32-bit x86 only. All of these problems can be overcome with enough time/effort/money, but I think you'd agree that, if all I'm trying to do is speed up my application, adding a new x86-64 backend or implementing support for CPython extension modules is certainly north of "prohibitively expensive". I stand by that wording. I'm willing to enumerate all of PyPy's deficiencies in this regard in the PEP, rather than the current vaguer wording, if you'd prefer. > while the other part is not completely true "slower > than US on non-numerical workloads". Fancy providing a proof for that? > I'm well aware that there are benchmarks on which PyPy is slower than > CPython or US, however, I would like a bit more weighted opinion in > the PEP. Based on the benchmarks you're running at http://codespeak.net:8099/plotsummary.html, PyPy is slower than CPython on many non-numerical workloads, which Unladen Swallow is faster than CPython at. Looking at the benchmarks there at which PyPy is faster than CPython, they are primarily numerical; this was the basis for the wording in the PEP. 
My own recent benchmarking of PyPy and Unladen Swallow (both trunk; PyPy wouldn't run some benchmarks): | Benchmark | PyPy | Unladen | Change | +==============+=======+=========+=================+ | ai | 0.61 | 0.51 | 1.1921x faster | | django | 0.68 | 0.8 | 1.1898x slower | | float | 0.03 | 0.07 | 2.7108x slower | | html5lib | 20.04 | 16.42 | 1.2201x faster | | pickle | 17.7 | 1.09 | 16.2465x faster | | rietveld | 1.09 | 0.59 | 1.8597x faster | | slowpickle | 0.43 | 0.56 | 1.2956x slower | | slowspitfire | 2.5 | 0.63 | 3.9853x faster | | slowunpickle | 0.26 | 0.27 | 1.0585x slower | | unpickle | 28.45 | 0.78 | 36.6427x faster | I'm happy to change the wording to "slower than US on some workloads". Thanks, Collin Winter From fuzzyman at voidspace.org.uk Sat Feb 13 02:04:51 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 13 Feb 2010 01:04:51 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> Message-ID: <4B75FAB3.9010009@voidspace.org.uk> On 12/02/2010 19:48, Guido van Rossum wrote: > [snip...] >> Here's a current minimal example of using Test Resources. It could be >> simplified further with helper functions and by some of the functionality >> moving into unittest itself. OptimisingTestSuite here ensures that the >> resource is created before first use (MyTempDir.make is called) and disposed >> of when finished with (MyTempDir.clean is called). >> >> import shutil >> import tempfile >> import testresources >> >> def load_tests(loader, tests, pattern): >> # this step could be built into the standard loader >> return testresources.OptimisingTestSuite(tests) >> >> class MyTempDir(testresources.TestResource): >> def make(self, dependency_resources): >> return tempfile.mkdtemp() >> >> def clean(self, resource): >> shutil.rmtree(resource) >> >> class MyTest(testresources.ResourcedTestCase): >> resources = [('workdir', MyTempDir())] >> def test_foo(self): >> print self.workdir >> def test_bar(self): >> print self.workdir >> > This came out with all leading indentation removed, but I think I can > guess what you meant to write. > For goodness sake. Sorry about that. > However from this example I *cannot* guess whether those resources are > set up and torn down per test or per test class. This particular example is the equivalent of setUpClass - so by declaring the resource as a class attribute it will created before the first test for the class is run and disposed of after the last test for the class. You could *also* create a single resource and share it between several test classes, or even across classes in several modules, and have it created and disposed of at the right point. I've copied Rob Collins in on this email in case I've misunderstood. > Also the notation > > resources = [('workdir', MyTempDir())] > > looks pretty ugly -- if 'workdir' ends up being an instance attribute, > why not make it a dict instead of a list of tuples? Or even better, a > could each resource become a class variable? > > I guess we could introspect the class for every attribute that is a resource, but I prefer some way of explicitly declaring which resources a TestCase is using. Michael Foord -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. 
By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From ncoghlan at gmail.com Sat Feb 13 02:49:18 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Feb 2010 11:49:18 +1000 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <3c8293b61002121604i204cf579nafa26e53b75e1cc@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61001201756g26212a44m9abe7f5b471e6bb4@mail.gmail.com> <3c8293b61001210932i9c5d31i4bc71b7d9e0611f2@mail.gmail.com> <3c8293b61001211214m4b24c3b9x3738cf9e5375b0f8@mail.gmail.com> <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> <693bc9ab1002110639r5ca143b1t281fe0135effc493@mail.gmail.com> <3c8293b61002121604i204cf579nafa26e53b75e1cc@mail.gmail.com> Message-ID: <4B76051E.9050204@gmail.com> Collin Winter wrote: > Hey Maciej, > > On Thu, Feb 11, 2010 at 6:39 AM, Maciej Fijalkowski wrote: >> Snippet from: >> >> http://codereview.appspot.com/186247/diff2/5014:8003/7002 >> >> *PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is >> slower than Unladen Swallow on non-numerical workloads. PyPy only >> supports 32-bit x86 code generation. It has poor support for CPython >> extension modules, making migration for large applications >> prohibitively expensive. >> >> That part at the very least has some sort of personal opinion >> "prohibitively", > > Of course; difficulty is always in the eye of the person doing the > work. Simply put, PyPy is not a drop-in replacement for CPython: there > is no embedding API, much less the same one exported by CPython; > important libraries, such as MySQLdb and pycrypto, do not build > against PyPy; PyPy is 32-bit x86 only. I think pointing out at least these two restrictions explicitly would be helpful (since they put some objective bounds on the meaning of "prohibitive" in this context). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Sat Feb 13 02:51:26 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Feb 2010 11:51:26 +1000 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> Message-ID: <4B76059E.3060101@gmail.com> Brett Cannon wrote: > On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >> IMO, it is realistic to predict that this will not actually happen. If >> we can agree to give up the 2to3 sandbox, we should incorporate >> find_pattern into the tree, and perhaps test.py as well. > > I vote on giving up the 2to3 sandbox. 
Besides, if we're using hg, it should make it much easier for someone else to branch that part of the stdlib and create a standalone 2to3 release from it if they really want to. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Sat Feb 13 02:53:20 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Feb 2010 11:53:20 +1000 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> Message-ID: <4B760610.8010100@gmail.com> Brett Cannon wrote: > On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: > I vote on giving up the 2to3 sandbox. One other point - is there a Python 2.6 backwards compatibility restriction on 2to3 at the moment? If there isn't, should there be? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From benjamin at python.org Sat Feb 13 03:02:07 2010 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 12 Feb 2010 20:02:07 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B76059E.3060101@gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> Message-ID: <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> 2010/2/12 Nick Coghlan : > Brett Cannon wrote: >> On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >>> IMO, it is realistic to predict that this will not actually happen. If >>> we can agree to give up the 2to3 sandbox, we should incorporate >>> find_pattern into the tree, and perhaps test.py as well. >> >> I vote on giving up the 2to3 sandbox. > > Besides, if we're using hg, it should make it much easier for someone > else to branch that part of the stdlib and create a standalone 2to3 > release from it if they really want to. I personally like 2to3 in a separate repo because it fits well with my view that 2to3 is an extra application that happens to also be distributed with python. -- Regards, Benjamin From benjamin at python.org Sat Feb 13 03:02:33 2010 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 12 Feb 2010 20:02:33 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B760610.8010100@gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B760610.8010100@gmail.com> Message-ID: <1afaf6161002121802x749661a0w9c23fbcd5634cc15@mail.gmail.com> 2010/2/12 Nick Coghlan : > Brett Cannon wrote: >> On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >> I vote on giving up the 2to3 sandbox. > > One other point - is there a Python 2.6 backwards compatibility > restriction on 2to3 at the moment? If there isn't, should there be? I try to keep it compatible with 2.6, since we have to backport changes. 
-- Regards, Benjamin From ncoghlan at gmail.com Sat Feb 13 03:23:31 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Feb 2010 12:23:31 +1000 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002121802x749661a0w9c23fbcd5634cc15@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B760610.8010100@gmail.com> <1afaf6161002121802x749661a0w9c23fbcd5634cc15@mail.gmail.com> Message-ID: <4B760D23.6000407@gmail.com> Benjamin Peterson wrote: > 2010/2/12 Nick Coghlan : >> Brett Cannon wrote: >>> On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >>> I vote on giving up the 2to3 sandbox. >> One other point - is there a Python 2.6 backwards compatibility >> restriction on 2to3 at the moment? If there isn't, should there be? > > I try to keep it compatible with 2.6, since we have to backport changes. With 2.7 just around the corner, it should probably be listed in PEP 291 on that basis. Of course, PEP 291 could do with a list of 2.5 and 2.6 specific features first... Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From benjamin at python.org Sat Feb 13 03:35:44 2010 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 12 Feb 2010 20:35:44 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B760D23.6000407@gmail.com> References: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B760610.8010100@gmail.com> <1afaf6161002121802x749661a0w9c23fbcd5634cc15@mail.gmail.com> <4B760D23.6000407@gmail.com> Message-ID: <1afaf6161002121835g467ad6abp9e77f103b910222e@mail.gmail.com> 2010/2/12 Nick Coghlan : > Benjamin Peterson wrote: >> 2010/2/12 Nick Coghlan : >>> Brett Cannon wrote: >>>> On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >>>> I vote on giving up the 2to3 sandbox. >>> One other point - is there a Python 2.6 backwards compatibility >>> restriction on 2to3 at the moment? If there isn't, should there be? >> >> I try to keep it compatible with 2.6, since we have to backport changes. > > With 2.7 just around the corner, it should probably be listed in PEP 291 > on that basis. Done. > > Of course, PEP 291 could do with a list of 2.5 and 2.6 specific features > first... I think that section is rather pointless to keep updated, since a good list can be found in the what's new documents. What people really need to do is run the unittests on all supported versions. -- Regards, Benjamin From benjamin at python.org Sat Feb 13 03:52:23 2010 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 12 Feb 2010 20:52:23 -0600 Subject: [Python-Dev] 3.1.2 Message-ID: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> It's about time for another 3.1 bug fix release. 
I propose this schedule: March 6: Release Candidate (same day as 2.7a4) March 20: 3.1.2 Final release -- Regards, Benjamin From glyph at twistedmatrix.com Sat Feb 13 05:01:41 2010 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Fri, 12 Feb 2010 23:01:41 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> Message-ID: <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> On Feb 11, 2010, at 1:11 PM, Guido van Rossum wrote: > I have skimmed this thread (hence this reply to the first rather than > the last message), but in general I am baffled by the hostility of > testing framework developers towards their users. The arguments > against class- and module-level seUp/tearDown functions seems to be > inspired by religion or ideology more than by the zen of Python. What > happened to Practicality Beats Purity? My sentiments tend to echo Jean-Paul Calderone's in this regard, but I think what he's saying bears a lot of repeating. We really screwed up this feature in Twisted and I'd like to make sure that the stdlib doesn't repeat the mistake. (Granted, we screwed it up extra bad , but I do think many of the problems we encountered are inherent.) The issue is not that we test-framework developers don't like our users, or want to protect them from themselves. It is that our users - ourselves chief among them - desire features like "I want my tests to be transparently optimized across N cores and N disks". I can understand how resistance to setUp/tearDown*Class/Module comes across as user-hostility, but I can assure you this is not the case. It's subtle and difficult to explain how incompatible with these advanced features the *apparently* straightforward semantics of setting up and tearing down classes and modules. Most questions of semantics can be resolved with a simple decision, and it's not clear how that would interfere with other features. In Twisted's implementation of setUpClass and tearDownClass, everything seemed like it worked right up until the point where it didn't. The test writer thinks that they're writing "simple" setUpClass and tearDownClass methods to optimize things, except almost by definition a setUpClass method needs to manipulate global state, shared across tests. Which means that said state starts getting confused when it is set up and torn down concurrently across multiple processes. These methods seem simple, but do they touch the filesystem? Do they touch a shared database, even a little? How do they determine a unique location to do that? Without generally available tools to allow test writers to mess with the order and execution environment of their tests, one tends to write tests that rely on these implementation and ordering accidents, which means that when such a tool does arrive, things start breaking in unpredictable ways. > The argument that a unittest framework shouldn't be "abused" for > regression tests (or integration tests, or whatever) is also bizarre > to my mind. Surely if a testing framework applies to multiple kinds of > testing that's a good thing, not something to be frowned upon? For what it's worth, I am a big fan of abusing test frameworks in generally, and pyunit specifically, to perform every possible kind of testing. In fact, I find setUpClass more hostile to *other* kinds of testing, because this convenience for simple integration tests makes more involved, performance-intensive integration tests harder to write and manage. 
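To make that failure mode concrete, here is a minimal sketch, using the class-level hooks being proposed in this thread (not an API unittest provided at the time) and purely invented fixture names. The commented-out variant hard-codes a shared location and collides under parallel runs; the version shown keeps everything under a randomly named path stored on the class.

    import os
    import shutil
    import tempfile
    import unittest

    class DatabaseFixtureTests(unittest.TestCase):
        # Fragile: a fixed, shared location breaks as soon as two workers
        # run this class at the same time.
        #   workdir = '/tmp/dbfixture'

        @classmethod
        def setUpClass(cls):
            # Parallel-friendly: a randomly named directory, remembered on
            # the class itself rather than in module-level globals.
            cls.workdir = tempfile.mkdtemp(prefix='dbfixture-')
            cls.datafile = os.path.join(cls.workdir, 'fixture.db')
            open(cls.datafile, 'wb').close()  # stands in for an expensive setup

        @classmethod
        def tearDownClass(cls):
            shutil.rmtree(cls.workdir, ignore_errors=True)

        def test_fixture_present(self):
            self.assertTrue(os.path.exists(self.datafile))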
> On the other hand, I think we should be careful to extend unittest in > a consistent way. I shuddered at earlier proposals (on python-ideas) > to name the new functions (variations of) set_up and tear_down "to > conform with PEP 8" (this would actually have violated that PEP, which > explicitly prefers local consistency over global consistency). This is a very important point. But, it's important not only to extend unittest itself in a consistent way, but to clearly describe the points of extensibility so that third-party things can continue to extend unittest themselves, and cooperate with each other using some defined protocol so that you can combine those tools. I tried to write about this problem a while ago - the current extensibility API (which is mostly just composing "run()") is sub-optimal in many ways, but it's important not to break it. And setUpClass does inevitably start to break those integration points down, because it implies certain things, like the fact that classes and modules are suites, or are otherwise grouped together in test ordering. This makes it difficult to create custom suites, to do custom ordering, custom per-test behavior (like metrics collection before and after run(), or gc.collect() after each test, or looking for newly-opened-but-not-cleaned-up external resources like file descriptors after each tearDown). Again: these are all concrete features that *users* of test frameworks want, not just idle architectural fantasy of us framework hackers. I haven't had the opportunity to read the entire thread, so I don't know if this discussion has come to fruition, but I can see that some attention has been paid to these difficulties. I have no problem with setUpClass or tearDownClass hooks *per se*, as long as they can be implemented in a way which explicitly preserves extensibility. > Regarding the objection that setUp/tearDown for classes would run into > issues with subclassing, I propose to let the standard semantics of > subclasses do their job. Thus a subclass that overrides setUpClass or > tearDownClass is responsible for calling the base class's setUpClass > and tearDownClass (and the TestCase base class should provide empty > versions of both). The testrunner should only call setUpClass and > tearDownClass for classes that have at least one test that is > selected. > > Yes, this would mean that if a base class has a test method and a > setUpClass (and tearDownClass) method and a subclass also has a test > method and overrides setUpClass (and/or tearDown), the base class's > setUpClass and tearDown may be called twice. What's the big deal? If > setUpClass and tearDownClass are written properly they should support > this. Just to be clear: by "written properly" you mean, written as classmethods, storing their data only on 'cls', right? > If this behavior is undesired in a particular case, maybe what > was really meant were module-level setUp and tearDown, or the class > structure should be rearranged. There's also a bit of an open question here for me: if subclassing is allowed, and module-level setup and teardown are allowed, then what if I define a test class with test methods in module 'a', as well as module setup and teardown, then subclass it in 'b' which *doesn't* have setup and teardown... is the subclass in 'b' always assumed to depend on the module-level setup in 'a'? Is there a way that it could be made not to if it weren't necessary? What if it stubs out all of its test methods? 
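For the "custom per-test behaviour" point, a small sketch of the composing-run() style of extension referred to above; this is illustrative only, and a real tool would record the timing somewhere more useful than stdout.

    import gc
    import time
    import unittest

    class InstrumentedTestCase(unittest.TestCase):
        # Extends unittest purely by wrapping run(), the existing extension
        # point, rather than by hooking class- or module-level setup.
        def run(self, result=None):
            start = time.time()
            try:
                return unittest.TestCase.run(self, result)
            finally:
                gc.collect()  # flush per-test garbage before the next test
                print '%s took %.3fs' % (self.id(), time.time() - start)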
In the case of classes you've got the 'cls' variable to describe the dependency and the shared state, but in the case of modules, inheritance doesn't create an additional module object to hold on to. testresources very neatly sidesteps this problem by just providing an API to say "this test case depends on that test resource", without relying on the grouping of tests within classes, modules, or packages. Of course you can just define a class-level or module-level resource and then have all your tests depend on it, which gives you the behavior of setUpClass and setUpModule in a more general way. -glyph From fijall at gmail.com Sat Feb 13 06:12:48 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 13 Feb 2010 00:12:48 -0500 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <3c8293b61002121604i204cf579nafa26e53b75e1cc@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61001201756g26212a44m9abe7f5b471e6bb4@mail.gmail.com> <3c8293b61001210932i9c5d31i4bc71b7d9e0611f2@mail.gmail.com> <3c8293b61001211214m4b24c3b9x3738cf9e5375b0f8@mail.gmail.com> <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> <693bc9ab1002110639r5ca143b1t281fe0135effc493@mail.gmail.com> <3c8293b61002121604i204cf579nafa26e53b75e1cc@mail.gmail.com> Message-ID: <693bc9ab1002122112u562d9dfaobf36e79bf71f194c@mail.gmail.com> On Fri, Feb 12, 2010 at 7:04 PM, Collin Winter wrote: > Hey Maciej, > > On Thu, Feb 11, 2010 at 6:39 AM, Maciej Fijalkowski wrote: >> Snippet from: >> >> http://codereview.appspot.com/186247/diff2/5014:8003/7002 >> >> *PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is >> slower than Unladen Swallow on non-numerical workloads. PyPy only >> supports 32-bit x86 code generation. It has poor support for CPython >> extension modules, making migration for large applications >> prohibitively expensive. >> >> That part at the very least has some sort of personal opinion >> "prohibitively", > > Of course; difficulty is always in the eye of the person doing the > work. Simply put, PyPy is not a drop-in replacement for CPython: there > is no embedding API, much less the same one exported by CPython; > important libraries, such as MySQLdb and pycrypto, do not build > against PyPy; PyPy is 32-bit x86 only. I like this wording far more. It's at the very least far more precise. Those examples are fair enough (except the fact that PyPy is not 32bit x86 only, the JIT is). > All of these problems can be overcome with enough time/effort/money, > but I think you'd agree that, if all I'm trying to do is speed up my > application, adding a new x86-64 backend or implementing support for > CPython extension modules is certainly north of "prohibitively > expensive". I stand by that wording. I'm willing to enumerate all of > PyPy's deficiencies in this regard in the PEP, rather than the current > vaguer wording, if you'd prefer. > >> while the other part is not completely true "slower >> than US on non-numerical workloads". Fancy providing a proof for that? >> I'm well aware that there are benchmarks on which PyPy is slower than >> CPython or US, however, I would like a bit more weighted opinion in >> the PEP. > > Based on the benchmarks you're running at > http://codespeak.net:8099/plotsummary.html, PyPy is slower than > CPython on many non-numerical workloads, which Unladen Swallow is > faster than CPython at. 
Looking at the benchmarks there at which PyPy > is faster than CPython, they are primarily numerical; this was the > basis for the wording in the PEP. > > My own recent benchmarking of PyPy and Unladen Swallow (both trunk; > PyPy wouldn't run some benchmarks): > > | Benchmark ? ?| PyPy ?| Unladen | Change ? ? ? ? ?| > +==============+=======+=========+=================+ > | ai ? ? ? ? ? | 0.61 ?| 0.51 ? ?| ?1.1921x faster | > | django ? ? ? | 0.68 ?| 0.8 ? ? | ?1.1898x slower | > | float ? ? ? ?| 0.03 ?| 0.07 ? ?| ?2.7108x slower | > | html5lib ? ? | 20.04 | 16.42 ? | ?1.2201x faster | > | pickle ? ? ? | 17.7 ?| 1.09 ? ?| 16.2465x faster | > | rietveld ? ? | 1.09 ?| 0.59 ? ?| ?1.8597x faster | > | slowpickle ? | 0.43 ?| 0.56 ? ?| ?1.2956x slower | > | slowspitfire | 2.5 ? | 0.63 ? ?| ?3.9853x faster | > | slowunpickle | 0.26 ?| 0.27 ? ?| ?1.0585x slower | > | unpickle ? ? | 28.45 | 0.78 ? ?| 36.6427x faster | > > I'm happy to change the wording to "slower than US on some workloads". > > Thanks, > Collin Winter > "slower than US on some workloads" is true, while not really telling much to a potential reader. For any X and Y implementing the same language "X is faster than Y on some workloads" is usually true. To be precise you would need to include the above table in the PEP, which is probably a bit too much, given that PEP is not about PyPy at all. I'm fine with any wording that is at least correct. Cheers, fijal From martin at v.loewis.de Sat Feb 13 07:31:41 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 13 Feb 2010 07:31:41 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B76059E.3060101@gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> Message-ID: <4B76474D.2050407@v.loewis.de> >> On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >>> IMO, it is realistic to predict that this will not actually happen. If >>> we can agree to give up the 2to3 sandbox, we should incorporate >>> find_pattern into the tree, and perhaps test.py as well. >> I vote on giving up the 2to3 sandbox. > > Besides, if we're using hg, it should make it much easier for someone > else to branch that part of the stdlib Actually - no: hg doesn't support branching of parts of a repository. You would need to branch all of Python. Then, there wouldn't be a straight-forward place to setup.py and any other top-level files (although you could hack them into Lib, and work with a distutils manifest). 
Regards, Martin From martin at v.loewis.de Sat Feb 13 07:38:15 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 13 Feb 2010 07:38:15 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> Message-ID: <4B7648D7.9060301@v.loewis.de> > I personally like 2to3 in a separate repo because it fits well with my > view that 2to3 is an extra application that happens to also be > distributed with python. But isn't that just a theoretical property? I know that's how 2to3 started, but who, other than the committers, actually accesses the 2to3 repo? I would be much more supportive of that view if there had been a single release of 2to3 at any point in time (e.g. to PyPI). Alas, partially due to me creating lib2to3, you actually couldn't release it as an extra application and run it on 2.6 or 2.7, as the builtin lib2to3 would take precedence over the lib2to3 bundled with the application. Regards, Martin From ncoghlan at gmail.com Sat Feb 13 08:55:36 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Feb 2010 17:55:36 +1000 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002121835g467ad6abp9e77f103b910222e@mail.gmail.com> References: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B760610.8010100@gmail.com> <1afaf6161002121802x749661a0w9c23fbcd5634cc15@mail.gmail.com> <4B760D23.6000407@gmail.com> <1afaf6161002121835g467ad6abp9e77f103b910222e@mail.gmail.com> Message-ID: <4B765AF8.7020706@gmail.com> Benjamin Peterson wrote: > 2010/2/12 Nick Coghlan : >> Of course, PEP 291 could do with a list of 2.5 and 2.6 specific features >> first... > > I think that section is rather pointless to keep updated, since a good > list can be found in the what's new documents. What people really need > to do is run the unittests on all supported versions. It's handy as a list of big ticket items to avoid, especially those that can significantly affect the way you structure code. Agreed that the main enforcement should be to run those tests on the relevant older versions. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From robertc at robertcollins.net Sat Feb 13 10:28:19 2010 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 13 Feb 2010 20:28:19 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B75FAB3.9010009@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <4B75FAB3.9010009@voidspace.org.uk> Message-ID: <1266053299.3458.273.camel@lifeless-64> On Sat, 2010-02-13 at 01:04 +0000, Michael Foord wrote: > > However from this example I *cannot* guess whether those resources are > > set up and torn down per test or per test class. 
> This particular example is the equivalent of setUpClass - so by > declaring the resource as a class attribute it will created before the > first test for the class is run and disposed of after the last test for > the class. > > You could *also* create a single resource and share it between several > test classes, or even across classes in several modules, and have it > created and disposed of at the right point. I've copied Rob Collins in > on this email in case I've misunderstood. Yes, precisely. > > Also the notation > > > > resources = [('workdir', MyTempDir())] > > > > looks pretty ugly -- if 'workdir' ends up being an instance attribute, > > why not make it a dict instead of a list of tuples? Or even better, a > > could each resource become a class variable? I have two key 'todos' planned in the notation for testresources: - I want to make a decorator to let individual tests (rather than /just/ class scope as currently happens) get resources easily. Something like @resource(workdir=MyTempDir()) def test_foo(self): pass Secondly, I want to make the class level 'resources' list into a dict. It was a list initially, for misguided reasons that no longer apply, and it would be much nicer and clearer as a dict. > > > I guess we could introspect the class for every attribute that is a > resource, but I prefer some way of explicitly declaring which resources > a TestCase is using. I could see doing the below as an alternative: @resource(workdir=MyTempDir()) class TestFoo(TestCase): ... I'm not personally very keen on inspecting everything in self.__dict__, I suspect it would tickle bugs in other unittest extensions. However I'm not really /against/ it - I don't think it will result in bad test behaviour or isolation issues. So if users would like it, lets do it. -Rob -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From robertc at robertcollins.net Sat Feb 13 10:36:37 2010 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 13 Feb 2010 20:36:37 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <20100212202051.26099.568349857.divmod.xquotient.1225@localhost.localdomain> Message-ID: <1266053797.3458.282.camel@lifeless-64> On Fri, 2010-02-12 at 12:27 -0800, Guido van Rossum wrote: > On Fri, Feb 12, 2010 at 12:20 PM, wrote: > > The idea is that you're declaring what the tests need in order to work. > > You're not explicitly defining the order in which things are set up and torn > > down. That is left up to another part of the library to determine. > > > > One such other part, OptimisingTestSuite, will look at *all* of your tests > > and find an order which involves the least redundant effort. > > So is there a way to associate a "cost" with a resource? I may have > one resource which is simply a /tmp subdirectory (very cheap) and > another that requires starting a database service (very expensive). From the pydoc: :ivar setUpCost: The relative cost to construct a resource of this type. One good approach is to set this to the number of seconds it normally takes to set up the resource. :ivar tearDownCost: The relative cost to tear down a resource of this type. One good approach is to set this to the number of seconds it normally takes to tear down the resource. 
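A sketch of how those cost hints might be declared, reusing the make/clean protocol shown earlier in the thread. The resource classes and the start_test_database helper are invented for the example, and the numbers are only relative weights.

    import shutil
    import tempfile
    import testresources

    class TempDirResource(testresources.TestResource):
        setUpCost = 1      # cheap: roughly a second to create
        tearDownCost = 1

        def make(self, dependency_resources):
            return tempfile.mkdtemp()

        def clean(self, resource):
            shutil.rmtree(resource)

    class DatabaseResource(testresources.TestResource):
        setUpCost = 30     # expensive: starting a throwaway database server
        tearDownCost = 5

        def make(self, dependency_resources):
            return start_test_database()   # hypothetical helper

        def clean(self, resource):
            resource.shutdown()

An OptimisingTestSuite built over tests declaring these resources would then prefer orderings that set up the database once and keep it alive across all the tests that need it, while treating the temp directory as cheap to recreate.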
> > You might have something else that breaks up the test run across multiple > > processes and uses the resource declarations to run all tests requiring one > > thing in one process and all tests requiring another thing somewhere else. > > I admire the approach, though I am skeptical. We have a thing to split > up tests at Google which looks at past running times for tests to make > an informed opinion. Have you thought of that? I think thats a great way to do it; in fact doing the same thing to assign setup and teardown costs would be lovely; I should write some glue to do that automatically for people. -Rob -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From chris at simplistix.co.uk Fri Feb 12 19:26:04 2010 From: chris at simplistix.co.uk (Chris Withers) Date: Fri, 12 Feb 2010 18:26:04 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <20100211155632.D54EE1FCC71@kimball.webabinitio.net> References: <4B71908A.3080306@voidspace.org.uk> <4B73F883.6050506@gmail.com> <4B73FB01.7040403@voidspace.org.uk> <20100211155632.D54EE1FCC71@kimball.webabinitio.net> Message-ID: <4B759D3C.3030304@simplistix.co.uk> R. David Murray wrote: > On Thu, 11 Feb 2010 12:41:37 +0000, Michael Foord wrote: >> On 11/02/2010 12:30, Nick Coghlan wrote: >>> The test framework might promise to do the following for each test: >>> >>> with get_module_cm(test_instance): # However identified >>> with get_class_cm(test_instance): # However identified >>> with test_instance: # ** >>> test_instance.test_method() >> Well that is *effectively* how they would work (the semantics) but I >> don't see how that would fit with the design of unittest to make them >> work *specifically* like that - especially not if we are to remain >> compatible with existing unittest extensions. >> >> If you can come up with a concrete proposal of how to do this then I'm >> happy to listen. I'm not saying it is impossible, but it isn't >> immediately obvious. I don't see any advantage of just using context >> managers for the sake of it and definitely not at the cost of backwards >> incompatibility. > > I suspect that Nick is saying that it is worth doing for the sake of it, > as being more "Pythonic" in some sense. > > That is, it seems to me that in a modern Python writing something like: > > > @contextlib.contextmanager > def foo_cm(testcase): > testcase.bar = some_costly_setup_function() > yield > testcase.bar.close() > > @contextlib.contextmanager > def foo_test_cm(testcase): > testcase.baz = Mock(testcase.bar) > yield > > > @unittest.case_context(foo_cm) > @unittest.test_context(foo_test_cm) > class TestFoo(unittest.TestCase): > > def test_bar: > foo = Foo(self.baz, testing=True) > self.assertTrue("Context managers are cool") This reminds me of the decorators I have available in testfixtures: http://packages.python.org/testfixtures/mocking.html http://packages.python.org/testfixtures/logging.html http://packages.python.org/testfixtures/files.html (the last of which is a lot prettier in svn, not had a chance to release :-S) Anyway, these I've ended up making available as context managers as well as decorators... 
But yes, something similar for sharing state between tests and/or doing setup for each test would be nice :-) cheers, Chris -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From solipsis at pitrou.net Sat Feb 13 11:42:41 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 13 Feb 2010 10:42:41 +0000 (UTC) Subject: [Python-Dev] setUpClass and setUpModule in unittest References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <4B75FAB3.9010009@voidspace.org.uk> <1266053299.3458.273.camel@lifeless-64> Message-ID: Robert Collins robertcollins.net> writes: > > I'm not personally very keen on inspecting everything in self.__dict__, > I suspect it would tickle bugs in other unittest extensions. However I'm > not really /against/ it - I don't think it will result in bad test > behaviour or isolation issues. So if users would like it, lets do it. Why not take all resource_XXX attributes? By the way, how does a given test access the allocated resource? Say, the DB connection. Does it become an attribute of the test case instance? Thank you Antoine. From robertc at robertcollins.net Sat Feb 13 12:00:06 2010 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 13 Feb 2010 22:00:06 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <4B75FAB3.9010009@voidspace.org.uk> <1266053299.3458.273.camel@lifeless-64> Message-ID: <1266058806.3458.290.camel@lifeless-64> On Sat, 2010-02-13 at 10:42 +0000, Antoine Pitrou wrote: > Robert Collins robertcollins.net> writes: > > > > I'm not personally very keen on inspecting everything in self.__dict__, > > I suspect it would tickle bugs in other unittest extensions. However I'm > > not really /against/ it - I don't think it will result in bad test > > behaviour or isolation issues. So if users would like it, lets do it. > > Why not take all resource_XXX attributes? Sure, though if we're just introspecting I don't see much risk difference between resource_XXX and all XXX. As I say above I'm happy to do it if folk think it will be nice. > By the way, how does a given test access the allocated resource? Say, the DB connection. Does it become an attribute of the test case instance? yes. Given class Foo(TestCase): resources = {'thing': MyResourceManager()} def test_foo(self): self.thing self.thing will access the resource returned by MyResourceManager.make() -Rob -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From martin at v.loewis.de Sat Feb 13 12:53:12 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 13 Feb 2010 12:53:12 +0100 Subject: [Python-Dev] PEP 385: Auditing Message-ID: <4B7692A8.9090801@v.loewis.de> I recently set up a Mercurial hosting solution myself, and noticed that there is no audit trail of who had been writing to the "master" clone. There are commit messages, but they could be fake (even misleading to a different committer). The threat I'm concerned about is that of a stolen SSH key. If that is abused to push suspicious changes into the repository, it is really difficult to find out whose key had been used. The solution I came up with is to define an "incoming" hook on the repository which will log the SSH user along with the pack ID of the pack being pushed.
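Roughly what such a hook can look like, as a sketch only: the HG_PUSHER environment variable and the log location are assumptions about the hosting setup, since Mercurial itself does not know who owns the SSH key -- the forced command in authorized_keys has to export that.

    # In the shared repository's .hg/hgrc:
    #
    #   [hooks]
    #   changegroup.audit = python:audithook.log_pusher

    import os
    import time

    def log_pusher(ui, repo, hooktype, node=None, **kwargs):
        # 'node' is the first changeset of the incoming changegroup, which
        # is as close to a "pack ID" as Mercurial exposes to hooks.
        pusher = os.environ.get('HG_PUSHER', 'unknown')  # set by the SSH wrapper (assumed)
        stamp = time.strftime('%Y-%m-%d %H:%M:%S')
        logfile = open(os.path.join(repo.root, '.hg', 'pushlog.txt'), 'a')
        try:
            logfile.write('%s %s %s\n' % (stamp, pusher, node))
        finally:
            logfile.close()
        return False  # a false return value means the hook succeeded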
I'd like to propose that a similar hook is installed on repositories hosted at hg.python.org (unless Mercurial offers something better already). Whether or not this log should be publicly visible can be debated; IMO it would be sufficient if only sysadmins can inspect it in case of doubt. Alterntively, the email notification sent to python-checkins could could report who the pusher was. Dirkjan: if you agree to such a strategy, please mention that in the PEP. Regards, Martin From solipsis at pitrou.net Sat Feb 13 13:19:18 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 13 Feb 2010 12:19:18 +0000 (UTC) Subject: [Python-Dev] PEP 385: Auditing References: <4B7692A8.9090801@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > Alterntively, the email notification sent to python-checkins could could > report who the pusher was. This sounds reasonable, assuming it doesn't disclose any private information. Regards Antoine. From pachi at rvburke.com Sat Feb 13 14:47:23 2010 From: pachi at rvburke.com (Rafael Villar Burke) Date: Sat, 13 Feb 2010 13:47:23 +0000 (UTC) Subject: [Python-Dev] PEP 385: Auditing References: <4B7692A8.9090801@v.loewis.de> Message-ID: Antoine Pitrou pitrou.net> writes: > > Martin v. L?wis v.loewis.de> writes: > > > > Alterntively, the email notification sent to python-checkins could could > > report who the pusher was. > > This sounds reasonable, assuming it doesn't disclose any private information. There are already made solutions for that, as the pushlog hooks used by Mozilla, OpenJDK and others. Mozilla's pushlog can be seen here: http://hg.mozilla.org/mozilla-central/pushloghtml And its code is avaliable here: http://hg.mozilla.org/users/bsmedberg_mozilla.com/hgpoller/file/tip/pushlog-feed.py Dirkjan is its author, so I suppose he was already thinking about having a similar hook for Python repos. Regards, Rafael From martin at v.loewis.de Sat Feb 13 15:25:58 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 13 Feb 2010 15:25:58 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: References: <4B7692A8.9090801@v.loewis.de> Message-ID: <4B76B676.8050109@v.loewis.de> > Mozilla's pushlog can be seen here: > > http://hg.mozilla.org/mozilla-central/pushloghtml > > And its code is avaliable here: > http://hg.mozilla.org/users/bsmedberg_mozilla.com/hgpoller/file/tip/pushlog-feed.py > > Dirkjan is its author, so I suppose he was already thinking about having a > similar hook for Python repos. This seems to just be the code that generates the feed, out of a database pushlog2.db that somehow must be created. So where is the code to actually fill that database? Regards, Martin From pachi at rvburke.com Sat Feb 13 16:03:57 2010 From: pachi at rvburke.com (Rafael Villar Burke (Pachi)) Date: Sat, 13 Feb 2010 16:03:57 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: <4B76B676.8050109@v.loewis.de> References: <4B7692A8.9090801@v.loewis.de> <4B76B676.8050109@v.loewis.de> Message-ID: <4B76BF5D.4000204@rvburke.com> On 13/02/2010 15:25, "Martin v. L?wis" wrote: >> Mozilla's pushlog can be seen here: >> >> http://hg.mozilla.org/mozilla-central/pushloghtml >> >> And its code is avaliable here: >> http://hg.mozilla.org/users/bsmedberg_mozilla.com/hgpoller/file/tip/pushlog-feed.py >> >> Dirkjan is its author, so I suppose he was already thinking about having a >> similar hook for Python repos. >> > This seems to just be the code that generates the feed, out of a > database pushlog2.db that somehow must be created. 
So where is the code > to actually fill that database? > There's some more content here: http://hg.mozilla.org/users/bsmedberg_mozilla.com/hgpoller/file/tip But I don't use it myself, just knew about its existance. Surely Dirkjan can make all the pieces fit nicely :). Rafael From pachi at rvburke.com Sat Feb 13 16:09:43 2010 From: pachi at rvburke.com (Rafael Villar Burke (Pachi)) Date: Sat, 13 Feb 2010 16:09:43 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: <4B76BF5D.4000204@rvburke.com> References: <4B7692A8.9090801@v.loewis.de> <4B76B676.8050109@v.loewis.de> <4B76BF5D.4000204@rvburke.com> Message-ID: <4B76C0B7.6010508@rvburke.com> On 13/02/2010 16:03, Rafael Villar Burke (Pachi) wrote: > There's some more content here: > http://hg.mozilla.org/users/bsmedberg_mozilla.com/hgpoller/file/tip > But I don't use it myself, just knew about its existance. Surely > Dirkjan can make all the pieces fit nicely :). The hook code looks like it's here: http://hg.mozilla.org/users/bsmedberg_mozilla.com/hghooks/file/tip The previous repository link is the hgwebdir integration code. Regards, Rafael From barry at python.org Sat Feb 13 17:14:26 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 13 Feb 2010 11:14:26 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B76474D.2050407@v.loewis.de> References: <1afaf6161002071351p463a766fl2fc1348a047a2e3@mail.gmail.com> <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <4B76474D.2050407@v.loewis.de> Message-ID: On Feb 13, 2010, at 1:31 AM, Martin v. L?wis wrote: >>> On Fri, Feb 12, 2010 at 11:17, "Martin v. L?wis" wrote: >>>> IMO, it is realistic to predict that this will not actually happen. If >>>> we can agree to give up the 2to3 sandbox, we should incorporate >>>> find_pattern into the tree, and perhaps test.py as well. >>> I vote on giving up the 2to3 sandbox. >> >> Besides, if we're using hg, it should make it much easier for someone >> else to branch that part of the stdlib > > Actually - no: hg doesn't support branching of parts of a repository. > You would need to branch all of Python. Then, there wouldn't be a > straight-forward place to setup.py and any other top-level files > (although you could hack them into Lib, and work with a distutils manifest). Does hg support an equivalent of 'bzr split'? % bzr split --help Purpose: Split a subdirectory of a tree into a separate tree. Usage: bzr split TREE Options: --usage Show usage message and options. -v, --verbose Display more information. -q, --quiet Only display errors and warnings. -h, --help Show help message. Description: This command will produce a target tree in a format that supports rich roots, like 'rich-root' or 'rich-root-pack'. These formats cannot be converted into earlier formats like 'dirstate-tags'. The TREE argument should be a subdirectory of a working tree. That subdirectory will be converted into an independent tree, with its own branch. Commits in the top-level tree will not apply to the new subtree. 
See also: join -Barry From benjamin at python.org Sat Feb 13 17:48:02 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 13 Feb 2010 10:48:02 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B7648D7.9060301@v.loewis.de> References: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> Message-ID: <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> 2010/2/13 "Martin v. L?wis" : >> I personally like 2to3 in a separate repo because it fits well with my >> view that 2to3 is an extra application that happens to also be >> distributed with python. > > But isn't that just a theoretical property? I know that's how 2to3 > started, but who, other than the committers, actually accesses the 2to3 > repo? It's used in 3to2 for example. > > I would be much more supportive of that view if there had been a single > release of 2to3 at any point in time (e.g. to PyPI). Alas, partially due > to me creating lib2to3, you actually couldn't release it as an extra > application and run it on 2.6 or 2.7, as the builtin lib2to3 would take > precedence over the lib2to3 bundled with the application. It could be distributed under another name or provide a way to override the stdlib version. -- Regards, Benjamin From martin at v.loewis.de Sat Feb 13 18:14:31 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 13 Feb 2010 18:14:31 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> References: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> Message-ID: <4B76DDF7.7040306@v.loewis.de> >> But isn't that just a theoretical property? I know that's how 2to3 >> started, but who, other than the committers, actually accesses the 2to3 >> repo? > > It's used in 3to2 for example. That doesn't really seem to be the case. AFAICT, 3to2 is a hg repository, with no inherent connection to the 2to3 svn sandbox. It does use lib2to3, but that could come either from an installed Python, from a trunk/3k checkout, or from the sandbox. Correct? So if the 2.x trunk became the official master for (lib)2to3, nothing would really change for 3to3, right? (except for the comment in the readme that you should get 2to3 from the sandbox if the trunk copy doesn't work; this comment would become obsolete as changes *would* propagate immediately into the Python trunk). >> I would be much more supportive of that view if there had been a single >> release of 2to3 at any point in time (e.g. to PyPI). Alas, partially due >> to me creating lib2to3, you actually couldn't release it as an extra >> application and run it on 2.6 or 2.7, as the builtin lib2to3 would take >> precedence over the lib2to3 bundled with the application. > > It could be distributed under another name or provide a way to > override the stdlib version. Sure. However, I'm still claiming that this is theoretical. 
The only person who has shown a slight interest in having this as a separate project (since Collin Winter left) is you, and so far, you haven't made any efforts to produce a stand-alone release. I don't blame you at all for that, in fact, I think Python is better off with the status quo (i.e. changes to 2to3 get liberally released even with bug fix releases, basically in an exemption from the "no new features" policy - similar to -3 warnings). I still think that the best approach for projects to use 2to3 is to run 2to3 at install time from a single-source release. For that, projects will have to adjust to whatever bugs certain 2to3 releases have, rather than requiring users to download a newer version of 2to3 that fixes them. For this use case, a tightly-integrated lib2to3 (with that name and sole purpose) is the best thing. Regards, Martin From guido at python.org Sat Feb 13 18:46:26 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 13 Feb 2010 09:46:26 -0800 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> Message-ID: On Fri, Feb 12, 2010 at 8:01 PM, Glyph Lefkowitz wrote: > On Feb 11, 2010, at 1:11 PM, Guido van Rossum wrote: > >> I have skimmed this thread (hence this reply to the first rather than >> the last message), but in general I am baffled by the hostility of >> testing framework developers towards their users. The arguments >> against class- and module-level seUp/tearDown functions seems to be >> inspired by religion or ideology more than by the zen of Python. What >> happened to Practicality Beats Purity? > > My sentiments tend to echo Jean-Paul Calderone's in this regard, but I think what he's saying bears a lot of repeating. ?We really screwed up this feature in Twisted and I'd like to make sure that the stdlib doesn't repeat the mistake. ?(Granted, we screwed it up extra bad , but I do think many of the problems we encountered are inherent.) Especially since you screwed up extra bad, the danger exists that you're overreacting. > The issue is not that we test-framework developers don't like our users, or want to protect them from themselves. ?It is that our users - ourselves chief among them - desire features like "I want my tests to be transparently optimized across N cores and N disks". Yeah, users ask for impossible features all the time. ;-) Seriously, we do this at Google on a massive scale, for many languages including Python. It's works well but takes getting used to: while time is saved waiting for tests, some time is wasted debugging tests that run fine on the developer's workstation but not in the test cluster. We've developed quite a few practices around this, which include ways to override and control the test distribution, as well as reports showing the historical "flakiness" for each tests. > I can understand how resistance to setUp/tearDown*Class/Module comes across as user-hostility, but I can assure you this is not the case. ?It's subtle and difficult to explain how incompatible with these advanced features the *apparently* straightforward semantics of setting up and tearing down classes and modules. ?Most questions of semantics can be resolved with a simple decision, and it's not clear how that would interfere with other features. 
> > In Twisted's implementation of setUpClass and tearDownClass, everything seemed like it worked right up until the point where it didn't. ?The test writer thinks that they're writing "simple" setUpClass and tearDownClass methods to optimize things, except almost by definition a setUpClass method needs to manipulate global state, shared across tests. ?Which means that said state starts getting confused when it is set up and torn down concurrently across multiple processes. ?These methods seem simple, but do they touch the filesystem? ?Do they touch a shared database, even a little? ?How do they determine a unique location to do that? ?Without generally available tools to allow test writers to mess with the order and execution environment of their tests, one tends to write tests that rely on these implementation and ordering accidents, which means that when such a tool does arrive, things start breaking in unpredictable ways. Been there, done that. The guideline should be that setUpClass and friends save time but should still isolate themselves from other copies that might run concurrently. E.g. if you have to copy a ton of stuff into the filesystem, you should still put it in a temp dir with a randomized name, and store that name as a class variable. When there's a global resource (such as a database) that really can't be shared, well, you have to come up with a way to lock it -- that's probably necessary even if your tests ran completely serialized, unless there's only one developer and she never multitasks. :-) >> The argument that a unittest framework shouldn't be "abused" for >> regression tests (or integration tests, or whatever) is also bizarre >> to my mind. Surely if a testing framework applies to multiple kinds of >> testing that's a good thing, not something to be frowned upon? > > For what it's worth, I am a big fan of abusing test frameworks in generally, and pyunit specifically, to perform every possible kind of testing. ?In fact, I find setUpClass more hostile to *other* kinds of testing, because this convenience for simple integration tests makes more involved, performance-intensive integration tests harder to write and manage. That sounds odd, as if the presence of this convenience would prohibit you from also implement other features. >> On the other hand, I think we should be careful to extend unittest in >> a consistent way. I shuddered at earlier proposals (on python-ideas) >> to name the new functions (variations of) set_up and tear_down "to >> conform with PEP 8" (this would actually have violated that PEP, which >> explicitly prefers local consistency over global consistency). > > This is a very important point. ?But, it's important not only to extend unittest itself in a consistent way, but to clearly describe the points of extensibility so that third-party things can continue to extend unittest themselves, and cooperate with each other using some defined protocol so that you can combine those tools. Yeah, and I suspect that the original pyunit (now unittest) wasn't always clear on this point. > I tried to write about this problem a while ago - the current extensibility API (which is mostly just composing "run()") is sub-optimal in many ways, but it's important not to break it. I expect that *eventually* something will come along that is so much better than unittest that, once matured, we'll want it in the stdlib. (Or, alternatively, eventually stdlib inclusion won't be such a big deal any more since distros mix and match. 
But then inclusion in a distro would become every package developer's goal -- and then the circle would be round, since distros hardly move faster than Python releases...) But in the mean time I believe evolving unittest is the right thing to do. Adding new methods is relatively easy. Adding whole new paradigms (like testresources) is a lot harder, eventually in the light of the latter's relative immaturity. > And setUpClass does inevitably start to break those integration points down, because it implies certain things, like the fact that classes and modules are suites, or are otherwise grouped together in test ordering. I expect that is what the majority of unittest users already believe. > This makes it difficult to create custom suites, to do custom ordering, custom per-test behavior (like metrics collection before and after run(), or gc.collect() after each test, or looking for newly-opened-but-not-cleaned-up external resources like file descriptors after each tearDown). True, the list never ends. > Again: these are all concrete features that *users* of test frameworks want, not just idle architectural fantasy of us framework hackers. I expect that most bleeding edge users will end up writing a custom framework, or at least work with a bleeding edge framework that change change rapidly to meet their urgent needs. > I haven't had the opportunity to read the entire thread, so I don't know if this discussion has come to fruition, but I can see that some attention has been paid to these difficulties. ?I have no problem with setUpClass or tearDownClass hooks *per se*, as long as they can be implemented in a way which explicitly preserves extensibility. That's good to know. I have no doubt they (and setUpModule c.s.) can be done in a clean, extensible way. And that doesn't mean we couldn't also add other features -- after all, not all users have the same needs. (If you read the Zen of Python, you'll see that TOOWTDI has several qualifications. :-) >> Regarding the objection that setUp/tearDown for classes would run into >> issues with subclassing, I propose to let the standard semantics of >> subclasses do their job. Thus a subclass that overrides setUpClass or >> tearDownClass is responsible for calling the base class's setUpClass >> and tearDownClass (and the TestCase base class should provide empty >> versions of both). The testrunner should only call setUpClass and >> tearDownClass for classes that have at least one test that is >> selected. >> >> Yes, this would mean that if a base class has a test method and a >> setUpClass (and tearDownClass) method and a subclass also has a test >> method and overrides setUpClass (and/or tearDown), the base class's >> setUpClass and tearDown may be called twice. What's the big deal? If >> setUpClass and tearDownClass are written properly they should support >> this. > > Just to be clear: by "written properly" you mean, written as classmethods, storing their data only on 'cls', right? Yes. And avoiding referencing unique global resources (both within and outside the current process). >> If this behavior is undesired in a particular case, maybe what >> was really meant were module-level setUp and tearDown, or the class >> structure should be rearranged. > > There's also a bit of an open question here for me: if subclassing is allowed, and module-level setup and teardown are allowed, then what if I define a test class with test methods in module 'a', as well as module setup and teardown, then subclass it in 'b' which *doesn't* have setup and teardown... 
is the subclass in 'b' always assumed to depend on the module-level setup in 'a'? You shouldn't be doing that kind of thing, but for definiteness, the answer is "no". If you use class setup/teardown instead you can control this via inheritance. At the module level, if you really want to do this, b's module setup would have to explicitly call a's module setup. > Is there a way that it could be made not to if it weren't necessary? What if it stubs out all of its test methods? ?In the case of classes you've got the 'cls' variable to describe the dependency and the shared state, but in the case of modules, inheritance doesn't create an additional module object to hold on to. It should be "no" so that you can explicitly code up "yes" if you want to. The other way around would be much messier, as you describe. > testresources very neatly sidesteps this problem by just providing an API to say "this test case depends on that test resource", without relying on the grouping of tests within classes, modules, or packages. ?Of course you can just define a class-level or module-level resource and then have all your tests depend on it, which gives you the behavior of setUpClass and setUpModule in a more general way. I wish it was always a matter of "resources". I've seen use cases for module-level setup that were much messier than that (e.g. fixing import paths). I expect it will be a while before the testresources design has been shaken out sufficiently for it to be included in the stdlib. -- --Guido van Rossum (python.org/~guido) From dirkjan at ochtman.nl Sat Feb 13 18:49:52 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sat, 13 Feb 2010 18:49:52 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002081711q6fe928eclfa3daf26d54d0d6b@mail.gmail.com> <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <4B76474D.2050407@v.loewis.de> Message-ID: On Sat, Feb 13, 2010 at 17:14, Barry Warsaw wrote: > Does hg support an equivalent of 'bzr split'? > > % bzr split --help > Purpose: Split a subdirectory of a tree into a separate tree. > Usage: ? bzr split TREE > > Options: > ?--usage ? ? ? ?Show usage message and options. > ?-v, --verbose ?Display more information. > ?-q, --quiet ? ?Only display errors and warnings. > ?-h, --help ? ? Show help message. > > Description: > ?This command will produce a target tree in a format that supports > ?rich roots, like 'rich-root' or 'rich-root-pack'. ?These formats cannot be > ?converted into earlier formats like 'dirstate-tags'. > > ?The TREE argument should be a subdirectory of a working tree. ?That > ?subdirectory will be converted into an independent tree, with its own > ?branch. ?Commits in the top-level tree will not apply to the new subtree. Is that like a clone/branch of a subdir of the original repository? We don't have what we usually call "narrow clones" yet (nor "shallow clones", the other potentially useful form of partial clones). We do have the convert extension, which allows you to create a new repository from a subtree of an old repository, but it changes all the hashes (and I don't know if we have a way to splice them back together). 
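Concretely, the convert route looks something like this (just a sketch, repository names are made up, and the convert extension has to be enabled):

    $ cat filemap.txt
    include Lib/lib2to3
    rename Lib/lib2to3 .
    $ hg convert --filemap filemap.txt cpython-hg lib2to3-standalone

The new lib2to3-standalone repository contains only the history that touched that subtree, hoisted to the root, but every changeset gets a new hash, so it can't simply be pulled back into a clone of the original.
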
Cheers, Dirkjan From dirkjan at ochtman.nl Sat Feb 13 18:52:55 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sat, 13 Feb 2010 18:52:55 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: <4B7692A8.9090801@v.loewis.de> References: <4B7692A8.9090801@v.loewis.de> Message-ID: On Sat, Feb 13, 2010 at 12:53, "Martin v. L?wis" wrote: > Dirkjan: if you agree to such a strategy, please mention that in the PEP. Having a pushlog and/or including the pusher in the email sounds like a good idea, I'll add something to that effect to the PEP. I slightly prefer adding it to the commit email because it would seem to require less infrastructure, and it can be handy at times to know who pushed something right off the bat. Cheers, Dirkjan From benjamin at python.org Sat Feb 13 19:43:28 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 13 Feb 2010 12:43:28 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <4B76474D.2050407@v.loewis.de> Message-ID: <1afaf6161002131043h5f4ca05by6b75a51f2ace03d8@mail.gmail.com> 2010/2/13 Dirkjan Ochtman : > On Sat, Feb 13, 2010 at 17:14, Barry Warsaw wrote: >> Does hg support an equivalent of 'bzr split'? >> >> % bzr split --help >> Purpose: Split a subdirectory of a tree into a separate tree. >> Usage: ? bzr split TREE >> >> Options: >> ?--usage ? ? ? ?Show usage message and options. >> ?-v, --verbose ?Display more information. >> ?-q, --quiet ? ?Only display errors and warnings. >> ?-h, --help ? ? Show help message. >> >> Description: >> ?This command will produce a target tree in a format that supports >> ?rich roots, like 'rich-root' or 'rich-root-pack'. ?These formats cannot be >> ?converted into earlier formats like 'dirstate-tags'. >> >> ?The TREE argument should be a subdirectory of a working tree. ?That >> ?subdirectory will be converted into an independent tree, with its own >> ?branch. ?Commits in the top-level tree will not apply to the new subtree. > > Is that like a clone/branch of a subdir of the original repository? We > don't have what we usually call "narrow clones" yet (nor "shallow > clones", the other potentially useful form of partial clones). We do > have the convert extension, which allows you to create a new > repository from a subtree of an old repository, but it changes all the > hashes (and I don't know if we have a way to splice them back > together). It is not a partial clone, but rather similar to what you are referring to with the convert extension. -- Regards, Benjamin From benjamin at python.org Sat Feb 13 20:23:23 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 13 Feb 2010 13:23:23 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B76DDF7.7040306@v.loewis.de> References: <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> Message-ID: <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> 2010/2/13 "Martin v. L?wis" : >>> But isn't that just a theoretical property? I know that's how 2to3 >>> started, but who, other than the committers, actually accesses the 2to3 >>> repo? >> >> It's used in 3to2 for example. 
> > That doesn't really seem to be the case. AFAICT, 3to2 is a hg > repository, with no inherent connection to the 2to3 svn sandbox. It does > use lib2to3, but that could come either from an installed Python, from a > trunk/3k checkout, or from the sandbox. Correct? It has to be from the sandbox (or trunk I suppose) because it requires changes that haven't been released. > > So if the 2.x trunk became the official master for (lib)2to3, nothing > would really change for 3to3, right? (except for the comment in the > readme that you should get 2to3 from the sandbox if the trunk copy > doesn't work; this comment would become obsolete as changes *would* > propagate immediately into the Python trunk). Right, except you would have to clone the entire history of Python in order to get at the trunk version. > >>> I would be much more supportive of that view if there had been a single >>> release of 2to3 at any point in time (e.g. to PyPI). Alas, partially due >>> to me creating lib2to3, you actually couldn't release it as an extra >>> application and run it on 2.6 or 2.7, as the builtin lib2to3 would take >>> precedence over the lib2to3 bundled with the application. >> >> It could be distributed under another name or provide a way to >> override the stdlib version. > > Sure. However, I'm still claiming that this is theoretical. The only > person who has shown a slight interest in having this as a separate > project (since Collin Winter left) is you, and so far, you haven't made > any efforts to produce a stand-alone release. I don't blame you at all > for that, in fact, I think Python is better off with the status quo > (i.e. changes to 2to3 get liberally released even with bug fix releases, > basically in an exemption from the "no new features" policy - similar to > -3 warnings). > > I still think that the best approach for projects to use 2to3 is to run > 2to3 at install time from a single-source release. For that, projects > will have to adjust to whatever bugs certain 2to3 releases have, rather > than requiring users to download a newer version of 2to3 that fixes > them. For this use case, a tightly-integrated lib2to3 (with that name > and sole purpose) is the best thing. Alright. That is reasonable. The other thing is that we will loose some vcs history and some history granularity by switching development to the trunk version, since just the svnmerged revisions will be converted. -- Regards, Benjamin From martin at v.loewis.de Sat Feb 13 20:35:50 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 13 Feb 2010 20:35:50 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> References: <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> Message-ID: <4B76FF16.3000907@v.loewis.de> > The other thing is that we will loose some vcs history and some > history granularity by switching development to the trunk version, > since just the svnmerged revisions will be converted. I suppose it might be possible to fake the history of Lib/lib2to3 with commits that didn't actually happen, although this is probably a cure worse than the disease. 
We are not going to throw away the subversion repository, so it would always be possible to go back and look at the actual history. Regards, Martin From barry at python.org Sun Feb 14 00:22:51 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 13 Feb 2010 18:22:51 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002131043h5f4ca05by6b75a51f2ace03d8@mail.gmail.com> References: <4B70D8F6.3010806@v.loewis.de> <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <4B76474D.2050407@v.loewis.de> <1afaf6161002131043h5f4ca05by6b75a51f2ace03d8@mail.gmail.com> Message-ID: On Feb 13, 2010, at 1:43 PM, Benjamin Peterson wrote: > 2010/2/13 Dirkjan Ochtman : >> On Sat, Feb 13, 2010 at 17:14, Barry Warsaw wrote: >>> Does hg support an equivalent of 'bzr split'? >>> >>> % bzr split --help >>> Purpose: Split a subdirectory of a tree into a separate tree. >>> Usage: bzr split TREE >>> >>> Options: >>> --usage Show usage message and options. >>> -v, --verbose Display more information. >>> -q, --quiet Only display errors and warnings. >>> -h, --help Show help message. >>> >>> Description: >>> This command will produce a target tree in a format that supports >>> rich roots, like 'rich-root' or 'rich-root-pack'. These formats cannot be >>> converted into earlier formats like 'dirstate-tags'. >>> >>> The TREE argument should be a subdirectory of a working tree. That >>> subdirectory will be converted into an independent tree, with its own >>> branch. Commits in the top-level tree will not apply to the new subtree. >> >> Is that like a clone/branch of a subdir of the original repository? We >> don't have what we usually call "narrow clones" yet (nor "shallow >> clones", the other potentially useful form of partial clones). We do >> have the convert extension, which allows you to create a new >> repository from a subtree of an old repository, but it changes all the >> hashes (and I don't know if we have a way to splice them back >> together). > > It is not a partial clone, but rather similar to what you are > referring to with the convert extension. Right, that's what it sounds like. I've used it on a bzr-svn converted repository to split a monolithic tree after Subversion->Bazaar conversion into separately managed subtrees. Note that 'bzr join' is the inverse. The interesting thing (both good and bad) is that after the split both subtrees have the full history of the original. -Barry From amentajo at msu.edu Sun Feb 14 00:52:04 2010 From: amentajo at msu.edu (Joe Amenta) Date: Sat, 13 Feb 2010 18:52:04 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B76DDF7.7040306@v.loewis.de> References: <1afaf6161002081947n3dd4049bm6d45443394b7c152@mail.gmail.com> <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> Message-ID: <4dc473a51002131552j3177e635q338489accc6c4152@mail.gmail.com> On Sat, Feb 13, 2010 at 12:14 PM, "Martin v. L?wis" wrote: > >> But isn't that just a theoretical property? I know that's how 2to3 > >> started, but who, other than the committers, actually accesses the 2to3 > >> repo? > > > > It's used in 3to2 for example. > > That doesn't really seem to be the case. AFAICT, 3to2 is a hg > repository, with no inherent connection to the 2to3 svn sandbox. 
It does > use lib2to3, but that could come either from an installed Python, from a > trunk/3k checkout, or from the sandbox. Correct? > > So if the 2.x trunk became the official master for (lib)2to3, nothing > would really change for 3to3, right? (except for the comment in the > readme that you should get 2to3 from the sandbox if the trunk copy > doesn't work; this comment would become obsolete as changes *would* > propagate immediately into the Python trunk). > > >> I would be much more supportive of that view if there had been a single > >> release of 2to3 at any point in time (e.g. to PyPI). Alas, partially due > >> to me creating lib2to3, you actually couldn't release it as an extra > >> application and run it on 2.6 or 2.7, as the builtin lib2to3 would take > >> precedence over the lib2to3 bundled with the application. > > > > It could be distributed under another name or provide a way to > > override the stdlib version. > > Sure. However, I'm still claiming that this is theoretical. The only > person who has shown a slight interest in having this as a separate > project (since Collin Winter left) is you, and so far, you haven't made > any efforts to produce a stand-alone release. I don't blame you at all > for that, in fact, I think Python is better off with the status quo > (i.e. changes to 2to3 get liberally released even with bug fix releases, > basically in an exemption from the "no new features" policy - similar to > -3 warnings). > > I still think that the best approach for projects to use 2to3 is to run > 2to3 at install time from a single-source release. For that, projects > will have to adjust to whatever bugs certain 2to3 releases have, rather > than requiring users to download a newer version of 2to3 that fixes > them. For this use case, a tightly-integrated lib2to3 (with that name > and sole purpose) is the best thing. > > Regards, > Martin > > > Yes, if the trunk were the official master for lib2to3, then 3to2 would not change at all. If fixes to lib2to3 were immediately propagated to the trunk, 3to2 would benefit from that. I support lib2to3's integration with the trunk... it's too confusing otherwise and kind of defeats the idea of "trunk": if lib2to3 is provided with Python, then shouldn't its latest version be in Python's trunk? --Joe Amenta -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcguire at google.com Sun Feb 14 01:53:06 2010 From: mcguire at google.com (Jake McGuire) Date: Sat, 13 Feb 2010 16:53:06 -0800 Subject: [Python-Dev] What to intern (e.g. func_code.co_filename)? Message-ID: <77c780b41002131653r2ee7f4bfi99f019bf2b85a00b@mail.gmail.com> Has anyone come up with rules of thumb for what to intern and what the performance implications of interning are? I'm working on profiling App Engine again, and since they don't allow marshall I have to modify pstats to save the profile via pickle. While trying to get profiles under 1MB, I noticed that each function has its own copy of the filename in which it is defined, and sometimes these strings can be rather long. Creating a code object already interns a bunch of stuff; argument names, variable names, etc. Interning the filename will add some CPU overhead during function creation, should save a decent amount of memory, and ought to have minimal overall performance impact. I have a local patch, but wanted to see if anyone had ideas or experience weighing these tradeoffs. 
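To make the idea concrete, here is a rough Python-level sketch of the kind of sharing I mean, applied after the fact to a pstats.Stats object (the actual patch does this in C when the code object is created; the function below is purely illustrative):

    def share_filenames(stats):
        # stats is a pstats.Stats instance; stats.stats maps
        # (filename, lineno, funcname) -> timing tuples. After unpickling,
        # each key can carry its own copy of the filename string, and
        # intern() collapses equal strings into a single shared object.
        deduped = {}
        for (filename, lineno, funcname), timings in stats.stats.items():
            deduped[(intern(filename), lineno, funcname)] = timings
        stats.stats = deduped
        return stats

(The callers sub-dictionaries inside the timing tuples use the same kind of keys; I'm ignoring them here to keep the sketch short.)
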
-jake From benjamin at python.org Sun Feb 14 02:36:28 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 13 Feb 2010 19:36:28 -0600 Subject: [Python-Dev] What to intern (e.g. func_code.co_filename)? In-Reply-To: <77c780b41002131653r2ee7f4bfi99f019bf2b85a00b@mail.gmail.com> References: <77c780b41002131653r2ee7f4bfi99f019bf2b85a00b@mail.gmail.com> Message-ID: <1afaf6161002131736w1b5faf0dief2523aee4680065@mail.gmail.com> 2010/2/13 Jake McGuire : > I have a local patch, but wanted to see if anyone had ideas or > experience weighing these tradeoffs. Interning is really only useful because it speeds up dictionary lookups for identifiers. A better idea would be to just attach the same filename object in compiling and unmarshaling. -- Regards, Benjamin From martin at v.loewis.de Sun Feb 14 08:03:03 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 14 Feb 2010 08:03:03 +0100 Subject: [Python-Dev] What to intern (e.g. func_code.co_filename)? In-Reply-To: <1afaf6161002131736w1b5faf0dief2523aee4680065@mail.gmail.com> References: <77c780b41002131653r2ee7f4bfi99f019bf2b85a00b@mail.gmail.com> <1afaf6161002131736w1b5faf0dief2523aee4680065@mail.gmail.com> Message-ID: <4B77A027.40009@v.loewis.de> Benjamin Peterson wrote: > 2010/2/13 Jake McGuire : >> I have a local patch, but wanted to see if anyone had ideas or >> experience weighing these tradeoffs. > > Interning is really only useful because it speeds up dictionary > lookups for identifiers. A better idea would be to just attach the > same filename object in compiling and unmarshaling. I would try to do the sharing during marshaling already. I agree that the file names shouldn't be interned, though, so I propose to create a new code TYPE_SHAREDSTRING, similar to TYPE_INTERNED. It would use the same numbering as TYPE_INTERNED, so backreferences could continue to use TYPE_STRINGREF. Alternatively, a general sharing feature could be added to marshal, sharing all hashable objects. However, before that gets added, I'd like to see statistics how many objects get considered for sharing, and how many back-references then get actually generated. Regards, Martin From g.brandl at gmx.net Sun Feb 14 14:17:36 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 14 Feb 2010 14:17:36 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: References: <4B7692A8.9090801@v.loewis.de> Message-ID: Am 13.02.2010 13:19, schrieb Antoine Pitrou: > Martin v. L?wis v.loewis.de> writes: >> >> Alterntively, the email notification sent to python-checkins could could >> report who the pusher was. > > This sounds reasonable, assuming it doesn't disclose any private information. How could it disclose more than the SVN hook does today (i.e. who is working on the repo right now)? Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From g.brandl at gmx.net Sun Feb 14 14:18:07 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 14 Feb 2010 14:18:07 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: References: <4B7692A8.9090801@v.loewis.de> Message-ID: Am 13.02.2010 18:52, schrieb Dirkjan Ochtman: > On Sat, Feb 13, 2010 at 12:53, "Martin v. L?wis" wrote: >> Dirkjan: if you agree to such a strategy, please mention that in the PEP. 
> > Having a pushlog and/or including the pusher in the email sounds like > a good idea, I'll add something to that effect to the PEP. I slightly > prefer adding it to the commit email because it would seem to require > less infrastructure, and it can be handy at times to know who pushed > something right off the bat. +1. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From techtonik at gmail.com Sun Feb 14 19:31:59 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 14 Feb 2010 20:31:59 +0200 Subject: [Python-Dev] Google Groups Mirror Message-ID: What is the point in maintaining archive listed in http://mail.python.org/mailman/listinfo/python-dev if it is not searchable? Recently I needed to find old thread about adding tags to roundup but couldn't. GMane archive doesn't search - http://news.gmane.org/navbar.php?group=gmane.comp.python.devel&query=roundup+tags and Google Group archive is private http://groups.google.com/group/python-dev How about opening an official read-only Google Group mirror? Then it will be possible to subscribe to notifications to just interesting threads. -- anatoly t. From benjamin at python.org Sun Feb 14 19:36:41 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 14 Feb 2010 12:36:41 -0600 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: References: Message-ID: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> 2010/2/14 anatoly techtonik : > What is the point in maintaining archive listed in > http://mail.python.org/mailman/listinfo/python-dev if it is not > searchable? It is: Google "some interesting python thing site:mail.python.org/mailman/pipermail/python-dev" -- Regards, Benjamin From martin at v.loewis.de Sun Feb 14 19:40:18 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 14 Feb 2010 19:40:18 +0100 Subject: [Python-Dev] PEP 385: Auditing In-Reply-To: References: <4B7692A8.9090801@v.loewis.de> Message-ID: <4B784392.2000101@v.loewis.de> Georg Brandl wrote: > Am 13.02.2010 13:19, schrieb Antoine Pitrou: >> Martin v. L?wis v.loewis.de> writes: >>> Alterntively, the email notification sent to python-checkins could could >>> report who the pusher was. >> This sounds reasonable, assuming it doesn't disclose any private information. > > How could it disclose more than the SVN hook does today (i.e. who is working > on the repo right now)? It could reveal email addresses, for example, which in turn would attract spammers. However, I assume that Antoine was bringing up privacy just as a general concern, and didn't really expect any specific issue. Regards, Martin From phd at phd.pp.ru Sun Feb 14 19:43:50 2010 From: phd at phd.pp.ru (Oleg Broytman) Date: Sun, 14 Feb 2010 21:43:50 +0300 Subject: [Python-Dev] Search the mail list (was: Google Groups Mirror) In-Reply-To: References: Message-ID: <20100214184349.GA21572@phd.pp.ru> On Sun, Feb 14, 2010 at 08:31:59PM +0200, anatoly techtonik wrote: > What is the point in maintaining archive listed in > http://mail.python.org/mailman/listinfo/python-dev if it is not > searchable? > Recently I needed to find old thread about adding tags to roundup but couldn't. It is searchable: http://www.google.com/search?q=python-dev+roundup+tags+site%3Amail.python.org Oleg. 
-- Oleg Broytman http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From solipsis at pitrou.net Sun Feb 14 19:47:20 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 14 Feb 2010 18:47:20 +0000 (UTC) Subject: [Python-Dev] PEP 385: Auditing References: <4B7692A8.9090801@v.loewis.de> <4B784392.2000101@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > Georg Brandl wrote: > > Am 13.02.2010 13:19, schrieb Antoine Pitrou: > >> Martin v. L?wis v.loewis.de> writes: > >>> Alterntively, the email notification sent to python-checkins could could > >>> report who the pusher was. > >> This sounds reasonable, assuming it doesn't disclose any private information. > > > > How could it disclose more than the SVN hook does today (i.e. who is working > > on the repo right now)? > > It could reveal email addresses, for example, which in turn would > attract spammers. However, I assume that Antoine was bringing up privacy > just as a general concern, and didn't really expect any specific issue. That's right. Thanks for de-obfuscating me :) Regards Antoine. From solipsis at pitrou.net Sun Feb 14 19:48:28 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 14 Feb 2010 18:48:28 +0000 (UTC) Subject: [Python-Dev] 3.1.2 References: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> Message-ID: Le Fri, 12 Feb 2010 20:52:23 -0600, Benjamin Peterson a ?crit?: > It's about time for another 3.1 bug fix release. I propose this > schedule: > > March 6: Release Candidate (same day as 2.7a4) > March 20: 3.1.2 Final release Looks perfect to me! Antoine. From techtonik at gmail.com Sun Feb 14 19:56:31 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 14 Feb 2010 20:56:31 +0200 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> References: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> Message-ID: On Sun, Feb 14, 2010 at 8:36 PM, Benjamin Peterson wrote: > 2010/2/14 anatoly techtonik : >> What is the point in maintaining archive listed in >> http://mail.python.org/mailman/listinfo/python-dev if it is not >> searchable? > > It is: > > Google "some interesting python thing > site:mail.python.org/mailman/pipermail/python-dev" Doesn't work. http://www.google.com/search?q=site:mail.python.org/mailman/pipermail/python-dev+roundup+tags Seems that mailman in your query is not needed. This search is definitely not simple to use. How about to add Google search form to http://mail.python.org/mailman/listinfo/python-dev ? -- anatoly t. From benjamin at python.org Sun Feb 14 19:58:33 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 14 Feb 2010 12:58:33 -0600 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: References: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> Message-ID: <1afaf6161002141058g51e59dccxe884a8d14e7e387b@mail.gmail.com> 2010/2/14 anatoly techtonik : > On Sun, Feb 14, 2010 at 8:36 PM, Benjamin Peterson wrote: >> 2010/2/14 anatoly techtonik : >>> What is the point in maintaining archive listed in >>> http://mail.python.org/mailman/listinfo/python-dev if it is not >>> searchable? >> >> It is: >> >> Google "some interesting python thing >> site:mail.python.org/mailman/pipermail/python-dev" > > Doesn't work. > http://www.google.com/search?q=site:mail.python.org/mailman/pipermail/python-dev+roundup+tags But http://www.google.com/search?q=roundup+tags+site%3Amail.python.org%2Fpipermail%2Fpython-dev does. 
-- Regards, Benjamin From techtonik at gmail.com Sun Feb 14 20:02:00 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 14 Feb 2010 21:02:00 +0200 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: <1afaf6161002141058g51e59dccxe884a8d14e7e387b@mail.gmail.com> References: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> <1afaf6161002141058g51e59dccxe884a8d14e7e387b@mail.gmail.com> Message-ID: On Sun, Feb 14, 2010 at 8:58 PM, Benjamin Peterson wrote: > 2010/2/14 anatoly techtonik : >> On Sun, Feb 14, 2010 at 8:36 PM, Benjamin Peterson wrote: >>> 2010/2/14 anatoly techtonik : >>>> What is the point in maintaining archive listed in >>>> http://mail.python.org/mailman/listinfo/python-dev if it is not >>>> searchable? >>> >>> It is: >>> >>> Google "some interesting python thing >>> site:mail.python.org/mailman/pipermail/python-dev" >> >> Doesn't work. >> http://www.google.com/search?q=site:mail.python.org/mailman/pipermail/python-dev+roundup+tags > > But http://www.google.com/search?q=roundup+tags+site%3Amail.python.org%2Fpipermail%2Fpython-dev > does. Yep. Just like I said. So, how about to add Google search form to http://mail.python.org/mailman/listinfo/python-dev ? -- anatoly t. From benjamin at python.org Sun Feb 14 20:03:26 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 14 Feb 2010 13:03:26 -0600 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: References: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> <1afaf6161002141058g51e59dccxe884a8d14e7e387b@mail.gmail.com> Message-ID: <1afaf6161002141103q5a504a0fw83f65a6aa885528a@mail.gmail.com> 2010/2/14 anatoly techtonik : > Yep. Just like I said. So, how about to add Google search form to > http://mail.python.org/mailman/listinfo/python-dev ? You'll have to talk to postmaster at python.org about that. -- Regards, Benjamin From techtonik at gmail.com Sun Feb 14 20:15:49 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 14 Feb 2010 21:15:49 +0200 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: <1afaf6161002141103q5a504a0fw83f65a6aa885528a@mail.gmail.com> References: <1afaf6161002141036i6743f674y32c481d3dbd91b6e@mail.gmail.com> <1afaf6161002141058g51e59dccxe884a8d14e7e387b@mail.gmail.com> <1afaf6161002141103q5a504a0fw83f65a6aa885528a@mail.gmail.com> Message-ID: On Sun, Feb 14, 2010 at 9:03 PM, Benjamin Peterson wrote: > 2010/2/14 anatoly techtonik : >> Yep. Just like I said. So, how about to add Google search form to >> http://mail.python.org/mailman/listinfo/python-dev ? > > You'll have to talk to postmaster at python.org about that. Crossposted. And what about per-thread subscription that comes with Google Groups? I do not need it, because I am subscribed, but if that could be possible - I'd unsubscribe to read it from web receiving notification only about interesting topics. Who owns http://groups.google.com/group/python-dev ? -- anatoly t. From mg at lazybytes.net Sun Feb 14 20:46:23 2010 From: mg at lazybytes.net (Martin Geisler) Date: Sun, 14 Feb 2010 20:46:23 +0100 Subject: [Python-Dev] Google Groups Mirror References: Message-ID: <87eiknvbw0.fsf@hbox.dyndns.org> anatoly techtonik writes: > What is the point in maintaining archive listed in > http://mail.python.org/mailman/listinfo/python-dev if it is not > searchable? > Recently I needed to find old thread about adding tags to roundup but couldn't. 
> > GMane archive doesn't search - > http://news.gmane.org/navbar.php?group=gmane.comp.python.devel&query=roundup+tags This is where your search should go: http://search.gmane.org/?query=roundup+tags&group=gmane.comp.python.devel I got the page when searching from http://dir.gmane.org/gmane.comp.python.devel I don't know why the search box at the bottom of http://news.gmane.org/gmane.comp.python.devel fails... -- Martin Geisler VIFF (Virtual Ideal Functionality Framework) brings easy and efficient SMPC (Secure Multiparty Computation) to Python. See: http://viff.dk/. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From barry at python.org Mon Feb 15 00:31:49 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 14 Feb 2010 18:31:49 -0500 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: References: Message-ID: <9B4EBCFD-ED91-4E23-9935-B82A08762C1C@python.org> On Feb 14, 2010, at 1:31 PM, anatoly techtonik wrote: > What is the point in maintaining archive listed in > http://mail.python.org/mailman/listinfo/python-dev if it is not > searchable? > Recently I needed to find old thread about adding tags to roundup but couldn't. > > GMane archive doesn't search - > http://news.gmane.org/navbar.php?group=gmane.comp.python.devel&query=roundup+tags > and Google Group archive is private http://groups.google.com/group/python-dev > > How about opening an official read-only Google Group mirror? Then it > will be possible to subscribe to notifications to just interesting > threads. Try mail-archive.com, it's searchable. http://www.mail-archive.com/python-dev at python.org/ -Barry From martin at v.loewis.de Mon Feb 15 08:24:08 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 15 Feb 2010 08:24:08 +0100 Subject: [Python-Dev] Tracker outage Message-ID: <4B78F698.2070208@v.loewis.de> Starting around 14:00 UTC today, we will take the trackers at bugs.python.org, bugs.jython.org, and psf.upfronthosting.co.za offline for a system upgrade. The outage should not last longer than four hours (probably much shorter). Regards, Martin From stefan_ml at behnel.de Mon Feb 15 09:23:49 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 15 Feb 2010 09:23:49 +0100 Subject: [Python-Dev] 3.1.2 In-Reply-To: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> References: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> Message-ID: Benjamin Peterson, 13.02.2010 03:52: > It's about time for another 3.1 bug fix release. I propose this schedule: > > March 6: Release Candidate (same day as 2.7a4) > March 20: 3.1.2 Final release Does a crash like #7173 qualify as a blocker for 3.1.2? Stefan From martin at v.loewis.de Mon Feb 15 09:50:42 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 15 Feb 2010 09:50:42 +0100 Subject: [Python-Dev] 3.1.2 In-Reply-To: References: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> Message-ID: <4B790AE2.6010607@v.loewis.de> Stefan Behnel wrote: > Benjamin Peterson, 13.02.2010 03:52: >> It's about time for another 3.1 bug fix release. I propose this schedule: >> >> March 6: Release Candidate (same day as 2.7a4) >> March 20: 3.1.2 Final release > > Does a crash like #7173 qualify as a blocker for 3.1.2? I'm not the release manager, but my feeling is that, because there is no proposed resolution of the issue, it can't possibly be a blocker. 
Only if a patch is available, waiting for application of that patch may block the release. Waiting for a patch may cause indefinite delay, which would be bad. Of course, for releases managed by Barry (i.e. 2.6), Barry said that you can declare anything a blocker - whether it then will block the release is a different matter (and one that Barry then decides on a case-by-case basis). Regards, Martin From techtonik at gmail.com Mon Feb 15 13:25:39 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Mon, 15 Feb 2010 14:25:39 +0200 Subject: [Python-Dev] Google Groups Mirror In-Reply-To: <9B4EBCFD-ED91-4E23-9935-B82A08762C1C@python.org> References: <9B4EBCFD-ED91-4E23-9935-B82A08762C1C@python.org> Message-ID: On Mon, Feb 15, 2010 at 1:31 AM, Barry Warsaw wrote: > >> What is the point in maintaining archive listed in >> http://mail.python.org/mailman/listinfo/python-dev if it is not >> searchable? >> Recently I needed to find old thread about adding tags to roundup but couldn't. >> >> GMane archive doesn't search - >> http://news.gmane.org/navbar.php?group=gmane.comp.python.devel&query=roundup+tags >> and Google Group archive is private http://groups.google.com/group/python-dev >> >> How about opening an official read-only Google Group mirror? Then it >> will be possible to subscribe to notifications to just interesting >> threads. > > Try mail-archive.com, it's searchable. > > http://www.mail-archive.com/python-dev at python.org/ Thanks. Still an official search form will be welcomed as well as per-thread subscriptions, and not only for python-dev. Crossposted to postmaster at python.org -- anatoly t. From techtonik at gmail.com Mon Feb 15 13:49:18 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Mon, 15 Feb 2010 14:49:18 +0200 Subject: [Python-Dev] Release timer for Core Development page Message-ID: I've got another idea of having a release timer on http://python.org/dev/ page together with link to generated release calendar. It will help to automatically monitor deadlines for feature fixes in alpha releases without manually monitoring this mailing list. There is already a navigation box on the right side where this information fits like a glove. Does anybody else find this feature useful for Python development? -- anatoly t. From fuzzyman at voidspace.org.uk Mon Feb 15 17:15:04 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 15 Feb 2010 16:15:04 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <1266058806.3458.290.camel@lifeless-64> References: <4B71908A.3080306@voidspace.org.uk> <4B75786E.4090800@voidspace.org.uk> <4B75FAB3.9010009@voidspace.org.uk> <1266053299.3458.273.camel@lifeless-64> <1266058806.3458.290.camel@lifeless-64> Message-ID: <4B797308.4030706@voidspace.org.uk> On 13/02/2010 11:00, Robert Collins wrote: > On Sat, 2010-02-13 at 10:42 +0000, Antoine Pitrou wrote: > >> Robert Collins robertcollins.net> writes: >> >>> I'm not personally very keen on inspecting everything in self.__dict__, >>> I suspect it would tickle bugs in other unittest extensions. However I'm >>> not really /against/ it - I don't think it will result in bad test >>> behaviour or isolation issues. So if users would like it, lets do it. >>> >> Why not take all resource_XXX attributes? >> > Sure, though if we're just introspecting I don't see much risk > difference between resource_XXX and all XXX. As I say above I'm happy to > do it if folk think it will be nice. > We could introspect all class attributes and find all the resources. 
We should use dir(...) rather than looking in the class __dict__ so that resources can be inherited. However, it sounds like Guido isn't a fan of Test Resources *instead* of setUpClass / module because it doesn't offer a simple solution for the setUpModule use case and the API still needs to mature. All the best, Michael > >> By the way, how does a given test access the allocated resource? Say, the DB >> connection. Does it become an attribute of the test case instance? >> > yes. Given > > class Foo(TestCase): > resources = {'thing', MyResourceManager()} > def test_foo(self): > self.thing > > self.thing will access the resource returned by MyResourceManager.make() > > -Rob > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at voidspace.org.uk Mon Feb 15 18:05:35 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 15 Feb 2010 17:05:35 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> Message-ID: <4B797EDF.8010701@voidspace.org.uk> On 13/02/2010 04:01, Glyph Lefkowitz wrote: [snipping some good points...] >> Regarding the objection that setUp/tearDown for classes would run into >> issues with subclassing, I propose to let the standard semantics of >> subclasses do their job. Thus a subclass that overrides setUpClass or >> tearDownClass is responsible for calling the base class's setUpClass >> and tearDownClass (and the TestCase base class should provide empty >> versions of both). The testrunner should only call setUpClass and >> tearDownClass for classes that have at least one test that is >> selected. >> >> Yes, this would mean that if a base class has a test method and a >> setUpClass (and tearDownClass) method and a subclass also has a test >> method and overrides setUpClass (and/or tearDown), the base class's >> setUpClass and tearDown may be called twice. What's the big deal? If >> setUpClass and tearDownClass are written properly they should support >> this. >> > Just to be clear: by "written properly" you mean, written as classmethods, storing their data only on 'cls', right? > > Heh, yes (answered several times in this thread already I think...). >> If this behavior is undesired in a particular case, maybe what >> was really meant were module-level setUp and tearDown, or the class >> structure should be rearranged. 
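For what it's worth, "written properly" seems straightforward enough to demonstrate. Something like this is what I have in mind (only a sketch - the decorator form and subclasses calling up via super are my assumptions about how the final API might look, nothing is settled yet):

    import shutil
    import tempfile
    import unittest

    class ResourceTests(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # All shared state lives on cls, and the directory name is
            # randomized, so being called more than once is harmless.
            cls.workdir = tempfile.mkdtemp(prefix='resourcetests-')

        @classmethod
        def tearDownClass(cls):
            shutil.rmtree(cls.workdir, ignore_errors=True)

        def test_workdir_exists(self):
            self.assertTrue(self.workdir)

    class ExtraResourceTests(ResourceTests):

        @classmethod
        def setUpClass(cls):
            # An overriding subclass is responsible for calling up to the
            # base class implementation.
            super(ExtraResourceTests, cls).setUpClass()
            cls.extra = 'state shared only by this class'

If the base implementation ends up running once for ResourceTests and again for ExtraResourceTests, nothing breaks - each class simply gets its own temporary directory.
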
>> > There's also a bit of an open question here for me: if subclassing is allowed, and module-level setup and teardown are allowed, then what if I define a test class with test methods in module 'a', as well as module setup and teardown, then subclass it in 'b' which *doesn't* have setup and teardown... is the subclass in 'b' always assumed to depend on the module-level setup in 'a'? Is there a way that it could be made not to if it weren't necessary? What if it stubs out all of its test methods? In the case of classes you've got the 'cls' variable to describe the dependency and the shared state, but in the case of modules, inheritance doesn't create an additional module object to hold on to. > > This is also an interesting point. The 'naive' implementation, which I think I prefer, only runs the setUpModule of modules actually containing tests. Similarly setUpClass is only called on classes with actual tests, although they may call up to base class implementations. This has a couple of consequences, particularly for setUpModule. It makes the basic rule: * only use setUpModule for modules actually containing tests * don't mix concrete TestCases (with tests) in the same module (using setUpModule) as base classes for other tests The use case (that I can think of) that isn't supported (and of course all this needs to be documented): * Having setUpModule / tearDownModule in modules that define base classes and expecting them to be called around all tests that inherit from those base classes Having this in place makes the implementation simpler. If we explicitly document that this rule may change and so users shouldn't rely on setUpModule not being called for modules containing base classes, then we are free to rethink it later. Not having this restriction at all is possible, it just requires more introspection at TestSuite creation / ordering time. Note that setUpClass on a base class maybe called several times if several base classes inherit and all call up to the base class implementation. As it will be a class method the cls argument will be different for each call. Another question. If we are implementing TestCase.setUpClass as an additional test then should it be reported *even* if it is only the default (empty) implementation that is used? The reason to have setUpClass implemented as a test is so that you can report the failure *before* you run all the tests. Lots of unit test users want a consistent number of tests every run - so we shouldn't insert an extra test only on fail. The other alternative is to report a setUpClass failure as part of the first test that depends on it - this makes the implementation more, complex (having a pseudo-test represent setUpClass / tearDownClass is convenient for ordering and isolating the 'magic' in one place - the rest of the test running infrastructure doesn't need to know about setUpClass or Module). If we do add a default setUpClass test for all TestCases it means extra noise for test runs that don't use the new feature. Saying no it shouldn't be shown means that we have to introspect test classes to see if they inherit setUpClass from TestCase or from some intermediate base class. Not hard just an extra complexity. One solution would be for TestCase *not* to have default implementations, but it is probably nicer for them to appear in the API. I guess my preferred approach is to have a default implementation, but not to create pseudo-tests for them if they aren't used. 
All the best, Michael > testresources very neatly sidesteps this problem by just providing an API to say "this test case depends on that test resource", without relying on the grouping of tests within classes, modules, or packages. Of course you can just define a class-level or module-level resource and then have all your tests depend on it, which gives you the behavior of setUpClass and setUpModule in a more general way. > > -glyph > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From victor.stinner at haypocalc.com Mon Feb 15 18:11:43 2010 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Mon, 15 Feb 2010 18:11:43 +0100 Subject: [Python-Dev] pysandsox project Message-ID: <201002151811.43563.victor.stinner@haypocalc.com> Hi, I'm working on a new sandbox project. The goal is to create an empty namespace and write strict rules for the interaction with the existing namespace (full featured Python namespace). By default, you cannot read a file, use print, import a module or exit Python. But you can enable some functions by using config "features". Example: "regex" feature allows you to import the re module which will contain a subset of the real re module, just enough to match a regex. To protect the sandbox namespace, some attributes are "hidden": function closure and globals, frame locals and type subclasses. __builtins__ is also replaced by a read-only dictionary. Objects are not directly injected in the sandbox namespace: a proxy is used to get a read-only view of the object. pysandbox is based on safelite.py, project written by tav one year ago (search tav in python-dev archive, February 2009). I tested RestrictedPython, but the approach is different (rewrite bytecode) and the project is not maintained since 3 or 4 years (only minor updates on the documentation or the copyright header). pysandbox is different than RestrictedPython because it blocks everything by default and has simply config options to enable a set of features. pysandbox is different than safelite.py because it contains unit tests (ensure that blocked features are really blocked). Only attributes/functions allowing to escape the sandbox are blocked. Eg. frames are still accessibles, only the frame locals are blocked. This blacklist policy is broken by design, but it's a nice way to quickly get a working sandbox without having to modify CPython too much. pysandbox status is closer to a proof-of-concept than a beta version, there are open issues (see above). Please test it and try to break it! -- To try pysandbox, download the last version using git clone or a tarball at: http://github.com/haypo/pysandbox/ You don't need to install it, use "python interpreter.py" or "python execfile.py yourscript.py". Use --help to get more options. I tested pysandbox on Linux with Python 2.5, 2.6 and 2.7. I guess that it should work on Python 3.0 with minor changes. 
-- The current syntax is: config = SandboxConfig(...) with Sandbox(config): ... execute untrusted code here ... This syntax has a problem: local frame variables are not protected by a proxy, nor removed from the sandbox namespace. I tried to remove the frame locals, but Python uses STORE_FAST/LOAD_FAST bytecodes in a function, and this fast cache is not accessible in Python. Clear this cache may introduce unexpected behaviours. pysandbox modify some structure attributes (frame.f_builtins and frame.f_tstate.interp.builtins) directly in memory using some ctypes tricks. I used that to avoid patching CPython and to get faster a working proof-of-concept. Set USE_CPYTHON_HACK to False (in sandbox/__init__.py) to disable these hacks, but they are needed to protect __builtins__ (see related tests). -- By default, pysandbox doesn't use CPython restricted mode, because this mode is too restrictive (it's not possible to read a file or import a module). But pysandbox can use it with SandboxConfig(cpython_restricted=True). -- See README file for more information and TODO file for a longer status. Victor From fuzzyman at voidspace.org.uk Mon Feb 15 18:45:02 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 15 Feb 2010 17:45:02 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B797EDF.8010701@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> <4B797EDF.8010701@voidspace.org.uk> Message-ID: <4B79881E.6010000@voidspace.org.uk> On 15/02/2010 17:05, Michael Foord wrote: > [snip] > This is also an interesting point. The 'naive' implementation, which I > think I prefer, only runs the setUpModule of modules actually > containing tests. Similarly setUpClass is only called on classes with > actual tests, although they may call up to base class implementations. > > This has a couple of consequences, particularly for setUpModule. It > makes the basic rule: > > * only use setUpModule for modules actually containing tests > * don't mix concrete TestCases (with tests) in the same module (using > setUpModule) as base classes for other tests > > The use case (that I can think of) that isn't supported (and of course > all this needs to be documented): > > * Having setUpModule / tearDownModule in modules that define base > classes and expecting them to be called around all tests that inherit > from those base classes > > Having this in place makes the implementation simpler. If we > explicitly document that this rule may change and so users shouldn't > rely on setUpModule not being called for modules containing base > classes, then we are free to rethink it later. Not having this > restriction at all is possible, it just requires more introspection at > TestSuite creation / ordering time. > > Note that setUpClass on a base class maybe called several times if > several base classes inherit and all call up to the base class > implementation. As it will be a class method the cls argument will be > different for each call. > > Another question. If we are implementing TestCase.setUpClass as an > additional test then should it be reported *even* if it is only the > default (empty) implementation that is used? > > The reason to have setUpClass implemented as a test is so that you can > report the failure *before* you run all the tests. Lots of unit test > users want a consistent number of tests every run - so we shouldn't > insert an extra test only on fail. 
The other alternative is to report > a setUpClass failure as part of the first test that depends on it - > this makes the implementation more, complex (having a pseudo-test > represent setUpClass / tearDownClass is convenient for ordering and > isolating the 'magic' in one place - the rest of the test running > infrastructure doesn't need to know about setUpClass or Module). > > If we do add a default setUpClass test for all TestCases it means > extra noise for test runs that don't use the new feature. Saying no it > shouldn't be shown means that we have to introspect test classes to > see if they inherit setUpClass from TestCase or from some intermediate > base class. Not hard just an extra complexity. One solution would be > for TestCase *not* to have default implementations, but it is probably > nicer for them to appear in the API. > > I guess my preferred approach is to have a default implementation, but > not to create pseudo-tests for them if they aren't used. > One place to implement this is in the TestLoader (specifically in loadTestsFromModule and loadTestsFromTestCase) - especially as this is the place where test ordering is currently provided by unittest. The TestSuite may not need to change much. This isn't compatible with test frameworks that build custom suites without going through loadTestsFromModule - e.g. modules implementing load_tests that replace the standard test suite with a new suite with specific TestCases and still expect setUpModule / tearDownModule to be used [1]. Perhaps an API hook to make this easy? TestCase would need a change in run(...) to support automatic SetupFailed test failing when setUpClass / module fails. This *isn't* compatible with custom TestSuites that reorder tests. The alternative is for TestSuite.run(...) to change to support setUpClass / setUpModule by adding them at test run time by introspecting contained tests. That would make setUpClass (etc) incompatible with custom TestSuite implementations that override run. It would be compatible with TestSuites that do reordering but that don't overload run(...) - although the resulting test order may not be 'optimal' (the setUp and tearDown may end up separated farther in time than desired but at least they would work). Perhaps this approach is less likely to break? (Manually reordering tests or adding new tests to a TestSuite wouldn't break setUpClass as the TestSuite only adds them at run(...) time rather than placing them in the suite where the 'user' can move them around). All the best, Michael [1] or even load_tests that just addTests to the standard suite but still need tearDownModule to be run *after* all the tests. Even with the implementation in loadTestsFrom* it seems like the TestSuite may need to know something about the order. Perhaps we could have TestLoader.orderTests - the loader is available to load_tests functions. > > All the best, > > Michael > >> testresources very neatly sidesteps this problem by just providing an >> API to say "this test case depends on that test resource", without >> relying on the grouping of tests within classes, modules, or >> packages. Of course you can just define a class-level or >> module-level resource and then have all your tests depend on it, >> which gives you the behavior of setUpClass and setUpModule in a more >> general way. >> >> -glyph >> > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. 
By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From benjamin at python.org Mon Feb 15 18:54:34 2010 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 15 Feb 2010 11:54:34 -0600 Subject: [Python-Dev] 3.1.2 In-Reply-To: <4B790AE2.6010607@v.loewis.de> References: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> <4B790AE2.6010607@v.loewis.de> Message-ID: <1afaf6161002150954s1038d8d0x2376187ee1e6ae0@mail.gmail.com> 2010/2/15 "Martin v. L?wis" : > Stefan Behnel wrote: >> Benjamin Peterson, 13.02.2010 03:52: >>> It's about time for another 3.1 bug fix release. I propose this schedule: >>> >>> March 6: Release Candidate (same day as 2.7a4) >>> March 20: ?3.1.2 Final release >> >> Does a crash like #7173 qualify as a blocker for 3.1.2? > > I'm not the release manager, but my feeling is that, because there is no > proposed resolution of the issue, it can't possibly be a blocker. Only > if a patch is available, waiting for application of that patch may block > the release. Waiting for a patch may cause indefinite delay, which would > be bad. I agree with Martin here. I would be more inclined to make #7173 a release blocker if it had a more specific test than "run cython and maybe it'll crash". -- Regards, Benjamin From hanno at hannosch.eu Mon Feb 15 19:00:30 2010 From: hanno at hannosch.eu (Hanno Schlichting) Date: Mon, 15 Feb 2010 19:00:30 +0100 Subject: [Python-Dev] pysandsox project In-Reply-To: <201002151811.43563.victor.stinner@haypocalc.com> References: <201002151811.43563.victor.stinner@haypocalc.com> Message-ID: <5cae42b21002151000v7cfdfc3an166aff78a9390716@mail.gmail.com> On Mon, Feb 15, 2010 at 6:11 PM, Victor Stinner wrote: > pysandbox is based on safelite.py, project written by tav one year ago (search > tav in python-dev archive, February 2009). I tested RestrictedPython, but the > approach is different (rewrite bytecode) and the project is not maintained > since 3 or 4 years (only minor updates on the documentation or the copyright > header). Not that it matters much, but RestrictedPython [1] is pretty well maintained for a seasoned library. That doesn't mean it's a particular good solution to the problem, just that it works for the use-case it was written for and is still in widespread use as part of Zope / Plone. Hanno [1] http://pypi.python.org/pypi/RestrictedPython From amauryfa at gmail.com Mon Feb 15 20:47:45 2010 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 15 Feb 2010 20:47:45 +0100 Subject: [Python-Dev] 3.1.2 In-Reply-To: <1afaf6161002150954s1038d8d0x2376187ee1e6ae0@mail.gmail.com> References: <1afaf6161002121852q7c4fd3c1h7e8d38d2244d7fe9@mail.gmail.com> <4B790AE2.6010607@v.loewis.de> <1afaf6161002150954s1038d8d0x2376187ee1e6ae0@mail.gmail.com> Message-ID: Hi, 2010/2/15 Benjamin Peterson > 2010/2/15 "Martin v. L?wis" : > > Stefan Behnel wrote: > >> Benjamin Peterson, 13.02.2010 03:52: > >>> It's about time for another 3.1 bug fix release. 
I propose this > schedule: > >>> > >>> March 6: Release Candidate (same day as 2.7a4) > >>> March 20: 3.1.2 Final release > >> > >> Does a crash like #7173 qualify as a blocker for 3.1.2? > > > > I'm not the release manager, but my feeling is that, because there is no > > proposed resolution of the issue, it can't possibly be a blocker. Only > > if a patch is available, waiting for application of that patch may block > > the release. Waiting for a patch may cause indefinite delay, which would > > be bad. > > I agree with Martin here. I would be more inclined to make #7173 a > release blocker if it had a more specific test than "run cython and > maybe it'll crash". > I just updated #7173 with a short crasher. In short, I think that next() in an exception handler messes with the exception state. This doesn't play well with the cyclic garbage collector which can call tp_clear() on a resurrected object, if the reference is *moved* out of the cycle by some tp_dealloc. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Mon Feb 15 21:27:26 2010 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Mon, 15 Feb 2010 15:27:26 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> Message-ID: <5A4642E0-C933-47A6-B420-14EAD5BE6048@twistedmatrix.com> On Feb 13, 2010, at 12:46 PM, Guido van Rossum wrote: > On Fri, Feb 12, 2010 at 8:01 PM, Glyph Lefkowitz > wrote: >> On Feb 11, 2010, at 1:11 PM, Guido van Rossum wrote: >> >> For what it's worth, I am a big fan of abusing test frameworks in generally, and pyunit specifically, to perform every possible kind of testing. In fact, I find setUpClass more hostile to *other* kinds of testing, because this convenience for simple integration tests makes more involved, performance-intensive integration tests harder to write and manage. > > That sounds odd, as if the presence of this convenience would prohibit > you from also implement other features. Well, that is the main point I'm trying to make. There are ways to implement setUpClass that *do* make the implementation of other features effectively impossible, by breaking the integration mechanisms between tests and framework, and between multiple testing frameworks. And I am pretty sure this is not just my over-reaction; Michael still appears to be wrestling with the problems I'm describing. In a recent message he was talking about either breaking compatibility with TestSuite implementations that override run(), or test-reordering - both of which I consider important, core features of the unittest module. >> I tried to write about this problem a while ago - the current extensibility API (which is mostly just composing "run()") is sub-optimal in many ways, but it's important not to break it. > > I expect that *eventually* something will come along that is so much > better than unittest that, once matured, we'll want it in the stdlib. I'm not sure what point you're trying to make here. I was saying "it's not perfect, but we should be careful not to break it, because it's all we've got". Are you saying that we shouldn't worry about unittest's composition API, because it's just a stopgap until something better comes along? > (Or, alternatively, eventually stdlib inclusion won't be such a big > deal any more since distros mix and match. 
But then inclusion in a > distro would become every package developer's goal -- and then the > circle would be round, since distros hardly move faster than Python > releases...) > > But in the mean time I believe evolving unittest is the right thing to > do. Adding new methods is relatively easy. Adding whole new paradigms > (like testresources) is a lot harder, eventually in the light of the > latter's relative immaturity. I disagree with your classification of the solutions. First and foremost: setUpClass is not a "new method", it's a pile of new code to call that method, to deal with ordering that method, etc. Code which has not yet been written or tested or tried in the real world. It is beyond simply immature, it's hypothetical. We do have an implementation of this code in Twisted, but as I have said, it's an albatross we are struggling to divest ourselves of, not something we'd like to propose for inclusion in the standard library. (Nose has this feature as well, but I doubt their implementation would be usable, since their idea of a 'test' isn't really TestCase based.) testresources, by contrast, is a tested, existing package, which people are already using, using a long-standing integration mechanism that has been part of unittest since its first implementation. Granted, I would not contest that it is "immature"; it is still fairly new, and doesn't have a huge number of uses, but it's odd to criticize it on grounds of maturity when it's so much *more* mature than the alternative. While superficially the programming interface to testresources is slightly more unusual, this is only because programmers don't think to hard about what unittest actually does with your code, and testresources requires a little more familiarity with that. >> And setUpClass does inevitably start to break those integration points down, because it implies certain things, like the fact that classes and modules are suites, or are otherwise grouped together in test ordering. > I expect that is what the majority of unittest users already believe. Yes, but they're wrong, and enforcing this misconception doesn't help anyone. There are all kinds of assumptions that most python developers have about how Python works which are vaguely incorrect abstractions over the actual behavior. >> This makes it difficult to create custom suites, to do custom ordering, custom per-test behavior (like metrics collection before and after run(), or gc.collect() after each test, or looking for newly-opened-but-not-cleaned-up external resources like file descriptors after each tearDown). > > True, the list never ends. > >> Again: these are all concrete features that *users* of test frameworks want, not just idle architectural fantasy of us framework hackers. > > I expect that most bleeding edge users will end up writing a custom > framework, or at least work with a bleeding edge framework that change > change rapidly to meet their urgent needs. Yes, exactly. Users who want an esoteric feature like setUpClass should use a custom framework :). The standard library's job, in my view, is to provide an integration point for those more advanced framework so that different "bleeding edge" frameworks have a mechanism to communicate, and users have some level of choice about what tools they use to run their tests. I have not always thought this way. Originally, my intention was for twisted's test framework to be a complete departure from the standard library and just do its own thing. 
But, both users of the framework and more savvy test developers have gradually convinced me that it's really useful to be able to load stdlib tests with a variety of tools. This doesn't mean that I think the stdlib should stop changing completely, but I do think changes which potentially break compatibility with _basic_ test framework things like re-ordering tests or overriding a core method like 'run' need to be done extremely carefully. >> I haven't had the opportunity to read the entire thread, so I don't know if this discussion has come to fruition, but I can see that some attention has been paid to these difficulties. I have no problem with setUpClass or tearDownClass hooks *per se*, as long as they can be implemented in a way which explicitly preserves extensibility. > > That's good to know. I have no doubt they (and setUpModule c.s.) can > be done in a clean, extensible way. And that doesn't mean we couldn't > also add other features -- after all, not all users have the same > needs. (If you read the Zen of Python, you'll see that TOOWTDI has > several qualifications. :-) Recent messages indicate that this is still a problem. But perhaps it will be solved! I can yell at Michael in person in a few days, anyway :). All those answers were pretty reasonable, so I will avoid retreading them. >> testresources very neatly sidesteps this problem by just providing an API to say "this test case depends on that test resource", without relying on the grouping of tests within classes, modules, or packages. Of course you can just define a class-level or module-level resource and then have all your tests depend on it, which gives you the behavior of setUpClass and setUpModule in a more general way. > > I wish it was always a matter of "resources". I've seen use cases for > module-level setup that were much messier than that (e.g. fixing > import paths). I expect it will be a while before the testresources > design has been shaken out sufficiently for it to be included in the > stdlib. Why isn't loadable code a "resource" like anything else? I haven't used testresources specifically for this, but I've definitely written fixture setup and teardown code that dealt with sys.path. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at voidspace.org.uk Mon Feb 15 21:50:54 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 15 Feb 2010 20:50:54 +0000 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <5A4642E0-C933-47A6-B420-14EAD5BE6048@twistedmatrix.com> References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> <5A4642E0-C933-47A6-B420-14EAD5BE6048@twistedmatrix.com> Message-ID: <4B79B3AE.5040504@voidspace.org.uk> On 15/02/2010 20:27, Glyph Lefkowitz wrote: > > On Feb 13, 2010, at 12:46 PM, Guido van Rossum wrote: > >> On Fri, Feb 12, 2010 at 8:01 PM, Glyph Lefkowitz >> > wrote: >>> On Feb 11, 2010, at 1:11 PM, Guido van Rossum wrote: >>> >>> For what it's worth, I am a big fan of abusing test frameworks in >>> generally, and pyunit specifically, to perform every possible kind >>> of testing. In fact, I find setUpClass more hostile to *other* >>> kinds of testing, because this convenience for simple integration >>> tests makes more involved, performance-intensive integration tests >>> harder to write and manage. >> >> That sounds odd, as if the presence of this convenience would prohibit >> you from also implement other features. 
> > Well, that is the main point I'm trying to make. There are ways to > implement setUpClass that *do* make the implementation of other > features effectively impossible, by breaking the integration > mechanisms between tests and framework, and between multiple testing > frameworks. > > And I am pretty sure this is not just my over-reaction; Michael still > appears to be wrestling with the problems I'm describing. And I appreciate your input. > In a recent message he was talking about either breaking > compatibility with TestSuite implementations that override run(), or > test-reordering - both of which I consider important, core features of > the unittest module. Well, by "breaking compatibility with custom TestSuite implementations that override run" I mean that is one possible place to put the functionality. Code that does override it will *not* stop working, it just won't support the new features. If we chose this implementation strategy there would be no compatibility issues for existing tests / frameworks that don't use the new features. If tests do want to use the new features then the framework authors will need to ensure they are compatible with them. This seems like a reasonable trade-off to me. We can ensure that it is easy to write custom TestSuite objects that work with earlier versions of unittest but are also compatible with setUpClass in 2.7 (and document the recipe - although I expect it will just mean that TestSuite.run should call a single method if it exists). Perhaps a better idea might be to also add startTest and stopTest methods to TestSuite so that frameworks can build in features like timing tests (etc) without having to override run itself. This is already possible in the TestResult of course, which is a more common extensibility point in *my* experience. All the best, Michael > >>> I tried to write about this problem a while ago >>> - the current extensibility >>> API (which is mostly just composing "run()") is sub-optimal in many >>> ways, but it's important not to break it. >> >> I expect that *eventually* something will come along that is so much >> better than unittest that, once matured, we'll want it in the stdlib. > > I'm not sure what point you're trying to make here. I was saying > "it's not perfect, but we should be careful not to break it, because > it's all we've got". Are you saying that we shouldn't worry about > unittest's composition API, because it's just a stopgap until > something better comes along? > >> (Or, alternatively, eventually stdlib inclusion won't be such a big >> deal any more since distros mix and match. But then inclusion in a >> distro would become every package developer's goal -- and then the >> circle would be round, since distros hardly move faster than Python >> releases...) >> >> But in the mean time I believe evolving unittest is the right thing to >> do. Adding new methods is relatively easy. Adding whole new paradigms >> (like testresources) is a lot harder, eventually in the light of the >> latter's relative immaturity. > > I disagree with your classification of the solutions. > > First and foremost: setUpClass is not a "new method", it's a pile of > new code to call that method, to deal with ordering that method, etc. > Code which has not yet been written or tested or tried in the real > world. It is beyond simply immature, it's hypothetical. 
We do have an > implementation of this code in Twisted, but as I have said, it's an > albatross we are struggling to divest ourselves of, not something we'd > like to propose for inclusion in the standard library. (Nose has this > feature as well, but I doubt their implementation would be usable, > since their idea of a 'test' isn't really TestCase based.) > > testresources, by contrast, is a tested, existing package, which > people are already using, using a long-standing integration mechanism > that has been part of unittest since its first implementation. > Granted, I would not contest that it is "immature"; it is still > fairly new, and doesn't have a huge number of uses, but it's odd to > criticize it on grounds of maturity when it's so much *more* mature > than the alternative. > > While superficially the programming interface to testresources is > slightly more unusual, this is only because programmers don't think to > hard about what unittest actually does with your code, and > testresources requires a little more familiarity with that. > >>> And setUpClass does inevitably start to break those integration >>> points down, because it implies certain things, like the fact that >>> classes and modules are suites, or are otherwise grouped together in >>> test ordering. > >> I expect that is what the majority of unittest users already believe. > > Yes, but they're wrong, and enforcing this misconception doesn't help > anyone. There are all kinds of assumptions that most python > developers have about how Python works which are vaguely incorrect > abstractions over the actual behavior. > >>> This makes it difficult to create custom suites, to do custom >>> ordering, custom per-test behavior (like metrics collection before >>> and after run(), or gc.collect() after each test, or looking for >>> newly-opened-but-not-cleaned-up external resources like file >>> descriptors after each tearDown). >> >> True, the list never ends. >> >>> Again: these are all concrete features that *users* of test >>> frameworks want, not just idle architectural fantasy of us framework >>> hackers. >> >> I expect that most bleeding edge users will end up writing a custom >> framework, or at least work with a bleeding edge framework that change >> change rapidly to meet their urgent needs. > > Yes, exactly. Users who want an esoteric feature like setUpClass > should use a custom framework :). The standard library's job, in my > view, is to provide an integration point for those more advanced > framework so that different "bleeding edge" frameworks have a > mechanism to communicate, and users have some level of choice about > what tools they use to run their tests. > > I have not always thought this way. Originally, my intention was for > twisted's test framework to be a complete departure from the standard > library and just do its own thing. But, both users of the framework > and more savvy test developers have gradually convinced me that it's > really useful to be able to load stdlib tests with a variety of tools. > > This doesn't mean that I think the stdlib should stop changing > completely, but I do think changes which potentially break > compatibility with _basic_ test framework things like re-ordering > tests or overriding a core method like 'run' need to be done extremely > carefully. > >>> I haven't had the opportunity to read the entire thread, so I don't >>> know if this discussion has come to fruition, but I can see that >>> some attention has been paid to these difficulties. 
I have no >>> problem with setUpClass or tearDownClass hooks *per se*, as long as >>> they can be implemented in a way which explicitly preserves >>> extensibility. >> >> That's good to know. I have no doubt they (and setUpModule c.s.) can >> be done in a clean, extensible way. And that doesn't mean we couldn't >> also add other features -- after all, not all users have the same >> needs. (If you read the Zen of Python, you'll see that TOOWTDI has >> several qualifications. :-) > > Recent messages indicate that this is still a problem. But perhaps it > will be solved! I can yell at Michael in person in a few days, anyway :). > > > > All those answers were pretty reasonable, so I will avoid retreading them. > >>> testresources very neatly sidesteps this problem by just providing >>> an API to say "this test case depends on that test resource", >>> without relying on the grouping of tests within classes, modules, or >>> packages. Of course you can just define a class-level or >>> module-level resource and then have all your tests depend on it, >>> which gives you the behavior of setUpClass and setUpModule in a more >>> general way. >> >> I wish it was always a matter of "resources". I've seen use cases for >> module-level setup that were much messier than that (e.g. fixing >> import paths). I expect it will be a while before the testresources >> design has been shaken out sufficiently for it to be included in the >> stdlib. > > Why isn't loadable code a "resource" like anything else? I haven't > used testresources specifically for this, but I've definitely written > fixture setup and teardown code that dealt with sys.path. > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Tue Feb 16 08:09:09 2010 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Tue, 16 Feb 2010 02:09:09 -0500 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <4B79B3AE.5040504@voidspace.org.uk> References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> <5A4642E0-C933-47A6-B420-14EAD5BE6048@twistedmatrix.com> <4B79B3AE.5040504@voidspace.org.uk> Message-ID: <1D88FC32-BF93-4364-934E-322A84D1BAD5@twistedmatrix.com> On Feb 15, 2010, at 3:50 PM, Michael Foord wrote: > On 15/02/2010 20:27, Glyph Lefkowitz wrote: >> >> >> On Feb 13, 2010, at 12:46 PM, Guido van Rossum wrote: >> >>> On Fri, Feb 12, 2010 at 8:01 PM, Glyph Lefkowitz >>> wrote: >>>> I find setUpClass more hostile to *other* kinds of testing, because this convenience for simple integration tests makes more involved, performance-intensive integration tests harder to write and manage. >>> >>> That sounds odd, as if the presence of this convenience would prohibit >>> you from also implement other features. 
>> >> And I am pretty sure this is not just my over-reaction; Michael still appears to be wrestling with the problems I'm describing. > And I appreciate your input. Thanks :). >> In a recent message he was talking about either breaking compatibility with TestSuite implementations that override run(), or test-reordering - both of which I consider important, core features of the unittest module. > > Well, by "breaking compatibility with custom TestSuite implementations that override run" I mean that is one possible place to put the functionality. Code that does override it will *not* stop working, it just won't support the new features. Ah, I see. This doesn't sound *too* bad, but I'd personally prefer it if the distinction were a bit more clearly drawn. I'd like frameworks to be able to implement extension functionality without having to first stub out functionality. In other words, if I want a test suite without setUpClass, I'd prefer to avoid having an abstraction inversion. Practically speaking this could be implemented by having a very spare, basic TestSuite base class and ClassSuite/ModuleSuite subclasses which implement the setUpXXX functionality. > If we chose this implementation strategy there would be no compatibility issues for existing tests / frameworks that don't use the new features. That's very good to hear. > If tests do want to use the new features then the framework authors will need to ensure they are compatible with them. This seems like a reasonable trade-off to me. We can ensure that it is easy to write custom TestSuite objects that work with earlier versions of unittest but are also compatible with setUpClass in 2.7 (and document the recipe - although I expect it will just mean that TestSuite.run should call a single method if it exists). This is something that I hope Jonathan Lange or Robert Collins will chime in to comment on: expanding the protocol between suite and test is an area which is fraught with peril, but it seems like it's something that test framework authors always want to do. (Personally, *I* really want to do it because I want to be able to run things asynchronously, so the semantics of 'run()' need to change pretty dramatically to support that...) It might be good to eventually develop a general mechanism for this, rather than building up an ad-hoc list of test-feature compatibility recipes which involve a list of if hasattr(...): foo(); checks in every suite implementation. > Perhaps a better idea might be to also add startTest and stopTest methods to TestSuite so that frameworks can build in features like timing tests (etc) without having to override run itself. This is already possible in the TestResult of course, which is a more common extensibility point in *my* experience. I think timing and monitoring tests can mostly be done in the TestResult class; those were bad examples. There's stuff like synthesizing arguments for test methods, or deciding to repeat a potentially flaky test method before reporting a failure, which are not possible to do from the result. I'm not sure that startTest and stopTest hooks help with those features, the ones which really need suites; it would seem it mostly gives you a hook to do stuff that could already be done in TestResult anyway. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robertc at robertcollins.net Tue Feb 16 10:28:40 2010 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 16 Feb 2010 20:28:40 +1100 Subject: [Python-Dev] setUpClass and setUpModule in unittest In-Reply-To: <1D88FC32-BF93-4364-934E-322A84D1BAD5@twistedmatrix.com> References: <4B71908A.3080306@voidspace.org.uk> <2E9EFC73-20B4-42F0-973C-66933410C9EE@twistedmatrix.com> <5A4642E0-C933-47A6-B420-14EAD5BE6048@twistedmatrix.com> <4B79B3AE.5040504@voidspace.org.uk> <1D88FC32-BF93-4364-934E-322A84D1BAD5@twistedmatrix.com> Message-ID: <1266312521.26009.121.camel@lifeless-64> On Tue, 2010-02-16 at 02:09 -0500, Glyph Lefkowitz wrote: > > > > In a recent message he was talking about either breaking > > > compatibility with TestSuite implementations that override run(), > > > or test-reordering - both of which I consider important, core > > > features of the unittest module. > > > > Well, by "breaking compatibility with custom TestSuite > > implementations that override run" I mean that is one possible place > > to put the functionality. Code that does override it will *not* stop > > working, it just won't support the new features. > > > > > Ah, I see. This doesn't sound *too* bad, but I'd personally prefer it > if the distinction were a bit more clearly drawn. I'd like frameworks > to be able to implement extension functionality without having to > first stub out functionality. In other words, if I want a test suite > without setUpClass, I'd prefer to avoid having an abstraction > inversion. +1 > > If we chose this implementation strategy there would be no > > compatibility issues for existing tests / frameworks that don't use > > the new features. > > That's very good to hear. It does however get tougher to be 'stdlib compatible' for frameworks that extend the stdlib - at least with how extensions work today. > > If tests do want to use the new features then the framework authors > > will need to ensure they are compatible with them. This seems like a > > reasonable trade-off to me. We can ensure that it is easy to write > > custom TestSuite objects that work with earlier versions of unittest > > but are also compatible with setUpClass in 2.7 (and document the > > recipe - although I expect it will just mean that TestSuite.run > > should call a single method if it exists). > > > > > This is something that I hope Jonathan Lange or Robert Collins will > chime in to comment on: expanding the protocol between suite and test > is an area which is fraught with peril, but it seems like it's > something that test framework authors always want to do. (Personally, > *I* really want to do it because I want to be able to run things > asynchronously, so the semantics of 'run()' need to change pretty > dramatically to support that...) It might be good to eventually > develop a general mechanism for this, rather than building up an > ad-hoc list of test-feature compatibility recipes which involve a list > of if hasattr(...): foo(); checks in every suite implementation. Please have a look at testtools.TestCase.run - it's incomplete, but it's working towards making it possible for trial to not need to replace run, but instead provide a couple of hooks (registered during setUp) to handle what you need. What it currently offers is catching additional exceptions for you, which is a common form of extension. bzrlib is using this quite successfully, and we deleted a lot of code that overlapped the stdlib unittest run(). A rough sketch of the shape is below.
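To make that concrete, here is a minimal, hypothetical sketch of the idea - the names are invented for illustration and are not the actual testtools API: the case registers handlers during setUp and a single run() in the base class consults them, so a framework can add behaviour (record a skip, catch a framework-specific exception) without replacing run() wholesale.

    import sys
    import unittest

    class HookableTestCase(unittest.TestCase):
        # Hypothetical base class: frameworks register handlers in setUp
        # instead of overriding run().
        def setUp(self):
            super(HookableTestCase, self).setUp()
            self.exception_handlers = []  # list of (exception type, callback)

        def add_exception_handler(self, exc_type, callback):
            self.exception_handlers.append((exc_type, callback))

        def handle_exception(self, result):
            # Called by the base run() when the test method raises; returns
            # True if a registered handler consumed the exception.
            exc_type, value, tb = sys.exc_info()
            for handled_type, callback in self.exception_handlers:
                if issubclass(exc_type, handled_type):
                    callback(self, result, (exc_type, value, tb))
                    return True
            return False

A run() that tries the test method, calls self.handle_exception(result) on failure and only then falls back to result.addError() is all the base class needs; subclasses and mixins stay out of run() entirely.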
> > Perhaps a better idea might be to also add startTest and stopTest > > methods to TestSuite so that frameworks can build in features like > > timing tests (etc) without having to override run itself. This is > > already possible in the TestResult of course, which is a more common > > extensibility point in *my* experience. > > > > I think timing and monitoring tests can mostly be done in the > TestResult class; those were bad examples. There's stuff like > synthesizing arguments for test methods, or deciding to repeat a > potentially flaky test method before reporting a failure, which are > not possible to do from the result. I'm not sure that startTest and > stopTest hooks help with those features, the ones which really need > suites; it would seem it mostly gives you a hook to do stuff that > could already be done in TestResult anyway. Also it's not really possible to 'run one thing' around a test at the moment - there's no good place (without changing tests or doing somewhat convoluted stuff) to have custom code sit in the stack above the test code - this makes it harder to handle: - profiling - drop-into-a-debugger - $other use case This is also in my hit-list of things to solve-and-propose-for-stdlib-unittest that I blogged about a while back. -Rob -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From techtonik at gmail.com Tue Feb 16 11:38:05 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 16 Feb 2010 12:38:05 +0200 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) Message-ID: Hello, So far, Python timezone handling is far from "pythonic". There is no function to get the current UTC offset, no intuitive API to get the DST of the current time zone and whether it is active, and no functions to work with internet timestamps (RFC 3339). In my case [1] it took about one month and five people to get the right solution for a valid RFC 3339 timestamp. One of the reasons I see is that date/time functions are implemented in C, they expose a C API, and there are not many people who can help and patch them. I am sure many current Python users will appreciate UTC functions and an improved date/time API more than any new language features. That's why I would like to propose these enhancements for a coding sprint at the forthcoming PyCon. I am located in Europe and can't attend PyCon, but I have summarized the date/time issues related to UTC below. More open UTC-related Python issues: http://bugs.python.org/issue1647654 No obvious and correct way to get the time zone offset http://bugs.python.org/issue1667546 Time zone-capable variant of time.localtime http://bugs.python.org/issue7662 time.utcoffset() http://bugs.python.org/issue7229 [PATCH] Manual entry for time.daylight can be misleading http://bugs.python.org/issue5094 datetime lacks concrete tzinfo impl. for UTC http://bugs.python.org/issue7584 datetime.rfcformat() for Date and Time on the Internet (support RFC 3339, ISO 8601 datetime format) http://bugs.python.org/issue665194 datetime-RFC2822 roundtripping http://bugs.python.org/issue6280 calendar.timegm() belongs in time module, next to time.gmtime() All solutions require C expertise. If it proves impossible to find experts able to modify the current implementation, then perhaps it would be realistic to create Python stubs now so the solution can be coded in Python later? To show how awkward the current situation is, a small sketch follows.
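Here is roughly what it takes today just to produce a local-time RFC 3339 / ISO 8601 timestamp with the standard library alone. The FixedLocalOffset class is invented for the example and is not in the stdlib; note that it takes its offset from time.timezone, so it silently ignores DST - exactly the kind of trap that keeps catching people:

    import time
    from datetime import datetime, timedelta, tzinfo

    class FixedLocalOffset(tzinfo):
        # Fixed offset of the local (non-DST) zone, seconds east of UTC.
        # Wrong for half of the year anywhere that observes DST.
        OFFSET = timedelta(seconds=-time.timezone)
        def utcoffset(self, dt):
            return self.OFFSET
        def dst(self, dt):
            return timedelta(0)
        def tzname(self, dt):
            return None

    # isoformat() on an aware datetime appends the +HH:MM offset,
    # which is what RFC 3339 wants.
    print datetime.now(FixedLocalOffset()).isoformat()

A one-call way to get this right, DST included, is what the issues above are asking for.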
FWIW, this proposal is from my other issue about problems with Python date/time in a separate tracker on Google Code [2]. [1] http://bugs.python.org/issue7582 [patch] diff.py to use iso timestamp [2] http://code.google.com/p/rainforce/issues/detail?id=10 python: date/time is a mess -- anatoly t. From victor.stinner at haypocalc.com Tue Feb 16 12:52:06 2010 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Tue, 16 Feb 2010 12:52:06 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: Message-ID: <201002161252.07049.victor.stinner@haypocalc.com> Hi, On Tuesday 16 February 2010 11:38:05, anatoly techtonik wrote: > So far, Python timezone handling is far from "pythonic". There is no > function to get the current UTC offset, (...) There is the time.timezone attribute: the UTC offset in seconds. > One of the reasons I see is that date/time functions are > implemented in C, they expose a C API, and there are not many people who > can help and patch them. Is it not possible to extend the Python datetime module in Python? There are already 3rd party libraries: http://pypi.python.org/pypi/pytz http://pypi.python.org/pypi/django-timezones http://www.egenix.com/products/python/mxBase/mxDateTime/ ... Why not integrate an existing module (well tested, with documentation, a user base, etc.)? > I am sure many current Python users will appreciate UTC functions and > improved date/time API more than any new language features. Sure, the current API is complex and has little documentation. > That's why I would like to propose these enhancements for a coding sprint at > the forthcoming PyCon. Excellent idea :) -- There are also some interesting open issues about the datetime module: http://bugs.python.org/issue1289118 - timedelta multiply and divide by floating point http://bugs.python.org/issue1673409 - datetime module missing some important methods http://bugs.python.org/issue2706 - datetime: define division timedelta/timedelta http://bugs.python.org/issue2736 - datetime needs an "epoch" method Bugs about old timestamps: http://bugs.python.org/issue1726687 - Bug found in datetime for Epoch time = -1 http://bugs.python.org/issue1777412 - Python's strftime dislikes years before 1900 http://bugs.python.org/issue2494 - Can't round-trip datetimes<->timestamps prior to 1970 on Windows Victor From skip at pobox.com Tue Feb 16 13:03:14 2010 From: skip at pobox.com (skip at pobox.com) Date: Tue, 16 Feb 2010 06:03:14 -0600 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: Message-ID: <19322.35202.132728.326728@montanaro.dyndns.org> Maybe an alternate sprint idea would be to incorporate dateutil into the Python core: http://labix.org/python-dateutil Skip From skip at pobox.com Tue Feb 16 13:05:22 2010 From: skip at pobox.com (skip at pobox.com) Date: Tue, 16 Feb 2010 06:05:22 -0600 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: Message-ID: <19322.35330.429508.954732@montanaro.dyndns.org> Maybe an alternate sprint idea would be to incorporate dateutil into the Python core: http://labix.org/python-dateutil Whoops...
(just waking up - still need that first cup of coffee) While incorporating dateutil into the core would be nice (in my opinion at least), I was really thinking of pytz: http://pytz.sourceforge.net/ Skip From ncoghlan at gmail.com Tue Feb 16 13:31:33 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Feb 2010 22:31:33 +1000 Subject: [Python-Dev] Release timer for Core Development page In-Reply-To: References: Message-ID: <4B7A9025.8000907@gmail.com> anatoly techtonik wrote: > Does anybody else find this feature useful for Python development? Not particularly. The target release dates are in the release PEPs and if I wanted a timer I'd add it to my personal calendar. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Tue Feb 16 13:35:26 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Feb 2010 22:35:26 +1000 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <19322.35330.429508.954732@montanaro.dyndns.org> References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: <4B7A910E.9060401@gmail.com> skip at pobox.com wrote: > While incorporating dateutil into the core would be nice (in my opinion at > least) I believe that idea has come up before - as I recall, the major concern was with the heuristic nature of some of the 'natural language' date parsing. (I could be completely misremembering though...) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Tue Feb 16 13:36:58 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Feb 2010 22:36:58 +1000 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <201002161252.07049.victor.stinner@haypocalc.com> References: <201002161252.07049.victor.stinner@haypocalc.com> Message-ID: <4B7A916A.2040006@gmail.com> Victor Stinner wrote: > Hi, > > Le Tuesday 16 February 2010 11:38:05 anatoly techtonik, vous avez ?crit : >> So far, Python timezone handling is far from "pythonic". There is no >> function to get current UTC offset, (...) > > There is the time.timezone attribute: UTC offset in seconds. > >> One of the reasons I see is that date/time functions are >> implemented in C, they expose C API, and there are not many people who >> can help and patch them. > > Is it no possible to extend Python datetime module in Python? Splitting datetime into a datetime.py with an underlying _datetime.c is an idea definitely worth exploring - that module structure makes it much easier to accelerate things that need it, while allowing less critical or more complex aspects to be written in the higher level language. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From dirkjan at ochtman.nl Tue Feb 16 13:42:28 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 16 Feb 2010 13:42:28 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <19322.35330.429508.954732@montanaro.dyndns.org> References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: On Tue, Feb 16, 2010 at 13:05, wrote: > ? ?Maybe an alternate sprint idea would be to incorporate dateutil into the > ? ?Python core: http://labix.org/python-dateutil > > Whoops... 
?(just waking up - still need that first cup of coffee) > > While incorporating dateutil into the core would be nice (in my opinion at > least), I was really thinking of pytz: http://pytz.sourceforge.net/ I think dateutil is fairly heavy for the stdlib, but I think pytz would be a very good candidate for inclusion. Without it, the timezone support in datetime is hardly usable. I'd be happy to participate in a PyCon sprint to get datetime issues sorted out and/or work on pytz inclusion. Cheers, Dirkjan From tseaver at palladion.com Tue Feb 16 16:26:10 2010 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 16 Feb 2010 10:26:10 -0500 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <19322.35330.429508.954732@montanaro.dyndns.org> References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 skip at pobox.com wrote: > Maybe an alternate sprint idea would be to incorporate dateutil into the > Python core: http://labix.org/python-dateutil > > Whoops... (just waking up - still need that first cup of coffee) > > While incorporating dateutil into the core would be nice (in my opinion at > least), I was really thinking of pytz: http://pytz.sourceforge.net/ Because timezones are defined politically, they change frequently. pytz is released frequently (multiple times per year) to accomodate those changes: I can't see any way to preserve that flexibility if the package were part of stdlib. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkt6uQwACgkQ+gerLs4ltQ4BiwCcDKbfmFmapdQZ188AbiiJ8iCD JvcAoMozT+bcXDCX1tQ5FuLqpCTTbxZe =OP1W -----END PGP SIGNATURE----- From dirkjan at ochtman.nl Tue Feb 16 16:43:40 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 16 Feb 2010 16:43:40 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: On Tue, Feb 16, 2010 at 16:26, Tres Seaver wrote: > Because timezones are defined politically, they change frequently. ?pytz > is released frequently (multiple times per year) to accomodate those > changes: ?I can't see any way to preserve that flexibility if the > package were part of stdlib. By using what the OS provides. At least on Linux, the basic timezone data is usually updated by other means (at least on the distro I'm familiar with, it's updated quite often, too; through the package manager). I'm assuming Windows and OS X would also be able to provide something like this. I think pytz already looks at this data if it's available (precisely because it might well be newer). Cheers, Dirkjan From exarkun at twistedmatrix.com Tue Feb 16 17:15:14 2010 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Tue, 16 Feb 2010 16:15:14 -0000 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: <20100216161514.26099.1541592303.divmod.xquotient.1414@localhost.localdomain> On 03:43 pm, dirkjan at ochtman.nl wrote: >On Tue, Feb 16, 2010 at 16:26, Tres Seaver >wrote: >>Because timezones are defined politically, they change frequently. 
>>pytz >>is released frequently (multiple times per year) to accomodate those >>changes: ?I can't see any way to preserve that flexibility if the >>package were part of stdlib. > >By using what the OS provides. At least on Linux, the basic timezone >data is usually updated by other means (at least on the distro I'm >familiar with, it's updated quite often, too; through the package >manager). I'm assuming Windows and OS X would also be able to provide >something like this. I think pytz already looks at this data if it's >available (precisely because it might well be newer). pytz includes its own timezone database. It doesn't use the system timezone data, even on Linux. Jean-Paul From tseaver at palladion.com Tue Feb 16 17:18:42 2010 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 16 Feb 2010 11:18:42 -0500 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: <4B7AC562.1050405@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dirkjan Ochtman wrote: > On Tue, Feb 16, 2010 at 16:26, Tres Seaver wrote: >> Because timezones are defined politically, they change frequently. pytz >> is released frequently (multiple times per year) to accomodate those >> changes: I can't see any way to preserve that flexibility if the >> package were part of stdlib. > > By using what the OS provides. At least on Linux, the basic timezone > data is usually updated by other means (at least on the distro I'm > familiar with, it's updated quite often, too; through the package > manager). I'm assuming Windows and OS X would also be able to provide > something like this. I think pytz already looks at this data if it's > available (precisely because it might well be newer). If that were so, I don't think Stuart would be going to the trouble to re-release the library 6 - 12 times per year. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkt6xVwACgkQ+gerLs4ltQ4q4ACdGRozE9rfoYkYGmNOiGTQIZyj CeMAoJlmEamyWUbHSQYA0Yq28t+YlbZT =UC3U -----END PGP SIGNATURE----- From popuser at christest2.dc.k12us.com Tue Feb 16 18:55:03 2010 From: popuser at christest2.dc.k12us.com (Pop User) Date: Tue, 16 Feb 2010 12:55:03 -0500 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <20100216161514.26099.1541592303.divmod.xquotient.1414@localhost.localdomain> References: <19322.35330.429508.954732@montanaro.dyndns.org> <20100216161514.26099.1541592303.divmod.xquotient.1414@localhost.localdomain> Message-ID: <4B7ADBF7.1080304@christest2.dc.k12us.com> On 2/16/2010 11:15 AM, exarkun at twistedmatrix.com wrote: > On 03:43 pm, dirkjan at ochtman.nl wrote: > pytz includes its own timezone database. It doesn't use the system > timezone data, even on Linux. dateutil can use the system timezone data. See tzfile. http://labix.org/python-dateutil#head-4e4386d98006f1e3cb9290a04bff7e01e584505b or on windows see tzwin. 
http://labix.org/python-dateutil#head-566bbb3e75e621ac00d2cb1b54abc09036b994f1 From brett at python.org Tue Feb 16 19:44:52 2010 From: brett at python.org (Brett Cannon) Date: Tue, 16 Feb 2010 10:44:52 -0800 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: Message-ID: On Tue, Feb 16, 2010 at 02:38, anatoly techtonik wrote: > Hello, > > So far, Python timezone handling is far from "pythonic". There is no > function to get current UTC offset, intuitive API to get DST of > current time zone and whenever it is active, no functions to work with > internet timestamps (RFC 3339). In my case [1] it took about one month > and five people to get the right solution for a valid RFC 3339 > timestamp. One of the reasons I see is that date/time functions are > implemented in C, they expose C API, and there are not many people who > can help and patch them. > > I am sure many current Python users will appreciate UTC functions and > improved date/time API more than any new language features. That's why > I would like to propose this enhancements for a coding spring on > forthcoming PyCon. I am located in Europe and can't attend PyCon, but > I summarized date/time issues related to UTC below. > > More open UTC-related Python issues: > http://bugs.python.org/issue1647654 ?No obvious and correct way to get > the time zone offset > http://bugs.python.org/issue1667546 ?Time zone-capable variant of time.localtime > http://bugs.python.org/issue7662 ? ? time.utcoffset() > http://bugs.python.org/issue7229 ? ? [PATCH] Manual entry for > time.daylight can be misleading > http://bugs.python.org/issue5094 ? ? datetime lacks concrete tzinfo > impl. for UTC Issue 5094 already has a patch that is nearly complete to provide a default UTC object (and requisite changes to functions to no longer be naive but to use UTC). I did a code review on it in Rietveld and it only has minor things to correct. > http://bugs.python.org/issue7584 ? ? datetime.rfcformat() for Date and > Time on the Internet (support RFC 3339, ISO 8601 datetime format) > http://bugs.python.org/issue665194 ? datetime-RFC2822 roundtripping > http://bugs.python.org/issue6280 ? ? calendar.timegm() belongs in time > module, next to time.gmtime() > > All solutions require C expertise. If it will be impossible to find > experts able to modify current implementation, then perhaps it could > be real to create Python stub for coding solution in Python later? Probably worth doing as I am sure everyone would prefer to maintain a pure Python version when possible and only drop into C as needed. See heapq, warnings, and a couple of others if you don't know how to properly do a Python/C module split. -Brett > > FWIW, this proposal is from my other issue about problems with Python > date/time in separate tracker on Google Code [2]. > > [1] http://bugs.python.org/issue7582 ? ?[patch] diff.py to use iso timestamp > [2] http://code.google.com/p/rainforce/issues/detail?id=10 ? ?python: > date/time is a mess > -- > anatoly t. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From ddicato at microsoft.com Tue Feb 16 22:19:00 2010 From: ddicato at microsoft.com (David DiCato) Date: Tue, 16 Feb 2010 21:19:00 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation Message-ID: I have a minor concern about certain corner cases with math.hypot and complex.__abs__, namely when one component is infinite and one is not a number. If we execute the following code: import math inf = float('inf') nan = float('nan') print math.hypot(inf, nan) print abs(complex(nan, inf)) ... then we see that 'inf' is printed in both cases. The standard library tests (for example, test_cmath.py:test_abs()) seem to test for this behavior as well, and FWIW, I personally agree with this convention. However, the math module's documentation for both 2.6 and 3.1 states, "All functions return a quiet NaN if at least one of the args is NaN." math.pow(1.0, nan) is another such exception to the rule. Perhaps the documentation should be updated to reflect this. Thanks, - David -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Feb 16 23:12:58 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Feb 2010 08:12:58 +1000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: References: Message-ID: <4B7B186A.7050208@gmail.com> David DiCato wrote: > ? then we see that ?inf? is printed in both cases. The standard library > tests (for example, test_cmath.py:test_abs()) seem to test for this > behavior as well, and FWIW, I personally agree with this convention. > However, the math module?s documentation for both 2.6 and 3.1 states, > ?All functions return a quiet NaN if at least one of the args is NaN.? > > math.pow(1.0, nan) is another such exception to the rule. Perhaps the > documentation should be updated to reflect this. This sounds like a legitimate documentation bug for the tracker at bugs.python.org (bug reports tend to get lost/forgotten if they only exist on the mailing list). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From dickinsm at gmail.com Tue Feb 16 23:14:53 2010 From: dickinsm at gmail.com (Mark Dickinson) Date: Tue, 16 Feb 2010 22:14:53 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: References: Message-ID: <5c6f2a5d1002161414k37094d7eo8d369f6f47c65183@mail.gmail.com> On Tue, Feb 16, 2010 at 9:19 PM, David DiCato wrote: > I have a minor concern about certain corner cases with math.hypot and > complex.__abs__, namely when one component is infinite and one is not a > number. > as well, and FWIW, I personally agree with this convention. However, the > math module?s documentation for both 2.6 and 3.1 states, ?All functions > return a quiet NaN if at least one of the args is NaN.? Yes; this is a doc bug. Please could you open an issue on http://bugs.python.org ? > math.pow(1.0, nan) is another such exception to the rule. Perhaps the > documentation should be updated to reflect this. Yes, it should. Thanks! 
Mark From ddicato at microsoft.com Tue Feb 16 23:42:39 2010 From: ddicato at microsoft.com (David DiCato) Date: Tue, 16 Feb 2010 22:42:39 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: <5c6f2a5d1002161414k37094d7eo8d369f6f47c65183@mail.gmail.com> References: <5c6f2a5d1002161414k37094d7eo8d369f6f47c65183@mail.gmail.com> Message-ID: Ok, thanks! It's submitted as issue 7947. - David -----Original Message----- From: Mark Dickinson [mailto:dickinsm at gmail.com] Sent: Tuesday, February 16, 2010 2:15 PM To: David DiCato Cc: python-dev at python.org Subject: Re: [Python-Dev] math.hypot, complex.__abs__, and documentation On Tue, Feb 16, 2010 at 9:19 PM, David DiCato wrote: > I have a minor concern about certain corner cases with math.hypot and > complex.__abs__, namely when one component is infinite and one is not a > number. > as well, and FWIW, I personally agree with this convention. However, the > math module?s documentation for both 2.6 and 3.1 states, ?All functions > return a quiet NaN if at least one of the args is NaN.? Yes; this is a doc bug. Please could you open an issue on http://bugs.python.org ? > math.pow(1.0, nan) is another such exception to the rule. Perhaps the > documentation should be updated to reflect this. Yes, it should. Thanks! Mark From steve at pearwood.info Tue Feb 16 23:46:38 2010 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 17 Feb 2010 09:46:38 +1100 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: References: Message-ID: <201002170946.38799.steve@pearwood.info> On Wed, 17 Feb 2010 08:19:00 am David DiCato wrote: > I have a minor concern about certain corner cases with math.hypot and > complex.__abs__, namely when one component is infinite and one is not > a number. If we execute the following code: > > import math > inf = float('inf') > nan = float('nan') > print math.hypot(inf, nan) > print abs(complex(nan, inf)) > > ... then we see that 'inf' is printed in both cases. The standard > library tests (for example, test_cmath.py:test_abs()) seem to test > for this behavior as well, and FWIW, I personally agree with this > convention. What's the justification for that convention? It seems wrong to me. If you expand out hypot and substitute a=inf and b=nan, you get: >>> math.sqrt(inf*inf + nan*nan) nan which agrees with my pencil-and-paper calculation: sqrt(inf*inf + nan*nan) = sqrt(inf + nan) = sqrt(nan) = nan -- Steven D'Aprano From ddicato at microsoft.com Tue Feb 16 23:54:08 2010 From: ddicato at microsoft.com (David DiCato) Date: Tue, 16 Feb 2010 22:54:08 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: <201002170946.38799.steve@pearwood.info> References: <201002170946.38799.steve@pearwood.info> Message-ID: Mathematically, think of nan as 'indeterminate'. When you're trying to get the magnitude of a vector, you know that it's infinite if even one of the components is infinite. So, the fact that the other component is indeterminate can be ignored. It's the same with math.pow(1.0, float('nan')); the second argument simply doesn't matter when the first is 1.0. FWIW, these conventions also exist in the C99 standard. 
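A quick interactive session illustrates the convention (results assume an IEEE 754 platform whose C library follows C99; this matches what the snippet in my first message printed):

    >>> import math
    >>> math.hypot(float('inf'), float('nan'))
    inf
    >>> abs(complex(float('nan'), float('inf')))
    inf
    >>> math.pow(1.0, float('nan'))
    1.0

In each case there is only one answer the result could possibly be, whatever value the NaN stands for, so returning that answer loses nothing.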
Hope this helps, - David -----Original Message----- From: python-dev-bounces+ddicato=microsoft.com at python.org [mailto:python-dev-bounces+ddicato=microsoft.com at python.org] On Behalf Of Steven D'Aprano Sent: Tuesday, February 16, 2010 2:47 PM To: python-dev at python.org Subject: Re: [Python-Dev] math.hypot, complex.__abs__, and documentation On Wed, 17 Feb 2010 08:19:00 am David DiCato wrote: > I have a minor concern about certain corner cases with math.hypot and > complex.__abs__, namely when one component is infinite and one is not > a number. If we execute the following code: > > import math > inf = float('inf') > nan = float('nan') > print math.hypot(inf, nan) > print abs(complex(nan, inf)) > > ... then we see that 'inf' is printed in both cases. The standard > library tests (for example, test_cmath.py:test_abs()) seem to test > for this behavior as well, and FWIW, I personally agree with this > convention. What's the justification for that convention? It seems wrong to me. If you expand out hypot and substitute a=inf and b=nan, you get: >>> math.sqrt(inf*inf + nan*nan) nan which agrees with my pencil-and-paper calculation: sqrt(inf*inf + nan*nan) = sqrt(inf + nan) = sqrt(nan) = nan -- Steven D'Aprano _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/ddicato%40microsoft.com From dickinsm at gmail.com Wed Feb 17 00:06:01 2010 From: dickinsm at gmail.com (Mark Dickinson) Date: Tue, 16 Feb 2010 23:06:01 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: <201002170946.38799.steve@pearwood.info> References: <201002170946.38799.steve@pearwood.info> Message-ID: <5c6f2a5d1002161506t2c75341ej142fb9385cdbce18@mail.gmail.com> On Tue, Feb 16, 2010 at 10:46 PM, Steven D'Aprano > What's the justification for that convention? It seems wrong to me. It's difficult to do better than to point to Kahan's writings. See http://www.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF and particularly the discussion on page 8 that starts "Were there no way to get rid of NaNs ...". I don't think it covers hypot, but the same justification given for having nan**0 == 1 applies here. Interestingly, he says that at the time of writing, 1**nan == nan is the preferred alternative. But since then, the standards (well, at least C99 and IEEE 754-2008) have come out in favour of 1**nan == 1. Mark From dickinsm at gmail.com Wed Feb 17 00:09:39 2010 From: dickinsm at gmail.com (Mark Dickinson) Date: Tue, 16 Feb 2010 23:09:39 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: <5c6f2a5d1002161506t2c75341ej142fb9385cdbce18@mail.gmail.com> References: <201002170946.38799.steve@pearwood.info> <5c6f2a5d1002161506t2c75341ej142fb9385cdbce18@mail.gmail.com> Message-ID: <5c6f2a5d1002161509w7b21236cue14bc34d712763b5@mail.gmail.com> On Tue, Feb 16, 2010 at 11:06 PM, Mark Dickinson wrote: > and particularly the discussion on page 8 that starts "Were there no > way to get rid of NaNs ...". ?I don't think it covers hypot, but the Whoops. I should have reread that article myself. The behaviour of hypot *is* mentioned, on page 7. 
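The nan**0 case mentioned above is easy to check from the prompt; CPython special-cases a zero exponent before the base is even examined, so this particular identity does not depend on the platform libm:

>>> float('nan') ** 0
1.0
>>> float('inf') ** 0
1.0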
Mark From greg.ewing at canterbury.ac.nz Tue Feb 16 22:50:31 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 17 Feb 2010 10:50:31 +1300 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: Message-ID: <4B7B1327.3060908@canterbury.ac.nz> Brett Cannon wrote: > Issue 5094 already has a patch that is nearly complete to provide a > default UTC object (and requisite changes to functions to no longer be > naive but to use UTC). Are you sure it's really a good idea to default to UTC? I thought it was considered a feature that datetime objects are naive unless you explicitly specify a timezone. -- Greg From stuart at stuartbishop.net Wed Feb 17 03:15:25 2010 From: stuart at stuartbishop.net (Stuart Bishop) Date: Wed, 17 Feb 2010 09:15:25 +0700 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <4B7AC562.1050405@palladion.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <4B7AC562.1050405@palladion.com> Message-ID: <6bc73d4c1002161815k7262d229kcdefc738cb303f9a@mail.gmail.com> On Tue, Feb 16, 2010 at 11:18 PM, Tres Seaver wrote: > Dirkjan Ochtman wrote: >> On Tue, Feb 16, 2010 at 16:26, Tres Seaver wrote: >>> Because timezones are defined politically, they change frequently. ?pytz >>> is released frequently (multiple times per year) to accomodate those >>> changes: ?I can't see any way to preserve that flexibility if the >>> package were part of stdlib. >> >> By using what the OS provides. At least on Linux, the basic timezone >> data is usually updated by other means (at least on the distro I'm >> familiar with, it's updated quite often, too; through the package >> manager). I'm assuming Windows and OS X would also be able to provide >> something like this. I think pytz already looks at this data if it's >> available (precisely because it might well be newer). > > If that were so, I don't think Stuart would be going to the trouble to > re-release the library 6 - 12 times per year. The Debian, Ubuntu and I think Redhat packages all use the system zoneinfo database - there are hooks in there to support package maintainers that want to do this. This way the package can be included in the supported release but still receive timezone information updates via the OS (but no code updates, but these are rare and usually irrelevant unless you where the person who filed the bug ;) ). I'd be happy to rework pytz for the standard Library using the system installed zoneinfo database if it is available. I think for the standard library though, it needs to follow the documented API better rather than the .normalize() & .localize rubbish I needed to get localized datetime arithmetic working correctly. Having seen the confusion and bug reports over the last few years, I think people who need this are in the minority and pytz can still exist as a separate package to support them. tzwin could be used on Windows platforms - I'd need to look into that further to see if the API can remain consistent between *nix and Windows. I suspect that pytz without the .normalize() & .localize() rubbish may look remarkably similar to dateutil so that might be a better option to start from. We could consider extending the existing datetime library to support localized datetime arithmetic. 
This would either involve adding an extra bit to datetime instances to support the is_dst flag (originally deemed unacceptable as it increased the pickle size by a whole byte), or better support for tzinfo implementations to store the is_dst flag in the tzinfo instance (the approach pytz used). This requires a C programmer though and I'm so very, very rusty. I am not at pycon alas. Some of my coworkers from Canonical will be though and they might be interested as we use pytz for Launchpad and other Canonical projects. -- Stuart Bishop http://www.stuartbishop.net/ From a.badger at gmail.com Wed Feb 17 05:13:01 2010 From: a.badger at gmail.com (Toshio Kuratomi) Date: Tue, 16 Feb 2010 23:13:01 -0500 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <6bc73d4c1002161815k7262d229kcdefc738cb303f9a@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <4B7AC562.1050405@palladion.com> <6bc73d4c1002161815k7262d229kcdefc738cb303f9a@mail.gmail.com> Message-ID: <20100217041301.GH5440@unaka.lan> On Wed, Feb 17, 2010 at 09:15:25AM +0700, Stuart Bishop wrote: > > The Debian, Ubuntu and I think Redhat packages all use the system > zoneinfo database - there are hooks in there to support package > maintainers that want to do this. Where RedHat == Fedora && EPEL packages for RHEL/Centos 5, yes :-) -Toshio -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From stephen at blackroses.com Wed Feb 17 06:49:41 2010 From: stephen at blackroses.com (stephen) Date: Wed, 17 Feb 2010 00:49:41 -0500 Subject: [Python-Dev] embedding Python interpreter in non-console windows application Message-ID: <990ae6531002162149u184fce78v9a12b02df4fcdad9@mail.gmail.com> Hello, THE PROBLEM: I am having a problem that I have seen asked quite a bit on the web, with little to no follow up. The problem is essentially this. When embedding (LoadLibraryA()) the python interpreter dll in a non-windows application the developer must first create a console for python to do output/input with. I properly initialize the CRT and AllocConsole() to do this. I then GetSTDHandle() for stdin and stdout accordingly and open those handles with the requisite flags "read" for STDIN and "write" for stdout. This all works great and is then verified and tested to work by printf() and fgets(). This issue however happens when attempting to PyRun_InteractiveLoop() and PyRun_SimpleString(). A PyRun_SimpleString("print 'test'") displays nothing in my freshly allocated console window. Similarly a PyRun_InteractiveLoop(stdin, NULL); yields nothing either even though the line printf("testing"); directly ahead of it works just fine. Does anyone have insight on how I can make this work with the freshly allocated console's stdin/stdout/stderr? SPECULATION: That is the question, so now on to the speculation. I suspect that something in the python runtime doesn't "get handles" correctly for STDIN and STDOUT upon initialization. I have perused the source code to find out exactly how this is done and I suspect that it starts in PyInitializeEx with calls to PySys_GetObject("stdin") and "stdout" accordingly. However I don't actually see where this translates into the Python runtime checking with the C-runtime for the "real" handles to STDIN and STDOUT. I dont ever see the Python runtime "ask the system" where his handles to STDIN and STDOUT are. 
SUBSEQUENT QUESTION: Is there anything I can do to initialize the Python interpreter (running as a dll) pointing him at his appropriate STDIN and STDOUT handles? -------------- next part -------------- An HTML attachment was scrubbed... URL: From danchr at gmail.com Wed Feb 17 09:47:17 2010 From: danchr at gmail.com (Dan Villiom Podlaski Christiansen) Date: Wed, 17 Feb 2010 09:47:17 +0100 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <20100207042709.26099.1983212382.divmod.xquotient.613@localhost.localdomain> References: <20100207042709.26099.1983212382.divmod.xquotient.613@localhost.localdomain> Message-ID: <6C75C25A-7E40-4949-88D7-0963EA130072@gmail.com> On 7 Feb 2010, at 05:27, exarkun at twistedmatrix.com wrote: > Do you know of a case where it's actually slow? If not, how convincing should this argument really be? Perhaps we can measure it on a few platforms before passing judgement. On Mac OS X at least, system calls are notoriously slow. I think it has to do with Mach overhead, or something? $ arch -arch ppc /usr/bin/python2.6 -m timeit -s 'def f(): pass' 'f()' 1000000 loops, best of 3: 0.476 usec per loop $ arch -arch ppc /usr/bin/python2.6 -m timeit -s 'from os import getcwd' 'getcwd()' 10000 loops, best of 3: 21.9 usec per loop $ arch -arch i386 /usr/bin/python2.6 -m timeit -s 'def f(): pass' 'f()' 1000000 loops, best of 3: 0.234 usec per loop $ arch -arch i386 /usr/bin/python2.6 -m timeit -s 'from os import getcwd' 'getcwd()' 100000 loops, best of 3: 14.1 usec per loop $ arch -arch x86_64 /usr/bin/python2.6 -m timeit -s 'def f(): pass' 'f()' 10000000 loops, best of 3: 0.182 usec per loop $ arch -arch x86_64 /usr/bin/python2.6 -m timeit -s 'from os import getcwd' 'getcwd()' 100000 loops, best of 3: 11 usec per loop For maximum reproducibility, I used the stock Python 2.6.1 included in Mac OS X 10.6.2. In other words ?os.getcwd()? is more than fifty times as slow as a regular function call when using Mac OS X. -- Dan Villiom Podlaski Christiansen danchr at gmail.com -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1943 bytes Desc: not available URL: From stefan_ml at behnel.de Wed Feb 17 10:16:35 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 17 Feb 2010 10:16:35 +0100 Subject: [Python-Dev] embedding Python interpreter in non-console windows application In-Reply-To: <990ae6531002162149u184fce78v9a12b02df4fcdad9@mail.gmail.com> References: <990ae6531002162149u184fce78v9a12b02df4fcdad9@mail.gmail.com> Message-ID: stephen, 17.02.2010 06:49: > THE PROBLEM: > I am having a problem that I have seen asked quite a bit on the web, with > little to no follow up. Note that this list is about developing the CPython core runtime, not about solving problems with Python code or Python usage. See the comp.lang.python newsgroup for that (or the corresponding mailing list mirror). 
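One possible workaround for the question above is to rebind the standard streams from inside the embedded interpreter, so the file objects are created by Python's own C runtime rather than inherited from the host application. This is an untested sketch: it assumes AllocConsole() has already succeeded and that the CRT used by the Python DLL accepts the Win32 console device names CONOUT$ and CONIN$.

# Run from the host via PyRun_SimpleString() after Py_Initialize().
import sys
sys.stdout = open('CONOUT$', 'w', 0)   # unbuffered, so output appears immediately
sys.stderr = sys.stdout
sys.stdin = open('CONIN$', 'r')
print 'test'                           # should now show up in the new console

Because the console device is opened by the interpreter's own CRT, this also sidesteps any mismatch between the C runtime of the host application and the one the Python DLL was built against.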
Stefan From asmodai at in-nomine.org Wed Feb 17 11:24:19 2010 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Wed, 17 Feb 2010 11:24:19 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <6bc73d4c1002161815k7262d229kcdefc738cb303f9a@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <4B7AC562.1050405@palladion.com> <6bc73d4c1002161815k7262d229kcdefc738cb303f9a@mail.gmail.com> Message-ID: <20100217102419.GC14271@nexus.in-nomine.org> -On [20100217 03:19], Stuart Bishop (stuart at stuartbishop.net) wrote: >The Debian, Ubuntu and I think Redhat packages all use the system >zoneinfo database - there are hooks in there to support package >maintainers that want to do this. This way the package can be included >in the supported release but still receive timezone information >updates via the OS (but no code updates, but these are rare and >usually irrelevant unless you where the person who filed the bug ;) ). This can also work for all the BSDs since they include the Olson zoneinfo data in the base system as well. And that will probably mean Mac OS X as well, if they stuck to what FreeBSD had in place for that. Can anyone verify that? -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Anything becomes possible, after you find the courage to admit that nothing is certain. From regebro at gmail.com Wed Feb 17 12:32:31 2010 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 17 Feb 2010 12:32:31 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <19322.35330.429508.954732@montanaro.dyndns.org> Message-ID: <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> On Tue, Feb 16, 2010 at 13:42, Dirkjan Ochtman wrote: > On Tue, Feb 16, 2010 at 13:05, ? wrote: >> ? ?Maybe an alternate sprint idea would be to incorporate dateutil into the >> ? ?Python core: http://labix.org/python-dateutil >> >> Whoops... ?(just waking up - still need that first cup of coffee) >> >> While incorporating dateutil into the core would be nice (in my opinion at >> least), I was really thinking of pytz: http://pytz.sourceforge.net/ > > I think dateutil is fairly heavy for the stdlib, but I think pytz > would be a very good candidate for inclusion. Without it, the timezone > support in datetime is hardly usable. The timezone database is updated several times per year. You can *not* include it in the standard library. On Tue, Feb 16, 2010 at 16:43, Dirkjan Ochtman wrote: > By using what the OS provides. The OS often does not. > At least on Linux, the basic timezone > data is usually updated by other means (at least on the distro I'm > familiar with, it's updated quite often, too; through the package > manager). I'm assuming Windows and OS X would also be able to provide > something like this. The Windows timezone data sucks donkeyballs through a hose. Thus, if the timezone implementations from pytz was in the standard library, and the timezone data not, they would not be useable on Windows. So, no can do. Also, different Unices often have slightly different names and organisations of the Olsen database, which would create confusions and incompatibilities, so that's probably also not the best solution. == So, what to do? Use Pytz! == There is no need to stick Pytz in the standard library. It's available on PyPI, updated frequently, etc. What we can do is point to it from the documentation. But before that, it needs a fix. 
Pytz is great, but missing one thing: Wrappers for the current locale settings. This is necessary, because there is no way of realiably figuring out the current locale. See http://regebro.wordpress.com/2008/05/10/python-and-time-zones-part-2-the-beast-returns/ (and http://regebro.wordpress.com/2007/12/18/python-and-time-zones-fighting-the-beast/ for other timezone issues). These kinds of wrappers exist in dateutils.tz. It would be great if that type of functionality could get into Pytz as well. A sprint to do this and fix the issues in the tracker should solve the issues, I think. There is no need to move things into the core. An Pytz could use more maintainers, Stuart tends not to answer emails, I assume this is because he is overw -- Lennart Regebro: http://regebro.wordpress.com/ Python 3 Porting: http://python-incompatibility.googlecode.com/ +33 661 58 14 64 From regebro at gmail.com Wed Feb 17 12:34:45 2010 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 17 Feb 2010 12:34:45 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> Message-ID: <319e029f1002170334g45476e4eyfca0c111b0cbf45@mail.gmail.com> On Wed, Feb 17, 2010 at 12:32, Lennart Regebro wrote: > These kinds of wrappers exist in dateutils.tz. It would be great if > that type of functionality could get into Pytz as well. A sprint to do > this and fix the issues in the tracker should solve the issues, I > think. There is no need to move things into the core. An Pytz could > use more maintainers, Stuart tends not to answer emails, I assume this > is because he is overw Bloody gmail! I did NOT press send. Glah. Stuart tends not to answer emails, I assume this is because he is overworked, so more eyes on Pytz is probably a good idea. He is welcome to correct me if this is not so. :) -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From steve at pearwood.info Wed Feb 17 13:22:12 2010 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 17 Feb 2010 23:22:12 +1100 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: <5c6f2a5d1002161506t2c75341ej142fb9385cdbce18@mail.gmail.com> References: <201002170946.38799.steve@pearwood.info> <5c6f2a5d1002161506t2c75341ej142fb9385cdbce18@mail.gmail.com> Message-ID: <201002172322.12792.steve@pearwood.info> On Wed, 17 Feb 2010 10:06:01 am Mark Dickinson wrote: > On Tue, Feb 16, 2010 at 10:46 PM, Steven D'Aprano > > > > What's the justification for that convention? It seems wrong to me. > > It's difficult to do better than to point to Kahan's writings. See > > http://www.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF Well, who am I to question Kahan? I guess if you interpret nan as "indeterminate", than hypot(inf, nan) should be inf; but if you interpret it as "not a number", then it should be nan. Since NANs can be both, I guess we're stuck with one or the other. So I'm satisfied that there's a good reason for the behaviour, even if I'm not 100% convinced it's the best reason. On a related note, why the distinction here? 
>>> inf*inf inf >>> inf**2 Traceback (most recent call last): File "", line 1, in OverflowError: (34, 'Numerical result out of range') -- Steven D'Aprano From dickinsm at gmail.com Wed Feb 17 14:05:46 2010 From: dickinsm at gmail.com (Mark Dickinson) Date: Wed, 17 Feb 2010 13:05:46 +0000 Subject: [Python-Dev] math.hypot, complex.__abs__, and documentation In-Reply-To: <201002172322.12792.steve@pearwood.info> References: <201002170946.38799.steve@pearwood.info> <5c6f2a5d1002161506t2c75341ej142fb9385cdbce18@mail.gmail.com> <201002172322.12792.steve@pearwood.info> Message-ID: <5c6f2a5d1002170505s21522191iad77d95b88858013@mail.gmail.com> [With apologies for Steven for the duplicate email.] On Wed, Feb 17, 2010 at 12:22 PM, Steven D'Aprano wrote: > Well, who am I to question Kahan? Yes, there I go with the argument from authority. But while we shouldn't instantly accept Kahan's arguments just because he's Kahan, it would be equally foolish for us mere mortals to ignore words from one of the prime movers of the IEEE 754 standard. :-) > I guess if you interpret nan as "indeterminate", than hypot(inf, nan) > should be inf; but if you interpret it as "not a number", then it > should be nan. Since NANs can be both, I guess we're stuck with one or > the other. Apart from the 'should be's, I think there's also a practical aspect to consider: I'm guessing that part of the reason for this sort of behaviour is that it make it more likely for numerical code to 'do the right thing' without extra special-case handling, in much the same way that infinities can appear and disappear during a numerical calculation, leaving a valid finite result, without the user having had to worry about inserting special cases to handle those infinities. As an example of the latter behaviour, consider evaluating the function f(x) = 1/(1+1/x) naively at x = 0; if this formula appears in any real-world circumstances, the chances are that you want a result of 0, and IEEE 754's non-stop mode gives it to you. (This doesn't work in Python, of course, because it doesn't really have a non-stop mode; more on this below.) Unfortunately, to back this argument up properly I'd need lots of real-world examples, which I don't have. :( > So I'm satisfied that there's a good reason for the > behaviour, even if I'm not 100% convinced it's the best reason. >From Python's point of view, the real reason for implementing it this way is that it follows current standards (C99 and IEEE 754; probably also the Fortran standards too, but I haven't checked), so this special case behaviour (a) likely matches expectations for numerical users, and (b) has been thought about carefully by at least some experts. > On a related note, why the distinction here? > >>>> inf*inf > inf >>>> inf**2 > Traceback (most recent call last): > File "", line 1, in > OverflowError: (34, 'Numerical result out of range') For that particular example, it's because you haven't upgraded to Python 2.7 yet. :) Python 2.7a3+ (trunk:78206M, Feb 17 2010, 10:19:00) [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> float('inf') ** 2 inf See http://bugs.python.org/issue7534. 
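To make the 'non-stop mode' point concrete: under IEEE 754 non-stop arithmetic the intermediate infinity in f(x) = 1/(1+1/x) simply disappears again, whereas Python raises as soon as the division by zero happens, so the naive formula never gets that far (sketch; the exception message is the 2.x one):

>>> inf = float('inf')
>>> 1.0 / (1.0 + inf)        # the infinity appears and then disappears, leaving 0.0
0.0
>>> def f(x):
...     return 1.0 / (1.0 + 1.0 / x)
...
>>> f(0.0)                   # no non-stop mode: Python stops at the first step
Traceback (most recent call last):
  ...
ZeroDivisionError: float division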
But there are similar problems that aren't fixed, and can't reasonably be fixed without causing upset: >>> 1e300 ** 2 Traceback (most recent call last): File "", line 1, in OverflowError: (34, 'Result too large') >>> 1e300 * 1e300 inf Here I'd argue that the ideal Python behaviour would be to produce an OverflowError in both cases; more generally, arithmetic with finite numbers would never produce infinities or nans, but always raise Python exceptions instead. But some users need or expect some kind of 'non-stop mode' for arithmetic, so changing this probably wouldn't go down well. Mark From baptiste.lepilleur at gmail.com Wed Feb 17 14:28:58 2010 From: baptiste.lepilleur at gmail.com (Baptiste Lepilleur) Date: Wed, 17 Feb 2010 14:28:58 +0100 Subject: [Python-Dev] __file__ is not always an absolute path In-Reply-To: <6C75C25A-7E40-4949-88D7-0963EA130072@gmail.com> References: <20100207042709.26099.1983212382.divmod.xquotient.613@localhost.localdomain> <6C75C25A-7E40-4949-88D7-0963EA130072@gmail.com> Message-ID: I did some quick measures out of curiosity. Performances seems clearly filesystem and O.S. dependent (and are likely deployment/configuration dependent). I did each test 3 times to ensure measure where consistent. Tests were done with ActivePython 2.6.3.7. * AIX 5.3: python26 -m timeit -s 'def f(): pass' 'f()' 1000000 loops, best of 3: 0.336 usec per loop cwd is NFS mount: users/baplepil/sandbox> python26 -m timeit -s 'from os import getcwd' 'getcwd()' 1000 loops, best of 3: 1.09 msec per loop cwd is /tmp: /tmp> python26 -m timeit -s 'from os import getcwd' 'getcwd()' 1000 loops, best of 3: 323 usec per loop * Solaris 10 (Sparc): python26 -m timeit -s 'def f(): pass' 'f()' 1000000 loops, best of 3: 0.495 usec per loop cwd is NFS mount: users/baplepil/sandbox> python26 -m timeit -s 'from os import getcwd' 'getcwd()' 100000 loops, best of 3: 12.1 usec per loop cwd is /tmp: /tmp> python26 -m timeit -s 'from os import getcwd' 'getcwd()' 100000 loops, best of 3: 4.58 usec per loop * Windows XP SP2: python -m timeit -s "def f(): pass; f()" 10000000 loops, best of 3: 0.0531 usec per loop cwd is network drive (same as previous NFS mount): R:\...\users\baplepil>python -m timeit -s "from os import getcwd" "getcwd()" 100000 loops, best of 3: 5.14 usec per loop cwd is C:\temp>: C:\temp>python -m timeit -s "from os import getcwd" "getcwd()" 100000 loops, best of 3: 4.27 usec per loop 2010/2/17 Dan Villiom Podlaski Christiansen > On 7 Feb 2010, at 05:27, exarkun at twistedmatrix.com wrote: > > > Do you know of a case where it's actually slow? If not, how convincing > should this argument really be? Perhaps we can measure it on a few > platforms before passing judgement. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Feb 17 16:59:14 2010 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 17 Feb 2010 16:59:14 +0100 Subject: [Python-Dev] embedding Python interpreter in non-console windows application In-Reply-To: <990ae6531002162149u184fce78v9a12b02df4fcdad9@mail.gmail.com> References: <990ae6531002162149u184fce78v9a12b02df4fcdad9@mail.gmail.com> Message-ID: Hi, 2010/2/17 stephen > Hello, > > THE PROBLEM: > I am having a problem that I have seen asked quite a bit on the web, with > little to no follow up. > The problem is essentially this. When embedding (LoadLibraryA()) the python > interpreter dll > in a non-windows application the developer must first create a console for > python to do output/input with. 
> I properly initialize the CRT and AllocConsole() to do this. I then > GetSTDHandle() for stdin and stdout accordingly > and open those handles with the requisite flags "read" for STDIN and > "write" for stdout. This all works great > and is then verified and tested to work by printf() and fgets(). This issue > however happens when attempting > to PyRun_InteractiveLoop() and PyRun_SimpleString(). A > PyRun_SimpleString("print 'test'") displays nothing in my > freshly allocated console window. Similarly a PyRun_InteractiveLoop(stdin, > NULL); yields nothing either even though > the line printf("testing"); directly ahead of it works just fine. Does > anyone have insight on how I can make this work > with the freshly allocated console's stdin/stdout/stderr? > > SPECULATION: > That is the question, so now on to the speculation. I suspect that > something in the python runtime doesn't "get handles" > correctly for STDIN and STDOUT upon initialization. I have perused the > source code to find out exactly how this is done > and I suspect that it starts in PyInitializeEx with calls to > PySys_GetObject("stdin") and "stdout" accordingly. However I > don't actually see where this translates into the Python runtime checking > with the C-runtime for the "real" handles to STDIN and STDOUT. I dont ever > see the Python runtime "ask the system" where his handles to STDIN and > STDOUT are. > Are you using the same compiler as the one used to compile Python? It's important that your program and python use the same C runtime library (MSVCR90.dll for python 2.6), otherwise "stdout" refers to different things. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Feb 17 18:49:04 2010 From: brett at python.org (Brett Cannon) Date: Wed, 17 Feb 2010 09:49:04 -0800 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <4B7B1327.3060908@canterbury.ac.nz> Message-ID: Defaulting to UTC is not a good idea, which is why relevant methods take an argument to specify whether to be UTC (exact details are in the patch; don't remember exact details). On Feb 16, 2010 4:07 PM, "Greg Ewing" wrote: Brett Cannon wrote: > Issue 5094 already has a patch that is nearly complete to provide a > default... Are you sure it's really a good idea to default to UTC? I thought it was considered a feature that datetime objects are naive unless you explicitly specify a timezone. -- Greg _______________________________________________ Python-Dev mailing list Python-Dev at python.org http:... -------------- next part -------------- An HTML attachment was scrubbed... URL: From skip at pobox.com Wed Feb 17 23:46:41 2010 From: skip at pobox.com (skip at pobox.com) Date: Wed, 17 Feb 2010 16:46:41 -0600 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> Message-ID: <19324.29137.534922.641250@montanaro.dyndns.org> Lennart> The timezone database is updated several times per year. You Lennart> can *not* include it in the standard library. My guess is the data are updated several times per year, not the code. Can they not be separated? 
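A minimal sketch of the data/code split being asked about, assuming the usual Unix zoneinfo locations (the paths are illustrative; where no system database exists, the code would fall back to whatever data ships with the library or with a separately updated package):

import os

# Typical locations of the system copy of the Olson database (illustrative).
_ZONEINFO_DIRS = ['/usr/share/zoneinfo', '/usr/lib/zoneinfo', '/usr/share/lib/zoneinfo']

def zoneinfo_dir():
    """Return the system zoneinfo directory, or None to use bundled data."""
    for d in _ZONEINFO_DIRS:
        if os.path.isdir(d):
            return d
    return None

With a split like this, the tzinfo code itself would rarely change; only the data needs the several-times-a-year updates discussed in this thread.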
Skip From regebro at gmail.com Thu Feb 18 00:01:43 2010 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 18 Feb 2010 00:01:43 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <19324.29137.534922.641250@montanaro.dyndns.org> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> Message-ID: <319e029f1002171501o2381d71ft12eecbeddbcdf290@mail.gmail.com> On Wed, Feb 17, 2010 at 23:46, wrote: > > ? ?Lennart> The timezone database is updated several times per year. You > ? ?Lennart> can *not* include it in the standard library. > > My guess is the data are updated several times per year, not the code. ?Can > they not be separated? Yes, but that would mean we have an implementation in stdlib that relies on a dataset which may not exist. That is just going to be confusing. Moving pytz into the stdlib doesn't solve anything, really. So why do it? It's not like pytz is hard to install. -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From ben+python at benfinney.id.au Thu Feb 18 00:25:43 2010 From: ben+python at benfinney.id.au (Ben Finney) Date: Thu, 18 Feb 2010 10:25:43 +1100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> Message-ID: <87635vcumg.fsf@benfinney.id.au> skip at pobox.com writes: > My guess is the data are updated several times per year, not the code. > Can they not be separated? AIUI this discussion is about getting the ?pytz? library into the Python standard library. If the data is separate from the modules, the question then becomes how users on various platforms can update the data without installing a new version of the whole standard library. -- \ ?The best ad-libs are rehearsed.? ?Graham Kennedy | `\ | _o__) | Ben Finney From python at mrabarnett.plus.com Thu Feb 18 01:50:26 2010 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 18 Feb 2010 00:50:26 +0000 Subject: [Python-Dev] [issue2636] Regexp 2.7 (modifications to current re 2.2.2) In-Reply-To: <1266450207.35.0.249463945425.issue2636@psf.upfronthosting.co.za> References: <1266450207.35.0.249463945425.issue2636@psf.upfronthosting.co.za> Message-ID: <4B7C8ED2.4070006@mrabarnett.plus.com> Vlastimil Brom wrote: > Vlastimil Brom added the comment: > > I just tested the fix for unicode tracebacks and found some possibly weird results (not sure how/whether it should be fixed, as these inputs are indeed rather artificial...). > (win XPp SP3 Czech, Python 2.6.4) > > Using the cmd console, the output is fine (for the characters it can accept and display) > >>>> regex.findall(ur"\p{InBasicLatin?}", u"a?") > Traceback (most recent call last): > ... > File "C:\Python26\lib\regex.py", line 1244, in _parse_property > raise error("undefined property name '%s'" % name) > regex.error: undefined property name 'InBasicLatin?' > > (same result for other distorted "proprety names" containing e.g. ??????????????i???? ... > > However, in Idle the output differs depending on the characters present > >>>> regex.findall(ur"\p{InBasicLatin?}", u"ab c") > yields the expected > ... > File "C:\Python26\lib\regex.py", line 1244, in _parse_property > raise error("undefined property name '%s'" % name) > error: undefined property name 'InBasicLatin?' 
> > but > >>>> regex.findall(ur"\p{InBasicLatin?}", u"ab c") > > Traceback (most recent call last): > ... > File "C:\Python26\lib\regex.py", line 1244, in _parse_property > raise error("undefined property name '%s'" % name) > File "C:\Python26\lib\regex.py", line 167, in __init__ > message = message.encode(sys.stdout.encoding) > File "C:\Python26\lib\encodings\cp1250.py", line 12, in encode > return codecs.charmap_encode(input,errors,encoding_table) > UnicodeEncodeError: 'charmap' codec can't encode character u'\xcc' in position 37: character maps to > > which might be surprising, as cp1250 should be able to encode "?", maybe there is some intermediate ascii step? > > using the wxpython pyShell I get its specific encoding error: > > regex.findall(ur"\p{InBasicLatin?}", u"ab c") > Traceback (most recent call last): > ... > File "C:\Python26\lib\regex.py", line 1102, in _parse_escape > return _parse_property(source, info, in_set, ch) > File "C:\Python26\lib\regex.py", line 1244, in _parse_property > raise error("undefined property name '%s'" % name) > File "C:\Python26\lib\regex.py", line 167, in __init__ > message = message.encode(sys.stdout.encoding) > AttributeError: PseudoFileOut instance has no attribute 'encoding' > > (the same for \p{InBasicLatin?} etc.) > Maybe it shouldn't show the property name at all. That would avoid the problem. > > In python 3.1 in Idle, all of these exceptions are displayed correctly, also in other scripts or with special characters. > > Maybe in python 2.x e.g. repr(...) of the unicode error messages could be used in order to avoid these problems, but I don't know, what the conventions are in these cases. > > > Another issue I found here (unrelated to tracebacks) are backslashes or punctuation (except the handled -_) in the property names, which just lead to failed mathces and no exceptions about unknown property names > > regex.findall(u"\p{InBasic.Latin}", u"ab c") > [] > In the re module a malformed pattern is sometimes treated as a literal: >>> re.match(r"a{1,2", r"a{1,2").group() 'a{1,2' which is what I'm trying to replicate, as far as possible. Which characters should it accept when parsing the property name, even if it subsequently rejects the name? I don't want it to accept every character until it sees the closing '}'. I currently include alphanumeric, whitespace, '&', '_' and '-'. '.' might be a reasonable addition. > > I was also surprised by the added pos/endpos parameters, as I used flags as a non-keyword third parameter for the re functions in my code (probably my fault ...) > > re.findall(pattern, string, flags=0) > > regex.findall(pattern, string, pos=None, endpos=None, flags=0, overlapped=False) > > (is there a specific reason for this order, or could it be changed to maintain compatibility with the current re module?) > Oops! I'll fix that. > I hope, at least some of these remarks make some sense; > thanks for the continued work on this module! > All constructive remarks are welcome! 
:-) From fuzzyman at voidspace.org.uk Thu Feb 18 03:48:00 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 18 Feb 2010 02:48:00 +0000 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002171501o2381d71ft12eecbeddbcdf290@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> <319e029f1002171501o2381d71ft12eecbeddbcdf290@mail.gmail.com> Message-ID: <6f4025011002171848y6d507e98p433f5225fb20dd42@mail.gmail.com> On 17 February 2010 23:01, Lennart Regebro wrote: > On Wed, Feb 17, 2010 at 23:46, wrote: > > > > Lennart> The timezone database is updated several times per year. You > > Lennart> can *not* include it in the standard library. > > > > My guess is the data are updated several times per year, not the code. > Can > > they not be separated? > > Yes, but that would mean we have an implementation in stdlib that > relies on a dataset which may not exist. That is just going to be > confusing. Moving pytz into the stdlib doesn't solve anything, really. > So why do it? It's not like pytz is hard to install. > > Some of the Linux distributions *already* patch pytz to use the system information, which they keep updated separately. That information is also available from the system on Mac OS and Windows. It would seem to be very useful to have a version of pytz that defaults to using the system information if available, has a mechanism for using separate data for systems that don't provide the information or raises an error when neither system information nor separate data is available. The data could then still be available and released regularly without being tied to the Python release schedule. That assumes that the author of pytz *wants* it to come into the standard library of course. Michael Foord > -- > Lennart Regebro: Python, Zope, Plone, Grok > http://regebro.wordpress.com/ > +33 661 58 14 64 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From florent.xicluna at gmail.com Thu Feb 18 10:21:36 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Thu, 18 Feb 2010 09:21:36 +0000 (UTC) Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 Message-ID: Hello, On November 2006 and September 2007 Fredrik proposed to update "xml.etree" in Python 2.6 with the upcoming version 1.3. Now we are three years later, and the version shipped with 2.7alpha3 is 1.2.6. http://bugs.python.org/issue1602189#msg54944 http://bugs.python.org/issue1143 This would not be an issue, without the numerous bug reports accumulating on bugs.python.org. Most of these reports are waiting for the 1.3 release. Three months ago I worked on some of these issues, and after fixing them separately, I proposed a patch which merges the latest 1.3 snapshot (released in 2007) within the standard library. The aim is to provide a bug-free version of ElementTree/cElementTree in the standard library. For this purpose, I grew the test suite from 300 lines to 1800 lines, using both the tests from upstream and the tests proposed by Neil Muller on issue #6232. 
To ensure consistency, now the test_suite for the C implementation is the same as the Python implementation. http://bugs.python.org/issue6472 We are still interested with the upcoming release of ElementTree, but we should adopt a pragmatic approach: the xml.etree.ElementTree needs to be fixed for all Python users, even if 1.3 is not ready before 2.7beta. This is the only purpose of the patch. The patch sticks as much as possible to the upstream library. Initially I kept all the new features of the 1.3 branch in the patch. It should ease the integration of 1.3 final when it is released. With the last comment from Fredrik, I think to be more conservative: I plan to split out the experimental C API from the package. It is not required for the bug-fix release, and there's some risk of incompatibility with the final design of the API, which is still secret. As a side-effect, the patch will add some features and methods from the 1.3 branch (some of them where requested in the bug tracker): - ET.fromstringlist(), ET.tostringlist() - Element.extend(), Element.iter(), Element.itertext() - new selector engine - extended slicing However the highlighted features of this patch are: - to fix many bugs which were postponed because of 1.3 release - to ensure consistency between C and Python implementations (with tests) - to provide a better test coverage The patch is uploaded on Rietveld for review. The 3.x version of the patch will be updated after 2.x is merged in trunk. The patch covers documentation, too. http://codereview.appspot.com/207048/show It's time to comment and review. The proposed plan is to merge the patch in trunk before 2.7 alpha4. Best regards, -- Florent Xicluna From regebro at gmail.com Thu Feb 18 10:38:45 2010 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 18 Feb 2010 10:38:45 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <6f4025011002171848y6d507e98p433f5225fb20dd42@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> <319e029f1002171501o2381d71ft12eecbeddbcdf290@mail.gmail.com> <6f4025011002171848y6d507e98p433f5225fb20dd42@mail.gmail.com> Message-ID: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> On Thu, Feb 18, 2010 at 03:48, Michael Foord wrote: > Some of the Linux distributions *already* patch pytz to use the system > information, which they keep updated separately. Yes. And what problem does including pytz in the stdlib solve? > That information is also > available from the system on Mac OS and Windows. It is not available on Windows in any reasonable and useable form. > It would seem to be very > useful to have a version of pytz that defaults to using the system > information if available, has a mechanism for using separate data for > systems that don't provide the information or raises an error when neither > system information nor separate data is available. Pytz has mechanisms for that, perhaps they should be more easily useable. Perhaps it should even default to using the system Olsen database if there is one. But the discussion was if it should be included in the standard library, and nobody still has explain what problem that would solve. If it doesn't solve a problem, it shouldn't be done, as it also is going to create problems, because everything does. :) > The data could then still be available and released regularly without being > tied to the Python release schedule. 
Which it already is. So.... no problem solved. -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From asmodai at in-nomine.org Thu Feb 18 10:43:47 2010 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Thu, 18 Feb 2010 10:43:47 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <19324.29137.534922.641250@montanaro.dyndns.org> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> Message-ID: <20100218094347.GE14271@nexus.in-nomine.org> -On [20100217 23:48], skip at pobox.com (skip at pobox.com) wrote: >My guess is the data are updated several times per year, not the code. Can >they not be separated? The bulk of the original timezone package is data for the timezones. Last year saw close to 26 releases for this. -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B When you pass through, no one can pin you down, no one can call you back... From stuart at stuartbishop.net Thu Feb 18 11:29:47 2010 From: stuart at stuartbishop.net (Stuart Bishop) Date: Thu, 18 Feb 2010 17:29:47 +0700 (ICT) Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> Message-ID: On Thu, Feb 18, 2010 at 4:38 PM, Lennart Regebro wrote: > If it doesn't solve a problem, it shouldn't be done, as it also is > going to create problems, because everything does. :) I think a tzinfo implementation in the standard library that uses the system timezone database would be useful to a great many people, providing a standard way of mapping a string to a tzinfo instance. The number of frameworks requiring pytz as a dependency demonstrate this. It is unfortunate that those strings would be platform specific. For most applications this doesn't matter - you are reading the key from a config file or allowing the user to select from a list of possible values. For applications where that actually matters it would be simple enough to install and maintain a local zoneinfo database, for example by allowing pytz to plug itself in or just a well known location in the Python tree where valid compiled zoneinfo files can be copied in from a nearby unix-like system or pytz tarball. As the pytz maintainer, this would help me solve a long standing problem. Currently, pytz tzinfo instances don't really follow the documented tzinfo interface (in order to allow localized datetime arithmetic to be always correct instead of good enough). This is a source of confusion to many users who don't need this level of accuracy. It would be great if the standard library provided a tzinfo implementation that was good enough for the vast majority of users, and for people who do care they can continue to use pytz.timezone() to retrieve the anal tzinfo implementation. Users will be happier as they will have fewer bugs in their code. The alternative for me is to eventually split pytz, somehow providing the simpler interface that works exactly as documented in the Python reference and the anal interface that works per the pytz README (in hindsight, it should have been this way from day 1). I'm happy to work on this if there is agreement. I'm happy to relicense any pytz code used as a basis if necessary (currently MIT), and dateutil is already PSF licensed if that seems a better starting point. 
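For readers who have not run into the divergence Stuart describes, this is roughly the difference (a sketch, not an authoritative recipe; 'Europe/London' is only an example zone, and localize()/normalize() are the extra pytz calls that go beyond the documented tzinfo interface):

from datetime import datetime, timedelta
import pytz

tz = pytz.timezone('Europe/London')
naive = datetime(2010, 7, 1, 12, 0)

# Documented tzinfo interface: attach the tzinfo directly.  With pytz this
# picks the zone's first (historical) offset, which is usually not what
# people expect.
attached = naive.replace(tzinfo=tz)

# pytz idiom: localize() selects the correct UTC offset (including DST),
# and normalize() repairs the offset after arithmetic crosses a transition.
aware = tz.localize(naive)
winter = tz.normalize(aware + timedelta(days=200))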
-- Stuart Bishop http://www.stuartbishop.net/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 263 bytes Desc: OpenPGP digital signature URL: From techtonik at gmail.com Thu Feb 18 13:41:33 2010 From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 18 Feb 2010 14:41:33 +0200 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> Message-ID: On Tue, Feb 16, 2010 at 1:52 PM, Victor Stinner wrote: >> So far, Python timezone handling is far from "pythonic". There is no >> function to get current UTC offset, (...) > > There is the time.timezone attribute: UTC offset in seconds. It is correct only if DST is not in effect. On Tue, Feb 16, 2010 at 5:26 PM, Tres Seaver wrote: > > skip at pobox.com wrote: >> >> While incorporating dateutil into the core would be nice (in my opinion at >> least), I was really thinking of pytz: http://pytz.sourceforge.net/ > > Because timezones are defined politically, they change frequently. ?pytz > is released frequently (multiple times per year) to accomodate those > changes: ?I can't see any way to preserve that flexibility if the > package were part of stdlib. Actual TZ information can be shipped with every Python release, but.. If pytz package is available and it's newer - library functions may use its data instead. Of course, this should be documented as official way to maintain TZ info up-to-date. If pytz to be included in standard library - it should still be distributed as separate package to provide more frequent TZ updates and updates to older Python versions. On Wed, Feb 17, 2010 at 1:32 PM, Lennart Regebro wrote: > There is no need to stick Pytz in the standard library. It's available > on PyPI, updated frequently, etc. What we can do is point to it from > the documentation. It will still require workarounds and bridges to make API in user scripts convenient, i.e. try: import pytz mydatetime = PytzDatetime() catch ImportError: mydatetime = ClassicDatetime() The goal is to reduce workarounds and avoid repeated code in Python scripts. Leaving pytz aside, does everybody feel comfortable with setting a Wave for API design of date/time issues and the stuff to be done? If there will be an API draft and current list of stuff - I can try to do some work in "offline" mode. -- anatoly t. From fdrake at acm.org Thu Feb 18 14:43:41 2010 From: fdrake at acm.org (Fred Drake) Date: Thu, 18 Feb 2010 08:43:41 -0500 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> Message-ID: <9cee7ab81002180543q63ed889bsfc5f17c45a7b38f3@mail.gmail.com> On Thu, Feb 18, 2010 at 7:41 AM, anatoly techtonik wrote: > It will still require workarounds and bridges to make API in user > scripts convenient, i.e. I'm not entirely sure what you intended the "It" to refer to here. My take on this is that bundling a version of pytz in the standard library will simply generate more ways to deal with timezones. Using pytz as an external dependency is easy and provides a high level of update support. -Fred -- Fred L. Drake, Jr. "Chaos is the score upon which reality is written." 
--Henry Miller From regebro at gmail.com Thu Feb 18 15:58:30 2010 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 18 Feb 2010 15:58:30 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> Message-ID: <319e029f1002180658t41ff1bbk5a3d7e3deaf553f2@mail.gmail.com> On Thu, Feb 18, 2010 at 11:29, Stuart Bishop wrote: > I think a tzinfo implementation in the standard library that uses the system > timezone database would be useful to a great many people, providing a > standard way of mapping a string to a tzinfo instance. Well, that wouldn't work on Windows, which would be a bit strange. So yes, on some systems it would mean you now have pytz in the standard library, while you don't on others. That's going to cause problems, while it doesn't actually solve any problem except "I need to install pytz", which isn't much of a problem. > The number of > frameworks requiring pytz as a dependency demonstrate this. They are going to still need to require pytz, or rather the data part of it. > It is unfortunate that those strings would be platform specific. For most > applications this doesn't matter - you are reading the key from a config > file or allowing the user to select from a list of possible values. Well, the problem in finding your won timezone has been documented in one of the links I sent before. But that's another problems, solved by the tzfile/tzwin implementations discussed previously. > As the pytz maintainer, this would help me solve a long standing problem. > Currently, pytz tzinfo instances don't really follow the documented tzinfo > interface (in order to allow localized datetime arithmetic to be always > correct instead of good enough). This is a source of confusion to many users > who don't need this level of accuracy. It would be great if the standard > library provided a tzinfo implementation that was good enough for the vast > majority of users, and for people who do care they can continue to use > pytz.timezone() to retrieve the anal tzinfo implementation. Users will be > happier as they will have fewer bugs in their code. The alternative for me > is to eventually split pytz, somehow providing the simpler interface that > works exactly as documented in the Python reference and the anal interface > that works per the pytz README (in hindsight, it should have been this way > from day 1). I understand the need for different API's but can't the extended part that doesn't behave like timezone be separate methods? I don't *mind* pytz in the standardlibrary, I just don't really see how it solves any problems, while I can see how it creates them. -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From regebro at gmail.com Thu Feb 18 16:01:16 2010 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 18 Feb 2010 16:01:16 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> Message-ID: <319e029f1002180701n13512a3s276b298c1b2c61b7@mail.gmail.com> On Thu, Feb 18, 2010 at 13:41, anatoly techtonik wrote: > It will still require workarounds and bridges to make API in user > scripts convenient, i.e. > > try: > ?import pytz > ?mydatetime = PytzDatetime() > catch ImportError: > ?mydatetime = ClassicDatetime() Only if you want to work both with and without pytz. So don't. Just require pytz. I don't see the problem with that. 
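Incidentally, the fallback sketch quoted above would need except rather than catch; a corrected version reads as follows (PytzDatetime and ClassicDatetime are the hypothetical placeholders from the earlier message, not real classes):

try:
    import pytz                      # only the availability check matters here
    mydatetime = PytzDatetime()      # hypothetical pytz-aware wrapper
except ImportError:
    mydatetime = ClassicDatetime()   # hypothetical naive fallback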
-- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From chambon.pascal at gmail.com Thu Feb 18 19:59:28 2010 From: chambon.pascal at gmail.com (Pascal Chambon) Date: Thu, 18 Feb 2010 19:59:28 +0100 Subject: [Python-Dev] Buffered streams design + raw io gotchas Message-ID: <4B7D8E10.20609@wanadoo.fr> Hello, As I continue experimenting with advanced streams, I'm currently beginning an important modification of io's Buffered and Text streams (removal of locks, adding of methods...), to fit the optimization process of the whole library. However, I'm now wondering what the idea is behind the 3 main buffer classes : Bufferedwriter, Bufferedreader and Bufferedrandom. The i/o PEP claimed that the two first ones were for sequential streams only, and the latter for all kinds of seekable streams; but as it is implemented, actually the 3 classes can be returned by open() for seekable files. Am I missing some use case in which this distinction would be useful (for optimizations ?) ? Else, I guess I should just create a RSBufferedStream class which handles all kinds of situations, raising InsupportedOperation exceptions whenever needed.... after all, text streams act that way (there is no TextWriter or TextReader stream), and they seem fine. Also, io.open() might return a raw file stream when we set buffering=0. The problem is that raw file streams are NOT like buffered streams with a buffer limit of zero : raw streams might fail writing/reading all the data asked, without raising errors. I agree this case should be rare, but it might be a gotcha for people wanting direct control of the stream (eg. for locking purpose), but no silently incomplete read/write operation. Shouldn't we rather return a "write through" buffered stream in this case "buffering=0", to cleanly handle partial read/write ops ? regards, Pascal PS : if you have 3 minutes, I'd be very interested by your opinion on the "advanced modes" draft below. Does it seem intuitive to you ? In particular, shouldn't the "+" and "-" flags have the opposite meaning ? http://bytebucket.org/pchambon/python-rock-solid-tools/wiki/rsopen.html From guido at python.org Thu Feb 18 21:33:17 2010 From: guido at python.org (Guido van Rossum) Date: Thu, 18 Feb 2010 15:33:17 -0500 Subject: [Python-Dev] Buffered streams design + raw io gotchas In-Reply-To: <4B7D8E10.20609@wanadoo.fr> References: <4B7D8E10.20609@wanadoo.fr> Message-ID: IIRC here is the use case for buffered reader/writer vs. random: a disk file opened for reading and writing uses a random access buffer; but a TCP stream stream, while both writable and readable, should use separate read and write buffers. The reader and writer don't have to worry about reversing the I/O direction. But maybe I'm missing something about your question? --Guido On Thu, Feb 18, 2010 at 1:59 PM, Pascal Chambon wrote: > Hello, > > As I continue experimenting with advanced streams, I'm currently beginning > an important modification of io's Buffered and Text streams (removal of > locks, adding of methods...), to fit the optimization process of the whole > library. > However, I'm now wondering what the idea is behind the 3 main buffer classes > : Bufferedwriter, Bufferedreader and Bufferedrandom. > > The i/o PEP claimed that the two first ones were for sequential streams > only, and the latter for all kinds of seekable streams; but as it is > implemented, actually the 3 classes can be returned by open() for seekable > files. 
> > Am I missing some use case in which this distinction would be useful (for > optimizations ?) ? Else, I guess I should just create a RSBufferedStream > class which handles all kinds of situations, raising InsupportedOperation > exceptions whenever needed.... after all, text streams act that way (there > is no TextWriter or TextReader stream), and they seem fine. > > Also, io.open() might return a raw file stream when we set buffering=0. The > problem is that raw file streams are NOT like buffered streams with a buffer > limit of zero : raw streams might fail writing/reading all the data asked, > without raising errors. I agree this case should be rare, but it might be a > gotcha for people wanting direct control of the stream (eg. for locking > purpose), but no silently incomplete read/write operation. > Shouldn't we rather return a "write through" buffered stream in this case > "buffering=0", to cleanly handle partial read/write ops ? > > regards, > Pascal > > PS : if you have 3 minutes, I'd be very interested by your opinion on the > "advanced modes" draft below. > Does it seem intuitive to you ? In particular, shouldn't the "+" and "-" > flags have the opposite meaning ? > http://bytebucket.org/pchambon/python-rock-solid-tools/wiki/rsopen.html > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From collinwinter at google.com Thu Feb 18 22:36:31 2010 From: collinwinter at google.com (Collin Winter) Date: Thu, 18 Feb 2010 16:36:31 -0500 Subject: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython In-Reply-To: <693bc9ab1002122112u562d9dfaobf36e79bf71f194c@mail.gmail.com> References: <3c8293b61001201427y30fc9f28ke6f7152b2a112b4e@mail.gmail.com> <3c8293b61001210932i9c5d31i4bc71b7d9e0611f2@mail.gmail.com> <3c8293b61001211214m4b24c3b9x3738cf9e5375b0f8@mail.gmail.com> <3c8293b61002021454w664c7646ya5e2dd7395380f5f@mail.gmail.com> <693bc9ab1002110639r5ca143b1t281fe0135effc493@mail.gmail.com> <3c8293b61002121604i204cf579nafa26e53b75e1cc@mail.gmail.com> <693bc9ab1002122112u562d9dfaobf36e79bf71f194c@mail.gmail.com> Message-ID: <3c8293b61002181336k478e5e15he5c8b65be2821c58@mail.gmail.com> On Sat, Feb 13, 2010 at 12:12 AM, Maciej Fijalkowski wrote: > I like this wording far more. It's at the very least far more precise. > Those examples are fair enough (except the fact that PyPy is not 32bit > x86 only, the JIT is). [snip] > "slower than US on some workloads" is true, while not really telling > much to a potential reader. For any X and Y implementing the same > language "X is faster than Y on some workloads" is usually true. > > To be precise you would need to include the above table in the PEP, > which is probably a bit too much, given that PEP is not about PyPy at > all. I'm fine with any wording that is at least correct. I've updated the language: http://codereview.appspot.com/186247/diff2/9005:11001/11002. Thanks for the clarifications. 
Collin Winter From martin at v.loewis.de Thu Feb 18 22:42:24 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 18 Feb 2010 22:42:24 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> <319e029f1002171501o2381d71ft12eecbeddbcdf290@mail.gmail.com> <6f4025011002171848y6d507e98p433f5225fb20dd42@mail.gmail.com> <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> Message-ID: <4B7DB440.7050307@v.loewis.de> >> That information is also >> available from the system on Mac OS and Windows. > > It is not available on Windows in any reasonable and useable form. That's not true. The registry is readable by any user, and the format is fully documented. Regards, Martin From regebro at gmail.com Thu Feb 18 22:45:39 2010 From: regebro at gmail.com (Lennart Regebro) Date: Thu, 18 Feb 2010 22:45:39 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <4B7DB440.7050307@v.loewis.de> References: <19322.35330.429508.954732@montanaro.dyndns.org> <319e029f1002170332y560432aeg26b3b033f2534b77@mail.gmail.com> <19324.29137.534922.641250@montanaro.dyndns.org> <319e029f1002171501o2381d71ft12eecbeddbcdf290@mail.gmail.com> <6f4025011002171848y6d507e98p433f5225fb20dd42@mail.gmail.com> <319e029f1002180138j11d5832ya968cc900ce757a5@mail.gmail.com> <4B7DB440.7050307@v.loewis.de> Message-ID: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> On Thu, Feb 18, 2010 at 22:42, "Martin v. L?wis" wrote: > That's not true. The registry is readable by any user, and the format is > fully documented. Yes, but they use non-standard locations, and afaik, pytz does not support it. If a stdlib pytz would use this you would have to use different timezone names for Unix and Windows. I don't think that's a good idea. Also, the windows data contains only current timezone data, so for calendars stretching back in time, the Olsen database would be preferable as it keeps history. -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From martin at v.loewis.de Thu Feb 18 22:46:41 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 18 Feb 2010 22:46:41 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: Message-ID: <4B7DB541.4000604@v.loewis.de> > It's time to comment and review. Unfortunately, it's not. I strongly object to any substantial change to the code base without explicit approval by Fredrik Lundh. Regards, Martin From trentm at activestate.com Thu Feb 18 22:56:10 2010 From: trentm at activestate.com (Trent Mick) Date: Thu, 18 Feb 2010 16:56:10 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit Message-ID: <4B7DB77A.2070108@activestate.com> (http://trentmick.blogspot.com/2010/02/other-python-vms-upcoming-python.html) Note that this was just from the first 15 minutes or so... > > Some quick notes about the coming plans by the "other" Python implementations > from today's Python Language Summit at PyCon 2010: > > - IronPython: > - plan is to do Python 2.7 first, focus for this year > - python 3.2 for the end of next year hopefully > - other work on IDE stuff > - Pynie (i.e. 
Parrot) -- Allison Randall: > - about 4 major features away from pure Python syntax (did dicts last > night) > - targetting py3k repo and test suite: should be on track for python 3.2 > - Jython: > - plan to target 2.6 (b/c 2to3 depends on 2.6) > - temporarily skip 2.7 and target 3.x (probably 3.2) > - then if 3.x adoption isn't fully there, then go back and add Python 2.7 > - will require JDK 2.7 for Python 3 support (b/c of new support for > dynamic languages) > - PyPy (Holger): > - plan is Benjamin will port to Python 2.7 in the summer > - only have slight deviations from CPython: idea is to merge back with > CPython so don't have deviations. Typcically 1 or 2 line changes in ~25 > modules. > Trent -- Trent Mick trentm at activestate.com http://trentm.com/blog/ From solipsis at pitrou.net Fri Feb 19 04:30:49 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 19 Feb 2010 03:30:49 +0000 (UTC) Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 References: <4B7DB541.4000604@v.loewis.de> Message-ID: Le Thu, 18 Feb 2010 22:46:41 +0100, Martin v. L?wis a ?crit?: >> It's time to comment and review. > > Unfortunately, it's not. I strongly object to any substantial change to > the code base without explicit approval by Fredrik Lundh. Which most probably puts elementtree in bugfix-only mode. I don't necessarily disagree with such a decision, but it must be quite clear. Regards Antoine. From stuart at stuartbishop.net Fri Feb 19 05:59:24 2010 From: stuart at stuartbishop.net (Stuart Bishop) Date: Fri, 19 Feb 2010 11:59:24 +0700 (ICT) Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> Message-ID: On Fri, Feb 19, 2010 at 4:45 AM, Lennart Regebro wrote: > On Thu, Feb 18, 2010 at 22:42, "Martin v. L?wis" wrote: >> That's not true. The registry is readable by any user, and the format is >> fully documented. > > Yes, but they use non-standard locations, and afaik, pytz does not > support it. If a stdlib pytz would use this you would have to use > different timezone names for Unix and Windows. I don't think that's a > good idea. Under Windows the only backend API I'm aware of we could use is tzwin and I think any standard library inclusion would require this or something similar. For the standard library, I think it would be great if you could do 'datetime.timezone(some_string)' and get a tzinfo instance suitable for most use cases which seems no problem for tzwin to provide (platform default DST information, simple interface with potentially incorrect localized datetime arithmetic over DST transitions). It is unfortunate that the timezone strings are platform specific, but I feel that is still better than end users having to keep updating timezone databases or not supporting non-zoneinfo systems at all. Even with pytz, the timezone strings are version specific to an extent (we had a real issue where we updated pytz on our web applications, which changed a preferred timezone name from Asia/Calcutta to Asia/Kolkata per requests from our users and as a result our wiki's exploded for some users as they where on separate boxes using a different pytz release that didn't understand that timezone string). It would also be trivial to always look up timezone information from compiled zoneinfo files in a well known location if the system timezone database cannot find the requested timezone information. 
So datetime.timezone('US/Eastern') would work on Windows if you had installed pytz (I'd update pytz to install its zoneinfo files into the blessed location). Best of both worlds. > Also, the windows data contains only current timezone data, so for > calendars stretching back in time, the Olsen database would be > preferable as it keeps history. Sure. I'm not saying pytz will disappear or become unmaintained for people who need it. I believe most people who are using it now don't need it and I'm sure there are real bugs in real code out there because of this, as using pytz correctly requires reading and understanding the pytz documentation. If this is all too ambitious, tzinfo implementations in the standard library for UTC and the current system timezone would be a step forward and solve most real world use cases. -- Stuart Bishop http://www.stuartbishop.net/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 263 bytes Desc: OpenPGP digital signature URL: From regebro at gmail.com Fri Feb 19 06:11:54 2010 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 19 Feb 2010 06:11:54 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: References: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> Message-ID: <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> But is the "You don't have to install it" really such a big problem so that it's worth the other problems like platform incompatibilities and such? In any case, since you want to make a version that can be included and uses the timezone API, I guess that's a moot question until we have that version. :) On Fri, Feb 19, 2010 at 05:59, Stuart Bishop wrote: > On Fri, Feb 19, 2010 at 4:45 AM, Lennart Regebro wrote: >> >> On Thu, Feb 18, 2010 at 22:42, "Martin v. L?wis" >> wrote: >>> >>> That's not true. The registry is readable by any user, and the format is >>> fully documented. >> >> Yes, but they use non-standard locations, and afaik, pytz does not >> support it. If a stdlib pytz would use this you would have to use >> different timezone names for Unix and Windows. I don't think that's a >> good idea. > > Under Windows the only backend API I'm aware of we could use is tzwin and I > think any standard library inclusion would require this or something > similar. For the standard library, I think it would be great if you could do > 'datetime.timezone(some_string)' and get a tzinfo instance suitable for most > use cases which seems no problem for tzwin to provide (platform default DST > information, simple interface with potentially incorrect localized datetime > arithmetic over DST transitions). It is unfortunate that the timezone > strings are platform specific, but I feel that is still better than end > users having to keep updating timezone databases or not supporting > non-zoneinfo systems at all. Even with pytz, the timezone strings are > version specific to an extent (we had a real issue where we updated pytz on > our web applications, which changed a preferred timezone name from > Asia/Calcutta to Asia/Kolkata per requests from our users and as a result > our wiki's exploded for some users as they where on separate boxes using a > different pytz release that didn't understand that timezone string). > > It would also be trivial to always look up timezone information from > compiled zoneinfo files in a well known location if the system timezone > database cannot find the requested timezone information. 
So > datetime.timezone('US/Eastern') would work on Windows if you had installed > pytz (I'd update pytz to install its zoneinfo files into the blessed > location). Best of both worlds. > >> Also, the windows data contains only current timezone data, so for >> calendars stretching back in time, the Olsen database would be >> preferable as it keeps history. > > Sure. I'm not saying pytz will disappear or become unmaintained for people > who need it. I believe most people who are using it now don't need it and > I'm sure there are real bugs in real code out there because of this, as > using pytz correctly requires reading and understanding the pytz > documentation. > > > If this is all too ambitious, tzinfo implementations in the standard library > for UTC and the current system timezone would be a step forward and solve > most real world use cases. > > -- > Stuart Bishop > http://www.stuartbishop.net/ > > -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From martin at v.loewis.de Fri Feb 19 06:40:00 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 19 Feb 2010 06:40:00 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: <4B7DB541.4000604@v.loewis.de> Message-ID: <4B7E2430.2070804@v.loewis.de> Antoine Pitrou wrote: > Le Thu, 18 Feb 2010 22:46:41 +0100, Martin v. L?wis a ?crit : >>> It's time to comment and review. >> Unfortunately, it's not. I strongly object to any substantial change to >> the code base without explicit approval by Fredrik Lundh. > > Which most probably puts elementtree in bugfix-only mode. I don't > necessarily disagree with such a decision, but it must be quite clear. My point is that the decision as already made when ElementTree was incorporated into the standard library; it's the same policy for most code that Fredrik Lundh has contributed (and which he still maintains outside the standard library as well). He has made it fairly clear on several occasions that this is how he expects things to work, and unless we want to truly fork the code, we should comply. Regards, Martin From stuart at stuartbishop.net Fri Feb 19 07:27:01 2010 From: stuart at stuartbishop.net (Stuart Bishop) Date: Fri, 19 Feb 2010 13:27:01 +0700 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> References: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> Message-ID: <6bc73d4c1002182227m2a298ad2r13ef8d43e0ef85e4@mail.gmail.com> On Fri, Feb 19, 2010 at 12:11 PM, Lennart Regebro wrote: > But is the "You don't have to install it" really such a big problem so > that it's worth the other problems like platform incompatibilities and > such? I don't think there are any platform incompatibilities - it will work as documented on all platforms. You lose the ability to assume that two identical pytz versions on different platforms can use the same timezone strings, but gain the ability that system timezone strings can be used with Python. > In any case, since you want to make a version that can be included and > uses the timezone API, I guess that's a moot question until we have > that version. :) As I understand it dateutil pretty much already provides what I'm describing. 
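(To make the "tzinfo implementation for UTC" part of this thread concrete: the simplest case is roughly the fixed-offset recipe from the datetime documentation — a minimal sketch, not anyone's committed design. The class and variable names are illustrative only.)

    from datetime import datetime, timedelta, tzinfo

    ZERO = timedelta(0)

    class UTC(tzinfo):
        """Minimal fixed-offset tzinfo for UTC (illustrative only)."""

        def utcoffset(self, dt):
            return ZERO

        def tzname(self, dt):
            return "UTC"

        def dst(self, dt):
            return ZERO

    utc = UTC()
    now = datetime.now(utc)  # an aware datetime; utcoffset() is always zero

Unlike the full Olsen-database support discussed above, this piece needs no external data files.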
-- Stuart Bishop http://www.stuartbishop.net/ From hodgestar+pythondev at gmail.com Fri Feb 19 08:36:26 2010 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Fri, 19 Feb 2010 09:36:26 +0200 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B7E2430.2070804@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> Message-ID: On Fri, Feb 19, 2010 at 7:40 AM, "Martin v. L?wis" wrote: >> Which most probably puts elementtree in bugfix-only mode. I don't >> necessarily disagree with such a decision, but it must be quite clear. The current situation is even worse than bugfix-only mode. Even bugfixes struggle to make it in. > My point is that the decision as already made when ElementTree was > incorporated into the standard library; it's the same policy for most > code that Fredrik Lundh has contributed (and which he still maintains > outside the standard library as well). He has made it fairly clear on > several occasions that this is how he expects things to work, and unless > we want to truly fork the code, we should comply. We need someone to maintain the copy of ElementTree in the Python repository. Ideally this means pulling upgrades and bugfixes from Fredrik's repository every now and then. If the goals of Python ElementTree and Fredrik ElementTree diverge I don't see a problem with an amicable fork. Fredrik and Python ElementTree do have rather different constraints (for example, Python ElementTree has fewer opportunities for breaking backwards compatibility). Schiavo Simon From sjoerd at acm.org Fri Feb 19 10:33:56 2010 From: sjoerd at acm.org (Sjoerd Mullender) Date: Fri, 19 Feb 2010 10:33:56 +0100 Subject: [Python-Dev] deprecated stuff in standard library Message-ID: <4B7E5B04.4060200@acm.org> I have noticed that deprecated stuff is still being used in the standard Python library. When using modules that contain deprecated stuff you get a warning, and as a mere user there isn't much you can do about that. As a general rule, the Python standard library should not use deprecated constructs in non-deprecated (or otherwise deprecated) modules. The case I am running into is that mhlib uses multifile (in 2.6). -- Sjoerd Mullender -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 371 bytes Desc: OpenPGP digital signature URL: From asmodai at in-nomine.org Fri Feb 19 11:04:07 2010 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Fri, 19 Feb 2010 11:04:07 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> Message-ID: <20100219100407.GG14271@nexus.in-nomine.org> -On [20100219 08:37], Simon Cross (hodgestar+pythondev at gmail.com) wrote: >We need someone to maintain the copy of ElementTree in the Python >repository. Ideally this means pulling upgrades and bugfixes from >Fredrik's repository every now and then. Which will give you nothing as that tree hasn't been touched in over three years. I can understand giving special consideration to maintainers, but that would imply they actually maintain something, no? -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Contentment that derives from knowing when to be content is eternal contentment... 
From guido at python.org Fri Feb 19 14:10:54 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Feb 2010 08:10:54 -0500 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7E5B04.4060200@acm.org> References: <4B7E5B04.4060200@acm.org> Message-ID: Isn't mhlib itself deprecated? (It's gone in Py3k.) On Fri, Feb 19, 2010 at 4:33 AM, Sjoerd Mullender wrote: > I have noticed that deprecated stuff is still being used in the standard > Python library. ?When using modules that contain deprecated stuff you > get a warning, and as a mere user there isn't much you can do about that. > > As a general rule, the Python standard library should not use deprecated > constructs in non-deprecated (or otherwise deprecated) modules. > > The case I am running into is that mhlib uses multifile (in 2.6). -- --Guido van Rossum (python.org/~guido) From guido at python.org Fri Feb 19 14:13:04 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Feb 2010 08:13:04 -0500 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <20100219100407.GG14271@nexus.in-nomine.org> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <20100219100407.GG14271@nexus.in-nomine.org> Message-ID: All, I hope that Fredrik himself has time to chime in at least briefly, but he told me off-line that he sees nothing controversial in the currently proposed set of changes. On Fri, Feb 19, 2010 at 5:04 AM, Jeroen Ruigrok van der Werven wrote: > -On [20100219 08:37], Simon Cross (hodgestar+pythondev at gmail.com) wrote: >>We need someone to maintain the copy of ElementTree in the Python >>repository. Ideally this means pulling upgrades and bugfixes from >>Fredrik's repository every now and then. > > Which will give you nothing as that tree hasn't been touched in over three > years. > > I can understand giving special consideration to maintainers, but that would > imply they actually maintain something, no? > > -- > Jeroen Ruigrok van der Werven / asmodai > ????? ?????? ??? ?? ?????? > http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B > Contentment that derives from knowing when to be content is eternal > contentment... > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From sjoerd at acm.org Fri Feb 19 14:40:37 2010 From: sjoerd at acm.org (Sjoerd Mullender) Date: Fri, 19 Feb 2010 14:40:37 +0100 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: References: <4B7E5B04.4060200@acm.org> Message-ID: <4B7E94D5.4030206@acm.org> On 2010-02-19 14:10, Guido van Rossum wrote: > Isn't mhlib itself deprecated? (It's gone in Py3k.) I wouldn't like that, but it is beside my point. If a module is deprecated, then it should not be used in released code. If mhlib is deprecated, it doesn't tell you about it. mhlib uses multifile and multifile does tell you it is deprecated, and that is pretty annoying. > On Fri, Feb 19, 2010 at 4:33 AM, Sjoerd Mullender wrote: >> I have noticed that deprecated stuff is still being used in the standard >> Python library. When using modules that contain deprecated stuff you >> get a warning, and as a mere user there isn't much you can do about that. 
>> >> As a general rule, the Python standard library should not use deprecated >> constructs in non-deprecated (or otherwise deprecated) modules. >> >> The case I am running into is that mhlib uses multifile (in 2.6). > -- Sjoerd Mullender -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 371 bytes Desc: OpenPGP digital signature URL: From eric at trueblade.com Fri Feb 19 14:45:54 2010 From: eric at trueblade.com (Eric Smith) Date: Fri, 19 Feb 2010 08:45:54 -0500 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7E94D5.4030206@acm.org> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> Message-ID: <4B7E9612.5020100@trueblade.com> Sjoerd Mullender wrote: > On 2010-02-19 14:10, Guido van Rossum wrote: >> Isn't mhlib itself deprecated? (It's gone in Py3k.) > > I wouldn't like that, but it is beside my point. If a module is > deprecated, then it should not be used in released code. > If mhlib is deprecated, it doesn't tell you about it. mhlib uses > multifile and multifile does tell you it is deprecated, and that is > pretty annoying. This is because no one has gotten around to it. Create a bug report for it, and preferably attach a patch with tests. Eric. From brian.curtin at gmail.com Fri Feb 19 14:47:27 2010 From: brian.curtin at gmail.com (Brian Curtin) Date: Fri, 19 Feb 2010 08:47:27 -0500 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7E94D5.4030206@acm.org> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> Message-ID: On Fri, Feb 19, 2010 at 08:40, Sjoerd Mullender wrote: > On 2010-02-19 14:10, Guido van Rossum wrote: > > Isn't mhlib itself deprecated? (It's gone in Py3k.) > > I wouldn't like that, but it is beside my point. If a module is > deprecated, then it should not be used in released code. > If mhlib is deprecated, it doesn't tell you about it. mhlib uses > multifile and multifile does tell you it is deprecated, and that is > pretty annoying. > > I see the deprecation warning upon importing mhlib in 2.6 and trunk (with -Wd). -------------- next part -------------- An HTML attachment was scrubbed... URL: From florent.xicluna at gmail.com Fri Feb 19 14:53:23 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Fri, 19 Feb 2010 13:53:23 +0000 (UTC) Subject: [Python-Dev] deprecated stuff in standard library References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> Message-ID: Eric Smith trueblade.com> writes: > > This is because no one has gotten around to it. Create a bug report for > it, and preferably attach a patch with tests. > > Eric. 
> Actually, it gives py3k warning about "mhlib" + 2 others warnings: ./python/release26-maint/ $ ./python -Wd -3 -c "import mhlib" -c:1: DeprecationWarning: the mhlib module has been removed in Python 3.0; use the mailbox module instead ./python/release26-maint/Lib/mhlib.py:82: DeprecationWarning: in 3.x, mimetools has been removed in favor of the email package import mimetools ./python/release26-maint/Lib/mhlib.py:83: DeprecationWarning: the multifile module has been deprecated since Python 2.5 import multifile From sjoerd at acm.org Fri Feb 19 15:15:16 2010 From: sjoerd at acm.org (Sjoerd Mullender) Date: Fri, 19 Feb 2010 15:15:16 +0100 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7E9612.5020100@trueblade.com> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> Message-ID: <4B7E9CF4.9060303@acm.org> On 2010-02-19 14:45, Eric Smith wrote: > Sjoerd Mullender wrote: >> On 2010-02-19 14:10, Guido van Rossum wrote: >>> Isn't mhlib itself deprecated? (It's gone in Py3k.) >> >> I wouldn't like that, but it is beside my point. If a module is >> deprecated, then it should not be used in released code. >> If mhlib is deprecated, it doesn't tell you about it. mhlib uses >> multifile and multifile does tell you it is deprecated, and that is >> pretty annoying. > > This is because no one has gotten around to it. Create a bug report for > it, and preferably attach a patch with tests. My point is, as a matter of *policy*, nothing should be released that uses deprecated stuff. I can't create a bug report about wrong (or incomplete) policies. > Eric. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/sjoerd.mullender%40cwi.nl -- Sjoerd Mullender -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 371 bytes Desc: OpenPGP digital signature URL: From regebro at gmail.com Fri Feb 19 15:31:43 2010 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 19 Feb 2010 15:31:43 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <6bc73d4c1002182227m2a298ad2r13ef8d43e0ef85e4@mail.gmail.com> References: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> <6bc73d4c1002182227m2a298ad2r13ef8d43e0ef85e4@mail.gmail.com> Message-ID: <319e029f1002190631m3f8210a5h3b6d6272e075fc8@mail.gmail.com> On Fri, Feb 19, 2010 at 07:27, Stuart Bishop wrote: >> In any case, since you want to make a version that can be included and >> uses the timezone API, I guess that's a moot question until we have >> that version. :) > > As I understand it dateutil pretty much already provides what I'm describing. Well, pretty much yes. I don't know how good it is at using the system data without an Olsen database, but it shouldn't be too much work to add that, I guess. But that changes the topic from moving pytz to stdlib into moving dateutil.tz into stdlib. :) Personally I like pytz "anal" timezone support though, and dateutil.tz doesn't have that, and I still think it would be possible to have both in one system, but using different API-calls. Also, people have uttered negativities about datetime.tz, but they have never been able to say what they don't like about it. 
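(For readers unfamiliar with the two API styles being contrasted here, a rough sketch — it assumes pytz is installed and the datetime values are only illustrative. The standard tzinfo protocol just attaches a zone to a datetime; pytz's extended localize()/normalize() calls exist to keep the result correct across DST transitions.)

    from datetime import datetime, timedelta
    import pytz  # assumes pytz is available

    eastern = pytz.timezone('US/Eastern')
    naive = datetime(2010, 7, 1, 12, 0)

    # Standard-API style: attach the tzinfo directly.  With pytz's zone
    # objects this silently picks the zone's default offset, which is
    # usually not the one you want.
    attached = naive.replace(tzinfo=eastern)

    # pytz's extended API: localize() chooses the correct UTC offset, and
    # normalize() repairs the offset after arithmetic crosses a DST boundary.
    localized = eastern.localize(naive)
    half_year_later = eastern.normalize(localized + timedelta(days=180))

The sketch is only meant to show why the two interfaces keep coming up as separate things in this thread, not to recommend one over the other.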
I would like if we could look into making a timezone module that works on Python 2.5 to 3.2 that uses system data, unless there is also a "Olsen module" installed, and that has all the features of both dateutil.tz and pytz, ie: 1. Support for the standard API. 2. A Pytz extended API. 3. Using the system data. 4. Using a separate Olsen database installable by normal Python means. 5. Perhaps a timezone name alias map? That could map both old Olsen names and Windows names. -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From ncoghlan at gmail.com Fri Feb 19 16:23:02 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Feb 2010 01:23:02 +1000 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7E9CF4.9060303@acm.org> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> <4B7E9CF4.9060303@acm.org> Message-ID: <4B7EACD6.7070306@gmail.com> Sjoerd Mullender wrote: > My point is, as a matter of *policy*, nothing should be released that > uses deprecated stuff. I can't create a bug report about wrong (or > incomplete) policies. The policy is more that the test suite shouldn't raise Deprecation Warnings unless it is explicitly checking for them (or otherwise testing known-deprecated code). If there is a hole in the test suite coverage such that paths that trigger deprecated code are not exercised by the regression tests, then that is a bug. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From brett at python.org Fri Feb 19 16:50:19 2010 From: brett at python.org (Brett Cannon) Date: Fri, 19 Feb 2010 10:50:19 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: <4B7DB77A.2070108@activestate.com> References: <4B7DB77A.2070108@activestate.com> Message-ID: My notes from the session I led: + argparse - Same issues brought up. + Hg transition - Just updated everyone; Dirkjan said everything I did in his email update. + Stdlib breakout - Mentioned; nothing planned beyond a PEP at some point. + Extension module policy - If you write C code just for performance, you must provide a pure Python version (and both are tested). - When the stdlib is broken out, say ctypes-based versions are possible as fallback implementations. - Non-CPython VMs should contribute their pure Python implementations of modules back to the stdlib (e.g. datetime). + Proposing language changes - Need thorough dev docs. - People should vet ideas first, then PEP and code vetting, then submit for consideration. + Off-topic discussion - Need a "languishing" state in the tracker. - Probably should have a paid admin to help keep things such as the Roundup tracker in tip-top shape to take load off of people like Martin (e.g. bug fixes, new features, etc). On Thu, Feb 18, 2010 at 16:56, Trent Mick wrote: > (http://trentmick.blogspot.com/2010/02/other-python-vms-upcoming-python.html) > > Note that this was just from the first 15 minutes or so... > >> >> Some quick notes about the coming plans by the "other" Python >> implementations >> from today's Python Language Summit at PyCon 2010: >> >> - IronPython: >> ? ?- plan is to do Python 2.7 first, focus for this year >> ? ?- python 3.2 for the end of next year hopefully >> ? ?- other work on IDE stuff >> - Pynie (i.e. Parrot) -- Allison Randall: >> ? ?- about 4 major features away from pure Python syntax (did dicts last >> ? ? ?night) >> ? 
- targetting py3k repo and test suite: should be on track for python >> 3.2 >> - Jython: >> - plan to target 2.6 (b/c 2to3 depends on 2.6) >> - temporarily skip 2.7 and target 3.x (probably 3.2) >> - then if 3.x adoption isn't fully there, then go back and add Python >> 2.7 >> - will require JDK 2.7 for Python 3 support (b/c of new support for >> dynamic languages) >> - PyPy (Holger): >> - plan is Benjamin will port to Python 2.7 in the summer >> - only have slight deviations from CPython: idea is to merge back with >> CPython so don't have deviations. Typcically 1 or 2 line changes in >> ~25 >> modules. >> > > > Trent > > -- > Trent Mick > trentm at activestate.com > http://trentm.com/blog/ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > From rdmurray at bitdance.com Fri Feb 19 17:35:04 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 19 Feb 2010 11:35:04 -0500 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B7E2430.2070804@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> Message-ID: <20100219163504.1EFA91FA5F3@kimball.webabinitio.net> On Fri, 19 Feb 2010 06:40:00 +0100, wrote: > Antoine Pitrou wrote: > > Le Thu, 18 Feb 2010 22:46:41 +0100, Martin v. Löwis a écrit : > >>> It's time to comment and review. > >> Unfortunately, it's not. I strongly object to any substantial change to > >> the code base without explicit approval by Fredrik Lundh. > > > > Which most probably puts elementtree in bugfix-only mode. I don't > > necessarily disagree with such a decision, but it must be quite clear. > > My point is that the decision as already made when ElementTree was > incorporated into the standard library; it's the same policy for most > code that Fredrik Lundh has contributed (and which he still maintains > outside the standard library as well). He has made it fairly clear on > several occasions that this is how he expects things to work, and unless > we want to truly fork the code, we should comply. Guido has already pretty much answered this concern, but for the bystanders, note that as I understand it the patch actually brings the standard library code in sync with Fredrik's codebase, so it is actually less of a fork than continuing to do our own bug fixes would be. And Fredrik has commented on the patch on Rietveld. --David From sjoerd at acm.org Fri Feb 19 18:03:25 2010 From: sjoerd at acm.org (Sjoerd Mullender) Date: Fri, 19 Feb 2010 18:03:25 +0100 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7EACD6.7070306@gmail.com> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> <4B7E9CF4.9060303@acm.org> <4B7EACD6.7070306@gmail.com> Message-ID: <4B7EC45D.9020101@acm.org> On 2010-02-19 16:23, Nick Coghlan wrote: > Sjoerd Mullender wrote: >> My point is, as a matter of *policy*, nothing should be released that >> uses deprecated stuff. I can't create a bug report about wrong (or >> incomplete) policies. > > The policy is more that the test suite shouldn't raise Deprecation > Warnings unless it is explicitly checking for them (or otherwise testing > known-deprecated code).
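(As an illustration of "explicitly checking for them": a minimal, hypothetical test sketch using warnings.catch_warnings; old_api() merely stands in for whatever deprecated module or call a real regression test would exercise.)

    import unittest
    import warnings

    def old_api():
        # Hypothetical stand-in for a deprecated stdlib call.
        warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)
        return 42

    class DeprecationTest(unittest.TestCase):
        def test_old_api_warns(self):
            with warnings.catch_warnings(record=True) as caught:
                warnings.simplefilter("always")
                self.assertEqual(old_api(), 42)
            self.assertTrue(any(issubclass(w.category, DeprecationWarning)
                                for w in caught))

    if __name__ == '__main__':
        unittest.main()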
The policy should also be, if someone decides (or rather, implements) a deprecation of a module, they should do a grep to see where that module is used and fix the code. It's not rocket science. > If there is a hole in the test suite coverage such that paths that > trigger deprecated code are not exercised by the regression tests, then > that is a bug. > > Cheers, > Nick. > -- Sjoerd Mullender -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 371 bytes Desc: OpenPGP digital signature URL: From skip at pobox.com Fri Feb 19 19:37:14 2010 From: skip at pobox.com (skip at pobox.com) Date: Fri, 19 Feb 2010 12:37:14 -0600 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002190631m3f8210a5h3b6d6272e075fc8@mail.gmail.com> References: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> <6bc73d4c1002182227m2a298ad2r13ef8d43e0ef85e4@mail.gmail.com> <319e029f1002190631m3f8210a5h3b6d6272e075fc8@mail.gmail.com> Message-ID: <19326.55898.811342.13631@montanaro.dyndns.org> Lennart> I would like if we could look into making a timezone module Lennart> that works on Python 2.5 to 3.2 that uses system data... 2.5, 2.6 and 3.1 are completely off the radar screen at this point. The best you could hope for is that someone backports whatever is created for 2.7 or 3.2 and distributes it outside the normal distribution channel (say, as a patch on PyPI). Skip From ianb at colorstudy.com Fri Feb 19 19:49:23 2010 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 19 Feb 2010 13:49:23 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python Message-ID: This is a proto-proposal for including some functionality from virtualenv in Python itself. I'm not entirely confident about what I'm proposing, so it's not really PEP-ready, but I wanted to get feedback... First, a bit about how virtualenv works (this will use Linux conventions; Windows and some Mac installations are slightly different): * Let's say you are creating an environment in ~/env/ * /usr/bin/python is *copied* to ~/env/bin/python * This alone sets sys.prefix to ~/env/ (via existing code in Python) * At this point things are broken because the standard library is not available * virtualenv creates ~/env/lib/pythonX.Y/site.py, which adds the system standard library location (/usr/lib/pythonX.Y) to sys.path * site.py itself requires several modules to work, and each of these modules (from a pre-determined list of modules) is symlinked over from the standard library into ~/env/lib/pythonX.Y/ * site.py may or may not add /usr/lib/pythonX.Y/site-packages to sys.path * *Any* time you use ~/env/bin/python you'll get sys.prefix of ~/env/, and the appropriate path. No environmental variable is required. * No compiler is used; this is a fairly light tool There are some tweaks to this that could be made, but I believe virtualenv basically does things The Right Way. By setting sys.prefix All Tools Work (there are some virtualenv alternatives that do isolation without setting sys.prefix, but they typically break more often than virtualenv, or only support a limited number of workflows). Also by using a distinct interpreter (~/env/bin/python) it works fairly consistently and reliably compared to techniques like an environmental variable. 
The one serious alternative is what buildout (and virtualenv --relocatable) does, which is to use the system Python and change the path at the beginning of all scripts (it requires its own installer to accomplish this consistently). But virtualenv is kind of a hack, and I believe with a little support from Python this could be avoided. virtualenv can continue to exist to support the equivalent workflows on earlier versions of Python, but it would not exist (or would become much much simpler) on further Python versions. The specific parts of virtualenv that are a hack that I would like to replace with built-in functionality: * I'd rather ~/env/bin/python be a symlink instead of copying it. * I'd rather not copy (or symlink) *any* of the standard library. * I'd rather site.py support this functionality natively (and in turn that OS packagers support this when they make other modifications) * Compiling extensions can be tricky because code may not find headers (because they are installed in /usr, not ~/env/). I think this can be handled better if virtualenv is slightly less intrusive, or distutils is patched, or generally tools are more aware of this layout. * This gets more complicated with a Mac framework build of Python, and hopefully those hacks could go away too. I am not sure what the best way to do this is, but I will offer at least one suggestion (other suggestions welcome): In my (proto-)proposal, a new binary pythonv is created. This is slightly like pythonw.exe, which provides a Python interpreter on Windows which doesn't open a new window. This binary is primarily for creating new environments. It doesn't even need to be on $PATH, so it would be largely invisible to people unless they use it. If you symlink pythonv to a new location, it will effect sys.prefix (currently sys.prefix is calculated after dereferencing the symlink). Additionally, the binary will look for a configuration file. I'm not sure where this file should go; perhaps directly alongside the binary, or in some location based on sys.prefix. The configuration file would be a simple set of assignments; some I might imagine: * Maybe override sys.prefix * Control if the global site-packages is placed on sys.path * On some operating systems there are other locations for packages installed with the system packager; probably these should be possible to enable or disable * Maybe control installations or point to a file like distutils.cfg I got some feedback from the Debian/Ubuntu maintainer that he would like functionality that might be like this; for instance, if you have /usr/bin/python2.6 and /usr/bin/python2.6-dbg, he'd like them to work slightly different (e.g., /usr/bin/python2.6-dbg would look in a different place for libraries). So the configuration file location should be based on sys.prefix *and* the name of the binary itself (e.g., /usr/lib/python2.6/python-config-dbg.conf). I have no strong opinion on the location of the file itself, only that it can be specific to the directory and name of the interpreter. In addition to all this, I think sys would grow another prefixy value, e.g., sys.build_prefix, that points to the place where Python was actually built (virtualenv calls this sys.real_prefix, but that's not a very good name). Some code, especially in distutils, might need to be aware of this to compile extensions properly (we can be somewhat aware of these cases by looking at places where virtualenv already has problems compiling extensions). 
Some people have argued for something like sys.prefixes, a list of locations you might look at, which would allow a kind of nesting of these environments (where sys.prefixes[-1] == sys.prefix; or maybe reversed). Personally this seems like it would be hard to keep mental track of this, but I can understand the purpose -- you could for instance create a kind of template prefix that has *most* of what you want installed in it, then create sub-environments that contain for instance an actual application, or a checkout (to test just one new piece of code). I'm not sure how this should best work on Windows (without symlinks, and where things generally work differently), but I would hope if this idea is more visible that someone more opinionated than I would propose the appropriate analog on Windows. -- Ian Bicking | http://blog.ianbicking.org | http://twitter.com/ianbicking -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Feb 19 20:36:53 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Feb 2010 14:36:53 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: Message-ID: This sounds like a great idea (especially since I proposed something a little bit like it in yesterday's language summit :-). I have to admit I cannot remember what uses are made of sys.prefix; it would be good to explicitly enumerate these in the PEP when you write it. Regarding the Windows question, does virtualenv work on Windows? If so, its approach might be adopted. If not, maybe we shouldn't care (in the first version anyway)? --Guido On Fri, Feb 19, 2010 at 1:49 PM, Ian Bicking wrote: > This is a proto-proposal for including some functionality from virtualenv in > Python itself. ?I'm not entirely confident about what I'm proposing, so it's > not really PEP-ready, but I wanted to get feedback... > First, a bit about how virtualenv works (this will use Linux conventions; > Windows and some Mac installations are slightly different): > * Let's say you are creating an environment in ~/env/ > * /usr/bin/python is *copied* to ~/env/bin/python > * This alone sets sys.prefix to ~/env/ (via existing code in Python) > * At this point things are broken because the standard library is not > available > * virtualenv creates ~/env/lib/pythonX.Y/site.py, which adds the system > standard library location (/usr/lib/pythonX.Y) to sys.path > * site.py itself requires several modules to work, and each of these modules > (from a pre-determined list of modules) is symlinked over from the standard > library into ~/env/lib/pythonX.Y/ > * site.py may or may not add /usr/lib/pythonX.Y/site-packages to sys.path > * *Any* time you use ~/env/bin/python you'll get sys.prefix of ~/env/, and > the appropriate path. ?No environmental variable is required. > * No compiler is used; this is a fairly light tool > There are some tweaks to this that could be made, but I believe virtualenv > basically does things The Right Way. ?By setting sys.prefix All Tools Work > (there are some virtualenv alternatives that do isolation without setting > sys.prefix, but they typically break more often than virtualenv, or only > support a limited number of workflows). ?Also by using a distinct > interpreter (~/env/bin/python) it works fairly consistently and reliably > compared to techniques like an environmental variable. 
?The one serious > alternative is what buildout (and virtualenv --relocatable) does, which is > to use the system Python and change the path at the beginning of all scripts > (it requires its own installer to accomplish this consistently). > But virtualenv is kind of a hack, and I believe with a little support from > Python this could be avoided. ?virtualenv can continue to exist to support > the equivalent workflows on earlier versions of Python, but it would not > exist (or would become much much simpler) on further Python versions. > The specific parts of virtualenv that are a hack that I would like to > replace with built-in functionality: > * I'd rather ~/env/bin/python be a symlink instead of copying it. > * I'd rather not copy (or symlink) *any* of the standard library. > * I'd rather site.py support this functionality natively (and in turn that > OS packagers support this when they make other modifications) > * Compiling extensions can be tricky because code may not find headers > (because they are installed in /usr, not ~/env/). ?I think this can be > handled better if virtualenv is slightly less intrusive, or distutils is > patched, or generally tools are more aware of this layout. > * This gets more complicated with a Mac framework build of Python, and > hopefully those hacks could go away too. > I am not sure what the best way to do this is, but I will offer at least one > suggestion (other suggestions welcome): > In my (proto-)proposal, a new binary pythonv is created. ?This is slightly > like pythonw.exe, which provides a Python interpreter on Windows which > doesn't open a new window. ?This binary is primarily for creating new > environments. ?It doesn't even need to be on $PATH, so it would be largely > invisible to people unless they use it. > If you symlink pythonv to a new location, it will effect sys.prefix > (currently sys.prefix is calculated after dereferencing the symlink). > Additionally, the binary will look for a configuration file. ?I'm not sure > where this file should go; perhaps directly alongside the binary, or in some > location based on sys.prefix. > The configuration file would be a simple set of assignments; some I might > imagine: > * Maybe override sys.prefix > * Control if the global site-packages is placed on sys.path > * On some operating systems there are other locations for packages installed > with the system packager; probably these should be possible to enable or > disable > * Maybe control installations or point to a file like distutils.cfg > I got some feedback from the Debian/Ubuntu maintainer that he would like > functionality that might be like this; for instance, if you have > /usr/bin/python2.6 and /usr/bin/python2.6-dbg, he'd like them to work > slightly different (e.g., /usr/bin/python2.6-dbg would look in a different > place for libraries). ?So the configuration file location should be based on > sys.prefix *and* the name of the binary itself (e.g., > /usr/lib/python2.6/python-config-dbg.conf). ?I have no strong opinion on the > location of the file itself, only that it can be specific to the directory > and name of the interpreter. > In addition to all this, I think sys would grow another prefixy value, e.g., > sys.build_prefix, that points to the place where Python was actually built > (virtualenv calls this sys.real_prefix, but that's not a very good name). 
> ?Some code, especially in distutils, might need to be aware of this to > compile extensions properly (we can be somewhat aware of these cases by > looking at places where virtualenv already has problems compiling > extensions). > Some people have argued for something like sys.prefixes, a list of locations > you might look at, which would allow a kind of nesting of these environments > (where sys.prefixes[-1] == sys.prefix; or maybe reversed). ?Personally this > seems like it would be hard to keep mental track of this, but I can > understand the purpose -- you could for instance create a kind of template > prefix that has *most* of what you want installed in it, then create > sub-environments that contain for instance an actual application, or a > checkout (to test just one new piece of code). > I'm not sure how this should best work on Windows (without symlinks, and > where things generally work differently), but I would hope if this idea is > more visible that someone more opinionated than I would propose the > appropriate analog on Windows. > > -- > Ian Bicking ?| ?http://blog.ianbicking.org ?| ?http://twitter.com/ianbicking > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) From solipsis at pitrou.net Fri Feb 19 20:45:26 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 19 Feb 2010 19:45:26 +0000 (UTC) Subject: [Python-Dev] Proposal for virtualenv functionality in Python References: Message-ID: Le Fri, 19 Feb 2010 13:49:23 -0500, Ian Bicking a ?crit?: > > * I'd rather ~/env/bin/python be a symlink instead of copying it. How about simply adding a --prefix argument to the interpreter. Then virtualenv can create a "python" script that simply adds --prefix and forwards all the arguments to the real python executable. Or am I missing something? From pjenvey at underboss.org Fri Feb 19 20:59:20 2010 From: pjenvey at underboss.org (Philip Jenvey) Date: Fri, 19 Feb 2010 11:59:20 -0800 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: Message-ID: <7A75C965-2AC0-40DB-98AB-36BE3069A76B@underboss.org> On Feb 19, 2010, at 11:45 AM, Antoine Pitrou wrote: > Le Fri, 19 Feb 2010 13:49:23 -0500, Ian Bicking a ?crit : >> >> * I'd rather ~/env/bin/python be a symlink instead of copying it. > > How about simply adding a --prefix argument to the interpreter. Then > virtualenv can create a "python" script that simply adds --prefix and > forwards all the arguments to the real python executable. > Or am I missing something? It couldn't be a shell script, as they wouldn't work as shebang line interpreters for other python scripts. -- Philip Jenvey From pje at telecommunity.com Fri Feb 19 22:18:33 2010 From: pje at telecommunity.com (P.J. Eby) Date: Fri, 19 Feb 2010 16:18:33 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: Message-ID: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >I'm not sure how this should best work on Windows (without symlinks, >and where things generally work differently), but I would hope if >this idea is more visible that someone more opinionated than I would >propose the appropriate analog on Windows. 
You'd probably have to just copy pythonv.exe to an appropriate directory, and have it use the configuration file to find the "real" prefix. At least, that'd be a relatively obvious way to do it, and it would have the advantage of being symmetrical across platforms: just copy or symlink pythonv, and make sure the real prefix is in your config file. (Windows does have "shortcuts" but I don't think that there's any way for a linked program to know *which* shortcut it was launched from.) From greg at krypto.org Fri Feb 19 22:30:39 2010 From: greg at krypto.org (Gregory P. Smith) Date: Fri, 19 Feb 2010 16:30:39 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> Message-ID: <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> On Fri, Feb 19, 2010 at 4:18 PM, P.J. Eby wrote: > At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >> >> I'm not sure how this should best work on Windows (without symlinks, and >> where things generally work differently), but I would hope if this idea is >> more visible that someone more opinionated than I would propose the >> appropriate analog on Windows. > > You'd probably have to just copy pythonv.exe to an appropriate directory, > and have it use the configuration file to find the "real" prefix. ?At least, > that'd be a relatively obvious way to do it, and it would have the advantage > of being symmetrical across platforms: just copy or symlink pythonv, and > make sure the real prefix is in your config file. > > (Windows does have "shortcuts" but I don't think that there's any way for a > linked program to know *which* shortcut it was launched from.) Some recent discussion pointed out that vista and win7 ntfs actually supports symlinks. the same question about determining where it was launched from may still hold there? (and we need this to work on xp). How often do windows users need something like virtualenv? (Asking for experience from windows users of all forms here). I personally can't imagine anyone that would ever use a system generic python install from a .msi unless they're just learning python. I would hope people would already use py2exe or similar and include an entire CPython VM with their app with their own installer but as I really have nothing to do with windows these days I'm sure I'm wrong. What about using virtualenv with ironpython and jython? does it make any sense in that context? how do we make it not impossible for them to support? despite all the questions, I'm +1 on going ahead with a PEP and sprint discussions to figure out how to get it in for CPython 3.2 and 2.7. -gps From digitalxero at gmail.com Fri Feb 19 22:35:42 2010 From: digitalxero at gmail.com (Dj Gilcrease) Date: Fri, 19 Feb 2010 14:35:42 -0700 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> Message-ID: win2k and later have a form of sym link, the api for it is just not provided in a nice simple app like it is on nix platforms. On 2/19/10, P.J. Eby wrote: > At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>I'm not sure how this should best work on Windows (without symlinks, >>and where things generally work differently), but I would hope if >>this idea is more visible that someone more opinionated than I would >>propose the appropriate analog on Windows. 
> > You'd probably have to just copy pythonv.exe to an appropriate > directory, and have it use the configuration file to find the "real" > prefix. At least, that'd be a relatively obvious way to do it, and > it would have the advantage of being symmetrical across platforms: > just copy or symlink pythonv, and make sure the real prefix is in > your config file. > > (Windows does have "shortcuts" but I don't think that there's any way > for a linked program to know *which* shortcut it was launched from.) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/digitalxero%40gmail.com > -- Dj Gilcrease OpenRPG Developer ~~http://www.openrpg.com From digitalxero at gmail.com Fri Feb 19 22:49:07 2010 From: digitalxero at gmail.com (Dj Gilcrease) Date: Fri, 19 Feb 2010 14:49:07 -0700 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> Message-ID: I develop OpenRPG and 90% of our user base is on windows. We require the user to install python and wxPython from msi because our app supports GUI plugins so to ensure the user can use any plugin even if it isnt prepackaged they need to have the full python and wxPython installed. We are working on changing the code around to work better with py2exe & py2app. But I use virtual env on windows & linux to test multiple py/wx combos that our app supports On 2/19/10, Gregory P. Smith wrote: > On Fri, Feb 19, 2010 at 4:18 PM, P.J. Eby wrote: >> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>> >>> I'm not sure how this should best work on Windows (without symlinks, and >>> where things generally work differently), but I would hope if this idea >>> is >>> more visible that someone more opinionated than I would propose the >>> appropriate analog on Windows. >> >> You'd probably have to just copy pythonv.exe to an appropriate directory, >> and have it use the configuration file to find the "real" prefix. ?At >> least, >> that'd be a relatively obvious way to do it, and it would have the >> advantage >> of being symmetrical across platforms: just copy or symlink pythonv, and >> make sure the real prefix is in your config file. >> >> (Windows does have "shortcuts" but I don't think that there's any way for >> a >> linked program to know *which* shortcut it was launched from.) > > Some recent discussion pointed out that vista and win7 ntfs actually > supports symlinks. the same question about determining where it was > launched from may still hold there? (and we need this to work on xp). > > How often do windows users need something like virtualenv? (Asking > for experience from windows users of all forms here). I personally > can't imagine anyone that would ever use a system generic python > install from a .msi unless they're just learning python. I would hope > people would already use py2exe or similar and include an entire > CPython VM with their app with their own installer but as I really > have nothing to do with windows these days I'm sure I'm wrong. > > What about using virtualenv with ironpython and jython? does it make > any sense in that context? how do we make it not impossible for them > to support? 
> > despite all the questions, I'm +1 on going ahead with a PEP and sprint > discussions to figure out how to get it in for CPython 3.2 and 2.7. > > -gps > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/digitalxero%40gmail.com > -- Dj Gilcrease OpenRPG Developer ~~http://www.openrpg.com From rdmurray at bitdance.com Fri Feb 19 22:57:14 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 19 Feb 2010 16:57:14 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> Message-ID: <20100219215714.E03351FD27E@kimball.webabinitio.net> On Fri, 19 Feb 2010 14:35:42 -0700, Dj Gilcrease wrote: > On 2/19/10, P.J. Eby wrote: > > At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: > >>I'm not sure how this should best work on Windows (without symlinks, > >>and where things generally work differently), but I would hope if > >>this idea is more visible that someone more opinionated than I would > >>propose the appropriate analog on Windows. > > > > You'd probably have to just copy pythonv.exe to an appropriate > > directory, and have it use the configuration file to find the "real" > > prefix. At least, that'd be a relatively obvious way to do it, and > > it would have the advantage of being symmetrical across platforms: > > just copy or symlink pythonv, and make sure the real prefix is in > > your config file. > > > > (Windows does have "shortcuts" but I don't think that there's any way > > for a linked program to know *which* shortcut it was launched from.) > > win2k and later have a form of sym link, the api for it is just not > provided in a nice simple app like it is on nix platforms. See also http://bugs.python.org/issue1578269, which proposes an implementation of os.symlink for windows, and appears to be just about ready to go in. --David From fuzzyman at voidspace.org.uk Fri Feb 19 23:11:02 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 19 Feb 2010 17:11:02 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> Message-ID: <4B7F0C76.1060101@voidspace.org.uk> On 19/02/2010 16:30, Gregory P. Smith wrote: > On Fri, Feb 19, 2010 at 4:18 PM, P.J. Eby wrote: > >> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >> >>> I'm not sure how this should best work on Windows (without symlinks, and >>> where things generally work differently), but I would hope if this idea is >>> more visible that someone more opinionated than I would propose the >>> appropriate analog on Windows. >>> >> You'd probably have to just copy pythonv.exe to an appropriate directory, >> and have it use the configuration file to find the "real" prefix. At least, >> that'd be a relatively obvious way to do it, and it would have the advantage >> of being symmetrical across platforms: just copy or symlink pythonv, and >> make sure the real prefix is in your config file. >> >> (Windows does have "shortcuts" but I don't think that there's any way for a >> linked program to know *which* shortcut it was launched from.) >> > Some recent discussion pointed out that vista and win7 ntfs actually > supports symlinks. 
the same question about determining where it was > launched from may still hold there? (and we need this to work on xp). > > How often do windows users need something like virtualenv? (Asking > for experience from windows users of all forms here). I personally > can't imagine anyone that would ever use a system generic python > install from a .msi unless they're just learning python. I would hope > people would already use py2exe or similar and include an entire > CPython VM with their app with their own installer but as I really > have nothing to do with windows these days I'm sure I'm wrong. > I've used virtualenv on Windows and it is just as useful as on other platforms. *Most* Python developers I know work from an installed Python although application distribution is typically done with py2exe. The Windows msi installer is downloaded an insane amount from Python.org. Michael > What about using virtualenv with ironpython and jython? does it make > any sense in that context? how do we make it not impossible for them > to support? > > despite all the questions, I'm +1 on going ahead with a PEP and sprint > discussions to figure out how to get it in for CPython 3.2 and 2.7. > > -gps > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ From mike.klaas at gmail.com Fri Feb 19 23:41:59 2010 From: mike.klaas at gmail.com (Mike Klaas) Date: Fri, 19 Feb 2010 14:41:59 -0800 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7EC45D.9020101@acm.org> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> <4B7E9CF4.9060303@acm.org> <4B7EACD6.7070306@gmail.com> <4B7EC45D.9020101@acm.org> Message-ID: <3d2ce8cb1002191441l15755afcq4ff9c0353a304b06@mail.gmail.com> On Fri, Feb 19, 2010 at 9:03 AM, Sjoerd Mullender wrote: > The policy should also be, if someone decides (or rather, implements) a > deprecation of a module, they should do a grep to see where that module > is used and fix the code. ?It's not rocket science. I'm not sure if you're aware of it, but you're starting to sound a little rude. ISTM that it doesn't make sense to waste effort ensuring that deprecated code is updated to not call other deprecated modules. Of course, all released non-deprecated code should steer clear of deprecated apis. -Mike From v+python at g.nevcal.com Sat Feb 20 04:39:49 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 19 Feb 2010 19:39:49 -0800 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> Message-ID: <4B7F5985.2000606@g.nevcal.com> On approximately 2/19/2010 1:18 PM, came the following characters from the keyboard of P.J. Eby: > At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >> I'm not sure how this should best work on Windows (without symlinks, >> and where things generally work differently), but I would hope if >> this idea is more visible that someone more opinionated than I would >> propose the appropriate analog on Windows. > > You'd probably have to just copy pythonv.exe to an appropriate > directory, and have it use the configuration file to find the "real" > prefix. 
At least, that'd be a relatively obvious way to do it, and it > would have the advantage of being symmetrical across platforms: just > copy or symlink pythonv, and make sure the real prefix is in your > config file. > > (Windows does have "shortcuts" but I don't think that there's any way > for a linked program to know *which* shortcut it was launched from.) No automatic way, but shortcuts can include parameters, not just the program name. So a parameter could be --prefix as was suggested in another response, but for a different reason. Windows also has hard-links for files. A lot of Windows tools are completely ignorant of both of those linking concepts... resulting in disks that look to be over capacity when they are not, for example. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From eric at trueblade.com Sat Feb 20 04:52:46 2010 From: eric at trueblade.com (Eric Smith) Date: Fri, 19 Feb 2010 22:52:46 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B7F5985.2000606@g.nevcal.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> Message-ID: <4B7F5C8E.7080206@trueblade.com> Glenn Linderman wrote: > On approximately 2/19/2010 1:18 PM, came the following characters from > the keyboard of P.J. Eby: >> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>> I'm not sure how this should best work on Windows (without symlinks, >>> and where things generally work differently), but I would hope if >>> this idea is more visible that someone more opinionated than I would >>> propose the appropriate analog on Windows. >> >> You'd probably have to just copy pythonv.exe to an appropriate >> directory, and have it use the configuration file to find the "real" >> prefix. At least, that'd be a relatively obvious way to do it, and it >> would have the advantage of being symmetrical across platforms: just >> copy or symlink pythonv, and make sure the real prefix is in your >> config file. >> >> (Windows does have "shortcuts" but I don't think that there's any way >> for a linked program to know *which* shortcut it was launched from.) > > No automatic way, but shortcuts can include parameters, not just the > program name. So a parameter could be --prefix as was suggested in > another response, but for a different reason. Shortcuts don't work from the shell (well, cmd.exe, at least), do they? Can't test from here. From v+python at g.nevcal.com Sat Feb 20 05:23:36 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 19 Feb 2010 20:23:36 -0800 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B7F5C8E.7080206@trueblade.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B7F5C8E.7080206@trueblade.com> Message-ID: <4B7F63C8.8020704@g.nevcal.com> On approximately 2/19/2010 7:52 PM, came the following characters from the keyboard of Eric Smith: > Glenn Linderman wrote: >> On approximately 2/19/2010 1:18 PM, came the following characters >> from the keyboard of P.J. Eby: >>> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>>> I'm not sure how this should best work on Windows (without symlinks, >>>> and where things generally work differently), but I would hope if >>>> this idea is more visible that someone more opinionated than I would >>>> propose the appropriate analog on Windows. 
>>> >>> You'd probably have to just copy pythonv.exe to an appropriate >>> directory, and have it use the configuration file to find the "real" >>> prefix. At least, that'd be a relatively obvious way to do it, and it >>> would have the advantage of being symmetrical across platforms: just >>> copy or symlink pythonv, and make sure the real prefix is in your >>> config file. >>> >>> (Windows does have "shortcuts" but I don't think that there's any way >>> for a linked program to know *which* shortcut it was launched from.) >> >> No automatic way, but shortcuts can include parameters, not just the >> program name. So a parameter could be --prefix as was suggested in >> another response, but for a different reason. > > Shortcuts don't work from the shell (well, cmd.exe, at least), do > they? Can't test from here. So if you can't test it, why would you state it as a fact... and then back-pedal? :) Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. d:\>python.exe Python 3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> exit() d:\>c:\python26\python.exe ActivePython 2.6.2.2 (ActiveState Software Inc.) based on Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> exit() d:\>d:\python.exe.lnk d:\>ActivePython 2.6.2.2 (ActiveState Software Inc.) based on Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> So this makes it look like it works fine. But doing exit() from the shortcut-started python fails... it hangs, eventually the whole CMD shell dies, if you poke it enough. I'm going to let it sit there a long time ... until I shut down, and see if it ever exits properly. The form d:\>start python.exe.lnk gives the python its own window, and that works/exits fine. So, shortcuts do work from the shell, but there might be some issues, and you might have to type the .lnk to invoke them (I haven't played with adding .lnk or .exe.lnk to PATHEXT) I don't have a clue if the above problem is a Windows issue, or a Python issue. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From guido at python.org Sat Feb 20 05:46:04 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 19 Feb 2010 23:46:04 -0500 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <3d2ce8cb1002191441l15755afcq4ff9c0353a304b06@mail.gmail.com> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> <4B7E9CF4.9060303@acm.org> <4B7EACD6.7070306@gmail.com> <4B7EC45D.9020101@acm.org> <3d2ce8cb1002191441l15755afcq4ff9c0353a304b06@mail.gmail.com> Message-ID: On Fri, Feb 19, 2010 at 5:41 PM, Mike Klaas wrote: > On Fri, Feb 19, 2010 at 9:03 AM, Sjoerd Mullender wrote: > >> The policy should also be, if someone decides (or rather, implements) a >> deprecation of a module, they should do a grep to see where that module >> is used and fix the code. ?It's not rocket science. > > I'm not sure if you're aware of it, but you're starting to sound a little rude. He's just being Dutch. 
:-) > ISTM that it doesn't make sense to waste effort ensuring that > deprecated code is updated to not call other deprecated modules. ?Of > course, all released non-deprecated code should steer clear of > deprecated apis. Read again. Sjoerd meant it exactly the other way around. When a module is deprecated it is *not* a wasted effort to ensure that no other (undeprecated) modules depend on it! -- --Guido van Rossum (python.org/~guido) From fuzzyman at voidspace.org.uk Sat Feb 20 06:37:56 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 20 Feb 2010 00:37:56 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B7F5C8E.7080206@trueblade.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B7F5C8E.7080206@trueblade.com> Message-ID: -- http://www.ironpythoninaction.com On 19 Feb 2010, at 22:52, Eric Smith wrote: > Glenn Linderman wrote: >> On approximately 2/19/2010 1:18 PM, came the following characters >> from the keyboard of P.J. Eby: >>> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>>> I'm not sure how this should best work on Windows (without >>>> symlinks, >>>> and where things generally work differently), but I would hope if >>>> this idea is more visible that someone more opinionated than I >>>> would >>>> propose the appropriate analog on Windows. >>> >>> You'd probably have to just copy pythonv.exe to an appropriate >>> directory, and have it use the configuration file to find the "real" >>> prefix. At least, that'd be a relatively obvious way to do it, >>> and it >>> would have the advantage of being symmetrical across platforms: just >>> copy or symlink pythonv, and make sure the real prefix is in your >>> config file. >>> >>> (Windows does have "shortcuts" but I don't think that there's any >>> way >>> for a linked program to know *which* shortcut it was launched from.) >> No automatic way, but shortcuts can include parameters, not just >> the program name. So a parameter could be --prefix as was >> suggested in another response, but for a different reason. > > Shortcuts don't work from the shell (well, cmd.exe, at least), do > they? Can't test from here. They do if you add .lnk to your PATHEXT environment variable. Michael > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk From martin at v.loewis.de Sat Feb 20 08:52:04 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 20 Feb 2010 08:52:04 +0100 Subject: [Python-Dev] deprecated stuff in standard library In-Reply-To: <4B7E9CF4.9060303@acm.org> References: <4B7E5B04.4060200@acm.org> <4B7E94D5.4030206@acm.org> <4B7E9612.5020100@trueblade.com> <4B7E9CF4.9060303@acm.org> Message-ID: <4B7F94A4.8030405@v.loewis.de> > My point is, as a matter of *policy*, nothing should be released that > uses deprecated stuff. I can't create a bug report about wrong (or > incomplete) policies. Sure you can. Write a bug report asking that PEP 4 gets amended with specific wording. Not that PEP 4 is followed in practice at all, but if the policy was in place and just not followed, that's a bug. 
Regards, Martin From martin at v.loewis.de Sat Feb 20 08:57:50 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 20 Feb 2010 08:57:50 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> Message-ID: <4B7F95FE.20703@v.loewis.de> > We need someone to maintain the copy of ElementTree in the Python > repository. We have one: Fredrik Lundh. > Ideally this means pulling upgrades and bugfixes from > Fredrik's repository every now and then. If the goals of Python > ElementTree and Fredrik ElementTree diverge I don't see a problem with > an amicable fork. I see one: Fredrik will not consider such a fork amicable. Of course, if you could make him state in public that he is fine with a procedure that you propose, go ahead. He had stated in public that he is fine with the procedure I'm defending right now, that's why I'm defending it: no substantial changes without his explicit approval (breakage due to language changes is about the only exception - not even bug fixes are allowed). Regards, Martin From hodgestar+pythondev at gmail.com Sat Feb 20 10:23:33 2010 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 20 Feb 2010 11:23:33 +0200 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B7F95FE.20703@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> Message-ID: On Sat, Feb 20, 2010 at 9:57 AM, "Martin v. L?wis" wrote: >> We need someone to maintain the copy of ElementTree in the Python >> repository. > > We have one: Fredrik Lundh. The last commits by Fredrik to ElementTree in Python SVN that I can see are dated 2006-08-16. The last commits I can see to ElementTree at http://svn.effbot.python-hosting.com/ are dated 2006-07-05. To paraphrase Antoine's comment [1] on Rietveld -- we need a process that results in bug fixes for users of the copy of ElementTree in Python. [1] http://codereview.appspot.com/207048/show (most direct link I could find) Schiavo Simon From stefan_ml at behnel.de Sat Feb 20 10:49:50 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 20 Feb 2010 10:49:50 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: Message-ID: Florent Xicluna, 18.02.2010 10:21: > For this purpose, I grew the test suite from 300 lines to 1800 lines, using both > the tests from upstream and the tests proposed by Neil Muller on issue #6232. Just a comment on this. While the new tests may work with ElementTree as is, there are a couple of problem with them. They become apparent when running the test suite against lxml.etree. Some of the tests depend on specifics of the serialiser that may not be guaranteed, such as the order of attributes or namespace declarations in a tag, or whitespace before the self-closing tag terminator (" />"). ET 1.3 has a mostly redesigned serialiser, and it may receive another couple of improvements or changes before it comes out. None of theses features is really required to hold for anything but the current as-is implementation. Other tests rely on non-XML being serialised, such as 'textsubtext' This is achieved by setting "root.tag" to None - I'm not sure this is a feature of ElementTree, and I'd be surprised if it was documented anywhere. Several of the tests also use invalid namespace URIs like "uri" or non well-formed attribute names like "123". 
That's bad style at best. There are also some tests for implementation details, such as the "_cache" in ElementPath or the parser version (requiring expat), or even this test: element.getiterator == element.iter which doesn't apply to lxml.etree, as its element.getiterator() complies with the ET 1.2 interface for compatibility, whereas the new element.iter() obeys the ET 1.3 interface that defines it. Asserting both to be equal doesn't make much sense in the context of their specification. Another example is check_method(element.findall("*").next) In lxml.etree, this produces an AttributeError: 'list' object has no attribute 'next' because element.findall() is specified in the official ET documentation to return "a list or iterator", i.e. not necessarily an iterator but always an iterable. There is an iterfind() that would do the above, which matches ET 1.3. So my impression is that many of the tests try to provide guarantees where they cannot or should not exist, and even where the output is clearly non-conforming with respect to standards. I don't think it makes sense to put these into a regression test suite. That said, I should add that lxml's test suite includes about 250 unit tests that work with (and adapt to) lxml.etree, ElementTree and cElementTree, in Py2.3+ and Py3.x, and with ET 1.2 and ET 1.3. Although certainly not a copy&run replacement, those should be much better suited to accompany the existing stdlib tests. Stefan From florent.xicluna at gmail.com Sat Feb 20 11:53:29 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Sat, 20 Feb 2010 10:53:29 +0000 (UTC) Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 References: Message-ID: Stefan Behnel behnel.de> writes: > > Florent Xicluna, 18.02.2010 10:21: > > For this purpose, I grew the test suite from 300 lines to 1800 lines, > > using both the tests from upstream and the tests proposed by Neil Muller > > on issue #6232. > > Just a comment on this. While the new tests may work with ElementTree as > is, there are a couple of problem with them. They become apparent when > running the test suite against lxml.etree. > The test suite in the stdlib targets the "xml.etree" implementations only. > None of theses features is really required to hold for anything but the > current as-is implementation. > I agree. > So my impression is that many of the tests try to provide guarantees where > they cannot or should not exist, and even where the output is clearly > non-conforming with respect to standards. I don't think it makes sense to > put these into a regression test suite. > The test suite in the stdlib should try to cover every piece of code, even implementation details and edge cases. It guarantees that the implementation details do not change between minor releases. And it helps identify regression or evolution of the behavior when the library is updated. However we may identify better each category of tests: - tests for the ElementTree API 1.2, which should pass with lxml.etree, too. - tests of implementation details, which are not part of the specification. Additionally, these tests ensure that the C implementation can be used as a drop-in replacement of the Python implementation. It is a request expressed by many users of the "xml.etree" package. > That said, I should add that lxml's test suite includes about 250 unit > tests that work with (and adapt to) lxml.etree, ElementTree and > cElementTree, in Py2.3+ and Py3.x, and with ET 1.2 and ET 1.3. 
Although > certainly not a copy&run replacement, those should be much better suited to > accompany the existing stdlib tests. > Interesting. I may add these tests to the test_suite, if they are not completely redundant. -- Florent Xicluna From eric at trueblade.com Sat Feb 20 11:56:34 2010 From: eric at trueblade.com (Eric Smith) Date: Sat, 20 Feb 2010 05:56:34 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B7F63C8.8020704@g.nevcal.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B7F5C8E.7080206@trueblade.com> <4B7F63C8.8020704@g.nevcal.com> Message-ID: <4B7FBFE2.7020101@trueblade.com> Glenn Linderman wrote: >> Shortcuts don't work from the shell (well, cmd.exe, at least), do >> they? Can't test from here. > > So if you can't test it, why would you state it as a fact... and then > back-pedal? :) It was a question, not a statement! Plus, I figured I could con someone into testing it for me. Mission accomplished. :) Thanks for investigating. Eric. From chambon.pascal at gmail.com Sat Feb 20 12:20:39 2010 From: chambon.pascal at gmail.com (Pascal Chambon) Date: Sat, 20 Feb 2010 12:20:39 +0100 Subject: [Python-Dev] Buffered streams design + raw io gotchas In-Reply-To: References: <4B7D8E10.20609@wanadoo.fr> Message-ID: <4B7FC587.7020904@wanadoo.fr> Allright, so in the case of regular files I may content myself of BufferedRandom. And maybe I'll put some warnings concerning the returning of raw streams by factory functions. Thanks, Regards, Pascal Guido van Rossum a ?crit : > IIRC here is the use case for buffered reader/writer vs. random: a > disk file opened for reading and writing uses a random access buffer; > but a TCP stream stream, while both writable and readable, should use > separate read and write buffers. The reader and writer don't have to > worry about reversing the I/O direction. > > But maybe I'm missing something about your question? > > --Guido > > On Thu, Feb 18, 2010 at 1:59 PM, Pascal Chambon > wrote: > >> Hello, >> >> As I continue experimenting with advanced streams, I'm currently beginning >> an important modification of io's Buffered and Text streams (removal of >> locks, adding of methods...), to fit the optimization process of the whole >> library. >> However, I'm now wondering what the idea is behind the 3 main buffer classes >> : Bufferedwriter, Bufferedreader and Bufferedrandom. >> >> The i/o PEP claimed that the two first ones were for sequential streams >> only, and the latter for all kinds of seekable streams; but as it is >> implemented, actually the 3 classes can be returned by open() for seekable >> files. >> >> Am I missing some use case in which this distinction would be useful (for >> optimizations ?) ? Else, I guess I should just create a RSBufferedStream >> class which handles all kinds of situations, raising InsupportedOperation >> exceptions whenever needed.... after all, text streams act that way (there >> is no TextWriter or TextReader stream), and they seem fine. >> >> Also, io.open() might return a raw file stream when we set buffering=0. The >> problem is that raw file streams are NOT like buffered streams with a buffer >> limit of zero : raw streams might fail writing/reading all the data asked, >> without raising errors. I agree this case should be rare, but it might be a >> gotcha for people wanting direct control of the stream (eg. for locking >> purpose), but no silently incomplete read/write operation. 
>> Shouldn't we rather return a "write through" buffered stream in this case >> "buffering=0", to cleanly handle partial read/write ops ? >> >> regards, >> Pascal >> >> PS : if you have 3 minutes, I'd be very interested by your opinion on the >> "advanced modes" draft below. >> Does it seem intuitive to you ? In particular, shouldn't the "+" and "-" >> flags have the opposite meaning ? >> http://bytebucket.org/pchambon/python-rock-solid-tools/wiki/rsopen.html >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Feb 20 12:36:09 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 20 Feb 2010 12:36:09 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: Message-ID: Florent Xicluna, 20.02.2010 11:53: > Stefan Behnel writes: >> None of theses features is really required to hold for anything but the >> current as-is implementation. > > I agree. > >> So my impression is that many of the tests try to provide guarantees where >> they cannot or should not exist, and even where the output is clearly >> non-conforming with respect to standards. I don't think it makes sense to >> put these into a regression test suite. > > The test suite in the stdlib should try to cover every piece of code, even > implementation details and edge cases. It shouldn't test for un(der)defined or non-guaranteed behaviour, bugs or the /absence/ of certain features, though. Apart from that, I agree with the following: > It guarantees that the implementation > details do not change between minor releases. And it helps identify regression > or evolution of the behavior when the library is updated. > Additionally, these tests ensure that the C implementation can be used as a > drop-in replacement of the Python implementation. It is a request expressed by > many users of the "xml.etree" package. That's certainly a worthy goal, but it's orthogonal to the interest of changing/improving/evolving ElementTree itself. The goal here is to make cElementTree compatible with ET, without any impact on ET. I agree with Fredrik that there are a number of additional features in ET 1.3 (and lxml 2.x) that can be easily added to the existing API, e.g. the element.extend() method. Other new features (and certainly the incompatible changes) are a lot more controversial, though. >> That said, I should add that lxml's test suite includes about 250 unit >> tests that work with (and adapt to) lxml.etree, ElementTree and >> cElementTree, in Py2.3+ and Py3.x, and with ET 1.2 and ET 1.3. Although >> certainly not a copy&run replacement, those should be much better suited to >> accompany the existing stdlib tests. > > Interesting. I may add these tests to the test_suite, if they are not > completely redundant. They certainly were not redundant with the original tests that shipped with ET, and they test all sorts of funny cases. The main files are https://codespeak.net/viewvc/lxml/trunk/src/lxml/tests/test_elementtree.py?view=markup https://codespeak.net/viewvc/lxml/trunk/src/lxml/tests/test_io.py?view=markup There are also a number of ET API tests in other parts of the test suite (mostly test_etree.py, also test_unicode.py). 
Some of them would work with ET but produce different results due to various reasons, including the fact that lxml.etree behaves "more correct" in some cases. The latter are the kind of tests that I would prefer not to see in the stdlib test suite. Stefan From florent.xicluna at gmail.com Sat Feb 20 12:43:57 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Sat, 20 Feb 2010 11:43:57 +0000 (UTC) Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > > If the goals of Python ElementTree and Fredrik ElementTree diverge I don't > > see a problem with an amicable fork. > > I see one: Fredrik will not consider such a fork amicable. Of course, if > you could make him state in public that he is fine with a procedure that > you propose, go ahead. He had stated in public that he is fine with the > procedure I'm defending right now, that's why I'm defending it: no > substantial changes without his explicit approval (breakage due to > language changes is about the only exception - not even bug fixes are > allowed). Actually this should not be a fork of the upstream library. The goal is to improve stability and predictability of the ElementTree implementations in the stdlib, and to fix some bugs. I thought that it is better to backport the fixes from upstream than to fix each bug separately in the stdlib. I try to get some clear assessment from Fredrik. If it is accepted, I will probably cut some parts which are in the upstream library, but which are not in the API 1.2. If it is not accepted, it is bad news for the "xml.etree" users... It is qualified as a "best effort" to get something better for ET. Nothing else. -- Florent ?Nobody expects the Spanish Inquisition!? From martin at v.loewis.de Sat Feb 20 13:03:08 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 20 Feb 2010 13:03:08 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> Message-ID: <4B7FCF7C.7080001@v.loewis.de> > The last commits by Fredrik to ElementTree in Python SVN that I can > see are dated 2006-08-16. The last commits I can see to ElementTree at > http://svn.effbot.python-hosting.com/ are dated 2006-07-05. And? > To paraphrase Antoine's comment [1] on Rietveld -- we need a process > that results in bug fixes for users of the copy of ElementTree in > Python. > > [1] http://codereview.appspot.com/207048/show (most direct link I could find) To quote Fredrik Lundh from the same reviews: # You do realize that you're merging in an experimental release, right? # I'm a bit worried that the result of this effort will be plenty of # incompatibilities with the upstream library (and there are also signs # on bugs.python.org that some people involved don't understand the # difference between specification of a portable API and artifacts of a # certain implementation of the same API), but I'm travelling right now, # and have no bandwidth to deal with this. Just be careful. # Since you've effectively hijacked the library, and have created your # own fork that's not fully compatible with any formal release of the # upstream library, and am not contributing any patches back to # upstream, I suggest renaming it instead. 
This may be politely phrased, but it seems that he is quite upset about these proposed changes. I'd rather drop ElementTree from the standard library than fork it. Regards, Martin From martin at v.loewis.de Sat Feb 20 13:08:39 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 20 Feb 2010 13:08:39 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> Message-ID: <4B7FD0C7.4050207@v.loewis.de> > Actually this should not be a fork of the upstream library. > The goal is to improve stability and predictability of the ElementTree > implementations in the stdlib, and to fix some bugs. > I thought that it is better to backport the fixes from upstream than to > fix each bug separately in the stdlib. > > I try to get some clear assessment from Fredrik. > If it is accepted, I will probably cut some parts which are in the upstream > library, but which are not in the API 1.2. If it is not accepted, it is bad > news for the "xml.etree" users... Not sure about the timing, but in case you have not got the message: we should rather drop ElementTree from the standard library than integrate unreleased changes from an experimental upstream repository. > It is qualified as a "best effort" to get something better for ET. Nothing else. Unfortunately, it hurts ET users if it ultimately leads to a fork, or to a removal of ET from the standard library. Please be EXTREMELY careful. I urge you not to act on this until mid-March (which is the earliest time at which Fredrik has said he may have time to look into this). Regards, Martin From guido at python.org Sat Feb 20 14:13:56 2010 From: guido at python.org (Guido van Rossum) Date: Sat, 20 Feb 2010 08:13:56 -0500 Subject: [Python-Dev] Buffered streams design + raw io gotchas In-Reply-To: <4B7FC587.7020904@wanadoo.fr> References: <4B7D8E10.20609@wanadoo.fr> <4B7FC587.7020904@wanadoo.fr> Message-ID: Not really, BufferedRandom is only suitable when the file is open for reading *and* writing. The 'rb' and 'wb' modes should return BufferedReader and BufferedWriter, respectively. On Sat, Feb 20, 2010 at 6:20 AM, Pascal Chambon wrote: > > Allright, so in the case of regular files I may content myself of > BufferedRandom. > And maybe I'll put some warnings concerning the returning of raw streams by > factory functions. > > Thanks, > > Regards, > Pascal > > > Guido van Rossum a ?crit?: > > IIRC here is the use case for buffered reader/writer vs. random: a > disk file opened for reading and writing uses a random access buffer; > but a TCP stream stream, while both writable and readable, should use > separate read and write buffers. The reader and writer don't have to > worry about reversing the I/O direction. > > But maybe I'm missing something about your question? > > --Guido > > On Thu, Feb 18, 2010 at 1:59 PM, Pascal Chambon > wrote: > > > Hello, > > As I continue experimenting with advanced streams, I'm currently beginning > an important modification of io's Buffered and Text streams (removal of > locks, adding of methods...), to fit the optimization process of the whole > library. > However, I'm now wondering what the idea is behind the 3 main buffer classes > : Bufferedwriter, Bufferedreader and Bufferedrandom. 
> > The i/o PEP claimed that the two first ones were for sequential streams > only, and the latter for all kinds of seekable streams; but as it is > implemented, actually the 3 classes can be returned by open() for seekable > files. > > Am I missing some use case in which this distinction would be useful (for > optimizations ?) ? Else, I guess I should just create a RSBufferedStream > class which handles all kinds of situations, raising InsupportedOperation > exceptions whenever needed.... after all, text streams act that way (there > is no TextWriter or TextReader stream), and they seem fine. > > Also, io.open() might return a raw file stream when we set buffering=0. The > problem is that raw file streams are NOT like buffered streams with a buffer > limit of zero : raw streams might fail writing/reading all the data asked, > without raising errors. I agree this case should be rare, but it might be a > gotcha for people wanting direct control of the stream (eg. for locking > purpose), but no silently incomplete read/write operation. > Shouldn't we rather return a "write through" buffered stream in this case > "buffering=0", to cleanly handle partial read/write ops ? > > regards, > Pascal > > PS : if you have 3 minutes, I'd be very interested by your opinion on the > "advanced modes" draft below. > Does it seem intuitive to you ? In particular, shouldn't the "+" and "-" > flags have the opposite meaning ? > http://bytebucket.org/pchambon/python-rock-solid-tools/wiki/rsopen.html > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > -- --Guido van Rossum (python.org/~guido) From lists at cheimes.de Sat Feb 20 16:50:56 2010 From: lists at cheimes.de (Christian Heimes) Date: Sat, 20 Feb 2010 16:50:56 +0100 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B7F5985.2000606@g.nevcal.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> Message-ID: <4B8004E0.5010803@cheimes.de> Glenn Linderman wrote: > Windows also has hard-links for files. > > A lot of Windows tools are completely ignorant of both of those linking > concepts... resulting in disks that look to be over capacity when they > are not, for example. Here comes my nit picking mode again. ;) First of all the links are not a feature of the operating system but rather a feature of the file system (version). The fact is valid for Unix as well but most Unix file systems support hard- and soft links anyway. To my best knowledge links are only supported on NTFS. FAT doesn't support links and IIRC it's not possible to create a hard link on a remote file system. NTFS supports POSIX style hard links of files that are limited to one file system. It's not possible to create a hard link that points to another file system. This constrain also applies to Unix. Since Windows 2000 NTFS has junction points that work similar to symbolic link on directories within a local file system. Junction points should be avoided because the Windows explorer can't handle them properly until Windows Vista. Since Vista NTFS also has symbolic links that work across file systems and can point to remote locations and non-existing files, too. However only administrators are allowed to create symlinks on Vista. Vista has no builtin tool to lift the restriction for ordinary users. 
You have to grab some files from Windows Server 2003 for the task. As long as Python supports XP we shouldn't use symlinks on Windows for stuff like virtualenv. The python.exe on Windows is small (just a few kb) since it is linked against the dll. Let's copy it and we are on the safe side. Christian From eric at trueblade.com Sat Feb 20 17:06:04 2010 From: eric at trueblade.com (Eric Smith) Date: Sat, 20 Feb 2010 11:06:04 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B8004E0.5010803@cheimes.de> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B8004E0.5010803@cheimes.de> Message-ID: <4B80086C.8080400@trueblade.com> Christian Heimes wrote: > As long as Python supports XP we shouldn't use symlinks on Windows for > stuff like virtualenv. The python.exe on Windows is small (just a few > kb) since it is linked against the dll. Let's copy it and we are on the > safe side. +1. Even if we dropped XP I'm not sure moving to symlinks for this would be the right thing to do. Eric. From martin at v.loewis.de Sat Feb 20 17:40:44 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 20 Feb 2010 17:40:44 +0100 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B8004E0.5010803@cheimes.de> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B8004E0.5010803@cheimes.de> Message-ID: <4B80108C.50208@v.loewis.de> > First of all the links are not a feature of the operating system but > rather a feature of the file system (version). That's not really true. Even though ext2 supports symbolic links, on XP with an ext2 driver, you still don't get symbolic links. So you need the feature *both* in the operating system and the file system. > The fact is valid for > Unix as well but most Unix file systems support hard- and soft links > anyway. As do most Unix implementations - but I still remember Unix implementations which didn't support symlinks, not even in the API. > To my best knowledge links are only supported on NTFS. FAT > doesn't support links and IIRC it's not possible to create a hard link > on a remote file system. The latter is not really true: NFS most certainly supports hard links. I can't try right now, but I would be surprised if SMB didn't support both symbolic and hard links, given the right server and client versions. Regards, Martin From solipsis at pitrou.net Sat Feb 20 17:04:29 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 20 Feb 2010 16:04:29 +0000 (UTC) Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> <4B7FD0C7.4050207@v.loewis.de> Message-ID: Le Sat, 20 Feb 2010 13:08:39 +0100, Martin v. Löwis a écrit : > > Please be EXTREMELY careful. I urge you not to act on this until > mid-March (which is the earliest time at which Fredrik has said he may > have time to look into this). Ok, so let's wait until then before we make a decision. cheers Antoine.
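A rough sketch of the copy-versus-symlink placement being discussed in the virtualenv thread above (illustrative only; the helper name and directory layout are assumptions, not taken from any posted patch -- copying covers Windows/XP and any platform without os.symlink, symlinking covers the rest):

    import os
    import shutil
    import sys

    def install_interpreter(env_bin_dir):
        # Hypothetical helper: place the running interpreter into an
        # isolated environment's bin/Scripts directory.  Copy it on
        # Windows (and anywhere os.symlink is missing), symlink elsewhere.
        if not os.path.isdir(env_bin_dir):
            os.makedirs(env_bin_dir)
        target = os.path.join(env_bin_dir, os.path.basename(sys.executable))
        if os.name == 'nt' or not hasattr(os, 'symlink'):
            shutil.copy(sys.executable, target)
        else:
            if os.path.lexists(target):
                os.remove(target)
            os.symlink(sys.executable, target)
        return target

Copying keeps XP and non-NTFS file systems working at the cost of a few kilobytes per environment, which matches the trade-off described in the messages above.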
From lists at cheimes.de Sat Feb 20 18:24:32 2010 From: lists at cheimes.de (Christian Heimes) Date: Sat, 20 Feb 2010 18:24:32 +0100 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B80108C.50208@v.loewis.de> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B8004E0.5010803@cheimes.de> <4B80108C.50208@v.loewis.de> Message-ID: <4B801AD0.8040008@cheimes.de> Martin v. L?wis wrote: > The latter is not really true: NFS most certainly supports hard links. > I can't try right now, but I would be surprised if SMB didn't support > both symbolic and hard links, given the right server and client versions. I've never seen nor used NFS on Windows so I can't comment on NFS. Some time ago I did some experiments with links but I wasn't able to create a hard link on a remote SMB server. However Wikipedia claims that CIFS support hard and sym links. Your conclusion regarding server and client version sounds plausible. In my humble opinion links are aliens on Windows. I wouldn't use them to implement virtualenv. :) Christian From g.brandl at gmx.net Sat Feb 20 18:16:07 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 20 Feb 2010 18:16:07 +0100 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> <4B7F5C8E.7080206@trueblade.com> Message-ID: Am 20.02.2010 06:37, schrieb Michael Foord: > > > > -- > http://www.ironpythoninaction.com Nice signature! > On 19 Feb 2010, at 22:52, Eric Smith wrote: > >> Glenn Linderman wrote: >>> On approximately 2/19/2010 1:18 PM, came the following characters >>> from the keyboard of P.J. Eby: >>>> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>>>> I'm not sure how this should best work on Windows (without >>>>> symlinks, >>>>> and where things generally work differently), but I would hope if >>>>> this idea is more visible that someone more opinionated than I >>>>> would >>>>> propose the appropriate analog on Windows. >>>> >>>> You'd probably have to just copy pythonv.exe to an appropriate >>>> directory, and have it use the configuration file to find the "real" >>>> prefix. At least, that'd be a relatively obvious way to do it, >>>> and it >>>> would have the advantage of being symmetrical across platforms: just >>>> copy or symlink pythonv, and make sure the real prefix is in your >>>> config file. >>>> >>>> (Windows does have "shortcuts" but I don't think that there's any >>>> way >>>> for a linked program to know *which* shortcut it was launched from.) >>> No automatic way, but shortcuts can include parameters, not just >>> the program name. So a parameter could be --prefix as was >>> suggested in another response, but for a different reason. >> >> Shortcuts don't work from the shell (well, cmd.exe, at least), do >> they? Can't test from here. > > They do if you add .lnk to your PATHEXT environment variable. Which is something we probably don't want to do globally. 
Georg From ianb at colorstudy.com Sat Feb 20 20:41:45 2010 From: ianb at colorstudy.com (Ian Bicking) Date: Sat, 20 Feb 2010 14:41:45 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B7F5985.2000606@g.nevcal.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> Message-ID: On Fri, Feb 19, 2010 at 10:39 PM, Glenn Linderman > wrote: > On approximately 2/19/2010 1:18 PM, came the following characters from the > keyboard of P.J. Eby: > > At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >> >>> I'm not sure how this should best work on Windows (without symlinks, >>> and where things generally work differently), but I would hope if >>> this idea is more visible that someone more opinionated than I would >>> propose the appropriate analog on Windows. >>> >> >> You'd probably have to just copy pythonv.exe to an appropriate >> directory, and have it use the configuration file to find the "real" >> prefix. At least, that'd be a relatively obvious way to do it, and it >> would have the advantage of being symmetrical across platforms: just >> copy or symlink pythonv, and make sure the real prefix is in your >> config file. >> >> (Windows does have "shortcuts" but I don't think that there's any way >> for a linked program to know *which* shortcut it was launched from.) >> > > No automatic way, but shortcuts can include parameters, not just the > program name. So a parameter could be --prefix as was suggested in another > response, but for a different reason. > > Windows also has hard-links for files. > > A lot of Windows tools are completely ignorant of both of those linking > concepts... resulting in disks that look to be over capacity when they are > not, for example. Virtualenv uses copies when it can't use symlinks. A copy (or hard link) seems appropriate on systems that do not have symlinks. It would seem reasonable that on Windows it might look in the registry to find the actual location where Python was installed. Or... whatever technique Windows people think is best; it's simply necessary that the interpreter know its location (the isolated environment) and also know where Python is installed. All this needs to be calculated in C, as the standard library needs to be on the path very early (so os.symlink wouldn't help, but any C-level function to determine this would be helpful). (It's maybe a bit lame of me that I'm dropping this in the middle of PyCon, as I'm not online frequently during the conference; sorry about that) -- Ian Bicking | http://blog.ianbicking.org | http://twitter.com/ianbicking -------------- next part -------------- An HTML attachment was scrubbed... URL: From hodgestar+pythondev at gmail.com Sat Feb 20 21:16:10 2010 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Sat, 20 Feb 2010 22:16:10 +0200 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B7FCF7C.7080001@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> <4B7FCF7C.7080001@v.loewis.de> Message-ID: On Sat, Feb 20, 2010 at 2:03 PM, "Martin v. L?wis" wrote: > I'd rather drop ElementTree from the standard library than fork it. Fork what? Upstream ElementTree is dead. Schiavo Simon From pje at telecommunity.com Sat Feb 20 21:31:35 2010 From: pje at telecommunity.com (P.J. 
Eby) Date: Sat, 20 Feb 2010 15:31:35 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <4B7F5985.2000606@g.nevcal.com> Message-ID: <20100220203145.BF8933A4114@sparrow.telecommunity.com> At 02:41 PM 2/20/2010 -0500, Ian Bicking wrote: >Virtualenv uses copies when it can't use symlinks. ? A copy (or hard >link) seems appropriate on systems that do not have symlinks. ? It >would seem reasonable that on Windows it might look in the registry >to find the actual location where Python was installed. ? Or... >whatever technique Windows people think is best; it's simply >necessary that the interpreter know its location (the isolated >environment) and also know where Python is installed. ? All this >needs to be calculated in C, as the standard library needs to be on >the path very early (so os.symlink wouldn't help, but any C-level >function to determine this would be helpful). The ways pretty much boil down to: 1. Explicit per-instance configuration (either appended to the .exe or in a file adjacent to it), 2. An implicit global search/lookup (PATH, registry, etc.) 3. A combination of the two, checking explicit configuration before implicit. Since the virtualenv itself may need some kind of nearby configuration anyway, putting it in that file seems to me like the One Obvious Way To Do It. Windows does have C-level APIs for reading and writing .ini files, from the good old days before the registry existed. And the C-level code might only need to read one entry prior to booting Python anyway - a single call to the GetPrivateProfileString function, once you've determined the name of the file to be read from. From asmodai at in-nomine.org Sat Feb 20 22:27:08 2010 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Sat, 20 Feb 2010 22:27:08 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B7FCF7C.7080001@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> <4B7FCF7C.7080001@v.loewis.de> Message-ID: <20100220212708.GJ14271@nexus.in-nomine.org> -On [20100220 13:04], "Martin v. L?wis" (martin at v.loewis.de) wrote: >> The last commits by Fredrik to ElementTree in Python SVN that I can >> see are dated 2006-08-16. The last commits I can see to ElementTree at >> http://svn.effbot.python-hosting.com/ are dated 2006-07-05. > >And? [snip] ># Since you've effectively hijacked the library, and have created your ># own fork that's not fully compatible with any formal release of the ># upstream library, and am not contributing any patches back to ># upstream, I suggest renaming it instead. > >This may be politely phrased, but it seems that he is quite upset about >these proposed changes. > >I'd rather drop ElementTree from the standard library than fork it. Maybe I am fully misunderstanding something here and I am also known for just bluntly stating things but: Isn't inclusion into the standard library under the assumption that maintenance will be performed on the code? With all due respect to Frederik, but if you add such a module to the base distribution and then ignore it for 3-4 years I personally have a hard time feeling your 'outrage' being justified for someone who is trying to fix outstanding issues in ElementTree. I also do not find your idea of dropping the module productive either Martin. 
Just dropping it for no other reason because someone cannot be bothered to act as a responsible maintainer just seems not useful for Python users at all. Especially since patches *are* available. If Frederik has problems with that he should have put a bit more effort into maintaining it in the first place. -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B In this short time of promise, you're a memory... From martin at v.loewis.de Sat Feb 20 22:47:21 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 20 Feb 2010 22:47:21 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <20100220212708.GJ14271@nexus.in-nomine.org> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> <4B7FCF7C.7080001@v.loewis.de> <20100220212708.GJ14271@nexus.in-nomine.org> Message-ID: <4B805869.40303@v.loewis.de> > Maybe I am fully misunderstanding something here and I am also known for > just bluntly stating things but: > > Isn't inclusion into the standard library under the assumption that > maintenance will be performed on the code? In general, that's the assumption, and Guido has stated that he dislikes exceptions. However, Fredrik's code was included only under the exception. ElementTree wouldn't be part of the standard library if an exception had not been made. > With all due respect to Frederik, > but if you add such a module to the base distribution and then ignore it for > 3-4 years I personally have a hard time feeling your 'outrage' being > justified for someone who is trying to fix outstanding issues in > ElementTree. If users and co-developers think that these issues absolutely must be resolved now (rather than waiting some more), I see only two options: a) ElementTree is removed from the library b) we declare that we fork ElementTree, and designate a maintainer. Just fixing the bugs without designating a maintainer is *not* an option, because we absolutely need somebody to pronounce on changes. It will not be Guido, and if it is not Fredrik, somebody else must step forward. I would then ask that person, as the first thing, to rename the package when making incompatible changes. > I also do not find your idea of dropping the module productive either > Martin. Just dropping it for no other reason because someone cannot be > bothered to act as a responsible maintainer just seems not useful for Python > users at all. Especially since patches *are* available. Well, I promised that we will stick to the procedure when integrating ElementTree. I'm not willing to break this promise. Regards, Martin From asmodai at in-nomine.org Sat Feb 20 23:19:16 2010 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Sat, 20 Feb 2010 23:19:16 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B805869.40303@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> <4B7FCF7C.7080001@v.loewis.de> <20100220212708.GJ14271@nexus.in-nomine.org> <4B805869.40303@v.loewis.de> Message-ID: <20100220221916.GK14271@nexus.in-nomine.org> -On [20100220 22:47], "Martin v. L?wis" (martin at v.loewis.de) wrote: >In general, that's the assumption, and Guido has stated that he dislikes >exceptions. However, Fredrik's code was included only under the >exception. 
ElementTree wouldn't be part of the standard library if an >exception had not been made. I was not fully aware of that bit of history, my thanks for enlightening me on it. >If users and co-developers think that these issues absolutely must be >resolved now (rather than waiting some more), I see only two options: >a) ElementTree is removed from the library >b) we declare that we fork ElementTree, and designate a maintainer. > >Just fixing the bugs without designating a maintainer is *not* an >option, because we absolutely need somebody to pronounce on changes. It >will not be Guido, and if it is not Fredrik, somebody else must step >forward. I would then ask that person, as the first thing, to rename the >package when making incompatible changes. Call me a sceptic or pragmatist, but I don't see the situation change suddenly from what it has been for the past couple of years. I vaguely remember running into problems or limitations myself with ElementTree and switching to lxml at one point. It sort of has to escalate now in order to get the maintainer to look at it and I doubt that's how we want to keep operating in the future? So the choice of removal or forking may actually be quite imminent, but of course, that's my interpretation of things. >Well, I promised that we will stick to the procedure when integrating >ElementTree. I'm not willing to break this promise. Honourable and I can understand that. Although it doesn't make it flexible to work on. -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B The fragrance always stays in the hand that gives the rose... From greg.ewing at canterbury.ac.nz Sat Feb 20 23:31:17 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 21 Feb 2010 11:31:17 +1300 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> Message-ID: <4B8062B5.7050606@canterbury.ac.nz> Dj Gilcrease wrote: > win2k and later have a form of sym link, the api for it is just not > provided in a nice simple app like it is on nix platforms. Yes, it's possible to create symlinks on win2k using a command line tool called 'linkd' (I've done it). However, they're extremely dangerous, because the GUI side of win2k doesn't know about them. It thinks that a symlink to a folder is a real folder, and if you delete it, you end up deleting the contents of the folder that the symlink points to. So if you use them, you need to keep your wits about you. -- Greg From ilya.sandler at gmail.com Sun Feb 21 02:48:22 2010 From: ilya.sandler at gmail.com (Ilya Sandler) Date: Sat, 20 Feb 2010 17:48:22 -0800 Subject: [Python-Dev] Ctrl-C handling in pdb Message-ID: <738e97031002201748u44343433g44b73cc7471c03c9@mail.gmail.com> I have used pdb for several years and have always wanted a gdb-like Ctrl-C handling: in gdb pressing Ctrl-C interrupts the program but the execution can be resumed later by the user (while pdb will terminate the program and throw you into postmortem debugging with no ability to resume the execution). So I implemented this functionality as http://bugs.python.org/issue7245. The patch is very simple: install a SIGINT handler and when SIGINT arrives, set the tracing. The signal handler is only activated when pdb is run as a script. I cann't think of any disadvantages. If this functionality is indeed useful and I am not missing some serious side effects, would it be possible to review the patch? 
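(In case a sketch is easier to scan than the tracker patch itself: the whole trick, stripped of pdb's bookkeeping and using an illustrative _pdb name that is not in the real patch, amounts to something like

-----snip snip-----
import pdb
import signal

_pdb = pdb.Pdb()

def _sigint_handler(signum, frame):
    # Instead of letting KeyboardInterrupt kill the program, re-enter
    # the debugger at the interrupted frame; "continue" resumes it.
    _pdb.set_trace(frame)

signal.signal(signal.SIGINT, _sigint_handler)
-----snip snip-----

i.e. Ctrl-C simply drops you back to the (Pdb) prompt at the point where the program was interrupted.)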
Thanks, Ilya Sandler From steven.bethard at gmail.com Sun Feb 21 08:08:26 2010 From: steven.bethard at gmail.com (Steven Bethard) Date: Sat, 20 Feb 2010 23:08:26 -0800 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> Message-ID: On Fri, Feb 19, 2010 at 7:50 AM, Brett Cannon wrote: > My notes from the session I led: > > + argparse > > ? ?- Same issues brought up. For those of us not at PyCon, what were the issues? Steve -- Where did you get that preposterous hypothesis? Did Steve tell you that? --- The Hiphopopotamus From eric at trueblade.com Sun Feb 21 10:30:37 2010 From: eric at trueblade.com (Eric Smith) Date: Sun, 21 Feb 2010 04:30:37 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> Message-ID: <4B80FD3D.7030301@trueblade.com> Steven Bethard wrote: > On Fri, Feb 19, 2010 at 7:50 AM, Brett Cannon wrote: >> My notes from the session I led: >> >> + argparse >> >> - Same issues brought up. > > For those of us not at PyCon, what were the issues? I think they were all related to deprecation of optparse, not anything to do with argparse itself. I don't recall any specific decision on deprecation, but my sense was that optparse will be around for a long, long time. There was also a quick discussion on maybe implementing optparse using argparse, then getting rid of the existing optparse. Maybe you can comment on that. Eric. From guido at python.org Sun Feb 21 14:45:11 2010 From: guido at python.org (Guido van Rossum) Date: Sun, 21 Feb 2010 08:45:11 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: <4B80FD3D.7030301@trueblade.com> References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> Message-ID: On Sun, Feb 21, 2010 at 4:30 AM, Eric Smith wrote: > Steven Bethard wrote: >> >> On Fri, Feb 19, 2010 at 7:50 AM, Brett Cannon wrote: >>> >>> My notes from the session I led: >>> >>> + argparse >>> >>> ? - Same issues brought up. >> >> For those of us not at PyCon, what were the issues? > > I think they were all related to deprecation of optparse, not anything to do > with argparse itself. I don't recall any specific decision on deprecation, > but my sense was that optparse will be around for a long, long time. There > was also a quick discussion on maybe implementing optparse using argparse, > then getting rid of the existing optparse. Maybe you can comment on that. Maybe the best thing is to make optparse *silently* deprecated, with a big hint at the top of its documentation telling new users to use argparse instead, but otherwise leaving it in indefinitely for the benefit of the many existing users. -- --Guido van Rossum (python.org/~guido) From fuzzyman at voidspace.org.uk Sun Feb 21 14:49:07 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 21 Feb 2010 08:49:07 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> Message-ID: <4B8139D3.6010206@voidspace.org.uk> On 21/02/2010 08:45, Guido van Rossum wrote: > On Sun, Feb 21, 2010 at 4:30 AM, Eric Smith wrote: > >> Steven Bethard wrote: >> >>> On Fri, Feb 19, 2010 at 7:50 AM, Brett Cannon wrote: >>> >>>> My notes from the session I led: >>>> >>>> + argparse >>>> >>>> - Same issues brought up. >>>> >>> For those of us not at PyCon, what were the issues? 
>>> >> I think they were all related to deprecation of optparse, not anything to do >> with argparse itself. I don't recall any specific decision on deprecation, >> but my sense was that optparse will be around for a long, long time. There >> was also a quick discussion on maybe implementing optparse using argparse, >> then getting rid of the existing optparse. Maybe you can comment on that. >> > Maybe the best thing is to make optparse *silently* deprecated, with a > big hint at the top of its documentation telling new users to use > argparse instead, but otherwise leaving it in indefinitely for the > benefit of the many existing users. > > +1 argparse is a great step forward but there is no need to disrupt existing users - just direct new users to the place they should go. We've done that with a couple of the commonly used but extraneous methods in unittest - deprecation via documentation. Michael -- http://www.ironpythoninaction.com/ From skip at pobox.com Sun Feb 21 15:19:31 2010 From: skip at pobox.com (skip at pobox.com) Date: Sun, 21 Feb 2010 08:19:31 -0600 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> Message-ID: <19329.16627.899420.785071@montanaro.dyndns.org> Guido> Maybe the best thing is to make optparse *silently* deprecated, Guido> with a big hint at the top of its documentation telling new users Guido> to use argparse instead, but otherwise leaving it in indefinitely Guido> for the benefit of the many existing users. Would a 2to3 fixer be possible? S From benjamin at python.org Sun Feb 21 15:29:43 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 21 Feb 2010 08:29:43 -0600 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: <19329.16627.899420.785071@montanaro.dyndns.org> References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> Message-ID: <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> 2010/2/21 : > > ? ?Guido> Maybe the best thing is to make optparse *silently* deprecated, > ? ?Guido> with a big hint at the top of its documentation telling new users > ? ?Guido> to use argparse instead, but otherwise leaving it in indefinitely > ? ?Guido> for the benefit of the many existing users. > > Would a 2to3 fixer be possible? I don't think so. There would be subtle semantic difference 2to3 couldn't detect. -- Regards, Benjamin From larry at hastings.org Sun Feb 21 18:32:43 2010 From: larry at hastings.org (Larry Hastings) Date: Sun, 21 Feb 2010 12:32:43 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: Message-ID: <4B816E3B.9080509@hastings.org> Ian Bicking wrote: > This is a proto-proposal for including some functionality from > virtualenv in Python itself. I'm not entirely confident about what > I'm proposing, so it's not really PEP-ready, but I wanted to get > feedback... 
> > First, a bit about how virtualenv works (this will use Linux > conventions; Windows and some Mac installations are slightly different): > > * Let's say you are creating an environment in ~/env/ > * /usr/bin/python is *copied* to ~/env/bin/python > * This alone sets sys.prefix to ~/env/ (via existing code in Python) > * At this point things are broken because the standard library is not > available > * virtualenv creates ~/env/lib/pythonX.Y/site.py, which adds the > system standard library location (/usr/lib/pythonX.Y) to sys.path > * site.py itself requires several modules to work, and each of these > modules (from a pre-determined list of modules) is symlinked over from > the standard library into ~/env/lib/pythonX.Y/ > * site.py may or may not add /usr/lib/pythonX.Y/site-packages to sys.path > * *Any* time you use ~/env/bin/python you'll get sys.prefix of ~/env/, > and the appropriate path. No environmental variable is required. > * No compiler is used; this is a fairly light tool I have a tool that also creates a virtualized Python environment, but doesn't solve the problem as thoroughly as virtualenv. I limp along by tweaking PYTHONUSERBASE and PYTHONPATH. I'm very interested in seeing something like this make it in to Python. A few inline comments: > * I'd rather ~/env/bin/python be a symlink instead of copying it. The thread discussing Windows suggests that we shouldn't use symlinks there. I'd say either copying or symlinking pythonv should be supported, and on Windows we recommend copying pythonv.exe. > * Compiling extensions can be tricky because code may not find headers > (because they are installed in /usr, not ~/env/). I think this can be > handled better if virtualenv is slightly less intrusive, or distutils > is patched, or generally tools are more aware of this layout. Conversely, headers may be installed in ~/env and not /usr. The compiler should probably look in both places. But IIUC telling the compiler how to do that is only vaguely standardized--Microsoft's CL.EXE doesn't seem to support any environment variable containing an include /path/. I suspect solving this in a general way is out-of-band for pythonv, but I'm willing to have my mind changed. Certainly pythonv should add its prefix directory to LD_LIBRARY_PATH on Linux. > Additionally, the binary will look for a configuration file. I'm not > sure where this file should go; perhaps directly alongside the binary, > or in some location based on sys.prefix. > > The configuration file would be a simple set of assignments; some I > might imagine: > > * Maybe override sys.prefix > * Control if the global site-packages is placed on sys.path > * On some operating systems there are other locations for packages > installed with the system packager; probably these should be possible > to enable or disable > * Maybe control installations or point to a file like distutils.cfg I'm unexcited by this; I think simpler is better. pythonv should virtualize environments layered on top of python, and should have one obvious predictable behavior. Certainly if it supports a configuration file pythonv should run without it and pick sensible defaults. What are the use cases where you need these things to be configurable? Let me propose further about python and pythonv: * As Antoine suggested, the CPython interpreter should sprout a new command-line switch, "--prefix", which adds a new prefix directory. * pythonv's purpose in life is to infer your prefix directory and run "pythonX.X --prefix [ all args it got ... ]". 
* Should pythonv be tied to the specific Python executable? If you install pythonv as "python", should it look for "python" or explicitly look for the specific Python it shipped with, like "python3.2"? I suspect the latter though I'm no longer sure.

I'm one of those folks who'd like to see this be stackable. If we tweak the semantics just a bit I think it works:

 * pythonv should inspect its --prefix arguments, as well as passing them on to the child python process it runs.
 * When pythonv wants to run the next python process in line, it scans the path looking for the pythonX.X interpreter but /ignores/ all the interpreters that are in a --prefix bin directory it's already seen.
 * python handles multiple --prefix options, and later ones take precedence over earlier ones.
 * What should sys.interpreter be? Explicit is better than implicit: the first pythonv to run also adds a --interpreter to the front of the command-line. Or they could all add it and python only uses the last one. This is one area where "python" vs "python3.2" makes things a little complicated.

I'm at PyCon and would be interested in debating / sprinting on this if there's interest.

/larry/

From amentajo at msu.edu Sun Feb 21 19:00:55 2010 From: amentajo at msu.edu (Joe Amenta) Date: Sun, 21 Feb 2010 13:00:55 -0500 Subject: [Python-Dev] lib3to2 in the 3.2 standard library? Message-ID: <4dc473a51002211000u157cb58ic7e5e2efa88a535b@mail.gmail.com>

Hey folks,

I'm going to write a PEP for inclusion of lib3to2 in the standard library for 3.2 and above. Before I do, though, do people have any quick thoughts about it?
> My inclination is to get it stabilized beforehand (perhaps during another > GSoC) by fleshing out the fixer that warns about backwards-incompatible > features in Python 3 and by finishing up the fix_imports2 fixer, probably > involving a rewrite. > http://bitbucket.org/amentajo/lib3to2 is where the source is hosted (there's > a separate branch for 3.1) > http://www.startcodon.com/wordpress/?cat=8?is where I blog about it. > http://pypi.python.org/pypi/3to2?is the PyPI page that has both of those > links. > --Joe Amenta > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) From steven.bethard at gmail.com Sun Feb 21 19:26:00 2010 From: steven.bethard at gmail.com (Steven Bethard) Date: Sun, 21 Feb 2010 10:26:00 -0800 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> Message-ID: Thanks all for the updates. Sorry I can't make it to PyCon this year! On Sun, Feb 21, 2010 at 1:30 AM, Eric Smith wrote: > There was also a quick discussion on maybe implementing optparse using > argparse, then getting rid of the existing optparse. I think the PEP pretty much already covers why this isn't possible. See: http://www.python.org/dev/peps/pep-0389/#why-isn-t-the-functionality-just-being-added-to-optparse Some of the reasons this would be really difficult include optparse's baroque extension API and the fact that it exposes the internals of its parsing algorithm, which means it's impossible to use a better algorithm that has a different implementation. On Sun, Feb 21, 2010 at 6:19 AM, wrote: > Would a 2to3 fixer be possible? On Sun, Feb 21, 2010 at 6:29 AM, Benjamin Peterson wrote: > I don't think so. There would be subtle semantic difference 2to3 > couldn't detect. Yep, that's probably right. And I don't know how I'd write the fixers for anyone who was using the old optparse extension API. On Sun, Feb 21, 2010 at 5:45 AM, Guido van Rossum wrote: > Maybe the best thing is to make optparse *silently* deprecated, with a > big hint at the top of its documentation telling new users to use > argparse instead, but otherwise leaving it in indefinitely for the > benefit of the many existing users. So basically do what the PEP does now, except don't remove optparse in Python 3.5? For reference, the current proposal is: * Python 2.7+ and 3.2+ -- The following note will be added to the optparse documentation: The optparse module is deprecated and will not be developed further; development will continue with the argparse module. * Python 2.7+ -- If the Python 3 compatibility flag, -3, is provided at the command line, then importing optparse will issue a DeprecationWarning. Otherwise no warnings will be issued. * Python 3.2 (estimated Jun 2010) -- Importing optparse will issue a PendingDeprecationWarning, which is not displayed by default. * Python 3.3 (estimated Jan 2012) -- Importing optparse will issue a PendingDeprecationWarning, which is not displayed by default. * Python 3.4 (estimated Jun 2013) -- Importing optparse will issue a DeprecationWarning, which is displayed by default. 
* Python 3.5 (estimated Jan 2015) -- The optparse module will be removed. So if I drop that last bullet, is the PEP ready for pronouncement? Steve -- Where did you get that preposterous hypothesis? Did Steve tell you that? --- The Hiphopopotamus From guido at python.org Sun Feb 21 19:31:50 2010 From: guido at python.org (Guido van Rossum) Date: Sun, 21 Feb 2010 13:31:50 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 1:26 PM, Steven Bethard wrote: > On Sun, Feb 21, 2010 at 5:45 AM, Guido van Rossum wrote: >> Maybe the best thing is to make optparse *silently* deprecated, with a >> big hint at the top of its documentation telling new users to use >> argparse instead, but otherwise leaving it in indefinitely for the >> benefit of the many existing users. > > So basically do what the PEP does now, except don't remove optparse in > Python 3.5? ?For reference, the current proposal is: > > * Python 2.7+ and 3.2+ -- The following note will be added to the > optparse documentation: > ? ?The optparse module is deprecated and will not be developed > further; development will continue with the argparse module. > * Python 2.7+ -- If the Python 3 compatibility flag, -3, is provided > at the command line, then importing optparse will issue a > DeprecationWarning. Otherwise no warnings will be issued. > * Python 3.2 (estimated Jun 2010) -- Importing optparse will issue a > PendingDeprecationWarning, which is not displayed by default. > * Python 3.3 (estimated Jan 2012) -- Importing optparse will issue a > PendingDeprecationWarning, which is not displayed by default. > * Python 3.4 (estimated Jun 2013) -- Importing optparse will issue a > DeprecationWarning, which is displayed by default. > * Python 3.5 (estimated Jan 2015) -- The optparse module will be removed. > > So if I drop that last bullet, is the PEP ready for pronouncement? Drop the last two bullets and it's a deal. (OTOH AFAIK we changed DeprecationWarning so it is *not* displayed by default.) -- --Guido van Rossum (python.org/~guido) From martin at v.loewis.de Sun Feb 21 19:39:16 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 21 Feb 2010 19:39:16 +0100 Subject: [Python-Dev] lib3to2 in the 3.2 standard library? In-Reply-To: <4dc473a51002211000u157cb58ic7e5e2efa88a535b@mail.gmail.com> References: <4dc473a51002211000u157cb58ic7e5e2efa88a535b@mail.gmail.com> Message-ID: <4B817DD4.7030402@v.loewis.de> Joe Amenta wrote: > Hey folks, > > I'm going to write a PEP for inclusion of lib3to2 in the standard > library for 3.2 and above. Before do, though, do people have any quick > thoughts about it? I do: it seems too early to me. Before it is added, I'd like a see a significant success story of its use. When lib2to3 was added, both Guido and Benjamin were also concerned that it is too early, and that 2to3 would change over time, so somehow freezing the API was undesirable. I personally felt more comfortable: I had ported a large project myself (Django), so I knew what little API I used could actually help porting, and also that the set of fixers implemented was useful (although the initial port had to made quite some work-arounds for missing and incorrect fixers which aren't need today anymore). 
So find somebody to write a large project in Python 3, and then have them backport it with 3to2 :-) Seriously, find somebody who ports a large Python 2 project (e.g. with 2to3) in a burn-your-bridges fashion, and then uses 3to2 to provide 2.x releases. In addition, I recall talk (again from Guido and Benjamin, and perhaps also Collin) that they would prefer all of this to be rewritten as a general rewriting library, where 2to3 and 3to2 would just be specific sets of transformations. If that is still the case, I think such a PEP should address that (and the code should actually be refactored > My inclination is to get it stabilized beforehand (perhaps during > another GSoC) by fleshing out the fixer that warns about > backwards-incompatible features in Python 3 and by finishing up the > fix_imports2 fixer, probably involving a rewrite. I think stabilization must require applications. So any further development should focus on that. In my experience, this means that you'll have to do the porting yourself, and then offer patches. Approach some package author(s) whether they would be interested in using 3to2 if you did the work for them, and then pick one such project as the litmus test. My expectation is that you'll find a need for additional changes in doing so. Regards, Martin From ianb at colorstudy.com Sun Feb 21 19:43:59 2010 From: ianb at colorstudy.com (Ian Bicking) Date: Sun, 21 Feb 2010 13:43:59 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B816E3B.9080509@hastings.org> References: <4B816E3B.9080509@hastings.org> Message-ID: On Sun, Feb 21, 2010 at 12:32 PM, Larry Hastings wrote: > * I'd rather ~/env/bin/python be a symlink instead of copying it. >> > > The thread discussing Windows suggests that we shouldn't use symlinks > there. I'd say either copying or symlinking pythonv should be supported, > and on Windows we recommend copying pythonv.exe. Sure, on Windows this is clearly the case. I'm not sure if it's worth supporting elsewhere. One problem with copying is that (a) you don't know where it is copied from (requiring extra information somewhere) and (b) if there is a minor Python release then things break (because you've copied an old interpreter). Probably there should be a check to catch (b) and print an appropriate (helpful) error. > * Compiling extensions can be tricky because code may not find headers >> (because they are installed in /usr, not ~/env/). I think this can be >> handled better if virtualenv is slightly less intrusive, or distutils is >> patched, or generally tools are more aware of this layout. >> > > Conversely, headers may be installed in ~/env and not /usr. The compiler > should probably look in both places. But IIUC telling the compiler how to > do that is only vaguely standardized--Microsoft's CL.EXE doesn't seem to > support any environment variable containing an include /path/. > > I suspect solving this in a general way is out-of-band for pythonv, but I'm > willing to have my mind changed. Certainly pythonv should add its prefix > directory to LD_LIBRARY_PATH on Linux. Yes, it might be possible to change distutils to be aware of this, and some things will work okay as a result. Some things will really require changes to the problematic project's setup.py to support this. > > Additionally, the binary will look for a configuration file. I'm not sure >> where this file should go; perhaps directly alongside the binary, or in some >> location based on sys.prefix. 
>> >> The configuration file would be a simple set of assignments; some I might >> imagine: >> >> * Maybe override sys.prefix >> * Control if the global site-packages is placed on sys.path >> * On some operating systems there are other locations for packages >> installed with the system packager; probably these should be possible to >> enable or disable >> * Maybe control installations or point to a file like distutils.cfg >> > > I'm unexcited by this; I think simpler is better. pythonv should > virtualize environments layered on top of python, and should have one > obvious predictable behavior. Certainly if it supports a configuration file > pythonv should run without it and pick sensible defaults. > > What are the use cases where you need these things to be configurable? > * Override sys.prefix: allow you to put the binary in someplace other than, say, ~/env/bin/python and still support an environment in ~/env/. Also the use case of looking for libraries in a location based on the interpreter name (not the containing directory), like supporting /usr/bin/python2.7 and /usr/bin/python2.7-dbg. * Control global site-packages: people use this all the time with virtualenv. * Other locations: well, since Ubuntu/Debian are using dist-packages and whatnot, to get *full* isolation you might want to avoid this. This is really handy when testing setup instructions. * Control installations: right now distutils only really looks in /usr/lib/pythonX.Y/distutils/distutils.cfg for settings. virtualenv monkeypatches distutils to look in /lib/pythonX.Y/distutils/distutils.cfg in addition, and several people use this feature to control virtualenv-local installation. > > Let me propose further about python and pythonv: > > * As Antoine suggested, the CPython interpreter should sprout a new > command-line switch, "--prefix", which adds a new prefix directory. > OK; or at least, it seems fine that this would be equivalent. > * pythonv's purpose in life is to infer your prefix directory and > run "pythonX.X --prefix [ all args it got ... ]". > I don't see any reason to call the other Python binary, it might as well just act like it was changed. sys.executable *must* point to the originally called interpreter anyway. > * Should pythonv should be tied to the specific Python executable? If > you run install pythonv as "python", should it look for > "python" or explicitly look for the specific Python it shipped > with, like "python3.2"? I suspect the latter though I'm no longer > sure. > Experience shows the latter, plus this would only really make sense if you really called the other interpreter (which I guess you could if you also added an --executable option or something to fix sys.executable). If you did that, then maybe it would be possible to do with PEP 3147 ( http://www.python.org/dev/peps/pep-3147/) since that makes it more feasible to support multiple Python versions with a single set of installed libraries. 3147 is important (and that it be backported to 2.7). I have to think about it a bit... but maybe with this it would be possible to move these environments around without breaking things. That would be compelling. > I'm one of those folks who'd like to see this be stackable. If we tweak > the semantics just a bit I think it works: > > * pythonv should inspect its --prefix arguments, as well as passing > them on to the child python process it runs. > With a config file I'd just expect a list of prefixes being allowed; directly nesting feels unnecessarily awkward. 
You could use a : (or Windows-semicolon) list just like with PYTHONPATH. > * When pythonv wants to run the next python process in line, it > scans the path looking for the pythonX.X interpreter but /ignores/ > all the interpreters that are in in a --prefix bin directory it's > already seen. > * python handles multiple --prefix options, and later ones take > precedence over earlier ones. > * What should sys.interpreter be? Explicit is better than implicit: > the first pythonv to run also adds a --interpreter to > the front of the command-line. Or they could all add it and > python only uses the last one. This is one area where "python" vs > "python3.2" makes things a little complicated. > > Ah, yes, the same problem I note above. It should definitely be the thing the person actually typed, or what is in the #! line. > > I'm at PyCon and would be interested in debating / sprinting on this if > there's interest. > Yeah, if you see me around, please catch me! -- Ian Bicking | http://blog.ianbicking.org | http://twitter.com/ianbicking -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.bethard at gmail.com Sun Feb 21 20:00:44 2010 From: steven.bethard at gmail.com (Steven Bethard) Date: Sun, 21 Feb 2010 11:00:44 -0800 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 10:31 AM, Guido van Rossum wrote: > On Sun, Feb 21, 2010 at 1:26 PM, Steven Bethard wrote: >> So basically do what the PEP does now, except don't remove optparse in >> Python 3.5? ?For reference, the current proposal is: >> >> * Python 2.7+ and 3.2+ -- The following note will be added to the >> optparse documentation: >> ? ?The optparse module is deprecated and will not be developed >> further; development will continue with the argparse module. >> * Python 2.7+ -- If the Python 3 compatibility flag, -3, is provided >> at the command line, then importing optparse will issue a >> DeprecationWarning. Otherwise no warnings will be issued. >> * Python 3.2 (estimated Jun 2010) -- Importing optparse will issue a >> PendingDeprecationWarning, which is not displayed by default. >> * Python 3.3 (estimated Jan 2012) -- Importing optparse will issue a >> PendingDeprecationWarning, which is not displayed by default. >> * Python 3.4 (estimated Jun 2013) -- Importing optparse will issue a >> DeprecationWarning, which is displayed by default. >> * Python 3.5 (estimated Jan 2015) -- The optparse module will be removed. >> >> So if I drop that last bullet, is the PEP ready for pronouncement? > > Drop the last two ?bullets and it's a deal. (OTOH AFAIK we changed > DeprecationWarning so it is *not* displayed by default.) Done: http://www.python.org/dev/peps/pep-0389/#deprecation-of-optparse Thank you, and thanks to all who helped in the discussion of this PEP! My plan is to make a final external release of argparse (1.1) fixing some current issues, and then merge that into the Python repository. I should be able to get this done before Python 2.7 alpha 4 on 2010-03-06. Steve -- Where did you get that preposterous hypothesis? Did Steve tell you that? 
--- The Hiphopopotamus From brett at python.org Sun Feb 21 20:36:45 2010 From: brett at python.org (Brett Cannon) Date: Sun, 21 Feb 2010 14:36:45 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 13:31, Guido van Rossum wrote: > On Sun, Feb 21, 2010 at 1:26 PM, Steven Bethard > wrote: > > On Sun, Feb 21, 2010 at 5:45 AM, Guido van Rossum > wrote: > >> Maybe the best thing is to make optparse *silently* deprecated, with a > >> big hint at the top of its documentation telling new users to use > >> argparse instead, but otherwise leaving it in indefinitely for the > >> benefit of the many existing users. > > > > So basically do what the PEP does now, except don't remove optparse in > > Python 3.5? For reference, the current proposal is: > > > > * Python 2.7+ and 3.2+ -- The following note will be added to the > > optparse documentation: > > The optparse module is deprecated and will not be developed > > further; development will continue with the argparse module. > > * Python 2.7+ -- If the Python 3 compatibility flag, -3, is provided > > at the command line, then importing optparse will issue a > > DeprecationWarning. Otherwise no warnings will be issued. > > * Python 3.2 (estimated Jun 2010) -- Importing optparse will issue a > > PendingDeprecationWarning, which is not displayed by default. > > * Python 3.3 (estimated Jan 2012) -- Importing optparse will issue a > > PendingDeprecationWarning, which is not displayed by default. > > * Python 3.4 (estimated Jun 2013) -- Importing optparse will issue a > > DeprecationWarning, which is displayed by default. > > * Python 3.5 (estimated Jan 2015) -- The optparse module will be removed. > > > > So if I drop that last bullet, is the PEP ready for pronouncement? > > Drop the last two bullets and it's a deal. (OTOH AFAIK we changed > DeprecationWarning so it is *not* displayed by default. Yes, DeprecationWarning is now silent under Python 2.7 and 3.1 so a DeprecationWarning would only pop up if developers exposed DeprecationWarning. But if the module is not about to be removed in 3.x then I think regardless of the silence of both warnings it should stay PendingDeprecationWarning. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Sun Feb 21 19:51:41 2010 From: tseaver at palladion.com (Tres Seaver) Date: Sun, 21 Feb 2010 13:51:41 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Gregory P. Smith wrote: > On Fri, Feb 19, 2010 at 4:18 PM, P.J. Eby wrote: >> At 01:49 PM 2/19/2010 -0500, Ian Bicking wrote: >>> I'm not sure how this should best work on Windows (without symlinks, and >>> where things generally work differently), but I would hope if this idea is >>> more visible that someone more opinionated than I would propose the >>> appropriate analog on Windows. >> You'd probably have to just copy pythonv.exe to an appropriate directory, >> and have it use the configuration file to find the "real" prefix. 
At least, >> that'd be a relatively obvious way to do it, and it would have the advantage >> of being symmetrical across platforms: just copy or symlink pythonv, and >> make sure the real prefix is in your config file. +1 for having the conf file in the same directory as the pythonv esecutable (yes, I know it isn't FHS compatible, but virtualevn is kind of antithetical to the spirit of FHS anyway). >> (Windows does have "shortcuts" but I don't think that there's any way for a >> linked program to know *which* shortcut it was launched from.) > > Some recent discussion pointed out that vista and win7 ntfs actually > supports symlinks. the same question about determining where it was > launched from may still hold there? (and we need this to work on xp). > > How often do windows users need something like virtualenv? (Asking > for experience from windows users of all forms here). I personally > can't imagine anyone that would ever use a system generic python > install from a .msi unless they're just learning python. I would hope > people would already use py2exe or similar and include an entire > CPython VM with their app with their own installer but as I really > have nothing to do with windows these days I'm sure I'm wrong. > > What about using virtualenv with ironpython and jython? does it make > any sense in that context? how do we make it not impossible for them > to support? virtualenv already works with jython: I used it just the other day to test installing BFG in a jython sandbox (which also worked fine). Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkuBgLgACgkQ+gerLs4ltQ5x8ACghv5gXczECU+gKHmZg6L+LYA1 CWMAn0j99m9TtE0LeQ2Z9zOUpse3P53b =l+uZ -----END PGP SIGNATURE----- From eric at trueblade.com Sun Feb 21 20:46:01 2010 From: eric at trueblade.com (Eric Smith) Date: Sun, 21 Feb 2010 14:46:01 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> Message-ID: <4B818D79.2040609@trueblade.com> Brett Cannon wrote: > Yes, DeprecationWarning is now silent under Python 2.7 and 3.1 so a > DeprecationWarning would only pop up if developers exposed > DeprecationWarning. But if the module is not about to be removed in 3.x > then I think regardless of the silence of both warnings it should stay > PendingDeprecationWarning. But if we're never going to change it to a DeprecationWarning, it's not pending. So why not just change the docs and not add any warnings to the code? Eric. 
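P.S. Whichever warning class wins, developers who want to see it during their own test runs can already opt in with the stock warnings machinery; nothing optparse-specific is needed. A rough sketch:

-----snip snip-----
import warnings

# Un-silence the warnings that 2.7 / 3.1+ hide by default.
warnings.simplefilter('default', PendingDeprecationWarning)
warnings.simplefilter('default', DeprecationWarning)

import optparse  # would show the warning here, if one is ever added
-----snip snip-----

(or the equivalent -W options on the command line).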
From brett at python.org Sun Feb 21 21:00:00 2010 From: brett at python.org (Brett Cannon) Date: Sun, 21 Feb 2010 15:00:00 -0500 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: <4B818D79.2040609@trueblade.com> References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> <4B818D79.2040609@trueblade.com> Message-ID: On Sun, Feb 21, 2010 at 14:46, Eric Smith wrote: > Brett Cannon wrote: > >> Yes, DeprecationWarning is now silent under Python 2.7 and 3.1 so a >> DeprecationWarning would only pop up if developers exposed >> DeprecationWarning. But if the module is not about to be removed in 3.x then >> I think regardless of the silence of both warnings it should stay >> PendingDeprecationWarning. >> > > But if we're never going to change it to a DeprecationWarning, it's not > pending. Well, it's pending until Py4K, so it's accurate, we just don't know yet when it will change to an actual deprecation. =) > So why not just change the docs and not add any warnings to the code? Could, but the code will go away some day and not everyone will read the docs to realize that they might want to upgrade their code if they care to use the shiniest thing in the standard library. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From nyamatongwe at gmail.com Sun Feb 21 21:26:25 2010 From: nyamatongwe at gmail.com (Neil Hodgson) Date: Mon, 22 Feb 2010 07:26:25 +1100 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <4B816E3B.9080509@hastings.org> References: <4B816E3B.9080509@hastings.org> Message-ID: <50862ebd1002211226r3c2e974fib2991e6661b06f7a@mail.gmail.com> Larry Hastings: > But IIUC telling the compiler how to > do that is only vaguely standardized--Microsoft's CL.EXE doesn't seem to > support any environment variable containing an include /path/. The INCLUDE environment variable is a list of ';' separated paths http://msdn.microsoft.com/en-us/library/36k2cdd4%28VS.100%29.aspx Neil From collinwinter at google.com Sun Feb 21 21:28:02 2010 From: collinwinter at google.com (Collin Winter) Date: Sun, 21 Feb 2010 15:28:02 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks Message-ID: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> Hey Dirkjan, Would it be possible for us to get a Mercurial repository on python.org for the Unladen Swallow benchmarks? Maciej and I would like to move the benchmark suite out of Unladen Swallow and into python.org, where all implementations can share it and contribute to it. PyPy has been adding some benchmarks to their copy of the Unladen benchmarks, and we'd like to have as well, and Mercurial seems to be an ideal solution to this. Thanks, Collin Winter From djc.ochtman at gmail.com Sun Feb 21 21:31:03 2010 From: djc.ochtman at gmail.com (Dirkjan Ochtman) Date: Sun, 21 Feb 2010 15:31:03 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> Message-ID: Hi Collin (and others), On Sun, Feb 21, 2010 at 15:28, Collin Winter wrote: > Would it be possible for us to get a Mercurial repository on > python.org for the Unladen Swallow benchmarks? 
Maciej and I would like > to move the benchmark suite out of Unladen Swallow and into > python.org, where all implementations can share it and contribute to > it. PyPy has been adding some benchmarks to their copy of the Unladen > benchmarks, and we'd like to have as well, and Mercurial seems to be > an ideal solution to this. Just a repository on hg.python.org? Sounds good to me. Are you staying for the sprints? We'll just do it. (Might need to figure out some hooks we want to put up with it.) Cheers, Dirkjan From collinwinter at google.com Sun Feb 21 21:34:36 2010 From: collinwinter at google.com (Collin Winter) Date: Sun, 21 Feb 2010 15:34:36 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> Message-ID: <3c8293b61002211234i74acaefbv6fd269aa8fc9a595@mail.gmail.com> On Sun, Feb 21, 2010 at 3:31 PM, Dirkjan Ochtman wrote: > Hi Collin (and others), > > On Sun, Feb 21, 2010 at 15:28, Collin Winter wrote: >> Would it be possible for us to get a Mercurial repository on >> python.org for the Unladen Swallow benchmarks? Maciej and I would like >> to move the benchmark suite out of Unladen Swallow and into >> python.org, where all implementations can share it and contribute to >> it. PyPy has been adding some benchmarks to their copy of the Unladen >> benchmarks, and we'd like to have as well, and Mercurial seems to be >> an ideal solution to this. > > Just a repository on hg.python.org? > > Sounds good to me. Are you staying for the sprints? We'll just do it. > (Might need to figure out some hooks we want to put up with it.) Yep, that's all we want. I'll be around for the sprints through Tuesday, sitting at the Unladen Swallow sprint. Collin Winter From larry at hastings.org Sun Feb 21 21:53:33 2010 From: larry at hastings.org (Larry Hastings) Date: Sun, 21 Feb 2010 15:53:33 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: <4B816E3B.9080509@hastings.org> Message-ID: <4B819D4D.20904@hastings.org> Ian Bicking wrote: > On Sun, Feb 21, 2010 at 12:32 PM, Larry Hastings > wrote: > * Override sys.prefix: allow you to put the binary in someplace other > than, say, ~/env/bin/python and still support an environment in > ~/env/. Also the use case of looking for libraries in a location > based on the interpreter name (not the containing directory), like > supporting /usr/bin/python2.7 and /usr/bin/python2.7-dbg. I'm new to this: why would you want to change sys.prefix in the first place? Its documentation implies that it's where Python itself is installed. I see two uses in the standard library (trace and gettext) and they both look like they'd get confused if sys.prefix pointed at a virtualized directory. > * Control global site-packages: people use this all the time with > virtualenv. > * Other locations: well, since Ubuntu/Debian are using dist-packages > and whatnot, to get *full* isolation you might want to avoid this. > This is really handy when testing setup instructions. > * Control installations: right now distutils only really looks in > /usr/lib/pythonX.Y/distutils/distutils.cfg for settings. virtualenv > monkeypatches distutils to look in > /lib/pythonX.Y/distutils/distutils.cfg in addition, and > several people use this feature to control virtualenv-local installation. Okey-doke, I defer to your experience. Obviously if this is going into Python we can do better than monkeypatching distutils. 
> * pythonv's purpose in life is to infer your prefix directory and > run "pythonX.X --prefix [ all args it got ... ]". > > > I don't see any reason to call the other Python binary, it might as > well just act like it was changed. sys.executable *must* point to the > originally called interpreter anyway. If by this you mean pythonv should load the Python shared library / DLL directly, that would make it impossible to stack environments. Which I'm still angling for. /larry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Sun Feb 21 22:33:28 2010 From: barry at python.org (Barry Warsaw) Date: Sun, 21 Feb 2010 16:33:28 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> Message-ID: <20100221163328.7ad81000@freewill.wooz.org> On Feb 21, 2010, at 01:51 PM, Tres Seaver wrote: >+1 for having the conf file in the same directory as the pythonv >esecutable (yes, I know it isn't FHS compatible, but virtualevn is kind >of antithetical to the spirit of FHS anyway). Which is okay, right? because virtualenv is really about development and the FHS is really about installation. Is that what you meant by "antithetical"? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From tseaver at palladion.com Sun Feb 21 22:43:35 2010 From: tseaver at palladion.com (Tres Seaver) Date: Sun, 21 Feb 2010 16:43:35 -0500 Subject: [Python-Dev] Proposal for virtualenv functionality in Python In-Reply-To: <20100221163328.7ad81000@freewill.wooz.org> References: <20100219211845.701DE3A40A7@sparrow.telecommunity.com> <52dc1c821002191330g34a71ea3i2eb118755348bba1@mail.gmail.com> <20100221163328.7ad81000@freewill.wooz.org> Message-ID: <4B81A907.1080109@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Barry Warsaw wrote: > On Feb 21, 2010, at 01:51 PM, Tres Seaver wrote: > >> +1 for having the conf file in the same directory as the pythonv >> esecutable (yes, I know it isn't FHS compatible, but virtualevn is kind >> of antithetical to the spirit of FHS anyway). > > Which is okay, right? because virtualenv is really about development and the > FHS is really about installation. Is that what you meant by "antithetical"? FHS is about keeping the "system" components in known location, and mandates stuff like separation of binaries, configuration, shared libs, data, etc.. virtualenv is about building an envirnoment which is insultated from the system components (and avoids polluting them). Putting the config file next to the binary would be verboten under the FHS, but I don't think it is relevant. Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkuBqQMACgkQ+gerLs4ltQ6hsACeJl44YWskNdPiPhAkLcu0RSom sXAAn0/8dD++Z17VvtknD2hGQcYRGOPX =WXX4 -----END PGP SIGNATURE----- From daniel at stutzbachenterprises.com Sun Feb 21 22:51:44 2010 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Sun, 21 Feb 2010 15:51:44 -0600 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 2:28 PM, Collin Winter wrote: > Would it be possible for us to get a Mercurial repository on > python.org for the Unladen Swallow benchmarks? Maciej and I would like > to move the benchmark suite out of Unladen Swallow and into > python.org, where all implementations can share it and contribute to > it. PyPy has been adding some benchmarks to their copy of the Unladen > benchmarks, and we'd like to have as well, and Mercurial seems to be > an ideal solution to this. > If and when you have a benchmark repository set up, could you announce it via a reply to this thread? I'd like to check it out. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Feb 22 03:31:46 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 21 Feb 2010 21:31:46 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> Message-ID: <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> Hello. We probably also need some people, besides CPython devs having some access to it (like me). Cheers, fijal On Sun, Feb 21, 2010 at 4:51 PM, Daniel Stutzbach wrote: > On Sun, Feb 21, 2010 at 2:28 PM, Collin Winter > wrote: >> >> Would it be possible for us to get a Mercurial repository on >> python.org for the Unladen Swallow benchmarks? Maciej and I would like >> to move the benchmark suite out of Unladen Swallow and into >> python.org, where all implementations can share it and contribute to >> it. PyPy has been adding some benchmarks to their copy of the Unladen >> benchmarks, and we'd like to have as well, and Mercurial seems to be >> an ideal solution to this. > > If and when you have a benchmark repository set up, could you announce it > via a reply to this thread?? I'd like to check it out. > -- > Daniel Stutzbach, Ph.D. 
> President, Stutzbach Enterprises, LLC > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/fijall%40gmail.com > > From djc.ochtman at gmail.com Mon Feb 22 03:41:00 2010 From: djc.ochtman at gmail.com (Dirkjan Ochtman) Date: Sun, 21 Feb 2010 21:41:00 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 21:31, Maciej Fijalkowski wrote: > We probably also need some people, besides CPython devs having some > access to it (like me). Right. I've setup a public repository on hg.python.org: http://hg.python.org/benchmarks/ Right now, I still need to have Martin change some configuration so I will be able to set up push access for people other than me, so it's pull-only for now. I've already sent an email to Martin to help me get this fixed, so it should be fixed soon. Let me know if there are any issues. Cheers, Dirkjan From collinwinter at google.com Mon Feb 22 03:43:01 2010 From: collinwinter at google.com (Collin Winter) Date: Sun, 21 Feb 2010 21:43:01 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> Message-ID: <3c8293b61002211843s5580ed0dgba58d7a20bdba441@mail.gmail.com> Hey Daniel, On Sun, Feb 21, 2010 at 4:51 PM, Daniel Stutzbach wrote: > On Sun, Feb 21, 2010 at 2:28 PM, Collin Winter > wrote: >> >> Would it be possible for us to get a Mercurial repository on >> python.org for the Unladen Swallow benchmarks? Maciej and I would like >> to move the benchmark suite out of Unladen Swallow and into >> python.org, where all implementations can share it and contribute to >> it. PyPy has been adding some benchmarks to their copy of the Unladen >> benchmarks, and we'd like to have as well, and Mercurial seems to be >> an ideal solution to this. > > If and when you have a benchmark repository set up, could you announce it > via a reply to this thread?? I'd like to check it out. Will do. In the meantime, you can read http://code.google.com/p/unladen-swallow/wiki/Benchmarks to find out how to check out the current draft of the benchmarks, as well as which benchmarks are currently included. Thanks, Collin Winter From ziade.tarek at gmail.com Mon Feb 22 03:44:31 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sun, 21 Feb 2010 21:44:31 -0500 Subject: [Python-Dev] Another mercurial repo Message-ID: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> Hello Dirkjan, We are going to start working on a few things in Distutils at Pycon outside the trunk. I was about to start this work in the svn sandbox, but if possible I'd rather have a repo at hg.python.org as well Regards Tarek -- Tarek Ziad? | http://ziade.org From dirkjan at ochtman.nl Mon Feb 22 03:47:26 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sun, 21 Feb 2010 21:47:26 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 21:44, Tarek Ziad? 
wrote: > We are going to start working on a few things in Distutils at Pycon > outside the trunk. This would be a full branch of Python? In that case, we probably don't want to go there yet, because the hashes of the python repository currently on hg.p.o will definitely change more and that will cause problems. Cheers, Dirkjan From ziade.tarek at gmail.com Mon Feb 22 03:56:53 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sun, 21 Feb 2010 21:56:53 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> Message-ID: <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> On Sun, Feb 21, 2010 at 9:47 PM, Dirkjan Ochtman wrote: > On Sun, Feb 21, 2010 at 21:44, Tarek Ziadé wrote: >> We are going to start working on a few things in Distutils at Pycon >> outside the trunk. > > This would be a full branch of Python? In that case, we probably don't > want to go there yet, because the hashes of the python repository > currently on hg.p.o will definitely change more and that will cause > problems. No, that would just be a fresh, empty repository for "distutils 2" that will be developed outside the Python stdlib for a while, and will eventually get back into it when it's ready, I guess. I figured it would be easier for people to fork/clone it to work on it if it's a DVCS, even if it's just me that can push in it for the moment. But if this is too much trouble right now don't worry about it. Tarek -- Tarek Ziadé | http://ziade.org From benjamin at python.org Mon Feb 22 04:21:16 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 21 Feb 2010 21:21:16 -0600 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> Message-ID: <1afaf6161002211921u7371873fjd3f865cd810fb58a@mail.gmail.com> 2010/2/21 Dirkjan Ochtman : > Right. I've setup a public repository on hg.python.org: I think we should probably develop a policy about hg.python.org repos before we start handing them out. Who will be able to have a repo on hg.python.org? What kinds of projects? -- Regards, Benjamin From dirkjan at ochtman.nl Mon Feb 22 04:25:24 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sun, 21 Feb 2010 22:25:24 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <1afaf6161002211921u7371873fjd3f865cd810fb58a@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> <1afaf6161002211921u7371873fjd3f865cd810fb58a@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 22:21, Benjamin Peterson wrote: > I think we should probably develop a policy about hg.python.org repos > before we start handing them out. Who will be able to have a repo on > hg.python.org? What kinds of projects? I'd be happy to host stuff for people who are already Python committers, and limit it to stuff that would otherwise live somewhere in Python's svn repository. 
Cheers, Dirkjan From benjamin at python.org Mon Feb 22 04:29:42 2010 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 21 Feb 2010 21:29:42 -0600 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> <1afaf6161002211921u7371873fjd3f865cd810fb58a@mail.gmail.com> Message-ID: <1afaf6161002211929i4284aacdybb9349d3b5d254d8@mail.gmail.com> 2010/2/21 Dirkjan Ochtman : > On Sun, Feb 21, 2010 at 22:21, Benjamin Peterson wrote: >> I think we should probably develop a policy about hg.python.org repos >> before we start handing them out. Who will be able to have a repo on >> hg.python.org? What kinds of projects? > > I'd be happy to host stuff for people who are already Python > committers, and limit it to stuff that would otherwise live somewhere > in Python's svn repository. +1 Sounds like a good starting place. -- Regards, Benjamin From dirkjan at ochtman.nl Mon Feb 22 04:54:51 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sun, 21 Feb 2010 22:54:51 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 21:56, Tarek Ziad? wrote: > No that would be just a new fresh empty repository for "distutils 2" > that will be developed outside the Python stdlib for a while, and will > enventually get back into it when it's ready I guess. > > I figured it would be easier for people to fork/clone it to work on it > if ts a DVCS, even if it's just me that can push in it for the moment. > > But if this is too much trouble right now don't worry about it. Sounds good, per the policy for Mercurial hosting, see the other thread (but weren't you going to improve distutils, instead of rewriting it?). Only problem is, right now you could only push through me. So, start off in hg, put it on bitbucket for now and we'll set it up on hg.p.o when Martin gets back to me (also see the other thread). Cheers, Dirkjan From ziade.tarek at gmail.com Mon Feb 22 05:15:53 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sun, 21 Feb 2010 23:15:53 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> Message-ID: <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> On Sun, Feb 21, 2010 at 10:54 PM, Dirkjan Ochtman wrote: > On Sun, Feb 21, 2010 at 21:56, Tarek Ziad? wrote: >> No that would be just a new fresh empty repository for "distutils 2" >> that will be developed outside the Python stdlib for a while, and will >> enventually get back into it when it's ready I guess. >> >> I figured it would be easier for people to fork/clone it to work on it >> if ts a DVCS, even if it's just me that can push in it for the moment. >> >> But if this is too much trouble right now don't worry about it. > > Sounds good, per the policy for Mercurial hosting, see the other > thread (but weren't you going to improve distutils, instead of > rewriting it?). I am improving it, continuing the work that has been started. Here's the bottom line: We decided during the language summit that I would perform this work outside the stdlib (e.g. 
fork distutils), and leave the current distutils in its current state to avoid any backward compat work nightmares, and possible issues with some third party projects that might be sensitive to some changes even if performed in the internals. (it happened) The new PEPs will be implemented in distutils2 (non-definitive name) and I'll be able to remove stuff we don't want to keep in there without worrying. I was reluctant with this idea at first because I've started to work on this in distutils itself and I wanted the work to be included in 2.7 so we wouldn't wait 18 more months. But now I am fully convinced this is the best plan : around the time 2.7 is out, we will be able to release distutils2 and provide a version for 2.4/2.5/2.6/3.1 as well, and get feedback from the community to prepare a rock-solid version for 3.3. And if we make some mistakes in it, we will be able to correct them faster. We do want this package in the stdlib, because its part of what people are expecting in the batteries included, but as Guido explained during the summit, a package in the stdlib has one foot in the grave. > Only problem is, right now you could only push through > me. So, start off in hg, put it on bitbucket for now and we'll set it > up on hg.p.o when Martin gets back to me (also see the other thread). Sounds good, thanks Tarek -- Tarek Ziad? | http://ziade.org From rdmurray at bitdance.com Mon Feb 22 06:17:38 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 22 Feb 2010 00:17:38 -0500 Subject: [Python-Dev] 'languishing' status for the tracker Message-ID: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> I believe Brett mentioned the 'languishing' status for the tracker in passing in his notes from the language summit. To expand on this: the desire for this arises from the observation that we have a lot of bugs in the tracker that we don't want to close, because they are real bugs or non-crazy enhancement requests, but for one reason or another (lack of an interested party, lack of a good, non-controversial solution, lack of a test platform on which to test the bug fix, the fix is hard but the bug is not of a commensurate priority, etc) the issue just isn't getting dealt with, and won't get dealt with until the blocking factor changes. The motivation for introducing a new status, rather than continuing to leave these bugs in 'open' state, is to make it clear which bugs are the *active* bugs, the ones we should be focusing on and working to fix first. This would allow the count of open bugs to be a more meaningful metric than it is currently, and might even allow us to get to a point where the open bug count stops increasing. More importantly, however, it would act as a first level partition of the non-closed bugs, such that the ones for which it is most likely that effective action can be taken will be what appear first. This is especially important for new people coming in to help. How many people have looked at the tracker, and decided to start from the oldest bugs and see what they can fix? I know I started out that way. Looking through those oldest bugs can be quite discouraging, since many of them are old because they are hard, or blocked for one reason or another. We would in addition propose that a 'languishing' search be added to the standard searches. I would also propose that the default search be changed to search all bugs, and group them by status, not priority. 
This will make it more likely that bug *reporters* will find existing bugs that relate to the problem they are experiencing or the feature they want to propose. The other searches (except open) should probably return languishing issues as well as open, since they sort in reverse chronological order the more active bugs will be at the top, and that way the languishing bugs won't be forgotten. (Note: languishing bugs should probably never have the 'needs review' keyword, since bugs should not be allowd to languish simply for need of a review.) To move a bug to state languishing, the procedure should be to post a comment saying why you are moving the bug to that status, that by implication or explicitly lays out the conditions required for it to move back to open. Doing so may wake someone up who wants to and can deal with the issue, in which case it can be moved back to open. --David PS: I believe that the other change that would be required in addition to adding the new status and search is to alter the bug summary email script to handle the languishing state. If I've missed anything else please let me know. From ncoghlan at gmail.com Mon Feb 22 13:45:07 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Feb 2010 22:45:07 +1000 Subject: [Python-Dev] some notes from the first part of the lang summit In-Reply-To: References: <4B7DB77A.2070108@activestate.com> <4B80FD3D.7030301@trueblade.com> <19329.16627.899420.785071@montanaro.dyndns.org> <1afaf6161002210629m5e4ff08s299f717a467c2239@mail.gmail.com> <4B818D79.2040609@trueblade.com> Message-ID: <4B827C53.5000508@gmail.com> Brett Cannon wrote: > Could, but the code will go away some day and not everyone will read the > docs to realize that they might want to upgrade their code if they care > to use the shiniest thing in the standard library. I agree with Brett here - PendingDeprecationWarning for "there's a better option available, this approach is probably going to go away some day, but you're in no imminent danger of that happening any time soon". DeprecationWarning is significantly stronger, saying "this will go away some time within the next few years". The softest version (documentation warnings only) doesn't really apply in this case - optparse will almost certainly become a PyPI external package some day, even if that day is a decade or more from now. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Mon Feb 22 13:51:22 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Feb 2010 22:51:22 +1000 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <1afaf6161002211929i4284aacdybb9349d3b5d254d8@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <693bc9ab1002211831i3e08b83teaebdc616638012f@mail.gmail.com> <1afaf6161002211921u7371873fjd3f865cd810fb58a@mail.gmail.com> <1afaf6161002211929i4284aacdybb9349d3b5d254d8@mail.gmail.com> Message-ID: <4B827DCA.7030504@gmail.com> Benjamin Peterson wrote: > 2010/2/21 Dirkjan Ochtman : >> I'd be happy to host stuff for people who are already Python >> committers, and limit it to stuff that would otherwise live somewhere >> in Python's svn repository. > > +1 Sounds like a good starting place. This is pretty much the same approach we use for creating subdirectories of /sandbox on the SVN side so it sounds reasonable to me too. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Mon Feb 22 13:55:46 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Feb 2010 22:55:46 +1000 Subject: [Python-Dev] 'languishing' status for the tracker In-Reply-To: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> References: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> Message-ID: <4B827ED2.8040507@gmail.com> R. David Murray wrote: > I believe Brett mentioned the 'languishing' status for the tracker in > passing in his notes from the language summit. Thanks for that. I had assumed Brett meant something along those lines, but it is good to have the rationale made explicit. Cheers, Nick. P.S. Not that it's needed, but +1 :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From dirkjan at ochtman.nl Mon Feb 22 18:04:18 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 12:04:18 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> Message-ID: On Sun, Feb 21, 2010 at 23:15, Tarek Ziad? wrote: > Sounds good, thanks It's right here: ssh://hg at hg.python.org/repos/distutils2 Cheers, Dirkjan From ziade.tarek at gmail.com Mon Feb 22 18:14:07 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Mon, 22 Feb 2010 12:14:07 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> Message-ID: <94bdd2611002220914t162f5b2x7f740ffc8bcbcd95@mail.gmail.com> On Mon, Feb 22, 2010 at 12:04 PM, Dirkjan Ochtman wrote: > On Sun, Feb 21, 2010 at 23:15, Tarek Ziad? wrote: >> Sounds good, thanks > > It's right here: ssh://hg at hg.python.org/repos/distutils2 Thanks a lot From ssteinerx at gmail.com Mon Feb 22 18:35:50 2010 From: ssteinerx at gmail.com (ssteinerX@gmail.com) Date: Mon, 22 Feb 2010 12:35:50 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> Message-ID: <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> On Feb 22, 2010, at 12:04 PM, Dirkjan Ochtman wrote: > On Sun, Feb 21, 2010 at 23:15, Tarek Ziad? wrote: >> Sounds good, thanks > > It's right here: ssh://hg at hg.python.org/repos/distutils2 The checkout URL for non-ssh read-only access is: http://hg.python.org/distutils2/ in case anyone else is searching for it. 
S From dirkjan at ochtman.nl Mon Feb 22 18:44:57 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 12:44:57 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> Message-ID: 2010/2/22 ssteinerX at gmail.com : > The checkout URL for non-ssh read-only access is: > > http://hg.python.org/distutils2/ > > in case anyone else is searching for it. Right. As Maciej asked about this, let's discuss it here: there are currently no emails for these repositories. I'd like to get that going; should I just have it send emails for each push to the normal commits mailing list for now? Cheers, Dirkjan From eric at trueblade.com Mon Feb 22 20:16:11 2010 From: eric at trueblade.com (Eric Smith) Date: Mon, 22 Feb 2010 14:16:11 -0500 Subject: [Python-Dev] 3.1 and 2.7 break format() when used with complex (sometimes) Message-ID: <4B82D7FB.3000605@trueblade.com> This code works on 2.6 and 3.0: >>> format(1+1j, '10s') '(1+1j) ' That's because format ends up calling object.__format__ because complex doesn't have its own __format__. Then object.__format__ calls str(self) which returns '(1+1j) '. So the original call basically turns into "format('(1+1j) ', '10s')". In 3.1 (released) and 2.7 (not yet released) I implemented __format__ on complex. So now that same code is an error: >>> format(1+1j, '10s') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Unknown format code 's' for object of type 'complex' That's because complex.__format__ doesn't recognize string formatting codes, in particular 's'. There's a general problem that types that sprout __format__ will break existing usages of format() that use some string formatting codes, unless the types recognize string formats in addition to their own. I think we should change the documentation of format() to warn that you should really call str() on the first argument if you're relying on the second argument being a string formatting code. But what to do about 3.1 and 2.7 with respect to complex? I see 2 options: 1. Leave it as-is, such that 3.1 and 2.7 might break some uses of format(complex, str). 2. Modify format to understand 's' and do the conversion itself. But we don't do this for int and float, that's why we added '!s'. I'm sort of leaning toward #1, but I'd like to know if anyone has an opinion. I haven't heard of anyone complaining about this yet; it would only have tripped up people moving from 3.0 -> 3.1, or from 2.6 -> 3.1 who used format (or str.format()) while specifying 's' or some other str-specific format codes. Eric. 
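A short, self-contained illustration of the change Eric describes (the outputs assume a stock 2.6/3.0 interpreter versus 3.1/2.7; the str() and '!s' spellings at the end are merely possible workarounds, not something decided in this thread):

    # Python 2.6 / 3.0: complex has no __format__ of its own, so
    # object.__format__ stringifies first and '10s' is applied to that str.
    >>> format(1+1j, '10s')
    '(1+1j)    '

    # Python 3.1 / 2.7: complex.__format__ rejects the 's' format code.
    >>> format(1+1j, '10s')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: Unknown format code 's' for object of type 'complex'

    # Spellings that behave the same before and after the change:
    # convert to str explicitly, then apply the string format spec.
    >>> format(str(1+1j), '10s')
    '(1+1j)    '
    >>> '{0!s:10}'.format(1+1j)
    '(1+1j)    '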
From collinwinter at google.com Mon Feb 22 21:17:09 2010 From: collinwinter at google.com (Collin Winter) Date: Mon, 22 Feb 2010 15:17:09 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <3c8293b61002211843s5580ed0dgba58d7a20bdba441@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <3c8293b61002211843s5580ed0dgba58d7a20bdba441@mail.gmail.com> Message-ID: <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com> On Sun, Feb 21, 2010 at 9:43 PM, Collin Winter wrote: > Hey Daniel, > > On Sun, Feb 21, 2010 at 4:51 PM, Daniel Stutzbach > wrote: >> On Sun, Feb 21, 2010 at 2:28 PM, Collin Winter >> wrote: >>> >>> Would it be possible for us to get a Mercurial repository on >>> python.org for the Unladen Swallow benchmarks? Maciej and I would like >>> to move the benchmark suite out of Unladen Swallow and into >>> python.org, where all implementations can share it and contribute to >>> it. PyPy has been adding some benchmarks to their copy of the Unladen >>> benchmarks, and we'd like to have as well, and Mercurial seems to be >>> an ideal solution to this. >> >> If and when you have a benchmark repository set up, could you announce it >> via a reply to this thread?? I'd like to check it out. > > Will do. The benchmarks repository is now available at http://hg.python.org/benchmarks/. It contains all the benchmarks that the Unladen Swallow svn repository contains, including the beginnings of a README.txt that describes the available benchmarks and a quick-start guide for running perf.py (the main interface to the benchmarks). This will eventually contain all the information from http://code.google.com/p/unladen-swallow/wiki/Benchmarks, as well as guidelines on how to write good benchmarks. If you have svn commit access, you should be able to run `hg clone ssh://hg at hg.python.org/repos/benchmarks`. I'm not sure how to get read-only access; Dirkjan can comment on that. Still todo: - Replace the static snapshots of 2to3, Mercurial and other hg-based projects with clones of the respective repositories. - Fix the 2to3 and nbody benchmarks to work with Python 2.5 for Jython and PyPy. - Import some of the benchmarks PyPy has been using. Any access problems with the hg repo should be directed to Dirkjan. Thanks so much for getting the repo set up so fast! Thanks, Collin Winter From florent.xicluna at gmail.com Mon Feb 22 21:28:41 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Mon, 22 Feb 2010 20:28:41 +0000 (UTC) Subject: [Python-Dev] 'languishing' status for the tracker References: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> Message-ID: R. David Murray bitdance.com> writes: > > I believe Brett mentioned the 'languishing' status for the tracker in > passing in his notes from the language summit. > I see a bunch of existing "Status / Resolution" choices. "open" / "later" "open" / "postponed" "open" / "remind" I did not find any documentation about them in both places: * http://wiki.python.org/moin/TrackerDocs/ "Tracker documentation" * http://www.python.org/dev/workflow/ "Issue workflow" Maybe these 2 documentation entry points could be merged and improved, first. 
They are not available on the same menu, and there's no cross-link between them: * "Issue workflow" from http://www.python.org/dev/ * "Tracker documentation" from http://bugs.python.org/ -- Florent Xicluna From eric at trueblade.com Mon Feb 22 21:36:33 2010 From: eric at trueblade.com (Eric Smith) Date: Mon, 22 Feb 2010 15:36:33 -0500 Subject: [Python-Dev] 3.1 and 2.7 break format() when used with complex (sometimes) In-Reply-To: <4B82D7FB.3000605@trueblade.com> References: <4B82D7FB.3000605@trueblade.com> Message-ID: <4B82EAD1.6090107@trueblade.com> Eric Smith wrote: > This code works on 2.6 and 3.0: > >>> format(1+1j, '10s') > '(1+1j) ' > > That's because format ends up calling object.__format__ because complex > doesn't have its own __format__. Then object.__format__ calls str(self) > which returns '(1+1j) '. So the original call basically turns into > "format('(1+1j) ', '10s')". Guido pointed out this should have been: """That's because format ends up calling object.__format__ because complex doesn't have its own __format__. Then object.__format__ calls str(self) which returns '(1+1j)'. So the original call basically turns into "format('(1+1j)', '10s')".""" (I had inserted the spaces added by str.__format__ too early.) We discussed this at the sprint. We agreed that we'd just allow this specific issue with complex formatting to possibly break existing uses in 2.7, as it did in 3.1. While that's unfortunate, it's better than the alternatives. The root cause of this problem is object.__format__, which is basically: def __format__(self, fmt): return str(self).__format__(fmt) So here we're changing the type of the object (to str) but still keeping the same format string. That doesn't make any sense: the format string is type specific. I think the correct thing to do here is to make it an error if fmt is non-empty. In 2.7 and 3.2 I can make this a PendingDeprecationWarning, then in 3.3 a DeprecationWarning, and finally make it an error in 3.4. Eric. From thomas at python.org Mon Feb 22 21:42:18 2010 From: thomas at python.org (Thomas Wouters) Date: Mon, 22 Feb 2010 21:42:18 +0100 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <3c8293b61002211843s5580ed0dgba58d7a20bdba441@mail.gmail.com> <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com> Message-ID: <9e804ac1002221242td6ec0d4hffcb1a48c3924fce@mail.gmail.com> On Mon, Feb 22, 2010 at 21:17, Collin Winter wrote: > If you have svn commit access, you should be able to run `hg clone > ssh://hg at hg.python.org/repos/benchmarks`. I'm not sure how to get > read-only access; Dirkjan can comment on that. > It would be http://hg.python.org/benchmarks (http, not ssh; no username; no '/repos' toplevel directory.) > > Still todo: > - Replace the static snapshots of 2to3, Mercurial and other hg-based > projects with clones of the respective repositories. > - Fix the 2to3 and nbody benchmarks to work with Python 2.5 for Jython and > PyPy. > - Import some of the benchmarks PyPy has been using. > > Any access problems with the hg repo should be directed to Dirkjan. > Thanks so much for getting the repo set up so fast! 
> > Thanks, > Collin Winter > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/thomas%40python.org > -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: From djc.ochtman at gmail.com Mon Feb 22 21:45:20 2010 From: djc.ochtman at gmail.com (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 15:45:20 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <9e804ac1002221242td6ec0d4hffcb1a48c3924fce@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <3c8293b61002211843s5580ed0dgba58d7a20bdba441@mail.gmail.com> <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com> <9e804ac1002221242td6ec0d4hffcb1a48c3924fce@mail.gmail.com> Message-ID: On Mon, Feb 22, 2010 at 15:42, Thomas Wouters wrote: > It would be http://hg.python.org/benchmarks (http, not ssh; no username; no > '/repos' toplevel directory.) Correct. Another todo is to get commit mails; I'm currently working on that. Cheers, Dirkjan From martin at v.loewis.de Mon Feb 22 22:06:37 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Mon, 22 Feb 2010 22:06:37 +0100 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> Message-ID: <4B82F1DD.8030509@v.loewis.de> > Right. As Maciej asked about this, let's discuss it here: there are > currently no emails for these repositories. I'd like to get that > going; should I just have it send emails for each push to the normal > commits mailing list for now? I think sending them there "for now" is fine; in the long term, I propose to add an X-hgrepo header to the messages so that people can filter on that if they want to. Regards, Martin From collinw at gmail.com Mon Feb 22 22:09:14 2010 From: collinw at gmail.com (Collin Winter) Date: Mon, 22 Feb 2010 16:09:14 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> References: <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> Message-ID: <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> On Sat, Feb 13, 2010 at 2:23 PM, Benjamin Peterson wrote: > 2010/2/13 "Martin v. L?wis" : >> I still think that the best approach for projects to use 2to3 is to run >> 2to3 at install time from a single-source release. For that, projects >> will have to adjust to whatever bugs certain 2to3 releases have, rather >> than requiring users to download a newer version of 2to3 that fixes >> them. For this use case, a tightly-integrated lib2to3 (with that name >> and sole purpose) is the best thing. > > Alright. That is reasonable. > > The other thing is that we will loose some vcs history and some > history granularity by switching development to the trunk version, > since just the svnmerged revisions will be converted. 
So the consensus is that 2to3 should be pulled out of the main Python tree? Should the 2to3 hg repository be deleted, then? Thanks, Collin From collinwinter at google.com Mon Feb 22 22:15:22 2010 From: collinwinter at google.com (Collin Winter) Date: Mon, 22 Feb 2010 16:15:22 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com> References: <3c8293b61002211228v34562bf5t7c4ad161684d0874@mail.gmail.com> <3c8293b61002211843s5580ed0dgba58d7a20bdba441@mail.gmail.com> <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com> Message-ID: <3c8293b61002221315k668390cbv6ef2e888603b8fff@mail.gmail.com> On Mon, Feb 22, 2010 at 3:17 PM, Collin Winter wrote: > The benchmarks repository is now available at > http://hg.python.org/benchmarks/. It contains all the benchmarks that > the Unladen Swallow svn repository contains, including the beginnings > of a README.txt that describes the available benchmarks and a > quick-start guide for running perf.py (the main interface to the > benchmarks). This will eventually contain all the information from > http://code.google.com/p/unladen-swallow/wiki/Benchmarks, as well as > guidelines on how to write good benchmarks. We now have a "Benchmarks" component in the bug tracker. Suggestions for new benchmarks, feature requests for perf.py, and bugs in existing benchmarks should be reported under that component. Thanks, Collin Winter From martin at v.loewis.de Mon Feb 22 22:27:56 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 22 Feb 2010 22:27:56 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> References: <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> Message-ID: <4B82F6DC.9080204@v.loewis.de> >> The other thing is that we will loose some vcs history and some >> history granularity by switching development to the trunk version, >> since just the svnmerged revisions will be converted. > > So the consensus is that 2to3 should be pulled out of the main Python > tree? Not sure what you mean by "pull out"; I had expect that the right verb should be "pull into": 2to3 should be pulled into the main Python tree. > Should the 2to3 hg repository be deleted, then? Which one? To my knowledge, there is no official 2to3 repository yet. When the switchover happens, 2to3 should not be converted to its own hg repository, yes. Regards, Martin From collinw at gmail.com Mon Feb 22 22:31:34 2010 From: collinw at gmail.com (Collin Winter) Date: Mon, 22 Feb 2010 16:31:34 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B82F6DC.9080204@v.loewis.de> References: <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82F6DC.9080204@v.loewis.de> Message-ID: <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> On Mon, Feb 22, 2010 at 4:27 PM, "Martin v. 
L?wis" wrote: >>> The other thing is that we will loose some vcs history and some >>> history granularity by switching development to the trunk version, >>> since just the svnmerged revisions will be converted. >> >> So the consensus is that 2to3 should be pulled out of the main Python >> tree? > > Not sure what you mean by "pull out"; I had expect that the right verb > should be "pull into": 2to3 should be pulled into the main Python tree. Sorry, I meant "pulled out" as in: I want an updated version for the benchmark suite, where should I get that? >> Should the 2to3 hg repository be deleted, then? > > Which one? To my knowledge, there is no official 2to3 repository yet. > When the switchover happens, 2to3 should not be converted to its own hg > repository, yes. This one: http://hg.python.org/2to3 Collin From djc.ochtman at gmail.com Mon Feb 22 22:33:50 2010 From: djc.ochtman at gmail.com (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 16:33:50 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> References: <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> Message-ID: On Mon, Feb 22, 2010 at 16:09, Collin Winter wrote: > So the consensus is that 2to3 should be pulled out of the main Python > tree? Should the 2to3 hg repository be deleted, then? Wouldn't the former be reason to officialize the hg repository, instead of deleting it? Cheers, Dirkjan From dirkjan at ochtman.nl Mon Feb 22 22:35:07 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 16:35:07 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <4B82F1DD.8030509@v.loewis.de> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> Message-ID: On Mon, Feb 22, 2010 at 16:06, "Martin v. L?wis" wrote: > I think sending them there "for now" is fine; in the long term, I > propose to add an X-hgrepo header to the messages so that people can > filter on that if they want to. We get the X-Hg-Notification header (which has the changeset ID) for free. Cheers, Dirkjan From ncoghlan at gmail.com Mon Feb 22 23:03:31 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Feb 2010 08:03:31 +1000 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <4B75A961.3000309@v.loewis.de> <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> Message-ID: <4B82FF33.8080803@gmail.com> Dirkjan Ochtman wrote: > On Mon, Feb 22, 2010 at 16:09, Collin Winter wrote: >> So the consensus is that 2to3 should be pulled out of the main Python >> tree? Should the 2to3 hg repository be deleted, then? > > Wouldn't the former be reason to officialize the hg repository, > instead of deleting it? 
I think the difference between "pull out" and "pull from" is causing confusion here (and no, I'm not sure which of those Collin actually meant either). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From collinw at gmail.com Mon Feb 22 23:05:41 2010 From: collinw at gmail.com (Collin Winter) Date: Mon, 22 Feb 2010 17:05:41 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B82FF33.8080803@gmail.com> References: <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82FF33.8080803@gmail.com> Message-ID: <43aa6ff71002221405j5ef608a2q7f5f0aa24e5d3dc7@mail.gmail.com> On Mon, Feb 22, 2010 at 5:03 PM, Nick Coghlan wrote: > Dirkjan Ochtman wrote: >> On Mon, Feb 22, 2010 at 16:09, Collin Winter wrote: >>> So the consensus is that 2to3 should be pulled out of the main Python >>> tree? Should the 2to3 hg repository be deleted, then? >> >> Wouldn't the former be reason to officialize the hg repository, >> instead of deleting it? > > I think the difference between "pull out" and "pull from" is causing > confusion here (and no, I'm not sure which of those Collin actually > meant either). Sorry, I meant "pull from". I want an updated snapshot of 2to3 for the benchmark suite, and I'm looking for the best place to grab it from. Collin From dirkjan at ochtman.nl Mon Feb 22 23:40:26 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 17:40:26 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <43aa6ff71002221405j5ef608a2q7f5f0aa24e5d3dc7@mail.gmail.com> References: <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82FF33.8080803@gmail.com> <43aa6ff71002221405j5ef608a2q7f5f0aa24e5d3dc7@mail.gmail.com> Message-ID: On Mon, Feb 22, 2010 at 17:05, Collin Winter wrote: > Sorry, I meant "pull from". I want an updated snapshot of 2to3 for the > benchmark suite, and I'm looking for the best place to grab it from. Well, the server that has all the stuff for doing the conversions has annoyingly been down for about 24 hours now. I've got a friend coming in who should hopefully be fixing this tomorrow in the afternoon (Atlanta time), after which I should be able to make my conversion stuff update the hg.p.o repositories with more regularity. Cheers, Dirkjan From rdmurray at bitdance.com Mon Feb 22 23:41:35 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 22 Feb 2010 17:41:35 -0500 Subject: [Python-Dev] 'languishing' status for the tracker In-Reply-To: References: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> Message-ID: <20100222224135.A86D91FD41D@kimball.webabinitio.net> On Mon, 22 Feb 2010 20:28:41 +0000, Florent Xicluna wrote: > R. David Murray bitdance.com> writes: > > > I believe Brett mentioned the 'languishing' status for the tracker in > > passing in his notes from the language summit. > > I see a bunch of existing "Status / Resolution" choices. 
> "open" / "later" > "open" / "postponed" > "open" / "remind" > > I did not find any documentation about them in both places: > * http://wiki.python.org/moin/TrackerDocs/ "Tracker documentation" > * http://www.python.org/dev/workflow/ "Issue workflow" > > Maybe these 2 documentation entry points could be merged and improved, first. > They are not available on the same menu, and there's no cross-link between them: > * "Issue workflow" from http://www.python.org/dev/ > * "Tracker documentation" from http://bugs.python.org/ There is a plan to improve the dev docs, and to merge a bunch of stuff that is scattered here and there into them. Brett will presumably add this item to his his punch list if it isn't already on it; thanks for pointing it out. It's a good question what the difference between 'later' and 'postponed' is. I'm guessing that 'later' is equivalent to 'languishing' (ie: its a good idea but nobody wants to do it right now), while 'postponed' is for something that needs to wait until the next release is out the door, or something like that. There is exactly one open ticket with 'remind' set (by Skip, issue 1374063), and 10 closed tickets. I'll review the closed tickets and move them to languishing if appropriate. I suspect remind is not a particularly useful resolution value. It would probably be better as a keyword. So I would suggest removing the resolutions 'later' and 'remind', and adding a 'remind' keyword if anyone speaks up as wanting it. Postponed I think is useful for the 'wait for next release' case on open tickets, although again it might be more useful as a keyword (it isn't really a 'resolution'). --David From fwierzbicki at gmail.com Mon Feb 22 23:45:54 2010 From: fwierzbicki at gmail.com (Frank Wierzbicki) Date: Mon, 22 Feb 2010 17:45:54 -0500 Subject: [Python-Dev] Mercurial move? Message-ID: <4dab5f761002221445y24e41ab2xd514b0a66e55b892@mail.gmail.com> An advantage of being at PyCon :) We *may* be able to get on mercurial very fast -- since all of the interested parties are here. I'm going to get an svndump now -- the downside to this is whatever anyone checks in during this in between stage would need to get re-checked in after we move. I'll let you know how it goes. -Frank From fwierzbicki at gmail.com Mon Feb 22 23:57:19 2010 From: fwierzbicki at gmail.com (Frank Wierzbicki) Date: Mon, 22 Feb 2010 17:57:19 -0500 Subject: [Python-Dev] Mercurial move? In-Reply-To: <4dab5f761002221445y24e41ab2xd514b0a66e55b892@mail.gmail.com> References: <4dab5f761002221445y24e41ab2xd514b0a66e55b892@mail.gmail.com> Message-ID: <4dab5f761002221457u78be97b6ve29ddc2c6831898f@mail.gmail.com> On Mon, Feb 22, 2010 at 5:45 PM, Frank Wierzbicki wrote: > An advantage of being at PyCon :) > > We *may* be able to get on mercurial very fast -- since all of the > interested parties are here. I'm going to get an svndump now -- the > downside to this is whatever anyone checks in during this in between > stage would need to get re-checked in after we move. > > I'll let you know how it goes. Sorry python-dev autocomplete fail -- I meant this to go to jython-dev, sorry. Please ignore. From guido at python.org Tue Feb 23 00:12:10 2010 From: guido at python.org (Guido van Rossum) Date: Mon, 22 Feb 2010 18:12:10 -0500 Subject: [Python-Dev] Mercurial move? 
In-Reply-To: <4dab5f761002221457u78be97b6ve29ddc2c6831898f@mail.gmail.com> References: <4dab5f761002221445y24e41ab2xd514b0a66e55b892@mail.gmail.com> <4dab5f761002221457u78be97b6ve29ddc2c6831898f@mail.gmail.com> Message-ID: On Mon, Feb 22, 2010 at 5:57 PM, Frank Wierzbicki wrote: > On Mon, Feb 22, 2010 at 5:45 PM, Frank Wierzbicki wrote: >> An advantage of being at PyCon :) >> >> We *may* be able to get on mercurial very fast -- since all of the >> interested parties are here. I'm going to get an svndump now -- the >> downside to this is whatever anyone checks in during this in between >> stage would need to get re-checked in after we move. >> >> I'll let you know how it goes. > Sorry python-dev autocomplete fail -- I meant this to go to > jython-dev, sorry. ?Please ignore. In that case congrats on beating us to the punch! Let us know how it goes. -- --Guido van Rossum (python.org/~guido) From mg at lazybytes.net Tue Feb 23 00:38:43 2010 From: mg at lazybytes.net (Martin Geisler) Date: Tue, 23 Feb 2010 00:38:43 +0100 Subject: [Python-Dev] PEP 385 progress report References: Message-ID: <874ol8yh6k.fsf@hbox.dyndns.org> Dirkjan Ochtman writes: Hi everybody! I hope you have fun at PyCon :-) > As for the current state of The Dreaded EOL Issue, there is an > extension which seems to provide all the needed features, but it > appears there are some nasty corner cases still to be fixed. Martin > Geisler has been working on it over the sprint, but I think there's > more work to be done here. Anyone who wants to jump in would be quite > welcome (maybe Martin will clarify here what exactly the remaining > issues are). I'm sorry about the delay in my response -- but things have now finally moved forward after Benoit Boissinot (another Mercurial developer) looked at things. With the most recent fixes pushed to the eol repository[1], I can no longer break the tests by running them repeatedly in a loop. In other words, they finally appear to be stable. I feel this would be a good opportunity for people to begin testing the extension again. It seems that people have not done that so far, or at least we haven't gotten any feedback in a long time. It is now easier to test than before since changes to the .hgeol file are picked up immediately without it being committed. This means that you can enable eol (in .hg/hgrc, say) and play around *without* affecting others who use the repository. When you change patterns in .hgeol, you'll see the effects in the output of 'hg status' -- files that will be updated on the next commit appear modified. My dissertation is due this Friday(!), so I will not have much time to look at EOL issues this week (as usual). But please give it a spin anyway and let us hear what you think! [1]: http://bitbucket.org/mg/hg-eol/ -- Martin Geisler -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From dirkjan at ochtman.nl Tue Feb 23 00:56:17 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 22 Feb 2010 18:56:17 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <874ol8yh6k.fsf@hbox.dyndns.org> References: <874ol8yh6k.fsf@hbox.dyndns.org> Message-ID: On Mon, Feb 22, 2010 at 18:38, Martin Geisler wrote: > My dissertation is due this Friday(!), so I will not have much time to > look at EOL issues this week (as usual). But please give it a spin > anyway and let us hear what you think! 
I've got about 48 more hours of PyCon sprints ahead of me, so if anyone comes up with bugs (preferably concrete and reproducible) in that time frame, I can look into them more or less directly. Cheers, Dirkjan From g.brandl at gmx.net Tue Feb 23 01:35:03 2010 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 23 Feb 2010 01:35:03 +0100 Subject: [Python-Dev] 'languishing' status for the tracker In-Reply-To: References: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> Message-ID: Am 22.02.2010 21:28, schrieb Florent Xicluna: > R. David Murray bitdance.com> writes: > >> >> I believe Brett mentioned the 'languishing' status for the tracker in >> passing in his notes from the language summit. >> > > I see a bunch of existing "Status / Resolution" choices. > "open" / "later" > "open" / "postponed" > "open" / "remind" Those are taken from SourceForge, and I'm not sure we need all of them, as David says. :) But the point of the "languishing" status is really to not have them in your results when searching for open issues. Searching for "open, but not with one of these three resolutions" is much harder. > I did not find any documentation about them in both places: > * http://wiki.python.org/moin/TrackerDocs/ "Tracker documentation" > * http://www.python.org/dev/workflow/ "Issue workflow" As David says, I have a plan to consolidate the dev docs and bring them into the source repo. Of course, just because it is my plan, it doesn't need to be done by me :) Aren't there people sprinting somewhere? :) Georg From benjamin at python.org Tue Feb 23 04:17:35 2010 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 22 Feb 2010 21:17:35 -0600 Subject: [Python-Dev] small suggestion for a sprint project Message-ID: <1afaf6161002221917k66553f9v453b5df8d3f9a733@mail.gmail.com> Somebody at the sprint looking for a thankless task of drudgery could resolve the current svnmerge queue into the py3k branch. -- Regards, Benjamin From rdmurray at bitdance.com Tue Feb 23 05:52:28 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 22 Feb 2010 23:52:28 -0500 Subject: [Python-Dev] 'languishing' status for the tracker In-Reply-To: References: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> Message-ID: <20100223045228.2EFA11FA71E@kimball.webabinitio.net> On Tue, 23 Feb 2010 01:35:03 +0100, Georg Brandl wrote: > Am 22.02.2010 21:28, schrieb Florent Xicluna: > > I did not find any documentation about them in both places: > > * http://wiki.python.org/moin/TrackerDocs/ "Tracker documentation" > > * http://www.python.org/dev/workflow/ "Issue workflow" > > As David says, I have a plan to consolidate the dev docs and bring them > into the source repo. Of course, just because it is my plan, it doesn't > need to be done by me :) > > Aren't there people sprinting somewhere? 
:) Yeah, but none of the people who are good at driving doc stuff (you or Brett or Ezio) are here :) --David From martin at v.loewis.de Tue Feb 23 06:55:06 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 23 Feb 2010 06:55:06 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> References: <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82F6DC.9080204@v.loewis.de> <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> Message-ID: <4B836DBA.1040700@v.loewis.de> >>> Should the 2to3 hg repository be deleted, then? >> Which one? To my knowledge, there is no official 2to3 repository yet. >> When the switchover happens, 2to3 should not be converted to its own hg >> repository, yes. > > This one: http://hg.python.org/2to3 Ah, this shouldn't be used at all for anything (except for studying how Mercurial works). Along with the cpython repository, it is Dirkjan's test conversion. Even if it survived the ultimate migration (which it probably won't), it would get regenerated from scratch, probably changing all revision numbers. Regards, Martin From martin at v.loewis.de Tue Feb 23 06:55:44 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 23 Feb 2010 06:55:44 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <43aa6ff71002221405j5ef608a2q7f5f0aa24e5d3dc7@mail.gmail.com> References: <4B76059E.3060101@gmail.com> <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82FF33.8080803@gmail.com> <43aa6ff71002221405j5ef608a2q7f5f0aa24e5d3dc7@mail.gmail.com> Message-ID: <4B836DE0.3020907@v.loewis.de> > Sorry, I meant "pull from". I want an updated snapshot of 2to3 for the > benchmark suite, and I'm looking for the best place to grab it from. The 2to3 code currently still lives in the subversion sandbox. Regards, Martin From martin at v.loewis.de Tue Feb 23 06:57:07 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 23 Feb 2010 06:57:07 +0100 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> Message-ID: <4B836E33.1010006@v.loewis.de> Dirkjan Ochtman wrote: > On Mon, Feb 22, 2010 at 16:06, "Martin v. L?wis" wrote: >> I think sending them there "for now" is fine; in the long term, I >> propose to add an X-hgrepo header to the messages so that people can >> filter on that if they want to. > > We get the X-Hg-Notification header (which has the changeset ID) for free. Can I use that to filter out distutils2 commits? 
Regards, Martin From djc.ochtman at gmail.com Tue Feb 23 07:01:14 2010 From: djc.ochtman at gmail.com (Dirkjan Ochtman) Date: Tue, 23 Feb 2010 01:01:14 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B836DBA.1040700@v.loewis.de> References: <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82F6DC.9080204@v.loewis.de> <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> <4B836DBA.1040700@v.loewis.de> Message-ID: On Tue, Feb 23, 2010 at 00:55, "Martin v. L?wis" wrote: > Ah, this shouldn't be used at all for anything (except for studying how > Mercurial works). Along with the cpython repository, it is Dirkjan's > test conversion. Even if it survived the ultimate migration (which it > probably won't), it would get regenerated from scratch, probably > changing all revision numbers. Actually, since this one is much simpler, it's more or less ready to be the canonical repository, if Benjamin would like that. It's very different from the cpython repository in that respect. Cheers, Dirkjan From dirkjan at ochtman.nl Tue Feb 23 07:04:20 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 23 Feb 2010 01:04:20 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <4B836E33.1010006@v.loewis.de> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> Message-ID: On Tue, Feb 23, 2010 at 00:57, "Martin v. L?wis" wrote: > Can I use that to filter out distutils2 commits? Well, I don't think we'll do commit emails for distutils2 or unittest2, only for benchmarks (to python-checkins, anyway). I'll make sure that you can filter something out by repository. Cheers, Dirkjan From martin at v.loewis.de Tue Feb 23 07:06:38 2010 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 23 Feb 2010 07:06:38 +0100 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <1afaf6161002121802j577f90bdlb86cb21a7515bbc2@mail.gmail.com> <4B7648D7.9060301@v.loewis.de> <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82F6DC.9080204@v.loewis.de> <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> <4B836DBA.1040700@v.loewis.de> Message-ID: <4B83706E.1020700@v.loewis.de> > On Tue, Feb 23, 2010 at 00:55, "Martin v. L?wis" wrote: >> Ah, this shouldn't be used at all for anything (except for studying how >> Mercurial works). Along with the cpython repository, it is Dirkjan's >> test conversion. Even if it survived the ultimate migration (which it >> probably won't), it would get regenerated from scratch, probably >> changing all revision numbers. > > Actually, since this one is much simpler, it's more or less ready to > be the canonical repository, if Benjamin would like that. It's very > different from the cpython repository in that respect. I thought we decided not to have a 2to3 repository at all, but let this live in the Python trunk exclusively. 
Regards, Martin From dirkjan at ochtman.nl Tue Feb 23 07:13:31 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 23 Feb 2010 01:13:31 -0500 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: <4B83706E.1020700@v.loewis.de> References: <1afaf6161002130848rb4952edn8d3d596cb3a56bef@mail.gmail.com> <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82F6DC.9080204@v.loewis.de> <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> <4B836DBA.1040700@v.loewis.de> <4B83706E.1020700@v.loewis.de> Message-ID: On Tue, Feb 23, 2010 at 01:06, "Martin v. L?wis" wrote: > I thought we decided not to have a 2to3 repository at all, but let this > live in the Python trunk exclusively. That would be fine with me, I just remembered that Benjamin would like to start using hg sooner and having it as a separate repo was okay. Cheers, Dirkjan From ziade.tarek at gmail.com Tue Feb 23 07:16:52 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 01:16:52 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> Message-ID: <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> On Tue, Feb 23, 2010 at 1:04 AM, Dirkjan Ochtman wrote: > On Tue, Feb 23, 2010 at 00:57, "Martin v. L?wis" wrote: >> Can I use that to filter out distutils2 commits? > > Well, I don't think we'll do commit emails for distutils2 or > unittest2, only for benchmarks (to python-checkins, anyway). I'll make > sure that you can filter something out by repository. Why is that ? I do want them as much as someone else would want the benchmarks ones I suppose. The whole subversion repository (including the sandbox) is sending mails to python-checkins, so I think we need to have the same policy here if possible, and use for example a subject prefix to make the filtering easier (which should be simpler with multiple mercurial repositories as a matter of fact) Tarek -- Tarek Ziad? | http://ziade.org From dirkjan at ochtman.nl Tue Feb 23 07:17:59 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 23 Feb 2010 01:17:59 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> Message-ID: 2010/2/23 Tarek Ziad? : > Why is that ? I do want them as much as someone else would want the > benchmarks ones I suppose. > > The whole subversion repository (including the sandbox) is sending > mails to python-checkins, so I think we need to have the same policy > here if possible, and use for example a subject prefix to make the > filtering easier (which should be simpler with multiple mercurial > repositories as a matter of fact) Some people expressed doubts, but maybe that was mainly about unittest2, not distutils2. I don't actually care either way. 
Cheers, Dirkjan From ziade.tarek at gmail.com Tue Feb 23 07:28:35 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 01:28:35 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> Message-ID: <94bdd2611002222228k2bd29bc3x3f21b4c25ce6a854@mail.gmail.com> 2010/2/23 Dirkjan Ochtman : > 2010/2/23 Tarek Ziad? : >> Why is that ? I do want them as much as someone else would want the >> benchmarks ones I suppose. >> >> The whole subversion repository (including the sandbox) is sending >> mails to python-checkins, so I think we need to have the same policy >> here if possible, and use for example a subject prefix to make the >> filtering easier (which should be simpler with multiple mercurial >> repositories as a matter of fact) > > Some people expressed doubts, but maybe that was mainly about > unittest2, not distutils2. I don't actually care either way. Note sure what do you mean by doubts. I have no doubts I want to receive those emails to work on this code ;) Tarek From dirkjan at ochtman.nl Tue Feb 23 07:32:17 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 23 Feb 2010 01:32:17 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <94bdd2611002222228k2bd29bc3x3f21b4c25ce6a854@mail.gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> <94bdd2611002222228k2bd29bc3x3f21b4c25ce6a854@mail.gmail.com> Message-ID: 2010/2/23 Tarek Ziad? : > Note sure what do you mean by doubts. I have no doubts I want to > receive those emails to work on this code ;) Some of the other committers didn't think they wanted email on python-checkins from all the projects that will ever be hosted on hg.python.org, so there should be some selection. I'll let you fight with them over where the email from distutils2 goes; I'm just maintaining this. Cheers, Dirkjan From ziade.tarek at gmail.com Tue Feb 23 07:56:10 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 01:56:10 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> <94bdd2611002222228k2bd29bc3x3f21b4c25ce6a854@mail.gmail.com> Message-ID: <94bdd2611002222256x2bcadbeaib4f51f2426af7866@mail.gmail.com> 2010/2/23 Dirkjan Ochtman : > 2010/2/23 Tarek Ziad? : >> Note sure what do you mean by doubts. I have no doubts I want to >> receive those emails to work on this code ;) > > Some of the other committers didn't think they wanted email on > python-checkins from all the projects that will ever be hosted on > hg.python.org, so there should be some selection. I'll let you fight > with them over where the email from distutils2 goes; I'm just > maintaining this. Sorry I didn't mean to stress you on this, I just want to make sure I'll be able to get those mails at some point. The solution, I think, is to create one checking mailing list per hosted project at hg.python.org. 
And possibly reunite several projects under a single mailing list. Let me know if it fits your maintenance process. if so I'll ask for a mailing list for this particular repo. Thanks for your work, Tarek -- Tarek Ziad? | http://ziade.org From barry at python.org Tue Feb 23 14:32:43 2010 From: barry at python.org (Barry Warsaw) Date: Tue, 23 Feb 2010 08:32:43 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002211856n429dc752ne3de1b1a79456cf6@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> Message-ID: <20100223083243.50106bf5@freewill.wooz.org> On Feb 23, 2010, at 01:04 AM, Dirkjan Ochtman wrote: >On Tue, Feb 23, 2010 at 00:57, "Martin v. L?wis" wrote: >> Can I use that to filter out distutils2 commits? > >Well, I don't think we'll do commit emails for distutils2 or >unittest2, only for benchmarks (to python-checkins, anyway). I'll make >sure that you can filter something out by repository. I think we should have commit emails for all projects hosted on hg.python.org. Those emails should perhaps go to different mailing lists, but post-commit emails are a great way for community members to follow the developments of our various projects. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Tue Feb 23 14:34:14 2010 From: barry at python.org (Barry Warsaw) Date: Tue, 23 Feb 2010 08:34:14 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <94bdd2611002222256x2bcadbeaib4f51f2426af7866@mail.gmail.com> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> <94bdd2611002222216s2721cea7ve2c114b5d5f32331@mail.gmail.com> <94bdd2611002222228k2bd29bc3x3f21b4c25ce6a854@mail.gmail.com> <94bdd2611002222256x2bcadbeaib4f51f2426af7866@mail.gmail.com> Message-ID: <20100223083414.67cded14@freewill.wooz.org> On Feb 23, 2010, at 01:56 AM, Tarek Ziad? wrote: >Let me know if it fits your maintenance process. if so I'll ask for a >mailing list for this particular repo. Ask and ye shall receive :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From fwierzbicki at gmail.com Tue Feb 23 15:27:46 2010 From: fwierzbicki at gmail.com (Frank Wierzbicki) Date: Tue, 23 Feb 2010 09:27:46 -0500 Subject: [Python-Dev] Mercurial move? In-Reply-To: References: <4dab5f761002221445y24e41ab2xd514b0a66e55b892@mail.gmail.com> <4dab5f761002221457u78be97b6ve29ddc2c6831898f@mail.gmail.com> Message-ID: <4dab5f761002230627v550d842fx25aa64b84e58b093@mail.gmail.com> On Mon, Feb 22, 2010 at 6:12 PM, Guido van Rossum wrote: > In that case congrats on beating us to the punch! > > Let us know how it goes. Will, do, thanks! 
-Frank From dirkjan at ochtman.nl Tue Feb 23 15:34:09 2010 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 23 Feb 2010 09:34:09 -0500 Subject: [Python-Dev] Another mercurial repo In-Reply-To: <20100223083243.50106bf5@freewill.wooz.org> References: <94bdd2611002211844y5cfbd80dq19e761ffbe6e939b@mail.gmail.com> <94bdd2611002212015i2cf6adf5rea25e9413d9fec1d@mail.gmail.com> <1FD7C648-C50C-4CA2-9390-847B4253E707@gmail.com> <4B82F1DD.8030509@v.loewis.de> <4B836E33.1010006@v.loewis.de> <20100223083243.50106bf5@freewill.wooz.org> Message-ID: On Tue, Feb 23, 2010 at 08:32, Barry Warsaw wrote: > I think we should have commit emails for all projects hosted on > hg.python.org. ?Those emails should perhaps go to different mailing lists, but > post-commit emails are a great way for community members to follow the > developments of our various projects. I definitely agree that there should be post-push mails for every repo, it's just a matter of what list to send them too, since python-checkins may not be the right one for each repo hosted there. Cheers, Dirkjan From nas at arctrix.com Tue Feb 23 18:17:47 2010 From: nas at arctrix.com (Neil Schemenauer) Date: Tue, 23 Feb 2010 11:17:47 -0600 Subject: [Python-Dev] Using git to checkout Python Message-ID: <20100223171747.GA4608@arctrix.com> For those interested, I updated my instructions for using the git repositories on svn.python.org. They are now on the Python wiki: http://wiki.python.org/moin/Git Regards, Neil From eric at trueblade.com Tue Feb 23 19:36:30 2010 From: eric at trueblade.com (Eric Smith) Date: Tue, 23 Feb 2010 13:36:30 -0500 Subject: [Python-Dev] 3.1 and 2.7 break format() when used with complex (sometimes) In-Reply-To: <4B82EAD1.6090107@trueblade.com> References: <4B82D7FB.3000605@trueblade.com> <4B82EAD1.6090107@trueblade.com> Message-ID: <4B84202E.6020206@trueblade.com> > The root cause of this problem is object.__format__, which is basically: > > def __format__(self, fmt): > return str(self).__format__(fmt) > > So here we're changing the type of the object (to str) but still keeping > the same format string. That doesn't make any sense: the format string > is type specific. I think the correct thing to do here is to make it an > error if fmt is non-empty. In 2.7 and 3.2 I can make this a > PendingDeprecationWarning, then in 3.3 a DeprecationWarning, and finally > make it an error in 3.4. I have this implemented in issue 7994 with a PendingDeprecationWarning. Unless someone objects I'm going to document it and apply the patch. Eric. From fijall at gmail.com Tue Feb 23 19:50:30 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 23 Feb 2010 13:50:30 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython Message-ID: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> Hello. I would like to have a feature on platform module (or sys or somewhere) that can tell distutils or distutils2 that this platform (be it PyPy or Jython) is not able to compile any C module. The purpose of this is to make distutils bail out in more reasonable manner than a compilation error in case this module is not going to work on anything but CPython. What do you think? 
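To make that concrete, here is a rough sketch of the behaviour I mean. The flag name below is purely a placeholder -- nothing like it exists today, which is exactly what I am asking for:

-----snip snip-----
# Hypothetical sketch: skip C extensions up front when the interpreter
# says it cannot build them, instead of dying halfway through compilation.
# "supports_c_extensions" is an invented attribute, not an existing API.
import sys
import platform
import warnings

def filter_buildable_extensions(extensions):
    if getattr(sys, "supports_c_extensions", True):
        return extensions
    warnings.warn("%s cannot build C extensions; skipping %d extension(s)"
                  % (platform.python_implementation(), len(extensions)))
    return []
-----snip snip-----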
Cheers, fijal From daniel at stutzbachenterprises.com Tue Feb 23 20:03:29 2010 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Tue, 23 Feb 2010 13:03:29 -0600 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> Message-ID: On Tue, Feb 23, 2010 at 12:50 PM, Maciej Fijalkowski wrote: > I would like to have a feature on platform module (or sys or > somewhere) that can tell distutils or distutils2 that this platform > (be it PyPy or Jython) is not able to compile any C module. The > purpose of this is to make distutils bail out in more reasonable > manner than a compilation error in case this module is not going to > work on anything but CPython. > Also, it would be nice if the package could tell distutils that the compilation is option, in order to support modules where the C version simply replaces functions for speed. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From fwierzbicki at gmail.com Tue Feb 23 20:05:52 2010 From: fwierzbicki at gmail.com (Frank Wierzbicki) Date: Tue, 23 Feb 2010 14:05:52 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> Message-ID: <4dab5f761002231105s2a5506dbobf19d1b8c0541a08@mail.gmail.com> On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: > Hello. > > I would like to have a feature on platform module (or sys or > somewhere) that can tell distutils or distutils2 that this platform > (be it PyPy or Jython) is not able to compile any C module. The > purpose of this is to make distutils bail out in more reasonable > manner than a compilation error in case this module is not going to > work on anything but CPython. FWIW this would be helpful for Jython too. -Frank From ziade.tarek at gmail.com Tue Feb 23 20:10:20 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 14:10:20 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> Message-ID: <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: > Hello. > > I would like to have a feature on platform module (or sys or > somewhere) that can tell distutils or distutils2 that this platform > (be it PyPy or Jython) is not able to compile any C module. The > purpose of this is to make distutils bail out in more reasonable > manner than a compilation error in case this module is not going to > work on anything but CPython. > > What do you think? 
+1 I think we could have a global variable in sys, called "dont_compile", distutils would look at before it tris to compile stuff, exactly like how it does for pyc file (sys.dont_write_bytecode) Regards Tarek From ziade.tarek at gmail.com Tue Feb 23 20:17:17 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 14:17:17 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> Message-ID: <94bdd2611002231117k80ea66dhd276a86b1c742eff@mail.gmail.com> On Tue, Feb 23, 2010 at 2:03 PM, Daniel Stutzbach wrote: > On Tue, Feb 23, 2010 at 12:50 PM, Maciej Fijalkowski > wrote: >> >> I would like to have a feature on platform module (or sys or >> somewhere) that can tell distutils or distutils2 that this platform >> (be it PyPy or Jython) is not able to compile any C module. The >> purpose of this is to make distutils bail out in more reasonable >> manner than a compilation error in case this module is not going to >> work on anything but CPython. > > Also, it would be nice if the package could tell distutils that the > compilation is option, in order to support modules where the C version > simply replaces functions for speed. There's an option called "optional" in the Extension class right now that will silent any compilation failures (like gcc not being there), so your project still installs. That's basically what people now can use when they have a pure Python fallback version they want to provide in case the C version cannot be built for any reason. Tarek -- Tarek Ziad? | http://ziade.org From ziade.tarek at gmail.com Tue Feb 23 20:27:53 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 14:27:53 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> Message-ID: <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> On Tue, Feb 23, 2010 at 2:10 PM, Tarek Ziad? wrote: > On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: >> Hello. >> >> I would like to have a feature on platform module (or sys or >> somewhere) that can tell distutils or distutils2 that this platform >> (be it PyPy or Jython) is not able to compile any C module. The >> purpose of this is to make distutils bail out in more reasonable >> manner than a compilation error in case this module is not going to >> work on anything but CPython. >> >> What do you think? > > +1 > > I think we could have a global variable in sys, called "dont_compile", > distutils would look at > before it tris to compile stuff, exactly like how it does for pyc file > (sys.dont_write_bytecode) Or... wait : we already know if we are using CPython, or Jython reading sys.platform. So I could simply not trigger the compilation in case sys.platform is one of the CPythons and keep in distutils side a list of the platform names, Extension is incompatible with. That makes me wonder : why don't we have a sys.implementation variable ? 
(cython/jython/pypi), since we can have several values for cython in sys.platform Tarek From fijall at gmail.com Tue Feb 23 20:44:21 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 23 Feb 2010 14:44:21 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> Message-ID: <693bc9ab1002231144o7946771asabdafc595c5c20d3@mail.gmail.com> On Tue, Feb 23, 2010 at 2:27 PM, Tarek Ziad? wrote: > On Tue, Feb 23, 2010 at 2:10 PM, Tarek Ziad? wrote: >> On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: >>> Hello. >>> >>> I would like to have a feature on platform module (or sys or >>> somewhere) that can tell distutils or distutils2 that this platform >>> (be it PyPy or Jython) is not able to compile any C module. The >>> purpose of this is to make distutils bail out in more reasonable >>> manner than a compilation error in case this module is not going to >>> work on anything but CPython. >>> >>> What do you think? >> >> +1 >> >> I think we could have a global variable in sys, called "dont_compile", >> distutils would look at >> before it tris to compile stuff, exactly like how it does for pyc file >> (sys.dont_write_bytecode) > > Or... wait : we already know if we are using CPython, or Jython > reading sys.platform. > > So I could simply not trigger the compilation in case sys.platform is > one of the CPythons > and keep in distutils side a list of the platform names, Extension is > incompatible with. > > That makes me wonder : why don't we have a sys.implementation variable ? > (cython/jython/pypi), since we can have several values for cython in > sys.platform > > > Tarek > That's pypy. pypi is something else. sys.platform is not any good, since for example PyPy, and possibly any other python implementation that is not CPython, but it's not tied to any particular platform (like parrot) would say "linux2" or "win32". sys.implementation sounds good, but it'll also require a list in stdlib what's fine and what's not fine and a flag sounds like something that everyone can set, not asking to be listed in stdlib. How about sys.implementation.supports_extensions? Cheers, fijal From ziade.tarek at gmail.com Tue Feb 23 21:10:01 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Tue, 23 Feb 2010 15:10:01 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <693bc9ab1002231144o7946771asabdafc595c5c20d3@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> <693bc9ab1002231144o7946771asabdafc595c5c20d3@mail.gmail.com> Message-ID: <94bdd2611002231210l774a257fybbb2a5f8465197f2@mail.gmail.com> On Tue, Feb 23, 2010 at 2:44 PM, Maciej Fijalkowski wrote: > On Tue, Feb 23, 2010 at 2:27 PM, Tarek Ziad? wrote: >> On Tue, Feb 23, 2010 at 2:10 PM, Tarek Ziad? wrote: >>> On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: >>>> Hello. >>>> >>>> I would like to have a feature on platform module (or sys or >>>> somewhere) that can tell distutils or distutils2 that this platform >>>> (be it PyPy or Jython) is not able to compile any C module. 
The >>>> purpose of this is to make distutils bail out in more reasonable >>>> manner than a compilation error in case this module is not going to >>>> work on anything but CPython. >>>> >>>> What do you think? >>> >>> +1 >>> >>> I think we could have a global variable in sys, called "dont_compile", >>> distutils would look at >>> before it tris to compile stuff, exactly like how it does for pyc file >>> (sys.dont_write_bytecode) >> >> Or... wait : we already know if we are using CPython, or Jython >> reading sys.platform. >> >> So I could simply not trigger the compilation in case sys.platform is >> one of the CPythons >> and keep in distutils side a list of the platform names, Extension is >> incompatible with. >> >> That makes me wonder : why don't we have a sys.implementation variable ? >> (cython/jython/pypi), since we can have several values for cython in >> sys.platform >> >> >> Tarek >> > > That's pypy. pypi is something else. sys.platform is not any good, > since for example PyPy, and possibly any other python implementation > that is not CPython, but it's not tied to any particular platform > (like parrot) would say "linux2" or "win32". > > sys.implementation sounds good, but it'll also require a list in > stdlib what's fine and what's not fine and a flag sounds like > something that everyone can set, not asking to be listed in stdlib. > > How about sys.implementation.supports_extensions? I think its the other way around: You are making the assumption that, sys knows about distutils extensions. and there's no indication about the kind of extension here (C, etc..). In distutils, the Extension class works with a compiler class (ccompiler, mingwcompiler, etc) And in theory, people could add a new compiler that works under PyPy or Jython. So I think it's up to the Compiler class (through the Extension instance) to decide if the environment is suitable. For instance in the CCompiler class I can write things like: if sys.implementation != 'cython': warning.warn('Sorry I cannot compile this under %s' % sys.implementation) Tarek -- Tarek Ziad? | http://ziade.org From Craig.Connor at ngc.com Tue Feb 23 21:41:43 2010 From: Craig.Connor at ngc.com (Connor, Craig A.) Date: Tue, 23 Feb 2010 14:41:43 -0600 Subject: [Python-Dev] Question for you Message-ID: <36FA01F429A6354CB4C8C185B307F35503F4F080@XMBTX133.northgrum.com> Hello, Dave; My name is Craig Connor and I am a senior s/w developer at Northrop Grumman. I have a question for you. I have installed Boost (via the Installer), and stored it into my C Drive inside a dir called: C:\boost_1_42 I also installed the Boost Jam into a directory called: C:\boost-jam-3.1.17 I am using 2 separate compilers in my Win OS XP (SP3) and I would like to be able to use the Python module of Boost in order to embed Python.h into my C++ compiler. The C++ compilers that I have are: o Dev-cpp, and o Visual C++.net (of MS Visual Studio.Net 2008). Problem: When I compile a simple program, I keep getting the error: "pyconfig.h: No such file or directory". The program I am trying to start with is (below): #include #include #include using namespace std; int main( ) { cout << "Hello, Boost World!!" 
<< endl; boost::any a(5); a = 7.67; std::cout<(a)< From fuzzyman at voidspace.org.uk Tue Feb 23 22:57:02 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 23 Feb 2010 16:57:02 -0500 Subject: [Python-Dev] Question for you In-Reply-To: <36FA01F429A6354CB4C8C185B307F35503F4F080@XMBTX133.northgrum.com> References: <36FA01F429A6354CB4C8C185B307F35503F4F080@XMBTX133.northgrum.com> Message-ID: <4B844F2E.9010300@voidspace.org.uk> Hello Connor, I think you have the wrong email address - this is Python-dev, an email list for the development *of* Python. All the best, Michael Foord On 23/02/2010 15:41, Connor, Craig A. wrote: > > Hello, Dave; > > My name is Craig Connor and I am a senior s/w developer at Northrop > Grumman. > > I have a question for you. I have installed* Boost* (via the > Installer), and stored it into my > > C Drive inside a dir called: > > * C:\boost_1_42* > > I also installed the* Boost Jam* into a directory called: > > * C:\boost-jam-3.1.17* > > I am using 2 separate compilers in my* Win OS XP (SP3)* > > and I would like to be able to use the Python module of Boost > > in order to embed Python.h into my C++ compiler. > > The C++ compilers that I have are: > > o* Dev-cpp*, and > > o* Visual C++.net* (of* MS Visual Studio.Net 2008*). > > Problem: > > When I compile a simple program, I keep getting the error: > "*pyconfig.h: No such file or directory*". > > The program I am trying to start with is (below): > > *#include * > > *#include* > > *#include* > > *using namespace std;* > > *int main( )* > > *{* > > * cout << "Hello, Boost World!!" << endl;* > > * boost::any a(5);* > > * a = 7.67;* > > * std::cout<(a)< > * * > > * system( "PAUSE" );* > > * return 0;* > > *}* > > > Also: > > I did set up my environmental config to go to the Boost dir. > > Question: > > Do you know what am I doing wrong? > > Regards, > > Craig Connor > > 720.622.2209 > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Wed Feb 24 02:28:05 2010 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 23 Feb 2010 19:28:05 -0600 Subject: [Python-Dev] PEP 385 progress report In-Reply-To: References: <4B76DDF7.7040306@v.loewis.de> <1afaf6161002131123x13e88611x194de9f89f042017@mail.gmail.com> <43aa6ff71002221309y2712fa8ctfed16235ecdac4e@mail.gmail.com> <4B82F6DC.9080204@v.loewis.de> <43aa6ff71002221331l2ca42725r6f6b6701be3c8a3b@mail.gmail.com> <4B836DBA.1040700@v.loewis.de> <4B83706E.1020700@v.loewis.de> Message-ID: <1afaf6161002231728k32aaeef4sa1726ccb8ad1c941@mail.gmail.com> 2010/2/23 Dirkjan Ochtman : > On Tue, Feb 23, 2010 at 01:06, "Martin v. L?wis" wrote: >> I thought we decided not to have a 2to3 repository at all, but let this >> live in the Python trunk exclusively. > > That would be fine with me, I just remembered that Benjamin would like > to start using hg sooner and having it as a separate repo was okay. I would :), but Martin is correct in that we agreed to start developing it as part of the trunk when the hg transition takes place. 
-- Regards, Benjamin From ncoghlan at gmail.com Wed Feb 24 12:35:21 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 24 Feb 2010 21:35:21 +1000 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> Message-ID: <4B850EF9.3060202@gmail.com> Tarek Ziad? wrote: > That makes me wonder : why don't we have a sys.implementation variable ? > (cython/jython/pypi), since we can have several values for cython in > sys.platform You might want to try and catch up with Christian Heimes. He was going to write a PEP to add one, but must have been caught up with other things. See the thread starting at: http://mail.python.org/pipermail/python-dev/2009-October/092893.html Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Wed Feb 24 12:37:03 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 24 Feb 2010 21:37:03 +1000 Subject: [Python-Dev] Question for you In-Reply-To: <4B844F2E.9010300@voidspace.org.uk> References: <36FA01F429A6354CB4C8C185B307F35503F4F080@XMBTX133.northgrum.com> <4B844F2E.9010300@voidspace.org.uk> Message-ID: <4B850F5F.8040705@gmail.com> Michael Foord wrote: > Hello Connor, > > I think you have the wrong email address - this is Python-dev, an email > list for the development *of* Python. One of the boost specific lists or the general python-list (aka comp.lang.python) would be the place to go for help of this nature. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From florent.xicluna at gmail.com Wed Feb 24 13:13:10 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Wed, 24 Feb 2010 12:13:10 +0000 (UTC) Subject: [Python-Dev] contributor to committer Message-ID: Hello, I am a semi-regular contributor for Python: I have contributed many patches since end of last year, some of them were reviewed by Antoine. Lately, he suggested that I should apply for commit rights. Some of the accepted patches: - Fix refleaks in py3k branch (#5596) - Extend stringlib fastsearch for various methods of bytes, bytearray and unicode objects (#7462 and #7622) - Various documentation patches, including FAQ Recently, I worked with Ezio to fix some tests and to silence py3k warnings in the test suite (#7092). And I am in touch with Fredrik to release ElementTree 1.3 and port it to Python 2.7 (#6472). As a final note, I would like to highlight a small script in the same spirit as Pyflakes: pep8.py I've contributed few patches for the version 0.5, released a week ago: http://pypi.python.org/pypi/pep8/ -- Florent Xicluna From eric at trueblade.com Wed Feb 24 14:13:16 2010 From: eric at trueblade.com (Eric Smith) Date: Wed, 24 Feb 2010 08:13:16 -0500 Subject: [Python-Dev] Add alternate float formatting styles to new-style formatting: allowed under moratorium? Message-ID: <4B8525EC.6040806@trueblade.com> http://bugs.python.org/issue7094 proposes adding "alternate" formatting [1] to floating point new-style formatting (float.__format__ and probably Decimal.__format__). I'd like to add this to make automated translation from %-formatting strings to str.format strings easier. 
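A tiny example of the behaviour in question; the %-formatting results are what CPython already produces, while the format() call is the part the patch would add (today it is rejected for floats):

-----snip snip-----
# '#' is the C99 "alternate form" flag: for floats it forces the decimal
# point to appear (and, with 'g', keeps trailing zeros).
'%.0f' % 3.0     # -> '3'
'%#.0f' % 3.0    # -> '3.'  (alternate form keeps the point)

# Proposed: honour the same flag in the new-style mini-language, so that
# '%#.0f' % x can be translated mechanically to format(x, '#.0f').
format(3.0, '#.0f')   # would give '3.' as well; currently a ValueError
-----snip snip-----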
Would this be allowed under the moratorium? I think it falls under the Case-by-Case Exemptions section, but let me know what you think. Eric. [1] Alternate formatting for floats modifies how trailing zeros and decimal points are treated. See http://docs.python.org/dev/library/stdtypes.html#string-formatting-operations From ncoghlan at gmail.com Wed Feb 24 14:42:05 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 24 Feb 2010 23:42:05 +1000 Subject: [Python-Dev] Add alternate float formatting styles to new-style formatting: allowed under moratorium? In-Reply-To: <4B8525EC.6040806@trueblade.com> References: <4B8525EC.6040806@trueblade.com> Message-ID: <4B852CAD.9080204@gmail.com> Eric Smith wrote: > http://bugs.python.org/issue7094 proposes adding "alternate" formatting > [1] to floating point new-style formatting (float.__format__ and > probably Decimal.__format__). I'd like to add this to make automated > translation from %-formatting strings to str.format strings easier. > > Would this be allowed under the moratorium? I think it falls under the > Case-by-Case Exemptions section, but let me know what you think. +1 to add it. We want the new-style formatting to be able to replace as many old style uses as possible (preferably all of them) and this is a well-defined formatting operation from C99. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From rdmurray at bitdance.com Wed Feb 24 16:07:08 2010 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 24 Feb 2010 10:07:08 -0500 Subject: [Python-Dev] contributor to committer In-Reply-To: References: Message-ID: <20100224150709.0F3A31FD385@kimball.webabinitio.net> On Wed, 24 Feb 2010 12:13:10 +0000, Florent Xicluna wrote: > I am a semi-regular contributor for Python: I have contributed many patches > since end of last year, some of them were reviewed by Antoine. > Lately, he suggested that I should apply for commit rights. +1 --David From skip at pobox.com Wed Feb 24 16:37:02 2010 From: skip at pobox.com (skip at pobox.com) Date: Wed, 24 Feb 2010 09:37:02 -0600 Subject: [Python-Dev] Another version of Python Message-ID: <19333.18334.467261.97439@montanaro.dyndns.org> Some of you have probably already seen this, but in case you haven't: http://www.staringispolite.com/likepython/ :-) Skip From listsin at integrateddevcorp.com Wed Feb 24 17:09:05 2010 From: listsin at integrateddevcorp.com (Steve Steiner (listsin)) Date: Wed, 24 Feb 2010 11:09:05 -0500 Subject: [Python-Dev] Another version of Python In-Reply-To: <19333.18334.467261.97439@montanaro.dyndns.org> References: <19333.18334.467261.97439@montanaro.dyndns.org> Message-ID: On Feb 24, 2010, at 10:37 AM, skip at pobox.com wrote: > Some of you have probably already seen this, but in case you haven't: > > http://www.staringispolite.com/likepython/ I wish I actually had, like, that much time on my hands bro. S From dfugate at microsoft.com Wed Feb 24 18:12:27 2010 From: dfugate at microsoft.com (Dave Fugate) Date: Wed, 24 Feb 2010 17:12:27 +0000 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: References: Message-ID: <7CEEC335D70FFE4B957737DDE836F51B34891B4E@TK5EX14MBXC123.redmond.corp.microsoft.com> Would there be any interest in accepting IronPython's in-house benchmarks into this repository as well? 
Internally we run the usual suspects (PyStone, PyBench, etc), but we also have a plethora of custom benchmarks we've written that also happen to run under CPython. My best, Dave ------------------------------ Message: 2 Date: Mon, 22 Feb 2010 15:17:09 -0500 From: Collin Winter To: Daniel Stutzbach Cc: Dirkjan Ochtman , Python Dev Subject: Re: [Python-Dev] Mercurial repository for Python benchmarks Message-ID: <3c8293b61002221217v4b7f3b91y76b78763d69fb99d at mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 On Sun, Feb 21, 2010 at 9:43 PM, Collin Winter wrote: > Hey Daniel, > > On Sun, Feb 21, 2010 at 4:51 PM, Daniel Stutzbach > wrote: >> On Sun, Feb 21, 2010 at 2:28 PM, Collin Winter >> wrote: >>> >>> Would it be possible for us to get a Mercurial repository on >>> python.org for the Unladen Swallow benchmarks? Maciej and I would like >>> to move the benchmark suite out of Unladen Swallow and into >>> python.org, where all implementations can share it and contribute to >>> it. PyPy has been adding some benchmarks to their copy of the Unladen >>> benchmarks, and we'd like to have as well, and Mercurial seems to be >>> an ideal solution to this. >> >> If and when you have a benchmark repository set up, could you announce it >> via a reply to this thread?? I'd like to check it out. > > Will do. The benchmarks repository is now available at http://hg.python.org/benchmarks/. It contains all the benchmarks that the Unladen Swallow svn repository contains, including the beginnings of a README.txt that describes the available benchmarks and a quick-start guide for running perf.py (the main interface to the benchmarks). This will eventually contain all the information from http://code.google.com/p/unladen-swallow/wiki/Benchmarks, as well as guidelines on how to write good benchmarks. If you have svn commit access, you should be able to run `hg clone ssh://hg at hg.python.org/repos/benchmarks`. I'm not sure how to get read-only access; Dirkjan can comment on that. Still todo: - Replace the static snapshots of 2to3, Mercurial and other hg-based projects with clones of the respective repositories. - Fix the 2to3 and nbody benchmarks to work with Python 2.5 for Jython and PyPy. - Import some of the benchmarks PyPy has been using. Any access problems with the hg repo should be directed to Dirkjan. Thanks so much for getting the repo set up so fast! Thanks, Collin Winter From fijall at gmail.com Wed Feb 24 18:21:39 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 24 Feb 2010 12:21:39 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <4B850EF9.3060202@gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> <4B850EF9.3060202@gmail.com> Message-ID: <693bc9ab1002240921ja438b88qe6fa6357f1dfc4cc@mail.gmail.com> On Wed, Feb 24, 2010 at 6:35 AM, Nick Coghlan wrote: > Tarek Ziad? wrote: >> That makes me wonder : why don't we have a sys.implementation variable ? >> (cython/jython/pypi), since we can have several values for cython in >> sys.platform > Hello. So I propose to have a sys.implementation which would have string representation like "CPython" or "Jython" and have a couple of extra attributes on that. I can think about a lot of attributes, however, couple comes to mind as obvious. 
* supports_c_api - whether it can load and use CPython C modules * gc_strategy - probably equals "refcounting" or not, useful for some people * frame_introspection - This is mostly True for everybody except IronPython which has it as an optional command line argument. Might be useful to have it for some projects, unsure. What do you think? I'm willing to implement this for CPython and PyPy (this should be dead-simple anyway). Cheers, fijal From fijall at gmail.com Wed Feb 24 19:51:17 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 24 Feb 2010 13:51:17 -0500 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <7CEEC335D70FFE4B957737DDE836F51B34891B4E@TK5EX14MBXC123.redmond.corp.microsoft.com> References: <7CEEC335D70FFE4B957737DDE836F51B34891B4E@TK5EX14MBXC123.redmond.corp.microsoft.com> Message-ID: <693bc9ab1002241051k4e005876v59bb8f7d4e930d73@mail.gmail.com> On Wed, Feb 24, 2010 at 12:12 PM, Dave Fugate wrote: > Would there be any interest in accepting IronPython's in-house benchmarks into this repository as well? ?Internally we run the usual suspects (PyStone, PyBench, etc), but we also have a plethora of custom benchmarks we've written that also happen to run under CPython. > > My best, > > Dave > >From my perspective the more we have there the better. We might not run all of them on nightly run for example (we as in PyPy). Are you up to adhering to perf.py expectation scheme? (The benchmark being runnable by perf.py) Cheers, fijal From alexandre at peadrop.com Wed Feb 24 20:03:36 2010 From: alexandre at peadrop.com (Alexandre Vassalotti) Date: Wed, 24 Feb 2010 14:03:36 -0500 Subject: [Python-Dev] contributor to committer In-Reply-To: References: Message-ID: On Wed, Feb 24, 2010 at 7:13 AM, Florent Xicluna wrote: > Hello, > > I am a semi-regular contributor for Python: I have contributed many patches > since end of last year, some of them were reviewed by Antoine. > Lately, he suggested that I should apply for commit rights. > +1 -- Alexandre From glyph at twistedmatrix.com Wed Feb 24 17:17:30 2010 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 24 Feb 2010 11:17:30 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> Message-ID: <25C1F0C8-CCD0-473A-B994-A91631C22CCB@twistedmatrix.com> On Feb 23, 2010, at 2:10 PM, Tarek Ziad? wrote: > On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: >> Hello. >> >> I would like to have a feature on platform module (or sys or >> somewhere) that can tell distutils or distutils2 that this platform >> (be it PyPy or Jython) is not able to compile any C module. The >> purpose of this is to make distutils bail out in more reasonable >> manner than a compilation error in case this module is not going to >> work on anything but CPython. >> >> What do you think? > > +1 > > I think we could have a global variable in sys, called "dont_compile", > distutils would look at > before it tris to compile stuff, exactly like how it does for pyc file > (sys.dont_write_bytecode) Every time somebody says "let's have a global variable", God kills a kitten. If it's in sys, He bludgeons the kitten to death *with another kitten*. 
sys.dont_write_bytecode really ought to have been an API of an importer object somewhere; hopefully, when Brett has the time to finish the refactoring which he alluded to at the language summit, it will be. Similarly, functionally speaking this API is a good idea, but running the C compiler is distutils' job. Therefore any API which describes this functionality should be close to distutils itself. From glyph at twistedmatrix.com Wed Feb 24 17:25:59 2010 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Wed, 24 Feb 2010 11:25:59 -0500 Subject: [Python-Dev] 'languishing' status for the tracker In-Reply-To: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> References: <20100222051738.3585F1FCDB4@kimball.webabinitio.net> Message-ID: On Feb 22, 2010, at 12:17 AM, R. David Murray wrote: > To expand on this: the desire for this arises from the observation > that we have a lot of bugs in the tracker that we don't want to close, > because they are real bugs or non-crazy enhancement requests, but for > one reason or another (lack of an interested party, lack of a good, > non-controversial solution, lack of a test platform on which to test the > bug fix, the fix is hard but the bug is not of a commensurate priority, > etc) the issue just isn't getting dealt with, and won't get dealt with > until the blocking factor changes. In my opinion, the problem is not so much that tickets are left open for a long time, as that there's no distinction between triaged and un-triaged tickets. I think it's perfectly fine for tickets to languish as "open", in no special state, as long as it's easy to find out whether someone has gotten back to the original patch-submitter or bug-reporter to clarify the status at least once. Of course, then the submitter needs to be able to put it back into the un-triaged state by making a counterproposal, or attaching a new patch. To the extent that people are frustrated with the Python development process, it's generally not that their bugs don't get fixed (they understand that they're depending on volunteer labor); it's that they went to the trouble to diagnose the bug, specify the feature, and possibly even develop a complete fix or implementation, only to never hear *anything* about what the likelihood is that it will be incorporated. In the Twisted tracker, whenever we provide feedback or do a code review that includes critical feedback that needs to be dealt with before it's merged, we re-assign the ticket to its original submitter. I feel that this is pretty clear: it means "the ticket is open, it's valid, but it's also not my problem; if you want it fixed, fix it yourself". -------------- next part -------------- An HTML attachment was scrubbed... URL: From orsenthil at gmail.com Wed Feb 24 20:58:21 2010 From: orsenthil at gmail.com (Senthil Kumaran) Date: Thu, 25 Feb 2010 01:28:21 +0530 Subject: [Python-Dev] contributor to committer In-Reply-To: References: Message-ID: <20100224195821.GC4698@ubuntu.ubuntu-domain> > On Wed, Feb 24, 2010 at 7:13 AM, Florent Xicluna > > Hello, > > > > I am a semi-regular contributor for Python: I have contributed many patches > > since end of last year, some of them were reviewed by Antoine. > > Lately, he suggested that I should apply for commit rights. > > Another +1. :) -- Senthil Every living thing wants to survive. 
-- Spock, "The Ultimate Computer", stardate 4731.3 From fijall at gmail.com Wed Feb 24 21:15:22 2010 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 24 Feb 2010 15:15:22 -0500 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <25C1F0C8-CCD0-473A-B994-A91631C22CCB@twistedmatrix.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> <25C1F0C8-CCD0-473A-B994-A91631C22CCB@twistedmatrix.com> Message-ID: <693bc9ab1002241215k6aa81ef1p68723d78f8a22c10@mail.gmail.com> On Wed, Feb 24, 2010 at 11:17 AM, Glyph Lefkowitz wrote: > > On Feb 23, 2010, at 2:10 PM, Tarek Ziad? wrote: > >> On Tue, Feb 23, 2010 at 1:50 PM, Maciej Fijalkowski wrote: >>> Hello. >>> >>> I would like to have a feature on platform module (or sys or >>> somewhere) that can tell distutils or distutils2 that this platform >>> (be it PyPy or Jython) is not able to compile any C module. The >>> purpose of this is to make distutils bail out in more reasonable >>> manner than a compilation error in case this module is not going to >>> work on anything but CPython. >>> >>> What do you think? >> >> +1 >> >> I think we could have a global variable in sys, called "dont_compile", >> distutils would look at >> before it tris to compile stuff, exactly like how it does for pyc file >> (sys.dont_write_bytecode) > > Every time somebody says "let's have a global variable", God kills a kitten. I't not a global *variable*, it's a global *constant*, unlike dont_write_bytecode. > Similarly, functionally speaking this API is a good idea, but running the C compiler is distutils' job. ? Therefore any API which describes this functionality should be close to distutils itself. We're talking here about a way how interpreter can tell distutils what it can and what it can't do. Other option would be to modify distutils shipped with other python implementations, but that's a bit against goals of having a unified stdlib. Cheers, fijal From greg.ewing at canterbury.ac.nz Wed Feb 24 22:27:23 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 25 Feb 2010 10:27:23 +1300 Subject: [Python-Dev] Another version of Python In-Reply-To: References: <19333.18334.467261.97439@montanaro.dyndns.org> Message-ID: <4B8599BB.2030402@canterbury.ac.nz> Reminiscent of INTERCAL, where you had to say PLEASE at regular but not too frequent intervals, or the compiler would accuse you of being either too impolite or too smarmy. -- Greg From barry at python.org Wed Feb 24 22:19:22 2010 From: barry at python.org (Barry Warsaw) Date: Wed, 24 Feb 2010 16:19:22 -0500 Subject: [Python-Dev] Using git to checkout Python In-Reply-To: <20100223171747.GA4608@arctrix.com> References: <20100223171747.GA4608@arctrix.com> Message-ID: <20100224161922.645397e4@freewill.wooz.org> On Feb 23, 2010, at 11:17 AM, Neil Schemenauer wrote: >For those interested, I updated my instructions for using the git >repositories on svn.python.org. They are now on the Python wiki: > > http://wiki.python.org/moin/Git Along those lines, I've updated the wiki for instructions on using Bazaar via the Launchpad mirrors of the Subversion masters: http://wiki.python.org/moin/Bazaar -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From tjreedy at udel.edu Wed Feb 24 22:20:20 2010 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 24 Feb 2010 16:20:20 -0500 Subject: [Python-Dev] Add alternate float formatting styles to new-style formatting: allowed under moratorium? In-Reply-To: <4B8525EC.6040806@trueblade.com> References: <4B8525EC.6040806@trueblade.com> Message-ID: On 2/24/2010 8:13 AM, Eric Smith wrote: > http://bugs.python.org/issue7094 proposes adding "alternate" formatting > [1] to floating point new-style formatting (float.__format__ and > probably Decimal.__format__). I'd like to add this to make automated > translation from %-formatting strings to str.format strings easier. An excellent goal, for the reasons Nick stated. > > Would this be allowed under the moratorium? I see the formating mini-language as somewhat separate from Python syntax itself (as is the re minilanguage). In any case, it is new and somewhat experimental, rather than being the result of 20 years of testing. Terry Jan Reedy From dfugate at microsoft.com Thu Feb 25 01:34:34 2010 From: dfugate at microsoft.com (Dave Fugate) Date: Thu, 25 Feb 2010 00:34:34 +0000 Subject: [Python-Dev] Mercurial repository for Python benchmarks In-Reply-To: <693bc9ab1002241051k4e005876v59bb8f7d4e930d73@mail.gmail.com> References: <7CEEC335D70FFE4B957737DDE836F51B34891B4E@TK5EX14MBXC123.redmond.corp.microsoft.com> <693bc9ab1002241051k4e005876v59bb8f7d4e930d73@mail.gmail.com> Message-ID: <7CEEC335D70FFE4B957737DDE836F51B34894494@TK5EX14MBXC123.redmond.corp.microsoft.com> perf.py - I'll look into this. At this point we'll need to refactor them any ways as there are a few dependencies on internal Microsoft stuff the IronPython Team didn't create. Thanks, Dave -----Original Message----- From: Maciej Fijalkowski [mailto:fijall at gmail.com] Sent: Wednesday, February 24, 2010 10:51 AM To: Dave Fugate Cc: python-dev at python.org Subject: Re: [Python-Dev] Mercurial repository for Python benchmarks On Wed, Feb 24, 2010 at 12:12 PM, Dave Fugate wrote: > Would there be any interest in accepting IronPython's in-house benchmarks into this repository as well? ?Internally we run the usual suspects (PyStone, PyBench, etc), but we also have a plethora of custom benchmarks we've written that also happen to run under CPython. > > My best, > > Dave > From my perspective the more we have there the better. We might not run all of them on nightly run for example (we as in PyPy). Are you up to adhering to perf.py expectation scheme? (The benchmark being runnable by perf.py) Cheers, fijal From solipsis at pitrou.net Thu Feb 25 01:48:52 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 25 Feb 2010 00:48:52 +0000 (UTC) Subject: [Python-Dev] contributor to committer References: Message-ID: Le Wed, 24 Feb 2010 12:13:10 +0000, Florent Xicluna a ?crit?: > Hello, > > I am a semi-regular contributor for Python: I have contributed many > patches since end of last year, some of them were reviewed by Antoine. Semi-regular is quite humble. You have been cranking out patches at a higher frequency than almost any of us in the last 3 months. We are exhausted of reviewing and (most of the time) committing your patches :) (fortunately, your work happens to be of consistently good quality) Regards Antoine. 
From brian.curtin at gmail.com Thu Feb 25 02:15:07 2010 From: brian.curtin at gmail.com (Brian Curtin) Date: Wed, 24 Feb 2010 19:15:07 -0600 Subject: [Python-Dev] contributor to committer In-Reply-To: References: Message-ID: On Wed, Feb 24, 2010 at 18:48, Antoine Pitrou wrote: > Le Wed, 24 Feb 2010 12:13:10 +0000, Florent Xicluna a ?crit : > > Hello, > > > > I am a semi-regular contributor for Python: I have contributed many > > patches since end of last year, some of them were reviewed by Antoine. > > Semi-regular is quite humble. You have been cranking out patches at a > higher frequency than almost any of us in the last 3 months. We are > exhausted of reviewing and (most of the time) committing your patches :) > (fortunately, your work happens to be of consistently good quality) > > Regards > > Antoine. > > Sometimes it seems like half of the tracker updates are Florent's. Semi-regular is quite the understatement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Feb 25 07:37:54 2010 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 25 Feb 2010 06:37:54 +0000 (UTC) Subject: [Python-Dev] contributor to committer References: Message-ID: Florent Xicluna gmail.com> writes: > I am a semi-regular contributor for Python: I have contributed many patches > since end of last year, some of them were reviewed by Antoine. > Lately, he suggested that I should apply for commit rights. +1 Regards, Vinay Sajip From ncoghlan at gmail.com Thu Feb 25 13:21:46 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 25 Feb 2010 22:21:46 +1000 Subject: [Python-Dev] contributor to committer In-Reply-To: References: Message-ID: <4B866B5A.3080205@gmail.com> Antoine Pitrou wrote: > Le Wed, 24 Feb 2010 12:13:10 +0000, Florent Xicluna a ?crit : >> Hello, >> >> I am a semi-regular contributor for Python: I have contributed many >> patches since end of last year, some of them were reviewed by Antoine. > > Semi-regular is quite humble. You have been cranking out patches at a > higher frequency than almost any of us in the last 3 months. We are > exhausted of reviewing and (most of the time) committing your patches :) > (fortunately, your work happens to be of consistently good quality) Agreed, I'd been thinking this may be deserved based on the number of "patch by Florent Xicluna" commit messages I had seen go by on the checkins list. The usual caveats apply though: - don't get carried away with the privileges - even core devs still put patches on the tracker sometimes - if in doubt, ask for advice on python-dev (or IRC) - make sure to subscribe to python-checkins The last point covers the fact that most checkin messages will get an after-the-fact review from other developers and those comments usually go straight to the checkins list. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Feb 25 13:29:03 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 25 Feb 2010 22:29:03 +1000 Subject: [Python-Dev] Platform extension for distutils on other interpreters than CPython In-Reply-To: <693bc9ab1002240921ja438b88qe6fa6357f1dfc4cc@mail.gmail.com> References: <693bc9ab1002231050w296784b4ic02c6052fb84f129@mail.gmail.com> <94bdd2611002231110o3d6e22e5k4146d9e9ee7f8bd9@mail.gmail.com> <94bdd2611002231127md60e52dh57cfa86f4a6df862@mail.gmail.com> <4B850EF9.3060202@gmail.com> <693bc9ab1002240921ja438b88qe6fa6357f1dfc4cc@mail.gmail.com> Message-ID: <4B866D0F.2070905@gmail.com> Maciej Fijalkowski wrote: > On Wed, Feb 24, 2010 at 6:35 AM, Nick Coghlan wrote: >> Tarek Ziad? wrote: >>> That makes me wonder : why don't we have a sys.implementation variable ? >>> (cython/jython/pypi), since we can have several values for cython in >>> sys.platform > > Hello. > > So I propose to have a sys.implementation which would have string > representation like "CPython" or "Jython" and have a couple of extra > attributes on that. I can think about a lot of attributes, however, > couple comes to mind as obvious. > > * supports_c_api - whether it can load and use CPython C modules > * gc_strategy - probably equals "refcounting" or not, useful for some people > * frame_introspection - This is mostly True for everybody except > IronPython which has it as an optional command line argument. Might be > useful to have it for some projects, unsure. > > What do you think? I think anything along these lines needs to be a PEP so that the developers of the different implementations all get a chance to comment and then have a firm standard to code to afterwards :) Christian was going to write one in the PEP 370 context, so it's worth following up with him to see if he ever got around to drafting anything. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From doug.hellmann at gmail.com Thu Feb 25 18:31:13 2010 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Thu, 25 Feb 2010 12:31:13 -0500 Subject: [Python-Dev] status of issue #7242 Message-ID: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> We've recently run into an issue with subprocess on Solaris, as described (by an earlier reporter) in issue #7242. The patch there solves our problem, and has been verified to work by other users as well. What's the status of the ticket? Is there anything I can do to help move it forward to be included in a patch release for 2.6? Thanks, Doug From martin at v.loewis.de Thu Feb 25 20:34:02 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 25 Feb 2010 20:34:02 +0100 Subject: [Python-Dev] status of issue #7242 In-Reply-To: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> References: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> Message-ID: <4B86D0AA.60904@v.loewis.de> Doug Hellmann wrote: > We've recently run into an issue with subprocess on Solaris, as > described (by an earlier reporter) in issue #7242. The patch there > solves our problem, and has been verified to work by other users as > well. What's the status of the ticket? Is there anything I can do to > help move it forward to be included in a patch release for 2.6? 
The usual 5-for-one deal applies: review five issues, and I'll review that one (with more energy than at the moment). Regards, Martin From doug.hellmann at gmail.com Thu Feb 25 20:38:42 2010 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Thu, 25 Feb 2010 14:38:42 -0500 Subject: [Python-Dev] status of issue #7242 In-Reply-To: <4B86D0AA.60904@v.loewis.de> References: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> <4B86D0AA.60904@v.loewis.de> Message-ID: On Feb 25, 2010, at 2:34 PM, Martin v. L?wis wrote: > Doug Hellmann wrote: >> We've recently run into an issue with subprocess on Solaris, as >> described (by an earlier reporter) in issue #7242. The patch there >> solves our problem, and has been verified to work by other users as >> well. What's the status of the ticket? Is there anything I can do >> to >> help move it forward to be included in a patch release for 2.6? > > The usual 5-for-one deal applies: review five issues, and I'll review > that one (with more energy than at the moment). I have commit access, can I just check in the patch? Doug From solipsis at pitrou.net Thu Feb 25 20:47:37 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 25 Feb 2010 19:47:37 +0000 (UTC) Subject: [Python-Dev] status of issue #7242 References: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> <4B86D0AA.60904@v.loewis.de> Message-ID: Le Thu, 25 Feb 2010 14:38:42 -0500, Doug Hellmann a ?crit?: > > I have commit access, can I just check in the patch? If you are sure of yourself, you can. But in this case see my comment on the tracker. Regards Antoine. From martin at v.loewis.de Thu Feb 25 20:58:59 2010 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 25 Feb 2010 20:58:59 +0100 Subject: [Python-Dev] status of issue #7242 In-Reply-To: References: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> <4B86D0AA.60904@v.loewis.de> Message-ID: <4B86D683.2060005@v.loewis.de> Doug Hellmann wrote: > > On Feb 25, 2010, at 2:34 PM, Martin v. L?wis wrote: > >> Doug Hellmann wrote: >>> We've recently run into an issue with subprocess on Solaris, as >>> described (by an earlier reporter) in issue #7242. The patch there >>> solves our problem, and has been verified to work by other users as >>> well. What's the status of the ticket? Is there anything I can do to >>> help move it forward to be included in a patch release for 2.6? >> >> The usual 5-for-one deal applies: review five issues, and I'll review >> that one (with more energy than at the moment). > > I have commit access, can I just check in the patch? If you think it's good, and feel comfortable with it, etc.: sure. Regards, Martin From doug.hellmann at gmail.com Thu Feb 25 21:25:39 2010 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Thu, 25 Feb 2010 15:25:39 -0500 Subject: [Python-Dev] status of issue #7242 In-Reply-To: References: <4D667BBF-1318-4746-86F5-FF8BD3ABE475@gmail.com> <4B86D0AA.60904@v.loewis.de> Message-ID: On Feb 25, 2010, at 2:47 PM, Antoine Pitrou wrote: > Le Thu, 25 Feb 2010 14:38:42 -0500, Doug Hellmann a ?crit : >> >> I have commit access, can I just check in the patch? > > If you are sure of yourself, you can. But in this case see my > comment on > the tracker. OK, good point. I'll see about a test or two and post an update to the ticket. 
Thanks, Doug From barry at python.org Thu Feb 25 21:47:47 2010 From: barry at python.org (Barry Warsaw) Date: Thu, 25 Feb 2010 15:47:47 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <2987c46d1001301821n72606673x1c84ba7fc9b4712@mail.gmail.com> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> Message-ID: <20100225154747.420fca3a@freewill.wooz.org> On Feb 03, 2010, at 01:07 PM, Antoine Pitrou wrote: >Well, I don't think we need another transition... Just keep __file__ for the >source file, and add a __cache__ or __compiled__ attribute for the compiled >file(s). >Since there might be several compiled files for a single source file (for >example, a .pyc along with a JITted native .so), __cache__ should probably be >a tuple rather than a string. I'm going to call the attribute __cached__ and leave its contents implementation defined. For CPython it will be the path to the pyc file if it exists (or was written), or the path to where the pyc file /would/ exist if the source lives on a read-only file system or -B/$PYTHONDONTWRITEBYTECODE is set. For alternative implementations of Python that compose modules from multiple sources, __cached__ can be a tuple. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Thu Feb 25 22:08:32 2010 From: barry at python.org (Barry Warsaw) Date: Thu, 25 Feb 2010 16:08:32 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> Message-ID: <20100225160832.6e7a3063@freewill.wooz.org> On Feb 03, 2010, at 12:42 PM, Brett Cannon wrote: >So what happens when only bytecode is present? We discussed this at Pycon and agreed that we will not support source-less deployments by default. The source file must exist or it will be an ImportError. This does not mean source-less deployments are not possible though. To support this use case, you'd have to write a custom import hook. We may write one as part of the PEP 3147 implementation. Contributions are of course welcome! -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From greg.ewing at canterbury.ac.nz Fri Feb 26 00:56:26 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 26 Feb 2010 12:56:26 +1300 Subject: [Python-Dev] __file__ In-Reply-To: <20100225160832.6e7a3063@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> Message-ID: <4B870E2A.8090406@canterbury.ac.nz> Barry Warsaw wrote: > We discussed this at Pycon and agreed that we will not support source-less > deployments by default. 
The source file must exist or it will be an > ImportError. > > This does not mean source-less deployments are not possible though. To > support this use case, you'd have to write a custom import hook. What???? I don't like this idea at all. I object to being forced to jump through an obscure hoop to do something that's been totally straightforward until now. -- Greg From fuzzyman at voidspace.org.uk Fri Feb 26 00:50:45 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 25 Feb 2010 23:50:45 +0000 Subject: [Python-Dev] __file__ In-Reply-To: <4B870E2A.8090406@canterbury.ac.nz> References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> Message-ID: <4B870CD5.8020105@voidspace.org.uk> On 25/02/2010 23:56, Greg Ewing wrote: > Barry Warsaw wrote: > >> We discussed this at Pycon and agreed that we will not support >> source-less >> deployments by default. The source file must exist or it will be an >> ImportError. >> >> This does not mean source-less deployments are not possible though. To >> support this use case, you'd have to write a custom import hook. > > What???? > > I don't like this idea at all. I object to being forced to > jump through an obscure hoop to do something that's been > totally straightforward until now. > I thought we agreed at the language summit that if a .pyc was in the place of the source file it *could* be imported from - making pyc only distributions possible. As the pyc files are in the __pycache__ (or whatever) directory by default they *won't* be importable without the source files. A pyc only distribution can easily be created though with this scheme. Michael -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From guido at python.org Fri Feb 26 01:00:15 2010 From: guido at python.org (Guido van Rossum) Date: Thu, 25 Feb 2010 16:00:15 -0800 Subject: [Python-Dev] __file__ In-Reply-To: <4B870CD5.8020105@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> Message-ID: On Thu, Feb 25, 2010 at 3:50 PM, Michael Foord wrote: > On 25/02/2010 23:56, Greg Ewing wrote: >> >> Barry Warsaw wrote: >> >>> We discussed this at Pycon and agreed that we will not support >>> source-less >>> deployments by default. The source file must exist or it will be an >>> ImportError. >>> >>> This does not mean source-less deployments are not possible though. 
To >>> support this use case, you'd have to write a custom import hook. >> >> What???? >> >> I don't like this idea at all. I object to being forced to >> jump through an obscure hoop to do something that's been >> totally straightforward until now. >> > I thought we agreed at the language summit that if a .pyc was in the place > of the source file it *could* be imported from - making pyc only > distributions possible. As the pyc files are in the __pycache__ (or > whatever) directory by default they *won't* be importable without the source > files. A pyc only distribution can easily be created though with this > scheme. That's also my recollection. Basically, for .pyc-only modules, nothing changes. PS. I still prefer __compiled__ over __cached__ but I don't feel strong about it. > Michael > > -- > http://www.ironpythoninaction.com/ > http://www.voidspace.org.uk/blog > > READ CAREFULLY. By accepting and reading this email you agree, on behalf of > your employer, to release me from all obligations and waivers arising from > any and all NON-NEGOTIATED agreements, licenses, terms-of-service, > shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, > non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have > entered into with your employer, its partners, licensors, agents and > assigns, in perpetuity, without prejudice to my ongoing rights and > privileges. You further represent that you have the authority to release me > from any BOGUS AGREEMENTS on behalf of your employer. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From greg.ewing at canterbury.ac.nz Fri Feb 26 01:13:27 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 26 Feb 2010 13:13:27 +1300 Subject: [Python-Dev] __file__ In-Reply-To: <4B870CD5.8020105@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <87wryzro4a.fsf@benfinney.id.au> <4B64F397.2050600@mrabarnett.plus.com> <4B64FC82.7070400@gmail.com> <87sk9msysa.fsf@benfinney.id.au> <4B6520C5.4080103@gmail.com> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> Message-ID: <4B871227.2030707@canterbury.ac.nz> Michael Foord wrote: > I thought we agreed at the language summit that if a .pyc was in the > place of the source file it *could* be imported from - making pyc only > distributions possible. Ah, that's okay, then. Sorry about the panic! -- Greg From meadori at gmail.com Fri Feb 26 05:51:13 2010 From: meadori at gmail.com (Meador Inge) Date: Thu, 25 Feb 2010 22:51:13 -0600 Subject: [Python-Dev] PEP 3188: Implementation Questions Message-ID: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> Hi All, Recently some discussion began in the issue 3132 thread ( http://bugs.python.org/issue3132) regarding implementation of the new struct string syntax for PEP 3118. Mark Dickinson suggested that I bring the discussion on over to Python Dev. Below is a summary of the questions\comments from the thread. Unpacking a long-double =================== 1. Should this return a Decimal object or a ctypes 'long double'? 2. 
Using ctypes 'long double' is easier to implement, but precision is lost when needing to do arithmetic, since the value for cytpes 'long double' is converted to a Python float. 3. Using Decimal keeps the desired precision, but the implementation would be non-trivial and architecture specific (unless we just picked a fixed number of bytes regardless of the architecture). 4. What representation should be used for standard size and alignment? IEEE 754 extended double precision? Pointers ====== 1. What is a specific pointer? For example, is '&d' is a pointer to a double? 2. How would unpacking a pointer to a Python Object work out? Given an address how would the appropriate object to be unpacked be determined? 3. Can pointers be nested, e.g. '&&d' ? 4. For the 'X{}' format (pointer to a function), is this supposed to mean a Python function or a C function? String Syntax ========== The syntax seems to have transcended verbal description. I think we need to put forth a grammar. There are also some questions regarding nesting levels and mixing specifiers that could perhaps be answered more clearly by having a grammar: 1. What nesting level can structures have? Arbitrary? 2. The new array syntax claims "multi-dimensional array of whatever follows". Truly whatever? Arrays of structures? Arrays of pointers? 3. How do array specifiers and pointer specifiers mix? For example, would '(2, 2)&d' be a two-by-two array of pointers to doubles? What about '&(2, 2)d'? Is this a pointer to an two-by-two array of doubles? An example grammar is contained in a diff against the PEP attached to this mail. NOTE: I am *not* actually submitting a patch against the PEP. This was just the clearest way to present the example grammar. Use Cases ======== 1. What are the real world use cases for these struct string extensions? These should be fleshed out and documented. -- Meador -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pep-3118.diff Type: application/octet-stream Size: 7378 bytes Desc: not available URL: From v+python at g.nevcal.com Fri Feb 26 06:12:23 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 25 Feb 2010 21:12:23 -0800 Subject: [Python-Dev] PEP 3188: Implementation Questions In-Reply-To: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> References: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> Message-ID: <4B875837.30207@g.nevcal.com> On approximately 2/25/2010 8:51 PM, came the following characters from the keyboard of Meador Inge: > Hi All, > > Recently some discussion began in the issue 3132 thread > (http://bugs.python.org/issue3132) regarding > implementation of the new struct string syntax for PEP 3118. Mark > Dickinson > suggested that I bring the discussion on over to Python Dev. Below is > a summary > of the questions\comments from the thread. > > Unpacking a long-double > =================== > > 1. Should this return a Decimal object or a ctypes 'long double'? > 2. Using ctypes 'long double' is easier to implement, but precision is > lost when needing to do arithmetic, since the value for cytpes > 'long double' > is converted to a Python float. > 3. Using Decimal keeps the desired precision, but the implementation > would > be non-trivial and architecture specific (unless we just picked a > fixed number of bytes regardless of the architecture). > 4. What representation should be used for standard size and alignment? 
> IEEE 754 extended double precision? Because of 2 (lossy, dependency), and 3 (non-trivial, architecture specific), neither choice in 1 seems appropriate. Because of the nature of floats, because the need for manipulation may vary between applications, and because the required precision may vary between applications, I would recommend adding a "CLongDoubleStructWrapper" class (a better name would be welcome), which would copy the architecture-specific byte-stream and preserve it. If converted back to a struct, it would be lossless. If manipulation is required, the class could have converters to Python float (lossy), and Decimal of user-specifiable precision (punt the precision question to the application developer, who should know the needs of the application, and the expected platforms). It might be reasonable to handle double and float similarly, at least as an option. On the other hand, if there can be options, perhaps they could be given when supplying the struct string syntax.... except the application may only wish to manipulate a few of the long double values, and converting the others would be wasteful. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From theller at ctypes.org Fri Feb 26 08:32:07 2010 From: theller at ctypes.org (Thomas Heller) Date: Fri, 26 Feb 2010 08:32:07 +0100 Subject: [Python-Dev] PEP 3188: Implementation Questions In-Reply-To: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> References: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> Message-ID: Meador Inge schrieb: > Hi All, > > Recently some discussion began in the issue 3132 thread ( > http://bugs.python.org/issue3132) regarding > implementation of the new struct string syntax for PEP 3118. Mark Dickinson > suggested that I bring the discussion on over to Python Dev. Below is a > summary > of the questions\comments from the thread. > > Unpacking a long-double > =================== > > 1. Should this return a Decimal object or a ctypes 'long double'? > 2. Using ctypes 'long double' is easier to implement, but precision is > lost when needing to do arithmetic, since the value for cytpes 'long > double' > is converted to a Python float. > 3. Using Decimal keeps the desired precision, but the implementation would > be non-trivial and architecture specific (unless we just picked a > fixed number of bytes regardless of the architecture). > 4. What representation should be used for standard size and alignment? > IEEE 754 extended double precision? A variant of 2. would be to unpack into a ctypes 'long double', and extend the ctypes 'long double' type to retrive the value as Decimal instance, in addition to the default conversion into a Python float. -- Thanks, Thomas From florent.xicluna at gmail.com Fri Feb 26 12:43:09 2010 From: florent.xicluna at gmail.com (Florent Xicluna) Date: Fri, 26 Feb 2010 11:43:09 +0000 (UTC) Subject: [Python-Dev] contributor to committer References: <4B866B5A.3080205@gmail.com> Message-ID: Hello, > > +1 > Thanks all, for your warm welcome. > > The usual caveats apply though: > - don't get carried away with the privileges > - even core devs still put patches on the tracker sometimes > - if in doubt, ask for advice on python-dev (or IRC) > - make sure to subscribe to python-checkins Usually I tend to be cautious. 
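Back on the long-double question in the PEP 3118 thread above: a rough sketch of the wrapper idea Glenn describes might look like the following. The class name and behaviour here are illustrative assumptions only, not an existing or proposed API; the point is just that the raw, architecture-specific bytes are kept intact and any conversion is done lossily on demand.

import ctypes

class CLongDoubleWrapper:
    """Illustrative sketch: hold the raw bytes of a C long double so no
    precision is lost until a conversion is actually requested."""

    def __init__(self, raw_bytes):
        expected = ctypes.sizeof(ctypes.c_longdouble)
        if len(raw_bytes) != expected:
            raise ValueError("expected %d bytes" % expected)
        self.raw = bytes(raw_bytes)

    def __float__(self):
        # Lossy view: reinterpret the stored bytes as the platform's long
        # double, which ctypes then narrows to a Python float.
        buf = ctypes.create_string_buffer(self.raw, len(self.raw))
        return float(ctypes.cast(buf, ctypes.POINTER(ctypes.c_longdouble)).contents.value)

A Decimal conversion with user-chosen precision, as suggested above, could be layered on top by decoding the platform's extended format from self.raw.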
-- Florent From simon at brunningonline.net Fri Feb 26 15:24:25 2010 From: simon at brunningonline.net (Simon Brunning) Date: Fri, 26 Feb 2010 14:24:25 +0000 Subject: [Python-Dev] Another version of Python In-Reply-To: <19333.18334.467261.97439@montanaro.dyndns.org> References: <19333.18334.467261.97439@montanaro.dyndns.org> Message-ID: <8c7f10c61002260624v1c3291fag71708d5d6031b720@mail.gmail.com> 2010/2/24 skip : > Some of you have probably already seen this, but in case you haven't: > > ? ?http://www.staringispolite.com/likepython/ > > :-) I'm reminded of LOLPython: . -- Cheers, Simon B. From michael at voidspace.org.uk Fri Feb 26 18:05:08 2010 From: michael at voidspace.org.uk (Michael Foord) Date: Fri, 26 Feb 2010 17:05:08 +0000 Subject: [Python-Dev] Pickling named tuples on IronPython Message-ID: <4B87FF44.2070103@voidspace.org.uk> Hello Raymond, Named tuples have compatibility code to enable them to work on IronPython without frame support, but unfortunately this doesn't allow pickling / unpickling of named tuples. One fix is to manually set __module__ on the named tuples once created, but I wonder if it would be possible to change the API to better support this - perhaps a default __module__ or providing an optional argument to specify it at creation time? Michael -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From skip at pobox.com Fri Feb 26 18:26:51 2010 From: skip at pobox.com (skip at pobox.com) Date: Fri, 26 Feb 2010 11:26:51 -0600 Subject: [Python-Dev] Another version of Python In-Reply-To: <8c7f10c61002260624v1c3291fag71708d5d6031b720@mail.gmail.com> References: <19333.18334.467261.97439@montanaro.dyndns.org> <8c7f10c61002260624v1c3291fag71708d5d6031b720@mail.gmail.com> Message-ID: <19336.1115.507895.576160@montanaro.dyndns.org> >> ?? ??http://www.staringispolite.com/likepython/ Simon> I'm reminded of LOLPython: . You know, I'm thinking while both are obviously tongue-in-cheek we should probably include them on the /dev/implementations page of python.org, probably in a separate section at the end of the page. Skip From fuzzyman at voidspace.org.uk Fri Feb 26 18:53:36 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 26 Feb 2010 17:53:36 +0000 Subject: [Python-Dev] Another version of Python In-Reply-To: <19336.1115.507895.576160@montanaro.dyndns.org> References: <19333.18334.467261.97439@montanaro.dyndns.org> <8c7f10c61002260624v1c3291fag71708d5d6031b720@mail.gmail.com> <19336.1115.507895.576160@montanaro.dyndns.org> Message-ID: <4B880AA0.6070402@voidspace.org.uk> On 26/02/2010 17:26, skip at pobox.com wrote: > >> ? ? http://www.staringispolite.com/likepython/ > > Simon> I'm reminded of LOLPython:. > > You know, I'm thinking while both are obviously tongue-in-cheek we should > probably include them on the /dev/implementations page of python.org, > probably in a separate section at the end of the page. 
> They're certainly fun - but they seem to be fly-by-night projects (i.e. unlikely to be maintained in the long run). The risk is that we end up with even more outdated links / material on the website. Anyway, if the consensus is that it would be good to link to them then I will update the page (which could already do with some updating by the looks of it). All the best, Michael > Skip > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From jnoller at gmail.com Fri Feb 26 20:36:30 2010 From: jnoller at gmail.com (Jesse Noller) Date: Fri, 26 Feb 2010 14:36:30 -0500 Subject: [Python-Dev] Another version of Python In-Reply-To: <4B880AA0.6070402@voidspace.org.uk> References: <19333.18334.467261.97439@montanaro.dyndns.org> <8c7f10c61002260624v1c3291fag71708d5d6031b720@mail.gmail.com> <19336.1115.507895.576160@montanaro.dyndns.org> <4B880AA0.6070402@voidspace.org.uk> Message-ID: <4222a8491002261136w7dd61409if8be1ece07b709ac@mail.gmail.com> On Fri, Feb 26, 2010 at 12:53 PM, Michael Foord wrote: > On 26/02/2010 17:26, skip at pobox.com wrote: >> >> ? ? >> ?? ?? http://www.staringispolite.com/likepython/ >> >> ? ? Simon> ?I'm reminded of LOLPython:. >> >> You know, I'm thinking while both are obviously tongue-in-cheek we should >> probably include them on the /dev/implementations page of python.org, >> probably in a separate section at the end of the page. >> > > They're certainly fun - but they seem to be fly-by-night projects (i.e. > unlikely to be maintained in the long run). The risk is that we end up with > even more outdated links / material on the website. > > Anyway, if the consensus is that it would be good to link to them then I > will update the page (which could already do with some updating by the looks > of it). > Sure, we can link to them, and then add titles to all job postings like "Ninja" and "Wizard" so we can really seem awesome. From regebro at gmail.com Fri Feb 26 21:59:50 2010 From: regebro at gmail.com (Lennart Regebro) Date: Fri, 26 Feb 2010 21:59:50 +0100 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <19326.55898.811342.13631@montanaro.dyndns.org> References: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> <6bc73d4c1002182227m2a298ad2r13ef8d43e0ef85e4@mail.gmail.com> <319e029f1002190631m3f8210a5h3b6d6272e075fc8@mail.gmail.com> <19326.55898.811342.13631@montanaro.dyndns.org> Message-ID: <319e029f1002261259j6c2ebbcdn60eb883cd6bb6113@mail.gmail.com> On Fri, Feb 19, 2010 at 19:37, wrote: > > ? ?Lennart> I would like if we could look into making a timezone module > ? 
Lennart> that works on Python 2.5 to 3.2 that uses system data... > > 2.5, 2.6 and 3.1 are completely off the radar screen at this point. The > best you could hope for is that someone backports whatever is created for > 2.7 or 3.2 and distributes it outside the normal distribution channel (say, > as a patch on PyPI). My argument was that we should create a module distributed on PyPI, and once that's stable, move it into stdlib. The suggestions in this thread of moving things into stdlib have included a lot of new features, and are as such not stable. I'm worrying that adding such a thing to stdlib will do so in an unfinished state, and we'll just end up with yet another state of semi-brokenness. -- Lennart Regebro: Python, Zope, Plone, Grok http://regebro.wordpress.com/ +33 661 58 14 64 From fdrake at acm.org Fri Feb 26 22:30:18 2010 From: fdrake at acm.org (Fred Drake) Date: Fri, 26 Feb 2010 16:30:18 -0500 Subject: [Python-Dev] Add UTC to 2.7 (PyCon sprint idea) In-Reply-To: <319e029f1002261259j6c2ebbcdn60eb883cd6bb6113@mail.gmail.com> References: <319e029f1002181345t1b73f402o1c0f1c13939ac43f@mail.gmail.com> <319e029f1002182111o254f5394m8c802f47ac941725@mail.gmail.com> <6bc73d4c1002182227m2a298ad2r13ef8d43e0ef85e4@mail.gmail.com> <319e029f1002190631m3f8210a5h3b6d6272e075fc8@mail.gmail.com> <19326.55898.811342.13631@montanaro.dyndns.org> <319e029f1002261259j6c2ebbcdn60eb883cd6bb6113@mail.gmail.com> Message-ID: <9cee7ab81002261330p2e2ac6d5j8df8cc261ea6172@mail.gmail.com> On Fri, Feb 26, 2010 at 3:59 PM, Lennart Regebro wrote: > I'm worrying that adding such a > thing to stdlib will do so in an unfinished state, and we'll just end > up with yet another state of semi-brokenness. A valid worry, and compelling. As I've alluded to before, leaving it out and allowing applications to just use pytz (or whatever else) is entirely reasonable. -Fred -- Fred L. Drake, Jr. "Chaos is the score upon which reality is written." --Henry Miller From ziade.tarek at gmail.com Fri Feb 26 22:44:21 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 26 Feb 2010 22:44:21 +0100 Subject: [Python-Dev] The fate of Distutils in Python 2.7 Message-ID: <94bdd2611002261344q361e702av10fdf99ea5eb9f71@mail.gmail.com> Hello, This is a follow-up of the Pycon summit + sprints on packaging. This is what we have planned to do: 1. refactor distutils in a new standalone version called distutils2 [this is done already and we are actively working in the code] 2. completely revert distutils in Lib/ and Doc/ so the code + doc is the same as the current 2.6.x branch 3. leave the new sysconfig module, that is used by the Makefile and the site module The rest of the work will happen in distutils2 and we will try to release a version asap for Python 2.x and 3.x (2.4 to 3.2), and the goal is to put it back in the stdlib in Python 3.3 Distutils in Python will be feature-frozen and I will only do bug fixes there. All feature requests will be redirected to Distutils2. I think the easiest way to manage this for me and for the feedback of the community is to add in bugs.python.org a "Distutils2" component, so I can start to reorganize the issues in there and reassign new issues to Distutils2 when it applies. Regards Tarek -- Tarek Ziadé
| http://ziade.org From greg.ewing at canterbury.ac.nz Fri Feb 26 23:08:14 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 27 Feb 2010 11:08:14 +1300 Subject: [Python-Dev] PEP 3188: Implementation Questions In-Reply-To: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> References: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> Message-ID: <4B88464E.9080700@canterbury.ac.nz> Meador Inge wrote: > 3. Using Decimal keeps the desired precision, Well, sort of, but then you end up doing arithmetic in decimal instead of binary, which could give different results. Maybe the solution is to give ctypes long double objects the ability to do arithmetic? -- Greg From brett at python.org Fri Feb 26 23:09:26 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 14:09:26 -0800 Subject: [Python-Dev] __file__ In-Reply-To: <4B871227.2030707@canterbury.ac.nz> References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: On Thu, Feb 25, 2010 at 16:13, Greg Ewing wrote: > Michael Foord wrote: > > I thought we agreed at the language summit that if a .pyc was in the place >> of the source file it *could* be imported from - making pyc only >> distributions possible. >> > > Ah, that's okay, then. Sorry about the panic! > > Michael is right about what as discussed at the language summit, but Barry means what he says; if you look at the PEP as it currently stands it does not support bytecode-only modules. Barry and I discussed how to implement the PEP at PyCon after the summit and supporting bytecode-only modules quickly began to muck with the semantics and made it harder to explain (i.e. what to set __file__ vs. __compiled__ based on what is or is not available and how to properly define get_paths for loaders). But a benefit of no longer supporting bytecode-only modules by default is it cuts back on possible stat calls which slows down Python's startup time (a complaint I hear a lot). Performance issues become even more acute if you try to come up with even a remotely proper way to have backwards-compatible support in importlib for its ABCs w/o forcing caching on all implementors of the ABCs. As for having a dependency on a loader, I don't see how that is obscure; it's just a dependency your package has that you handle at install-time. And personally, I don't see what bytecode-only modules buy you. The obfuscation argument is bunk as we all know. Bytecode contains so much data that disassembling it gives you a very clear picture of what the original code was like. I think it's almost a dis-service to support bytecode-only files as it leads people who are misinformed or simply don't take the time to understand what is contained in a .pyc file into a false sense of security about their code not being easy to examine by someone else. The only perk I can see is space-saving, but that's dangerous as that ties you to a specific VM with a specific magic number (let alone that it leads to people tying themselves to CPython and ignoring the other VMs that simply do not support bytecode). 
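To make the disassembly point concrete, a minimal sketch follows. The file name and the 8-byte header size are assumptions about a 3.1-era CPython .pyc, not something taken from this thread.

import dis
import marshal

# Assumed .pyc layout for CPython of this era: 4-byte magic number,
# 4-byte timestamp, then the marshalled code object.
with open('spam.pyc', 'rb') as f:   # 'spam.pyc' is a placeholder name
    f.read(8)                       # skip the header
    code = marshal.load(f)

dis.dis(code)   # names, constants and control flow are all laid bare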
-Brett > -- > Greg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri Feb 26 23:13:32 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 14:13:32 -0800 Subject: [Python-Dev] The fate of Distutils in Python 2.7 In-Reply-To: <94bdd2611002261344q361e702av10fdf99ea5eb9f71@mail.gmail.com> References: <94bdd2611002261344q361e702av10fdf99ea5eb9f71@mail.gmail.com> Message-ID: On Fri, Feb 26, 2010 at 13:44, Tarek Ziad? wrote: > Hello, > > This is a follow-up of the Pycon summit + sprints on packaging. > > This is what we have planned to do: > > 1. refactor distutils in a new standalone version called distutils2 > [this is done already and we are actively working in the code] > 2. completely revert distutils in Lib/ and Doc/ so the code + doc is > the same than the current 2.6.x branch > 3. leave the new sysconfig module, that is used by the Makefile and > the site module > > The rest of the work will happen in distutils2 and we will try to > release a version asap for Python 2.x and 3.x (2.4 to 3.2), and the > goal > is to put it back in the stdlib in Python 3.3 > > Distutils in Python will be feature-frozen and I will only do bug > fixes there. All feature requests will be redirected to Distutils2. > > I think the easiest way to manage this for me and for the feedback of > the community is to add in bugs.python.org a "Distutils2" component, > so I can > start to reorganize the issues in there and reassign new issues to > Distutils2 when it applies. > > I assume you want the Distutils2 component to auto-assign to you like Distutils currently does? If so I can add the component for you if people don't object to the new component. -Brett > Regards > Tarek > > -- > Tarek Ziad? | http://ziade.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ziade.tarek at gmail.com Fri Feb 26 23:15:56 2010 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 26 Feb 2010 23:15:56 +0100 Subject: [Python-Dev] The fate of Distutils in Python 2.7 In-Reply-To: References: <94bdd2611002261344q361e702av10fdf99ea5eb9f71@mail.gmail.com> Message-ID: <94bdd2611002261415p5342b6dbv6a4a87ba0f4c7700@mail.gmail.com> On Fri, Feb 26, 2010 at 11:13 PM, Brett Cannon wrote: [..] > I assume you want the Distutils2 component to auto-assign to you like > Distutils currently does? If so I can add the component for you if people > don't object to the new component. 
Sounds good -- Thanks From guido at python.org Fri Feb 26 23:29:03 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 26 Feb 2010 14:29:03 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: On Fri, Feb 26, 2010 at 2:09 PM, Brett Cannon wrote: > And personally, I don't see what bytecode-only modules buy you. The > obfuscation argument is bunk as we all know. Bytecode contains so much data > that disassembling it gives you a very clear picture of what the original > code was like. I think it's almost a dis-service to support bytecode-only > files as it leads people who are misinformed or simply don't take the time > to understand what is contained in a .pyc file into a false sense of > security about their code not being easy to examine by someone else. Byte-code only wasn't always supported. We added it knowing full well it had all those problems (plus, it locks in the Python version), simply because a certain class of developers won't stop asking for it. Their users are apparently too dumb to decode bytecode but smart enough to read source code, even if they don't understand it, and this knowledge could hurt them. Presumably users smart enough to decode bytecode will know enough not to hurt themselves. Maybe Greg's and my response to the mention of dropping this feature is too strong -- after all we're both dinosaurs. And maybe the developers who want the feature can write their own loader. But given that this feature takes an entirely different path through import.c anyway, I still don't see how dropping it is necessary in order to implement the PEP. If you have separate motivation to drop the feature, you should deprecate it properly. -- --Guido van Rossum (python.org/~guido) From barry at python.org Fri Feb 26 23:37:30 2010 From: barry at python.org (Barry Warsaw) Date: Fri, 26 Feb 2010 17:37:30 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <20100226173730.1a3ddd37@freewill.wooz.org> On Feb 26, 2010, at 02:09 PM, Brett Cannon wrote: >But a benefit of no longer supporting bytecode-only modules by default is it >cuts back on possible stat calls which slows down Python's startup time (a >complaint I hear a lot). Performance issues become even more acute if you try >to come up with even a remotely proper way to have backwards-compatible >support in importlib for its ABCs w/o forcing caching on all implementors of >the ABCs. >And personally, I don't see what bytecode-only modules buy you. The >obfuscation argument is bunk as we all know. Brett really hits the nail on the head, and yes I'm sorry for not being clear about what "we discussed this at Pycon" meant. The "we" being Brett and I of course (and Chris Withers IIRC). Bytecode-only deployments are a bit of a sham, and definitely a minority use case, so why should all of Python pay for the extra stat calls to support this by default? 
How many people would actually be hurt if this wasn't available out of the box, especially since you can still support it if you really want it and can't convince your manager that it provides essentially zero useful obfuscation of your code? I say this having been down that road myself with a previous employer. Management was pretty adamant about wanting this until I explained how easy it was to defeat and convinced them that the engineering resources to do it were better spent elsewhere. Having said that, I'd be all for including a reference implementation of a bytecode-only loader in the PEP for demonstration purposes. Greg, would you like to contribute that? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From brett at python.org Fri Feb 26 23:55:19 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 14:55:19 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: On Fri, Feb 26, 2010 at 14:29, Guido van Rossum wrote: > On Fri, Feb 26, 2010 at 2:09 PM, Brett Cannon wrote: > > And personally, I don't see what bytecode-only modules buy you. The > > obfuscation argument is bunk as we all know. Bytecode contains so much > data > > that disassembling it gives you a very clear picture of what the original > > code was like. I think it's almost a dis-service to support bytecode-only > > files as it leads people who are misinformed or simply don't take the > time > > to understand what is contained in a .pyc file into a false sense of > > security about their code not being easy to examine by someone else. > > Byte-code only wasn't always supported. We added it knowing full well > it had all those problems (plus, it locks in the Python version), > simply because a certain class of developers won't stop asking for it. > Their users are apparently too dumb to decode bytecode but smart > enough to read source code, even if they don't understand it, and this > knowledge could hurt them. Presumably users smart enough to decode > bytecode will know enough not to hurt themselves. > > Maybe it should be made optional much like the talk of frozen modules eventually becoming an optional thing. > Maybe Greg's and my response to the mention of dropping this feature > is too strong -- after all we're both dinosaurs. And maybe the > developers who want the feature can write their own loader. We could also provide if necessary. > But given > that this feature takes an entirely different path through import.c > anyway, I still don't see how dropping it is necessary in order to > implement the PEP. It's not necessary at all. I think what Barry was going for was simply cleaning up semantics once instead of having to drag it out. > If you have separate motivation to drop the > feature, you should deprecate it properly. > Fine by me. It would be easy enough to raise ImportWarning in the bytecode-only case if Barry decides to push for this. Here is a question for Barry to think about if he decides to move forward with all of this: would mixed support for both bytecode-only and source/bytecode be required for the same directory, or could it be one or the other but not both? 
Differing semantics based on what is found in the directory would make the path hook more expensive (which is a one-time cost per directory), but it would cut stat calls in the finder in half (which is a cost made per import). -Brett > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at voidspace.org.uk Fri Feb 26 23:59:58 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 26 Feb 2010 22:59:58 +0000 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <4B88526E.7030808@voidspace.org.uk> On 26/02/2010 22:09, Brett Cannon wrote: > > > On Thu, Feb 25, 2010 at 16:13, Greg Ewing > wrote: > > Michael Foord wrote: > > I thought we agreed at the language summit that if a .pyc was > in the place of the source file it *could* be imported from - > making pyc only distributions possible. > > > Ah, that's okay, then. Sorry about the panic! > > > Michael is right about what as discussed at the language summit, but > Barry means what he says; if you look at the PEP as it currently > stands it does not support bytecode-only modules. > > Barry and I discussed how to implement the PEP at PyCon after the > summit and supporting bytecode-only modules quickly began to muck with > the semantics and made it harder to explain (i.e. what to set __file__ > vs. __compiled__ based on what is or is not available and how to > properly define get_paths for loaders). But a benefit of no longer > supporting bytecode-only modules by default is it cuts back on > possible stat calls which slows down Python's startup time (a > complaint I hear a lot). Performance issues become even more acute if > you try to come up with even a remotely proper way to have > backwards-compatible support in importlib for its ABCs w/o forcing > caching on all implementors of the ABCs. > > As for having a dependency on a loader, I don't see how that is > obscure; it's just a dependency your package has that you handle at > install-time. > > And personally, I don't see what bytecode-only modules buy you. The > obfuscation argument is bunk as we all know. Bytecode contains so much > data that disassembling it gives you a very clear picture of what the > original code was like. Well, understanding bytecode *still* requires a higher level of understanding than the *majority* of Python programmers have. Added to which there are no widely available tools that *I'm* aware of for decompiling recent versions of Python (decompyle worked up to Python 2.4 but then went closed source as a commercial service [1]). The situation is analogous to .NET assemblies by the way (which *can* be trivially decompiled by several widely available tools). Having a non-source distribution prevents your users from changing things and then calling you for support without them having to go to a lot more effort than it is worth. There are several companies who currently ship bytecode only. (There was someone on the IronPython mailing list only last week asking if IronPython could support pyc files for this reason). For many pointy-haired-bosses 'some' protection is enough and having Python not support this (out of the box) would be a black mark against Python for them.
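As for Barry's earlier offer to include a reference bytecode-only loader in the PEP, a minimal sketch might look like the following. It is only an illustration: it assumes the 8-byte .pyc header and the old-style load_module() protocol of the day, and a real version would also need a finder/path hook plus proper error handling (including removing the module from sys.modules on failure).

import imp
import marshal
import sys

class BytecodeOnlyLoader:
    """Illustrative sketch: import a module from a lone .pyc file."""

    def __init__(self, fullname, path):
        self.fullname = fullname
        self.path = path

    def load_module(self, fullname):
        with open(self.path, 'rb') as f:
            if f.read(4) != imp.get_magic():
                raise ImportError('bad magic number in %r' % self.path)
            f.read(4)   # skip the timestamp (8-byte header assumed)
            code = marshal.load(f)
        module = sys.modules.setdefault(fullname, imp.new_module(fullname))
        module.__file__ = self.path
        module.__loader__ = self
        exec(code, module.__dict__)
        return module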
> I think it's almost a dis-service to support bytecode-only files as it > leads people who are misinformed or simply don't take the time to > understand what is contained in a .pyc file into a false sense of > security about their code not being easy to examine by someone else. For many use-cases some protection is enough. After all *any* DRM or source-code obfuscation is breakable in the medium / long term - so just enough to discourage the casual looker is probably sufficient. The fact that bytecode only distributions exist speaks to that. Whether you believe that allowing companies who ship bytecode is a disservice to them or not is fundamentally irrelevant. If they believe it is a service to them then it is... :-) As you can tell, I would be disappointed to see bytecode only distributions be removed from the out-of-the-box functionality. All the best, Michael > The only perk I can see is space-saving, but that's dangerous as that > ties you to a specific VM with a specific magic number (let alone that > it leads to people tying themselves to CPython and ignoring the other > VMs that simply do not support bytecode). > [1] http://www.crazy-compilers.com/decompyle/ > -Brett > > > -- > Greg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Sat Feb 27 00:18:24 2010 From: barry at python.org (Barry Warsaw) Date: Fri, 26 Feb 2010 18:18:24 -0500 Subject: [Python-Dev] Python 2.6.5 rc 1 Message-ID: <64F69FF4-1DFA-4C7A-A4CD-2F653A834BD2@python.org> Hello everybody! I hope you all had as great a time at Pycon 2010 as I did. No time to begin recovering though, we're on to Python 2.6.5 rc 1, which I would like to release on Monday. We have one showstopper still open, and I'll try to respond to that asap. http://bugs.python.org/issue7250 If there is anything else that absolutely should go into 2.6.5, now's the time to let me know. If there are no patches ready to be reviewed and landed though, you're probably running out of time. I will be very conservative about landing patches after rc1. 
Cheers, -Barry From v+python at g.nevcal.com Sat Feb 27 00:35:16 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 26 Feb 2010 15:35:16 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <4B885AB4.7040004@g.nevcal.com> On approximately 2/26/2010 2:55 PM, came the following characters from the keyboard of Brett Cannon: > > Maybe Greg's and my response to the mention of dropping this feature > is too strong -- after all we're both dinosaurs. And maybe the > developers who want the feature can write their own loader. > > > We could also provide if necessary. So if the implementation stores .pyc by default in a version-specific place, then it seems there are only two things needed to make a python byte-code only distribution... 1) rename all the .pyc to .py 2) packaging When a .pyc is renamed to .py, Python (3.1 at least) recognizes and uses it... I assume by design, rather than accident, but I don't know the history. I didn't experiment to discover what __file__ and __cached__ get set to in this case (especially since I don't have a version with the latter :) ). I speculate that packaging a distribution in this manner would be slightly different that how it is currently done, but I also suspect that it would avoid the same half of the stat calls, to aid performance. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From fuzzyman at voidspace.org.uk Sat Feb 27 00:37:26 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 26 Feb 2010 23:37:26 +0000 Subject: [Python-Dev] __file__ In-Reply-To: <4B885AB4.7040004@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> Message-ID: <4B885B36.8090600@voidspace.org.uk> On 26/02/2010 23:35, Glenn Linderman wrote: > On approximately 2/26/2010 2:55 PM, came the following characters from > the keyboard of Brett Cannon: >> >> Maybe Greg's and my response to the mention of dropping this feature >> is too strong -- after all we're both dinosaurs. And maybe the >> developers who want the feature can write their own loader. >> >> >> We could also provide if necessary. > > So if the implementation stores .pyc by default in a version-specific > place, then it seems there are only two things needed to make a python > byte-code only distribution... > > 1) rename all the .pyc to .py > 2) packaging > > When a .pyc is renamed to .py, Python (3.1 at least) recognizes and > uses it... I assume by design, rather than accident, but I don't know > the history. > > I didn't experiment to discover what __file__ and __cached__ get set > to in this case (especially since I don't have a version with the > latter :) ). > > I speculate that packaging a distribution in this manner would be > slightly different that how it is currently done, but I also suspect > that it would avoid the same half of the stat calls, to aid performance. 
> If this is possible with the new scheme, so long as the Python version and magic number match, then it is slightly kooky but meets the use case. All the best, Michael -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (?BOGUS AGREEMENTS?) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. From ianb at colorstudy.com Sat Feb 27 01:58:59 2010 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 26 Feb 2010 18:58:59 -0600 Subject: [Python-Dev] __file__ In-Reply-To: <4B885B36.8090600@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B885B36.8090600@voidspace.org.uk> Message-ID: The one issue I thought would be resolved by not easily allowing .pyc-only distributions is the case when you rename a file (say module.py to newmodule.py) and there is a module.pyc laying around, and you don't get the ImportError you would expect from "import module" -- and to make it worse everything basically works, except there's two versions of the module that slowly become different. This regularly causes problems for me, and those problems would get more common and obscure if the pyc files were stashed away in a more invisible location. I can't even tell what the current proposal is; maybe this is resolved? If distributing bytecode required renaming pyc files to .py as Glenn suggested that would resolve the problem quite nicely from my perspective. (Frankly I find the whole use case for distributing bytecodes a bit specious, but whatever.) -- Ian Bicking | http://blog.ianbicking.org | http://twitter.com/ianbicking From brett at python.org Sat Feb 27 02:09:16 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 17:09:16 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B885B36.8090600@voidspace.org.uk> Message-ID: On Fri, Feb 26, 2010 at 16:58, Ian Bicking wrote: > The one issue I thought would be resolved by not easily allowing > .pyc-only distributions is the case when you rename a file (say > module.py to newmodule.py) and there is a module.pyc laying around, > and you don't get the ImportError you would expect from "import > module" -- and to make it worse everything basically works, except > there's two versions of the module that slowly become different. Yes, that problem would go away if bytecode-only modules were no longer supported. > This > regularly causes problems for me, and those problems would get more > common and obscure if the pyc files were stashed away in a more > invisible location. > > That has never been an issue with this proposal. 
The bytecode pulled from the __pycache__ directory only occurs if source exists. What we have been discussing is whether bytecode-only files in the directory of a package or something exists. -Brett > I can't even tell what the current proposal is; maybe this is > resolved? If distributing bytecode required renaming pyc files to .py > as Glenn suggested that would resolve the problem quite nicely from my > perspective. (Frankly I find the whole use case for distributing > bytecodes a bit specious, but whatever.) > > -- > Ian Bicking | http://blog.ianbicking.org | > http://twitter.com/ianbicking > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Feb 27 02:11:11 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 26 Feb 2010 17:11:11 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B885B36.8090600@voidspace.org.uk> Message-ID: On Fri, Feb 26, 2010 at 4:58 PM, Ian Bicking wrote: > The one issue I thought would be resolved by not easily allowing > .pyc-only distributions is the case when you rename a file (say > module.py to newmodule.py) and there is a module.pyc laying around, > and you don't get the ImportError you would expect from "import > module" -- and to make it worse everything basically works, except > there's two versions of the module that slowly become different. ?This > regularly causes problems for me, and those problems would get more > common and obscure if the pyc files were stashed away in a more > invisible location. > > I can't even tell what the current proposal is; maybe this is > resolved? ?If distributing bytecode required renaming pyc files to .py > as Glenn suggested that would resolve the problem quite nicely from my > perspective. ?(Frankly I find the whole use case for distributing > bytecodes a bit specious, but whatever.) Barry's PEP would fix this even if we kept supporting .pyc-only files: the lingering .pyc files will be in the __pycache__ directory which is *not* searched -- only .pyc files directly in the source directory will be found -- where the PEP will never place them, at least not by default. -- --Guido van Rossum (python.org/~guido) From brett at python.org Sat Feb 27 02:13:38 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 17:13:38 -0800 Subject: [Python-Dev] __file__ In-Reply-To: <4B885AB4.7040004@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> Message-ID: On Fri, Feb 26, 2010 at 15:35, Glenn Linderman > wrote: > On approximately 2/26/2010 2:55 PM, came the following characters from the > keyboard of Brett Cannon: > > >> Maybe Greg's and my response to the mention of dropping this feature >> is too strong -- after all we're both dinosaurs. And maybe the >> developers who want the feature can write their own loader. >> >> >> We could also provide if necessary. 
>> > > So if the implementation stores .pyc by default in a version-specific > place, then it seems there are only two things needed to make a python > byte-code only distribution... > > 1) rename all the .pyc to .py > 2) packaging > > When a .pyc is renamed to .py, Python (3.1 at least) recognizes and uses > it... I assume by design, rather than accident, but I don't know the > history. > This does not work for me (nor should it): > touch temp.py > python3 -c "import temp" > rm temp.py > mv temp.pyc temp.py > python3 -c "import temp" Traceback (most recent call last): File "", line 1, in File "temp.py", line 2 SyntaxError: Non-UTF-8 code starting with '\x95' in file temp.py on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details -Brett > > I didn't experiment to discover what __file__ and __cached__ get set to in > this case (especially since I don't have a version with the latter :) ). > > I speculate that packaging a distribution in this manner would be slightly > different that how it is currently done, but I also suspect that it would > avoid the same half of the stat calls, to aid performance. > > -- > Glenn -- http://nevcal.com/ > =========================== > A protocol is complete when there is nothing left to remove. > -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug.hellmann at gmail.com Sat Feb 27 02:20:26 2010 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Fri, 26 Feb 2010 20:20:26 -0500 Subject: [Python-Dev] __file__ In-Reply-To: <4B88526E.7030808@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> Message-ID: On Feb 26, 2010, at 5:59 PM, Michael Foord wrote: > On 26/02/2010 22:09, Brett Cannon wrote: >> >> >> >> On Thu, Feb 25, 2010 at 16:13, Greg Ewing > > wrote: >> Michael Foord wrote: >> >> I thought we agreed at the language summit that if a .pyc was in >> the place of the source file it *could* be imported from - making >> pyc only distributions possible. >> >> Ah, that's okay, then. Sorry about the panic! >> >> >> Michael is right about what as discussed at the language summit, >> but Barry means what he says; if you look at the PEP as it >> currently stands it does not support bytecode-only modules. >> >> Barry and I discussed how to implement the PEP at PyCon after the >> summit and supporting bytecode-only modules quickly began to muck >> with the semantics and made it harder to explain (i.e. what to set >> __file__ vs. __compiled__ based on what is or is not available and >> how to properly define get_paths for loaders). But a benefit of no >> longer supporting bytecode-only modules by default is it cuts back >> on possible stat calls which slows down Python's startup time (a >> complaint I hear a lot). Performance issues become even more acute >> if you try to come up with even a remotely proper way to have >> backwards-compatible support in importlib for its ABCs w/o forcing >> caching on all implementors of the ABCs. >> >> As for having a dependency on a loader, I don't see how that is >> obscure; it's just a dependency your package has that you handle at >> install-time. 
>> >> And personally, I don't see what bytecode-only modules buy you. The >> obfuscation argument is bunk as we all know. Bytecode contains so >> much data that disassembling it gives you a very clear picture of >> what the original code was like. > > Well, understanding bytecode is *still* requires a higher level of > understanding than the *majority* of Python programmers. Added to > which there are no widely available tools that *I'm* aware of for > decompiling recent versions of Python (decompyle worked up to Python > 2.4 but then went closed source as a commercial service [1]. > > The situation is analagous to .NET assemblies by the way (which > *can* be trivially decompiled by several widely available tools). > Having a non-source distribution prevents your users from changing > things and then calling you for support without them having to go to > a lot more effort than it is worth. > > There are several companies who currently ship bytecode only. (There > was someone on the IronPython mailing list only last week asking if > IronPython could support pyc files for this reason). For many pointy- > haired-bosses 'some' protection is enough and having Python not > support this (out of the box) would be a black mark against Python > for them. We ship bytecode only, basically for the reason Michael states here (keeping support costs under control from "ambitious" users). >> I think it's almost a dis-service to support bytecode-only files as >> it leads people who are misinformed or simply don't take the time >> to understand what is contained in a .pyc file into a false sense >> of security about their code not being easy to examine by someone >> else. > > For many use-cases some protection is enough. After all *any* DRM or > source-code obfuscation is breakable in the medium / long term - so > just enough to discourage the casual looker is probably sufficient. > The fact that bytecode only distributions exist speaks to that. Right. We're more concerned with not having users muck with stuff than with keeping the implementation a secret, although having a bit of obfuscation doesn't hurt. > Whether you believe that allowing companies who ship bytecode is a > disservice to them or not is fundamentally irrelevant. If they > believe it is a service to them then it is... :-) > > As you can tell, I would be disappointed to see bytecode only > distributions be removed from the out-of-the-box functionality. +1 Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sat Feb 27 02:24:36 2010 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 27 Feb 2010 12:24:36 +1100 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B871227.2030707@canterbury.ac.nz> Message-ID: <201002271224.38093.steve@pearwood.info> On Sat, 27 Feb 2010 09:09:26 am Brett Cannon wrote: > I think it's almost a dis-service to support bytecode-only > files as it leads people who are misinformed or simply don't take the > time to understand what is contained in a .pyc file into a false > sense of security about their code not being easy to examine by > someone else. You say that as if it were a bad thing. *wink* Personally, I can't imagine ever wanting to ship a .pyc module without the .py, but since Python already gives people the opportunity to shoot themselves in the foot, meh, we're all adults here. 
I do recall a poster on comp.lang.python pulling his hair out over a customer who was too big to fire, but who had the obnoxious habit of making random so-called "fixes" to the poster's .py files, so perhaps byte-code only distribution isn't all bad. But I don't care much either way. -- Steven D'Aprano From brett at python.org Sat Feb 27 02:30:06 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 17:30:06 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> Message-ID: On Fri, Feb 26, 2010 at 17:20, Doug Hellmann wrote: > > On Feb 26, 2010, at 5:59 PM, Michael Foord wrote: > > On 26/02/2010 22:09, Brett Cannon wrote: > > > > On Thu, Feb 25, 2010 at 16:13, Greg Ewing wrote: > >> Michael Foord wrote: >> >> I thought we agreed at the language summit that if a .pyc was in the >>> place of the source file it *could* be imported from - making pyc only >>> distributions possible. >>> >> >> Ah, that's okay, then. Sorry about the panic! >> >> > Michael is right about what as discussed at the language summit, but > Barry means what he says; if you look at the PEP as it currently stands it > does not support bytecode-only modules. > > Barry and I discussed how to implement the PEP at PyCon after the summit > and supporting bytecode-only modules quickly began to muck with the > semantics and made it harder to explain (i.e. what to set __file__ vs. > __compiled__ based on what is or is not available and how to properly define > get_paths for loaders). But a benefit of no longer supporting bytecode-only > modules by default is it cuts back on possible stat calls which slows down > Python's startup time (a complaint I hear a lot). Performance issues become > even more acute if you try to come up with even a remotely proper way to > have backwards-compatible support in importlib for its ABCs w/o forcing > caching on all implementors of the ABCs. > > As for having a dependency on a loader, I don't see how that is obscure; > it's just a dependency your package has that you handle at install-time. > > And personally, I don't see what bytecode-only modules buy you. The > obfuscation argument is bunk as we all know. Bytecode contains so much data > that disassembling it gives you a very clear picture of what the original > code was like. > > > Well, understanding bytecode is *still* requires a higher level of > understanding than the *majority* of Python programmers. Added to which > there are no widely available tools that *I'm* aware of for decompiling > recent versions of Python (decompyle worked up to Python 2.4 but then went > closed source as a commercial service [1]. > > The situation is analagous to .NET assemblies by the way (which *can* be > trivially decompiled by several widely available tools). Having a non-source > distribution prevents your users from changing things and then calling you > for support without them having to go to a lot more effort than it is worth. > > There are several companies who currently ship bytecode only. (There was > someone on the IronPython mailing list only last week asking if IronPython > could support pyc files for this reason). For many pointy-haired-bosses > 'some' protection is enough and having Python not support this (out of the > box) would be a black mark against Python for them. 
> > > We ship bytecode only, basically for the reason Michael states here > (keeping support costs under control from "ambitious" users). > > I think it's almost a dis-service to support bytecode-only files as it > leads people who are misinformed or simply don't take the time to understand > what is contained in a .pyc file into a false sense of security about their > code not being easy to examine by someone else. > > > For many use-cases some protection is enough. After all *any* DRM or > source-code obfuscation is breakable in the medium / long term - so just > enough to discourage the casual looker is probably sufficient. The fact that > bytecode only distributions exist speaks to that. > > > Right. We're more concerned with not having users muck with stuff than with > keeping the implementation a secret, although having a bit of obfuscation > doesn't hurt. > > Whether you believe that allowing companies who ship bytecode is a > disservice to them or not is fundamentally irrelevant. If they believe it is > a service to them then it is... :-) > > As you can tell, I would be disappointed to see bytecode only distributions > be removed from the out-of-the-box functionality. > > > +1 > So what is the burden of including a single source file that added the support to load from bytecode-only modules? I am not saying you shouldn't be able to have this functionality, just that I personally don't want to pay for the overhead (both performance-wise and development-wise) by default just because you and some other people want this functionality for some clients. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sat Feb 27 02:43:37 2010 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 27 Feb 2010 12:43:37 +1100 Subject: [Python-Dev] Another version of Python In-Reply-To: <4B880AA0.6070402@voidspace.org.uk> References: <19333.18334.467261.97439@montanaro.dyndns.org> <19336.1115.507895.576160@montanaro.dyndns.org> <4B880AA0.6070402@voidspace.org.uk> Message-ID: <201002271243.38600.steve@pearwood.info> On Sat, 27 Feb 2010 04:53:36 am Michael Foord wrote: > On 26/02/2010 17:26, skip at pobox.com wrote: > > >> ? ? http://www.staringispolite.com/likepython/ > > > > Simon> I'm reminded of LOLPython:. > > > > You know, I'm thinking while both are obviously tongue-in-cheek we > > should probably include them on the /dev/implementations page of > > python.org, probably in a separate section at the end of the page. > > They're certainly fun - but they seem to be fly-by-night projects > (i.e. unlikely to be maintained in the long run). The risk is that we > end up with even more outdated links / material on the website. > > Anyway, if the consensus is that it would be good to link to them > then I will update the page (which could already do with some > updating by the looks of it). For what it's worth, I have compiled a list of between 14 and 27 implementations of Python, depending on how conservative you are at defining "implementation". I then went to the wiki and discovered my list was nowhere near complete. 
Obviously this information is extensive and rapidly changing, so I think it would be better to have the current implementation page be fairly minimal but link to the wiki for more details: http://wiki.python.org/moin/implementation http://www.python.org/dev/implementations/ -- Steven D'Aprano From doug.hellmann at gmail.com Sat Feb 27 02:54:12 2010 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Fri, 26 Feb 2010 20:54:12 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> Message-ID: <0D2B682C-1442-42D4-A837-007F182DF094@gmail.com> On Feb 26, 2010, at 8:30 PM, Brett Cannon wrote: > So what is the burden of including a single source file that added > the support to load from bytecode-only modules? I am not saying you > shouldn't be able to have this functionality, just that I personally > don't want to pay for the overhead (both performance-wise and > development-wise) by default just because you and some other people > want this functionality for some clients. If such a module was available, we'd use it if that was the way to achieve what we want. We could write something like that on our own, but we'd be more likely to decide to just stick with Python 2 for longer because we're going to prioritize new features over doing "hidden" maintenance work like that. So, we want the ability to ship bytecode-only versions of the software, but the specific mechanism for doing so doesn't matter a lot. Doug From rrr at ronadam.com Sat Feb 27 03:30:26 2010 From: rrr at ronadam.com (Ron Adam) Date: Fri, 26 Feb 2010 20:30:26 -0600 Subject: [Python-Dev] __file__ In-Reply-To: <20100226173730.1a3ddd37@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <20100226173730.1a3ddd37@freewill.wooz.org> Message-ID: Barry Warsaw wrote: > On Feb 26, 2010, at 02:09 PM, Brett Cannon wrote: > >> But a benefit of no longer supporting bytecode-only modules by default is it >> cuts back on possible stat calls which slows down Python's startup time (a >> complaint I hear a lot). Performance issues become even more acute if you try >> to come up with even a remotely proper way to have backwards-compatible >> support in importlib for its ABCs w/o forcing caching on all implementors of >> the ABCs. > >> And personally, I don't see what bytecode-only modules buy you. The >> obfuscation argument is bunk as we all know. > > Brett really hits the nail on the head, and yes I'm sorry for not being clear > about what "we discussed this at Pycon" meant. The "we" being Brett and I of > course (and Chris Withers IIRC). > > Bytecode-only deployments are a bit of a sham, and definitely a minority use > case, so why should all of Python pay for the extra stat calls to support this > by default? How many people would actually be hurt if this wasn't available > out of the box, especially since you can still support it if you really want > it and can't convince your manager that it provides essentially zero useful > obfuscation of your code? > > I say this having been down that road myself with a previous employer. 
> Management was pretty adamant about wanting this until I explained how easy it > was to defeat and convinced them that the engineering resources to do it were > better spent elsewhere. > > Having said that, I'd be all for including a reference implementation of a > bytecode-only loader in the PEP for demonstration purposes. Greg, would you > like to contribute that? > > -Barry Micheal Foords view point on this strikes me as the most realistic. Some people do find it to be a value for their particular needs and circumstance. Michael Foord Wrote: > For many use-cases some protection is enough. After all *any* DRM or > source-code obfuscation is breakable in the medium / long term - so just > enough to discourage the casual looker is probably sufficient. The fact > that bytecode only distributions exist speaks to that. > > Whether you believe that allowing companies who ship bytecode is a > disservice to them or not is fundamentally irrelevant. If they believe > it is a service to them then it is... :-) To possibly qualify it a bit more: It does not make sense (to me) to have byte code only modules and packages in python's lib directory. The whole purpose (as far as I know) is for modules and packages located there to be shared. And as such, the source file becomes a source of documentation. Not supporting bytecode only python modules and packages in pythons "lib" directory may be good. For python programs located and installed elsewhere I think Michaels view point is applicable. For some files that are not meant to be shared, some form of discouragement can be a feature. Ron Adam From v+python at g.nevcal.com Sat Feb 27 05:08:17 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 26 Feb 2010 20:08:17 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> Message-ID: <4B889AB1.4050402@g.nevcal.com> On approximately 2/26/2010 5:13 PM, came the following characters from the keyboard of Brett Cannon: > On Fri, Feb 26, 2010 at 15:35, Glenn Linderman > wrote: > > On approximately 2/26/2010 2:55 PM, came the following characters > from the keyboard of Brett Cannon: > > > Maybe Greg's and my response to the mention of dropping > this feature > is too strong -- after all we're both dinosaurs. And maybe the > developers who want the feature can write their own loader. > > > We could also provide if necessary. > > > So if the implementation stores .pyc by default in a > version-specific place, then it seems there are only two things > needed to make a python byte-code only distribution... > > 1) rename all the .pyc to .py > 2) packaging > > When a .pyc is renamed to .py, Python (3.1 at least) recognizes > and uses it... I assume by design, rather than accident, but I > don't know the history. > > > This does not work for me (nor should it): > > > touch temp.py > > python3 -c "import temp" > > rm temp.py > > mv temp.pyc temp.py > > python3 -c "import temp" > Traceback (most recent call last): > File "", line 1, in > File "temp.py", line 2 > SyntaxError: Non-UTF-8 code starting with '\x95' in file temp.py on > line 2, but no encoding declared; see > http://python.org/dev/peps/pep-0263/ for details > > -Brett I'll admit to not doing exhaustive testing, but I'll not admit to not doing any testing... because it was sort of a wild idea. 
Someone else called it "kooky", which is fair. What I did was: python -m test ren test.pyc foo.py foo.py and it worked. Then I posted, knowing that I'd also tested, the other day, several .py into a .zip named .py, and once that worked, then I changed to putting all .pyc into the .zip named .py and that worked too... including imports of the several modules from the "__main__.pyc". Of course, all those were still named .pyc inside the .zip named .py. So I'm not sure what the difference is... .pyc as .py works from the command line, but not from import? Some specialty because of using -c ? I'd guess the technique could be made to work, probably not require extensive changes, if Python developers wanted to make it work. I think it could be efficient and that same someone that called it "kooky" admitted it would solve their use case, at least. I'm not sure why what you did is different than what I did, nor why you state without justification that it shouldn't work... I might be able to figure out the former if I spend enough time with the documentation, if it is documented, but I'm too new to Python to understand the latter without explanation. Could you supply at least the latter explanation? I'd like to understand the issue here, whether or not the "kooky" idea goes forward. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From brett at python.org Sat Feb 27 05:31:56 2010 From: brett at python.org (Brett Cannon) Date: Fri, 26 Feb 2010 20:31:56 -0800 Subject: [Python-Dev] __file__ In-Reply-To: <4B889AB1.4050402@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B889AB1.4050402@g.nevcal.com> Message-ID: On Fri, Feb 26, 2010 at 20:08, Glenn Linderman > wrote: > On approximately 2/26/2010 5:13 PM, came the following characters from the > keyboard of Brett Cannon: > > On Fri, Feb 26, 2010 at 15:35, Glenn Linderman > v%2Bpython at g.nevcal.com >> wrote: >> >> On approximately 2/26/2010 2:55 PM, came the following characters >> from the keyboard of Brett Cannon: >> >> >> Maybe Greg's and my response to the mention of dropping >> this feature >> is too strong -- after all we're both dinosaurs. And maybe the >> developers who want the feature can write their own loader. >> >> >> We could also provide if necessary. >> >> >> So if the implementation stores .pyc by default in a >> version-specific place, then it seems there are only two things >> needed to make a python byte-code only distribution... >> >> 1) rename all the .pyc to .py >> 2) packaging >> >> When a .pyc is renamed to .py, Python (3.1 at least) recognizes >> and uses it... I assume by design, rather than accident, but I >> don't know the history. >> >> >> This does not work for me (nor should it): >> >> > touch temp.py >> > python3 -c "import temp" >> > rm temp.py >> > mv temp.pyc temp.py >> > python3 -c "import temp" >> Traceback (most recent call last): >> File "", line 1, in >> File "temp.py", line 2 >> SyntaxError: Non-UTF-8 code starting with '\x95' in file temp.py on line >> 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for >> details >> >> -Brett >> > > I'll admit to not doing exhaustive testing, but I'll not admit to not doing > any testing... because it was sort of a wild idea. 
Someone else called it > "kooky", which is fair. > > What I did was: > > python -m test > ren test.pyc foo.py > foo.py > > and it worked. Then I posted, knowing that I'd also tested, the other day, > several .py into a .zip named .py, and once that worked, then I changed to > putting all .pyc into the .zip named .py and that worked too... including > imports of the several modules from the "__main__.pyc". Of course, all > those were still named .pyc inside the .zip named .py. > > So I'm not sure what the difference is... .pyc as .py works from the > command line, but not from import? Some specialty because of using -c ? > > I'd guess the technique could be made to work, probably not require > extensive changes, if Python developers wanted to make it work. I think it > could be efficient and that same someone that called it "kooky" admitted it > would solve their use case, at least. > > I'm not sure why what you did is different than what I did, -M uses runpy which is not directly equivalent to importing. > nor why you state without justification that it shouldn't work... It just is not supposed to happen that way. Masquerading a bytecode file as a source file shouldn't work; imp.get_suffixes() controls how files should be interpreted based on their file extension. -Brett > I might be able to figure out the former if I spend enough time with the > documentation, if it is documented, but I'm too new to Python to understand > the latter without explanation. Could you supply at least the latter > explanation? I'd like to understand the issue here, whether or not the > "kooky" idea goes forward. > > > -- > Glenn -- http://nevcal.com/ > =========================== > A protocol is complete when there is nothing left to remove. > -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sat Feb 27 05:32:18 2010 From: guido at python.org (Guido van Rossum) Date: Fri, 26 Feb 2010 20:32:18 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> Message-ID: On Fri, Feb 26, 2010 at 5:13 PM, Brett Cannon wrote: > On Fri, Feb 26, 2010 at 15:35, Glenn Linderman >> When a .pyc is renamed to .py, Python (3.1 at least) recognizes and uses >> it... I assume by design, rather than accident, but I don't know the >> history. > > This does not work for me (nor should it): >> touch temp.py >> >> python3 -c "import temp" >> >> rm temp.py >> >> mv temp.pyc temp.py >> >> python3 -c "import temp" >> > Traceback (most recent call last): > ??File "", line 1, in > ??File "temp.py", line 2 > SyntaxError: Non-UTF-8 code starting with '\x95' in file temp.py on line 2, > but no encoding declared; see http://python.org/dev/peps/pep-0263/ for > details Try "python temp.py" though. 
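(Concretely, with the temp.py left over from the transcript above -- really the renamed temp.pyc -- the expected contrast is roughly:

python3 temp.py            # direct execution inspects the file's contents, so the bytecode runs
python3 -c "import temp"   # import trusts the .py suffix, so it fails with the SyntaxError quoted above

just a sketch, assuming the same files as in the transcript.)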
-- --Guido van Rossum (python.org/~guido) From v+python at g.nevcal.com Sat Feb 27 06:03:30 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 26 Feb 2010 21:03:30 -0800 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <4B88A7A2.8040405@g.nevcal.com> On approximately 2/26/2010 8:31 PM, came the following characters from the keyboard of Brett Cannon: > > > I'm not sure why what you did is different than what I did, > > > -M uses runpy which is not directly equivalent to importing. OK, that gives me some good keywords for searching documentation. What I (thought I) knew so far, was that it seemed to be equivalent, but that could easily be the 10,000' view instead of the reality. Thanks. > nor why you state without justification that it shouldn't work... > > > It just is not supposed to happen that way. Masquerading a bytecode > file as a source file shouldn't work; imp.get_suffixes() controls how > files should be interpreted based on their file extension. Well, since a .py can be a .zip, why not a .pyc? Just because no one thought of doing it before? Of course, I realize that I only know that a .py can be a .zip on the command line (is that runpy again, I'll bet it is), not for importing, which probably doesn't work, from what you imply. But if the technique can work from the command line, it seems the same technique could be re-used in the importer. A bytecode only .py would result in identical values for __file__ and __cached__ methinks. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From skip at pobox.com Sat Feb 27 15:49:54 2010 From: skip at pobox.com (skip at pobox.com) Date: Sat, 27 Feb 2010 08:49:54 -0600 Subject: [Python-Dev] Another version of Python In-Reply-To: <201002271243.38600.steve@pearwood.info> References: <19333.18334.467261.97439@montanaro.dyndns.org> <19336.1115.507895.576160@montanaro.dyndns.org> <4B880AA0.6070402@voidspace.org.uk> <201002271243.38600.steve@pearwood.info> Message-ID: <19337.12562.673278.14628@montanaro.dyndns.org> >>>>> "Steven" == Steven D'Aprano writes: Steven> For what it's worth, I have compiled a list of between 14 and 27 Steven> implementations of Python, depending on how conservative you are Steven> at defining "implementation". Steven> I then went to the wiki and discovered my list was nowhere near Steven> complete. Obviously this information is extensive and rapidly Steven> changing, so I think it would be better to have the current Steven> implementation page be fairly minimal but link to the wiki for Steven> more details: Steven> http://wiki.python.org/moin/implementation Steven> http://www.python.org/dev/implementations/ I added Like, Python and LOLPython to the wiki page in a new section, "Just For Fun". I don't see the source for the /dev/implementations page in my python.org website checkout. I'll suggest the link to the other pydotorg types. 
Skip From ncoghlan at gmail.com Sat Feb 27 16:24:39 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 28 Feb 2010 01:24:39 +1000 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B889AB1.4050402@g.nevcal.com> Message-ID: <4B893937.2020000@gmail.com> Brett Cannon wrote: > On Fri, Feb 26, 2010 at 20:08, Glenn Linderman I'm not sure why what you did is different than what I did, > > > -M uses runpy which is not directly equivalent to importing. It's actually execution which is different from importing. Direct execution doesn't care about filenames (it inspects the file itself to figure out what it is), while importing cares a great deal. Note that Glenn ran "foo.py" directly, while Brett did "import temp". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Sat Feb 27 16:27:42 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 28 Feb 2010 01:27:42 +1000 Subject: [Python-Dev] __file__ In-Reply-To: <4B88A7A2.8040405@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88A7A2.8040405@g.nevcal.com> Message-ID: <4B8939EE.7030005@gmail.com> Glenn Linderman wrote: > But if the technique can work from the command line, it seems the > same technique could be re-used in the importer. Not really - we only get away with the fun and games at execution time because __main__ is a bit special (and always has been). Those tricks would be a lot harder to pull off for a normal module import (if they were possible at all - I'm not sure they would be). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Sat Feb 27 16:38:40 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 28 Feb 2010 01:38:40 +1000 Subject: [Python-Dev] __file__ In-Reply-To: <201002271224.38093.steve@pearwood.info> References: <20100130190005.058c8187@freewill.wooz.org> <4B871227.2030707@canterbury.ac.nz> <201002271224.38093.steve@pearwood.info> Message-ID: <4B893C80.5040200@gmail.com> Steven D'Aprano wrote: > On Sat, 27 Feb 2010 09:09:26 am Brett Cannon wrote: >> I think it's almost a dis-service to support bytecode-only >> files as it leads people who are misinformed or simply don't take the >> time to understand what is contained in a .pyc file into a false >> sense of security about their code not being easy to examine by >> someone else. > > You say that as if it were a bad thing. > > *wink* > > Personally, I can't imagine ever wanting to ship a .pyc module without > the .py, but since Python already gives people the opportunity to shoot > themselves in the foot, meh, we're all adults here. I do recall a > poster on comp.lang.python pulling his hair out over a customer who was > too big to fire, but who had the obnoxious habit of making random > so-called "fixes" to the poster's .py files, so perhaps byte-code only > distribution isn't all bad. I think the use case of "keep the user from fiddling casually with our application" is a valid one. 
There's a fairly vast difference between "open source file, edit code, hit save" and "decompile pyc, open decompiled source file, edit code, save next to pyc with correct name". The former makes it easy for folks that know just enough to be dangerous to get themselves in trouble. The latter raises the bar far enough that people with the ability to do it should also know better than to try (or at least, not to call the support line when it doesn't work). I do like the idea of pulling .pyc only imports out into a separate importer, but would go so far as to suggest keeping them as a command line option rather than as a separately distributed module. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From barry at python.org Sat Feb 27 16:53:10 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Feb 2010 10:53:10 -0500 Subject: [Python-Dev] __file__ In-Reply-To: <4B893C80.5040200@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B871227.2030707@canterbury.ac.nz> <201002271224.38093.steve@pearwood.info> <4B893C80.5040200@gmail.com> Message-ID: <20100227105310.43e6602e@freewill.wooz.org> On Feb 28, 2010, at 01:38 AM, Nick Coghlan wrote: >I think the use case of "keep the user from fiddling casually with our >application" is a valid one. Doesn't the existing support for zipimport satisfy that use case already, and probably better so? Heck you can even name your zip file "application.dat" to really throw naive users off the scent. ;) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 27 16:56:13 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Feb 2010 10:56:13 -0500 Subject: [Python-Dev] __file__ In-Reply-To: <4B88526E.7030808@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> Message-ID: <20100227105613.6581a198@freewill.wooz.org> On Feb 26, 2010, at 10:59 PM, Michael Foord wrote: >There are several companies who currently ship bytecode only. (There was >someone on the IronPython mailing list only last week asking if >IronPython could support pyc files for this reason). For many >pointy-haired-bosses 'some' protection is enough and having Python not >support this (out of the box) would be a black mark against Python for them. Would it not be better to ship a zip file with an obfuscated name? Doesn't that satisfy the use case nicely? -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 27 16:58:44 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Feb 2010 10:58:44 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <20100226173730.1a3ddd37@freewill.wooz.org> Message-ID: <20100227105844.16ddfc31@freewill.wooz.org> On Feb 26, 2010, at 08:30 PM, Ron Adam wrote: >It does not make sense (to me) to have byte code only modules and packages >in python's lib directory. The whole purpose (as far as I know) is for >modules and packages located there to be shared. And as such, the source >file becomes a source of documentation. Not supporting bytecode only >python modules and packages in pythons "lib" directory may be good. Actually, it's not the standard library that's the issue, it's third party modules that OS vendors distribute. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 27 17:17:36 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Feb 2010 11:17:36 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <20100227111736.4e9dca51@freewill.wooz.org> On Feb 26, 2010, at 02:29 PM, Guido van Rossum wrote: >Byte-code only wasn't always supported. We added it knowing full well >it had all those problems (plus, it locks in the Python version), >simply because a certain class of developers won't stop asking for it. >Their users are apparently too dumb to decode bytecode but smart >enough to read source code, even if they don't understand it, and this >knowledge could hurt them. Presumably users smart enough to decode >bytecode will know enough not to hurt themselves. For now, I've added a open issues section to the PEP describing the options for bytecode-only support. I think there are better ways to satisfy the bytecode-only packager requirements than supporting it by default, always, in the standard importer, but let's enumerate the pros and cons and then make a decision. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 27 17:23:15 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Feb 2010 11:23:15 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <20100227112315.15f399e0@freewill.wooz.org> On Feb 26, 2010, at 02:55 PM, Brett Cannon wrote: >Here is a question for Barry to think about if he decides to move forward >with all of this: would mixed support for both bytecode-only and >source/bytecode be required for the same directory, or could it be one or >the other but not both? Differing semantics based on what is found in the >directory would make the path hook more expensive (which is a one-time cost >per directory), but it would cut stat calls in the finder in half (which is >a cost made per import). It seems a bit magical to me, and the rules a bit difficult to predict. For example, what would be the trigger to enable bytecode-only support for a package directory? Would it be the absence of an __init__.py file? What if some .pyc files had .py file but not all of them? Wouldn't the trigger depend on import order? OTOH, maybe you're on to something. Perhaps we could add a flag to the package's namespace to turn this on. You'd have to include the __init__.py to get things going, but after that, everything else in the package could be .pyc-only. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From barry at python.org Sat Feb 27 17:26:16 2010 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Feb 2010 11:26:16 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B885B36.8090600@voidspace.org.uk> Message-ID: <20100227112616.1d103e96@freewill.wooz.org> On Feb 26, 2010, at 05:11 PM, Guido van Rossum wrote: >Barry's PEP would fix this even if we kept supporting .pyc-only files: >the lingering .pyc files will be in the __pycache__ directory which is >*not* searched -- only .pyc files directly in the source directory >will be found -- where the PEP will never place them, at least not by >default. Exactly so. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From meadori at gmail.com Sat Feb 27 17:58:43 2010 From: meadori at gmail.com (Meador Inge) Date: Sat, 27 Feb 2010 10:58:43 -0600 Subject: [Python-Dev] PEP 3188: Implementation Questions In-Reply-To: <4B88464E.9080700@canterbury.ac.nz> References: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> <4B88464E.9080700@canterbury.ac.nz> Message-ID: <4095897c1002270858j5e14579dhe01c1fb4958f087b@mail.gmail.com> On Fri, Feb 26, 2010 at 4:08 PM, Greg Ewing wrote: > Meador Inge wrote: > > 3. Using Decimal keeps the desired precision, >> > > Well, sort of, but then you end up doing arithmetic in > decimal instead of binary, which could give different > results. 
> Even with the user-defined precision capabilities of the 'Decimal' class? In other words, can I create an instance of a 'Decimal' that behaves (in all operations: arithmetic, comparison, etc...) exactly as the extended double precision type offered by a given machine? Maybe the solution is to give ctypes long double objects > the ability to do arithmetic? > Maybe, but then we would have to give all numeric 'ctypes' the ability to do arithmetic -- which may be more than we want. -- Meador -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Sat Feb 27 18:03:18 2010 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 27 Feb 2010 12:03:18 -0500 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B885B36.8090600@voidspace.org.uk> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Ian Bicking wrote: > The one issue I thought would be resolved by not easily allowing > .pyc-only distributions is the case when you rename a file (say > module.py to newmodule.py) and there is a module.pyc laying around, > and you don't get the ImportError you would expect from "import > module" -- and to make it worse everything basically works, except > there's two versions of the module that slowly become different. This > regularly causes problems for me, and those problems would get more > common and obscure if the pyc files were stashed away in a more > invisible location. > > I can't even tell what the current proposal is; maybe this is > resolved? If distributing bytecode required renaming pyc files to .py > as Glenn suggested that would resolve the problem quite nicely from my > perspective. (Frankly I find the whole use case for distributing > bytecodes a bit specious, but whatever.) The consensus as I recal was that a .pyc file in the main package directory would be importable without a .py file (just as it is today), but that .pyc files in the cache directory would not be importable in the absence of a .py file. Package distributors who wanted to ship bytecode-only distributions would need to arrange to have the .pyc files created "in place' (by disabling the cachedir option) or move them from the cachedir before bundling. Tres. 
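With today's in-place compilation that bundling step can be sketched in a few lines (a rough sketch only; the 'mypkg' tree is assumed purely for illustration):

import compileall
import os

# compile every module under mypkg/, writing each .pyc next to its .py
compileall.compile_dir('mypkg', force=True)

# then drop the sources so only the bytecode ships
for dirpath, dirnames, filenames in os.walk('mypkg'):
    for name in filenames:
        if name.endswith('.py'):
            os.remove(os.path.join(dirpath, name))

Under the PEP the same effect would need the cache directory disabled, or the files moved out of it, before the removal step, exactly as described above.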
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkuJUFIACgkQ+gerLs4ltQ6pnwCfVmDO8uiP9eSsjJf4ees35xus SEUAn0oKJwv9bGksxcMTHSfBbDV2Ujb7 =Vdpi -----END PGP SIGNATURE----- From tseaver at palladion.com Sat Feb 27 18:08:09 2010 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 27 Feb 2010 12:08:09 -0500 Subject: [Python-Dev] __file__ In-Reply-To: <20100227112315.15f399e0@freewill.wooz.org> References: <20100130190005.058c8187@freewill.wooz.org> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <20100227112315.15f399e0@freewill.wooz.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Barry Warsaw wrote: > On Feb 26, 2010, at 02:55 PM, Brett Cannon wrote: > >> Here is a question for Barry to think about if he decides to move forward >> with all of this: would mixed support for both bytecode-only and >> source/bytecode be required for the same directory, or could it be one or >> the other but not both? Differing semantics based on what is found in the >> directory would make the path hook more expensive (which is a one-time cost >> per directory), but it would cut stat calls in the finder in half (which is >> a cost made per import). > > It seems a bit magical to me, and the rules a bit difficult to predict. For > example, what would be the trigger to enable bytecode-only support for a > package directory? Would it be the absence of an __init__.py file? What if > some .pyc files had .py file but not all of them? Wouldn't the trigger depend > on import order? > > OTOH, maybe you're on to something. Perhaps we could add a flag to the > package's namespace to turn this on. You'd have to include the __init__.py to > get things going, but after that, everything else in the package could be > .pyc-only. Why not just leave the code for import in the package directory as it is today, where .pyc files are already importable in the absence of a .py file? As long as files in the cachedir are *not* importable without the source, both sides win, AFAICT: most people will no longer have .pyc's in their package directories, and those who want them can get them, through some means (moving from the cachedir, or disabling the cachedir feature). Tres.
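A rough sketch of that source-first lookup, assuming a per-version cache tag spelled 'cpython-32' and a helper name both invented purely for illustration:

import os

def _candidate(directory, modname, tag='cpython-32'):
    source = os.path.join(directory, modname + '.py')
    cached = os.path.join(directory, '__pycache__', '%s.%s.pyc' % (modname, tag))
    legacy = os.path.join(directory, modname + '.pyc')
    if os.path.exists(source):
        # source present: the cached bytecode may be used (or regenerated from it)
        return cached if os.path.exists(cached) else source
    # no source: __pycache__ is never consulted; only an "in place" .pyc next to
    # where the source would live counts, and only if bytecode-only imports
    # remain supported at all
    return legacy if os.path.exists(legacy) else None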
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkuJUXkACgkQ+gerLs4ltQ76UACeMtgUz+mxmxlU1wLgl58R4ZA0 aVMAoKEmVG0D8a37Ftag6srPQSWfptON =49Tz -----END PGP SIGNATURE----- From theller at ctypes.org Sat Feb 27 18:20:01 2010 From: theller at ctypes.org (Thomas Heller) Date: Sat, 27 Feb 2010 18:20:01 +0100 Subject: [Python-Dev] PEP 3188: Implementation Questions In-Reply-To: <4095897c1002270858j5e14579dhe01c1fb4958f087b@mail.gmail.com> References: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> <4B88464E.9080700@canterbury.ac.nz> <4095897c1002270858j5e14579dhe01c1fb4958f087b@mail.gmail.com> Message-ID: Meador Inge schrieb: > On Fri, Feb 26, 2010 at 4:08 PM, Greg Ewing wrote: > >> Meador Inge wrote: >> >> 3. Using Decimal keeps the desired precision, >>> >> >> Well, sort of, but then you end up doing arithmetic in >> decimal instead of binary, which could give different >> results. >> > > Even with the user-defined precision capabilities of the 'Decimal' class? > In other words, can I create an instance of a 'Decimal' that behaves (in all > operations: arithmetic, comparison, etc...) exactly as the extended double > precision type offered by a given machine? > > Maybe the solution is to give ctypes long double objects >> the ability to do arithmetic? >> > > Maybe, but then we would have to give all numeric 'ctypes' the ability to do > arithmetic -- which may be more than we want. See issue 887237: http://bugs.python.org/issue887237 -- Thanks, Thomas From floris.bruynooghe at gmail.com Sat Feb 27 18:35:02 2010 From: floris.bruynooghe at gmail.com (Floris Bruynooghe) Date: Sat, 27 Feb 2010 17:35:02 +0000 Subject: [Python-Dev] __file__ In-Reply-To: <20100227105613.6581a198@freewill.wooz.org> References: <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> Message-ID: <20100227173502.GA9522@laurie.devork> On Sat, Feb 27, 2010 at 10:56:13AM -0500, Barry Warsaw wrote: > On Feb 26, 2010, at 10:59 PM, Michael Foord wrote: > > >There are several companies who currently ship bytecode only. (There was > >someone on the IronPython mailing list only last week asking if > >IronPython could support pyc files for this reason). For many > >pointy-haired-bosses 'some' protection is enough and having Python not > >support this (out of the box) would be a black mark against Python for them. > > Would it not be better to ship a zip file with an obfuscated name? Doesn't > that satisfy the use case nicely? Sure, we combine that with putting .pyo files inside the zipfile tough (for assert statements and if __debug__ blocks). I'm rather confused about everything proposed by now but would that keep working? Also somewhere else in the thread it seemed like both you and Guido suggested that simply creating a directory with some .pyc (or .pyo I guess) files in would keep working, just by default they won't be written there by python. Or is it that functionality some want to cut because of the doubling of the stat calls? 
(But even then I'm not convinced that would double the stat calls for normal users, only for those who only ship .pyc files) Regards Floris -- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org From db3l.net at gmail.com Sun Feb 28 00:10:26 2010 From: db3l.net at gmail.com (David Bolen) Date: Sat, 27 Feb 2010 18:10:26 -0500 Subject: [Python-Dev] __file__ References: <20100130190005.058c8187@freewill.wooz.org> <4B871227.2030707@canterbury.ac.nz> <201002271224.38093.steve@pearwood.info> Message-ID: Steven D'Aprano writes: > Personally, I can't imagine ever wanting to ship a .pyc module without > the .py, but since Python already gives people the opportunity to shoot > themselves in the foot, meh, we're all adults here. Not sure I've seen it mentioned in this thread, but for myself, I've certainly used (indirectly) such a distribution many times when packaging applications with py2exe for installation on Windows clients. That puts all the pyc files into a single support zip file from which the application runs. That seems a perfectly useful use case, and not due to any issues with security/obfuscation. The matching interpreter is being packaged with the application, so there's no version worries with the pyc. The files are internal to a zip, so why complicate things with recompiling and writing locally on the user's machine, particularly when on newer versions of Windows the installation directory might not be writable anyway. As long as executing from pyc files continues to work, presumably py2exe can be updated to collect those files from any new cache location during the build process. But I do think it's useful to continue to support executing them directly outside of any new cache location, which it sounds like is the direction being taken. -- David From greg.ewing at canterbury.ac.nz Sun Feb 28 01:36:44 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 28 Feb 2010 13:36:44 +1300 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <4B89BA9C.10908@canterbury.ac.nz> Guido van Rossum wrote: > Their users are apparently too dumb to decode bytecode but smart > enough to read source code, even if they don't understand it, and this > knowledge could hurt them. I think it's like putting a lock on your door. It won't stop anyone who's determined to get in, but it makes it hard for them to argue in court that they wandered in accidentally. Also it may make it easier to get the idea of using Python past PHBs. That seems to me like a good reason for keeping the feature. -- Greg From greg.ewing at canterbury.ac.nz Sun Feb 28 02:25:15 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 28 Feb 2010 14:25:15 +1300 Subject: [Python-Dev] __file__ In-Reply-To: <4B889AB1.4050402@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B889AB1.4050402@g.nevcal.com> Message-ID: <4B89C5FB.40104@canterbury.ac.nz> Glenn Linderman wrote: > What I did was: > > python -m test > ren test.pyc foo.py > foo.py > > and it worked. 
Source files mentioned on the command line aren't required to have a .py extension. I think what's happening is that the interpreter ignores the filename altogether in that case and examines the contents of the file to figure out what it is, in order to support running .pyc files from the command line. -- Greg
From solipsis at pitrou.net Sun Feb 28 02:22:23 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 28 Feb 2010 01:22:23 +0000 (UTC) Subject: [Python-Dev] __file__ References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: Le Fri, 26 Feb 2010 14:29:03 -0800, Guido van Rossum a écrit : > > Byte-code only wasn't always supported. We added it knowing full well it > had all those problems (plus, it locks in the Python version), simply > because a certain class of developers won't stop asking for it. Their > users are apparently too dumb to decode bytecode but smart enough to > read source code, even if they don't understand it, and this knowledge > could hurt them. The idea that too much knowledge hurts users doesn't sound very Pythonic to me. As I understand it, the people interested in bytecode-only distributions are commercial companies willing to ease support. Why don't they whip up a specialized importer, and perhaps make it available as a recipe or a PyPI module somewhere? The idea that we should provide built-in support for a stupid (non-)security mechanism sounds insane to me. Finally, the sight of commercial companies not being able to do their work and begging open source projects to do it for them makes me *yawn*. If you aren't proficient or motivated enough to build your own internal commodities, perhaps you shouldn't claim to do business at all. regards Antoine.
From fuzzyman at voidspace.org.uk Sun Feb 28 02:25:38 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 28 Feb 2010 01:25:38 +0000 Subject: [Python-Dev] __file__ In-Reply-To: References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> Message-ID: <4B89C612.1040109@voidspace.org.uk> On 28/02/2010 01:22, Antoine Pitrou wrote: > Le Fri, 26 Feb 2010 14:29:03 -0800, Guido van Rossum a écrit : > >> Byte-code only wasn't always supported. We added it knowing full well it >> had all those problems (plus, it locks in the Python version), simply >> because a certain class of developers won't stop asking for it. Their >> users are apparently too dumb to decode bytecode but smart enough to >> read source code, even if they don't understand it, and this knowledge >> could hurt them. >> > The idea that too much knowledge hurts users doesn't sound very Pythonic > to me. > > As I understand it, the people interested in bytecode-only distributions > are commercial companies willing to ease support. Why don't they whip up > a specialized importer, and perhaps make it available as a recipe or a > PyPI module somewhere? The idea that we should provide built-in support > for a stupid (non-)security mechanism sounds insane to me.
> > Finally, the sight of commercial companies not being able to do their > work and begging open source projects to do it for them makes me *yawn*. > If you aren't proficient or motivated enough to build your own internal > commodities, perhaps you shouldn't claim to do business at all. > Well if we'd *never* had this feature this argument would be very strong indeed. On the other hand if we want them to switch to Python 3 - but by the way we cut one of the features you rely on, but don't worry all you have to do is recode it yourself - doesn't make a very convincing argument. Michael Foord > regards > > Antoine. -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
From greg.ewing at canterbury.ac.nz Sun Feb 28 02:39:18 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 28 Feb 2010 14:39:18 +1300 Subject: [Python-Dev] PEP 3188: Implementation Questions In-Reply-To: <4095897c1002270858j5e14579dhe01c1fb4958f087b@mail.gmail.com> References: <4095897c1002252051w24377ec2tc60d0b04adee8c5d@mail.gmail.com> <4B88464E.9080700@canterbury.ac.nz> <4095897c1002270858j5e14579dhe01c1fb4958f087b@mail.gmail.com> Message-ID: <4B89C946.6010403@canterbury.ac.nz> Meador Inge wrote: > Even with the user-defined precision capabilities of the 'Decimal' > class? In other words, can I create an instance of a 'Decimal' that > behaves (in all operations: arithmetic, comparison, etc...) exactly as > the extended double precision type offered by a given machine? It's not precision that's the issue, it's that the number base is different. That affects which numbers can be represented exactly, and how results that can't be represented exactly are rounded. I would be very surprised if there is a way of configuring the Decimal type so that it gives identical results to that of any IEEE binary floating point type, including rounding behaviour, denormalisation, etc. -- Greg
From solipsis at pitrou.net Sun Feb 28 02:31:33 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 27 Feb 2010 20:31:33 -0500 Subject: [Python-Dev] __file__ In-Reply-To: <4B89C612.1040109@voidspace.org.uk> References: <20100130190005.058c8187@freewill.wooz.org> <4B65E86A.9040602@v.loewis.de> <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B89C612.1040109@voidspace.org.uk> Message-ID: <20100227203133.50f72697@msiwind> Le Sun, 28 Feb 2010 01:25:38 +0000, Michael Foord a écrit : > > Well if we'd *never* had this feature this argument would be very > strong indeed.
On the other hand if we want them to switch to Python > 3 - but by the way we cut one of the features you rely on, but don't > worry all you have to do is recode it yourself - doesn't make a very > convincing argument. I understand it. On the other hand, it is certainly one of the least important issues involved in porting to py3k. (even for those people who liked the feature) And I think the prospect of a slight simplification (or de-complexification) of the import machinery is an important selling point. Regards Antoine. From greg.ewing at canterbury.ac.nz Sun Feb 28 02:51:16 2010 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 28 Feb 2010 14:51:16 +1300 Subject: [Python-Dev] __file__ In-Reply-To: <20100227173502.GA9522@laurie.devork> References: <20100202231252.77a79b0e@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> Message-ID: <4B89CC14.5070902@canterbury.ac.nz> Floris Bruynooghe wrote: > (But even then I'm not > convinced that would double the stat calls for normal users, only for > those who only ship .pyc files) It would increase the number of stat calls for normal users by 50%. You would need to look for a .pyc in the source directory, then .py in the source directory and .pyc in the cache directory. That's compared to two stat calls currently, for .py and .pyc. A solution might be to look for the presence of the cache directory, and only look for a .pyc in the source directory if there is no cache directory. Testing for the cache directory would only have to be done once per package and the result remembered, so it would add very little overhead. -- Greg From v+python at g.nevcal.com Sun Feb 28 05:50:02 2010 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sat, 27 Feb 2010 20:50:02 -0800 Subject: [Python-Dev] __file__ In-Reply-To: <4B89C5FB.40104@canterbury.ac.nz> References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B889AB1.4050402@g.nevcal.com> <4B89C5FB.40104@canterbury.ac.nz> Message-ID: <4B89F5FA.2010604@g.nevcal.com> On approximately 2/27/2010 5:25 PM, came the following characters from the keyboard of Greg Ewing: > Glenn Linderman wrote: > >> What I did was: >> >> python -m test >> ren test.pyc foo.py >> foo.py >> >> and it worked. > > Source files mentioned on the command line aren't required to > have a .py extension. I think what's happening is that the > interpreter ignores the filename altogether in that case and > examines the contents of the file to figure out what it is, > in order to support running .pyc files from the command line. Thanks for the explanation. Brett mentioned something like runpy vs import using different techniques. Which is OK, I guess, but if the command line/runpy can do it, the importer could do it. Just a matter of desire and coding. Whether it is worth pursuing further depends on people's perceptions of "kookiness" vs. functional and performance considerations. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. 
-- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking
From ncoghlan at gmail.com Sun Feb 28 06:11:54 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 28 Feb 2010 15:11:54 +1000 Subject: [Python-Dev] __file__ In-Reply-To: <4B89F5FA.2010604@g.nevcal.com> References: <20100130190005.058c8187@freewill.wooz.org> <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B885AB4.7040004@g.nevcal.com> <4B889AB1.4050402@g.nevcal.com> <4B89C5FB.40104@canterbury.ac.nz> <4B89F5FA.2010604@g.nevcal.com> Message-ID: <4B89FB1A.907@gmail.com> Glenn Linderman wrote: > Thanks for the explanation. Brett mentioned something like runpy vs > import using different techniques. Which is OK, I guess, but if the > command line/runpy can do it, the importer could do it. Just a matter > of desire and coding. Whether it is worth pursuing further depends on > people's perceptions of "kookiness" vs. functional and performance > considerations. As I said previously, don't underestimate how different __main__ is from everything else. The most obvious difference is that the code for __main__ is executed without holding the import lock, but there are other important differences as well (such as the module object being created directly by the interpreter startup sequence and hence a lot of the import machinery being bypassed). Even the -m switch doesn't really follow the normal import paths (it just finds the code object for the named module and then runs it directly instead of importing it). Direct execution starts with a filename (or a module name when using -m) then works out how to execute it as __main__. Importing starts with a module name, tries to find a matching filename and create the corresponding module. The different starting points and the different end goals affect the assumptions that are made while the interpreter figures out what it needs to do. The behaviour of runpy is different from import precisely because it aims to mimic execution of __main__ rather than a normal import. If there weren't quite so many semantic differences between direct execution and normal import, the module would have been a lot easier to write :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------
From stefan_ml at behnel.de Sun Feb 28 12:00:53 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 28 Feb 2010 12:00:53 +0100 Subject: [Python-Dev] Update xml.etree.ElementTree for Python 2.7 and 3.2 In-Reply-To: <4B7FD0C7.4050207@v.loewis.de> References: <4B7DB541.4000604@v.loewis.de> <4B7E2430.2070804@v.loewis.de> <4B7F95FE.20703@v.loewis.de> <4B7FD0C7.4050207@v.loewis.de> Message-ID: Martin v. Löwis, 20.02.2010 13:08: >> Actually this should not be a fork of the upstream library. >> The goal is to improve stability and predictability of the ElementTree >> implementations in the stdlib, and to fix some bugs. >> I thought that it is better to backport the fixes from upstream than to >> fix each bug separately in the stdlib. >> >> I try to get some clear assessment from Fredrik. >> If it is accepted, I will probably cut some parts which are in the upstream >> library, but which are not in the API 1.2. If it is not accepted, it is bad >> news for the "xml.etree" users...
> > Not sure about the timing, but in case you have not got the message: we > should rather drop ElementTree from the standard library than integrate > unreleased changes from an experimental upstream repository. > >> It is qualified as a "best effort" to get something better for ET. Nothing else. > > Unfortunately, it hurts ET users if it ultimately leads to a fork, or to > a removal of ET from the standard library. > > Please be EXTREMELY careful. I urge you not to act on this until > mid-March (which is the earliest time at which Fredrik has said he may > have time to look into this). I would actually encourage Florent to do the opposite: act now and prepare a patch against the latest official ET 1.2 and cET releases (or their SVN version respectively) that integrates everything that is considered safe, i.e. everything that makes cET compatible with ET and everything that seems clearly stable in ET 1.3 and does not break compatibility for existing code that uses ET 1.2. If you send that to Fredrik, I expect little opposition to making that the base for a 1.2.8 release, which can then be folded back into the stdlib. Stefan From floris.bruynooghe at gmail.com Sun Feb 28 13:19:07 2010 From: floris.bruynooghe at gmail.com (Floris Bruynooghe) Date: Sun, 28 Feb 2010 12:19:07 +0000 Subject: [Python-Dev] __file__ In-Reply-To: <4B89CC14.5070902@canterbury.ac.nz> References: <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> Message-ID: <20100228121907.GA13564@laurie.devork> On Sun, Feb 28, 2010 at 02:51:16PM +1300, Greg Ewing wrote: > Floris Bruynooghe wrote: > >(But even then I'm not > >convinced that would double the stat calls for normal users, only for > >those who only ship .pyc files) > > It would increase the number of stat calls for normal > users by 50%. You would need to look for a .pyc in the > source directory, then .py in the source directory and > .pyc in the cache directory. That's compared to two > stat calls currently, for .py and .pyc. Can't it look for a .py file in the source directory first (1st stat)? When it's there check for the .pyc in the cache directory (2nd stat, magic number encoded in filename), if it's not check for .pyc in the source directory (2nd stat + read for magic number check). Or am I missing a subtlety? > A solution might be to look for the presence of the > cache directory, and only look for a .pyc in the source > directory if there is no cache directory. Testing for > the cache directory would only have to be done once > per package and the result remembered, so it would > add very little overhead. That would work too, but I don't understand yet why the .pyc check in the source directory can't be done last. 
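Concretely, the order I have in mind is something like this rough sketch (purely illustrative -- the function, its arguments and the return convention are invented here, this is not the real finder code):

    import os

    def lookup(name, entry, cache_dir):
        # Illustrative sketch of the proposed order, not the actual importer.
        py = os.path.join(entry, name + '.py')
        if os.path.exists(py):                      # 1st stat: source
            cached = os.path.join(cache_dir, name + '.pyc')
            if os.path.exists(cached):              # 2nd stat: cached bytecode
                return py, cached
            return py, None
        legacy = os.path.join(entry, name + '.pyc')
        if os.path.exists(legacy):                  # 2nd stat, only on a source miss
            return None, legacy                     # bytecode-only module
        return None, None                           # a miss still costs two stats

On a miss that is one stat for the .py and one for the legacy .pyc, the same two as today; the cache directory only gets a look when the source is actually present.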
Regards Floris -- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org
From fuzzyman at voidspace.org.uk Sun Feb 28 13:32:31 2010 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 28 Feb 2010 12:32:31 +0000 Subject: [Python-Dev] __file__ In-Reply-To: <20100228121907.GA13564@laurie.devork> References: <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> Message-ID: <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> -- http://www.ironpythoninaction.com On 28 Feb 2010, at 12:19, Floris Bruynooghe wrote: > On Sun, Feb 28, 2010 at 02:51:16PM +1300, Greg Ewing wrote: >> Floris Bruynooghe wrote: >>> (But even then I'm not >>> convinced that would double the stat calls for normal users, only >>> for >>> those who only ship .pyc files) >> >> It would increase the number of stat calls for normal >> users by 50%. You would need to look for a .pyc in the >> source directory, then .py in the source directory and >> .pyc in the cache directory. That's compared to two >> stat calls currently, for .py and .pyc. > > Can't it look for a .py file in the source directory first (1st stat)? > When it's there check for the .pyc in the cache directory (2nd stat, > magic number encoded in filename), if it's not check for .pyc in the > source directory (2nd stat + read for magic number check). Or am I > missing a subtlety? > > The problem is doing this little dance for every path on sys.path. Michael >> A solution might be to look for the presence of the >> cache directory, and only look for a .pyc in the source >> directory if there is no cache directory. Testing for >> the cache directory would only have to be done once >> per package and the result remembered, so it would >> add very little overhead. > > That would work too, but I don't understand yet why the .pyc check in > the source directory can't be done last. > > Regards > Floris > > -- > Debian GNU/Linux -- The Power of Freedom > www.debian.org | www.gnu.org | www.kernel.org
From ncoghlan at gmail.com Sun Feb 28 14:07:27 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 28 Feb 2010 23:07:27 +1000 Subject: [Python-Dev] __file__ In-Reply-To: <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> References: <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> Message-ID: <4B8A6A8F.7050008@gmail.com> Michael Foord wrote: >> Can't it look for a .py file in the source directory first (1st stat)? >> When it's there check for the .pyc in the cache directory (2nd stat, >> magic number encoded in filename), if it's not check for .pyc in the >> source directory (2nd stat + read for magic number check). Or am I >> missing a subtlety?
> > The problem is doing this little dance for every path on sys.path. To unpack this a little bit for those not quite as familiar with the import system (and to make it clear for my own benefit!): for a top-level module/package, each path on sys.path needs to be eliminated as a possible location before the interpreter can move on to check the next path in the list. So the important number is the number of stat calls on a "miss" (i.e. when the requested module/package is not present in a directory). Currently, with builtin support for bytecode only files, there are 3 checks (package directory, py source file, pyc/pyo bytecode file) to be made for each path entry. The PEP proposes to reduce that to only two in the case of a miss, by checking for the cached pyc only if the source file is present (there would still be three checks for a "hit", but that only happens at most once per module lookup). While the PEP is right in saying that a bytecode-only import hook could be added, I believe it would actually be a little tricky to write one that didn't severely degrade the performance of either normal imports or bytecode-only imports. Keeping it in the core import, but turning it off by default seems much less likely to have unintended performance consequences when it is switched back on. Another option is to remove bytecode-only support from the default filesystem importer, but keep it for zipimport (since the stat call savings don't apply in the latter case). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From floris.bruynooghe at gmail.com Sun Feb 28 15:12:08 2010 From: floris.bruynooghe at gmail.com (Floris Bruynooghe) Date: Sun, 28 Feb 2010 14:12:08 +0000 Subject: [Python-Dev] __file__ In-Reply-To: <4B8A6A8F.7050008@gmail.com> References: <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> <4B8A6A8F.7050008@gmail.com> Message-ID: <20100228141208.GA14150@laurie.devork> On Sun, Feb 28, 2010 at 11:07:27PM +1000, Nick Coghlan wrote: > Michael Foord wrote: > >> Can't it look for a .py file in the source directory first (1st stat)? > >> When it's there check for the .pyc in the cache directory (2nd stat, > >> magic number encoded in filename), if it's not check for .pyc in the > >> source directory (2nd stat + read for magic number check). Or am I > >> missing a subtlety? > > > > The problem is doing this little dance for every path on sys.path. > > To unpack this a little bit for those not quite as familiar with the > import system (and to make it clear for my own benefit!): for a > top-level module/package, each path on sys.path needs to be eliminated > as a possible location before the interpreter can move on to check the > next path in the list. Aha, that was the clue I was missing. Thanks! Floris -- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org From victor.stinner at haypocalc.com Sun Feb 28 17:52:22 2010 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sun, 28 Feb 2010 17:52:22 +0100 Subject: [Python-Dev] Challenge: escape from the pysandbox Message-ID: <201002281752.22414.victor.stinner@haypocalc.com> Hi, pysandbox is a new Python sandbox project under development. 
By default, untrusted code executed in the sandbox cannot modify the environment (write a file, use print or import a module). But you can configure the sandbox to choose exactly which features are allowed or not, e.g. import the sys module and read the file /etc/issue. Website: http://github.com/haypo/pysandbox/ Download the repository using git: git clone git://github.com/haypo/pysandbox.git or git clone http://github.com/haypo/pysandbox.git Or download the .zip or .tar.gz tarball using the "Download source" button on the website. I think the project has reached the "testable" stage. I'm launching a new challenge: try to escape from the sandbox. I'm unable to write strict rules. The goal is to access objects outside the sandbox. E.g. write into a file, import a module which is not in the whitelist, modify an object outside the sandbox, etc. To test the sandbox, you have 3 choices: - interpreter.py: interactive interpreter executed in the sandbox, use: --verbose to display the whole sandbox configuration, --features=help to enable the help() function, --features=regex to enable regex, --help to display the help. - execfile.py: execute your script in the sandbox. It also has a --features option: use --features=stdout to be able to use the print instruction :-) - use the Sandbox class directly: use the methods call(), execute() or createCallback() Don't use "with sandbox: ..." because there is a known bug with local frame variables. I think that I will later drop this syntax because of this bug. Except for debug_sandbox, I consider that all features are safe and so you can enable all features :-) There is no prize, it's just for fun! But I will add the names of the hackers finding the best exploits. pysandbox is not ready for production, it's under heavy development. Anyway I *hope* that you will quickly find bugs! -- Use tests.py to find some examples of how you can escape a sandbox. pysandbox is protected against all methods described in tests.py ;-) See the README file to get more information about how pysandbox is implemented and get a list of other Python sandboxes. pysandbox is currently specific to CPython, and it uses some ugly hacks to patch CPython in memory. In the worst case it will crash the pysandbox Python process, that's all. I tested it under Linux with Python 2.5 and 2.6. The port to Python 3 is not done yet (is someone motivated to write a patch? :-)). -- Victor Stinner http://www.haypocalc.com/
From brett at python.org Sun Feb 28 21:21:47 2010 From: brett at python.org (Brett Cannon) Date: Sun, 28 Feb 2010 12:21:47 -0800 Subject: [Python-Dev] __file__ In-Reply-To: <4B8A6A8F.7050008@gmail.com> References: <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> <4B8A6A8F.7050008@gmail.com> Message-ID: On Sun, Feb 28, 2010 at 05:07, Nick Coghlan wrote: > Michael Foord wrote: > >> Can't it look for a .py file in the source directory first (1st stat)? > >> When it's there check for the .pyc in the cache directory (2nd stat, > >> magic number encoded in filename), if it's not check for .pyc in the > >> source directory (2nd stat + read for magic number check). Or am I > >> missing a subtlety? > > > > The problem is doing this little dance for every path on sys.path.
> > To unpack this a little bit for those not quite as familiar with the > import system (and to make it clear for my own benefit!): for a > top-level module/package, each path on sys.path needs to be eliminated > as a possible location before the interpreter can move on to check the > next path in the list. > > So the important number is the number of stat calls on a "miss" (i.e. > when the requested module/package is not present in a directory). > Currently, with builtin support for bytecode only files, there are 3 > checks (package directory, py source file, pyc/pyo bytecode file) to be > made for each path entry. > Actually it's four: name/__init__.py, name/__init__.pyc, name.py, and then name.pyc. And just so people have terminology to go with all of this, this search is what the finder does to say whether it can or cannot handle the requested module. > > The PEP proposes to reduce that to only two in the case of a miss, by > checking for the cached pyc only if the source file is present (there > would still be three checks for a "hit", but that only happens at most > once per module lookup). > Just to be explicit, Nick is talking about name/__init__.py and name.py (note the skipping of looking for any .pyc files). At that point only the loader needs to check for the bytecode in the __pycache__ directory. > > While the PEP is right in saying that a bytecode-only import hook could > be added, I believe it would actually be a little tricky to write one > that didn't severely degrade the performance of either normal imports or > bytecode-only imports. Keeping it in the core import, but turning it off > by default seems much less likely to have unintended performance > consequences when it is switched back on. > It all depends on how it is implemented. If the bytecode-only importer stats a directory to check for the existence of any source in order to decide not to handle it, that is an extra stat call, but that is only once per sys.path/__path__ location by the path hook, not every attempted import. Now if I ever manage to find the time to break up the default importers and expose them then it should be no more than adding the bytecode-only importer to the chained finder that already exists (it essentially chains source and extension modules). > > Another option is to remove bytecode-only support from the default > filesystem importer, but keep it for zipimport (since the stat call > savings don't apply in the latter case). > That's a very nice option. That would isolate it into a single importer that doesn't impact general performance for everyone else. -Brett > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > ---------------------------------------------------------------
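To put numbers on the stat-call accounting in this exchange, here is a schematic sketch of the checks being counted; it is illustrative only (the helper name is made up, and it ignores extension modules and the handling of a package's __path__):

    import os

    def finder_checks(name, entry):
        # Schematic of the current behaviour: up to four stat calls per
        # sys.path entry before a module can be ruled out.
        candidates = [
            os.path.join(entry, name, '__init__.py'),
            os.path.join(entry, name, '__init__.pyc'),
            os.path.join(entry, name + '.py'),
            os.path.join(entry, name + '.pyc'),
        ]
        return [path for path in candidates if os.path.exists(path)]

    # Under the PEP the finder would stat only the two source candidates
    # (name/__init__.py and name.py); the loader then looks for cached
    # bytecode in __pycache__ only once a source file has been found, so
    # a "miss" costs two stat calls instead of four.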
From robertc at robertcollins.net Sun Feb 28 21:29:25 2010 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 01 Mar 2010 07:29:25 +1100 Subject: [Python-Dev] __file__ In-Reply-To: References: <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> <4B8A6A8F.7050008@gmail.com> Message-ID: <1267388965.25053.86.camel@lifeless-64> On Sun, 2010-02-28 at 12:21 -0800, Brett Cannon wrote: > > Actually it's four: name/__init__.py, name/__init__.pyc, name.py, and > then name.pyc. And just so people have terminology to go with all of > this, this search is what the finder does to say whether it can or > cannot handle the requested module. Aren't there also: name.so namemodule.so ? -Rob
From baptiste13z at free.fr Sun Feb 28 21:45:56 2010 From: baptiste13z at free.fr (Baptiste Carvello) Date: Sun, 28 Feb 2010 21:45:56 +0100 Subject: [Python-Dev] __file__ In-Reply-To: <4B8A6A8F.7050008@gmail.com> References: <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> <4B8A6A8F.7050008@gmail.com> Message-ID: Nick Coghlan a écrit : > > Another option is to remove bytecode-only support from the default > filesystem importer, but keep it for zipimport (since the stat call > savings don't apply in the latter case). > bytecode-only in a zip is used by py2exe, cx_freeze and the like, for space reasons. Disabling it would probably hurt them. However, making a difference between zipimport and the filesystem importer means the application will stop working if I unzip the library zip file, which is surprising. Unzipping the zip file can be handy when debugging a bug caused by a forgotten module. Cheers, Baptiste
From ncoghlan at gmail.com Sun Feb 28 21:46:31 2010 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 01 Mar 2010 06:46:31 +1000 Subject: [Python-Dev] __file__ In-Reply-To: References: <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> <4B8A6A8F.7050008@gmail.com> Message-ID: <4B8AD627.60103@gmail.com> Brett Cannon wrote: > Actually it's four: name/__init__.py, name/__init__.pyc, name.py, and > then name.pyc. And just so people have terminology to go with all of > this, this search is what the finder does to say whether it can or > cannot handle the requested module. Huh, I thought we checked for the directory first and only then checked for the __init__ module within it (hence the generation of ImportWarning when we don't find __init__ after finding a correctly named directory). So a normal miss (i.e. no directory) only needs one stat call.
(However, I'll grant that I haven't looked at this particular chunk of code in a fairly long time, so I could easily be wrong). Robert raises a good point about the checks for extension modules as well - we should get an accurate count here so Barry's PEP can pitch the proportional reduction in stat calls accurately. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------
From solipsis at pitrou.net Sun Feb 28 22:32:11 2010 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 28 Feb 2010 21:32:11 +0000 (UTC) Subject: [Python-Dev] __file__ References: <20100225160832.6e7a3063@freewill.wooz.org> <4B870E2A.8090406@canterbury.ac.nz> <4B870CD5.8020105@voidspace.org.uk> <4B871227.2030707@canterbury.ac.nz> <4B88526E.7030808@voidspace.org.uk> <20100227105613.6581a198@freewill.wooz.org> <20100227173502.GA9522@laurie.devork> <4B89CC14.5070902@canterbury.ac.nz> <20100228121907.GA13564@laurie.devork> <690612F5-13B4-4538-BABA-D24FB432A8CF@voidspace.org.uk> <4B8A6A8F.7050008@gmail.com> Message-ID: Le Sun, 28 Feb 2010 21:45:56 +0100, Baptiste Carvello a écrit : > bytecode-only in a zip is used by py2exe, cx_freeze and the like, for > space reasons. Disabling it would probably hurt them. Source code compresses quite well. I'm not sure it would make much of a difference. AFAIR, when you create a py2exe distribution, what takes most of the space is the interpreter itself as well as any big third-party C libraries such as wxWidgets. Regards Antoine.
From glyph at twistedmatrix.com Sun Feb 28 23:53:29 2010 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Sun, 28 Feb 2010 16:53:29 -0600 Subject: [Python-Dev] __file__ In-Reply-To: <4B893C80.5040200@gmail.com> References: <20100130190005.058c8187@freewill.wooz.org> <4B871227.2030707@canterbury.ac.nz> <201002271224.38093.steve@pearwood.info> <4B893C80.5040200@gmail.com> Message-ID: On Feb 27, 2010, at 9:38 AM, Nick Coghlan wrote: > I do like the idea of pulling .pyc only imports out into a separate > importer, but would go so far as to suggest keeping them as a command > line option rather than as a separately distributed module. One advantage of doing this as a separately distributed module is that it can have its own ecosystem and momentum. Most projects that want this sort of bundling or packaging really want to be shipped with something like py2exe, and I think the folks who want such facilities would be better served by a nice project website for "python sealer" or "python bundler" rather than obscure directions for triggering the behavior via options or configuration. Making bytecode loading a feature of interpreter startup, whether it's a config file, a command-line option or an environment variable, is not a great idea. For folks that want to ship a self-contained application, any of these would require an additional customization step, where they need to somehow tell their bundled interpreter to load bytecode. For people trying to ship a self-contained and tamper-unfriendly (since even "tamper-resistant" would be overstating things) library to relatively non-technical programmers, it opens the door to a whole universe of confusion and FAQs about why the code didn't load. However bytecode-only code loading is facilitated, it should be possible to bootstrap from a vanilla python interpreter running normally, as you may not know you need to load a bytecode-only package at startup.
In the stand-alone case there are already plenty of options, and in the library case, shipping a zip file should be fine, since the __init__.py of your package should be plain-text and also able to trigger the activation of the bytecode-only importer. There are already so many ways to ship bytecode, it doesn't seem too important to support in this one particular configuration (files in a directory, compiled by just importing them, in the same place as ".py" files). The real problem is providing a seamless transition path for *build* processes, not the Python code itself. Do any of the folks who are currently using this feature have a good idea as to how your build and distribute scripts might easily be updated, perhaps by a 2to3 fixer?
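For reference, the kind of "specialized importer" discussed in this thread can be sketched against the PEP 302 hooks that already exist. This is purely an illustration under stated assumptions -- the class name, the single-directory focus and the lack of package and error handling are all invented for the sketch, and it is not part of any proposal:

    # A minimal meta-path finder/loader for bytecode-only modules.
    import imp
    import os
    import sys

    class BytecodeOnlyFinder(object):
        def __init__(self, directory):
            self.directory = directory
            self._found = {}

        def find_module(self, fullname, path=None):
            pyc = os.path.join(self.directory,
                               fullname.rsplit('.', 1)[-1] + '.pyc')
            # Only claim modules that ship bytecode without source.
            if os.path.exists(pyc) and not os.path.exists(pyc[:-1]):
                self._found[fullname] = pyc
                return self
            return None

        def load_module(self, fullname):
            if fullname in sys.modules:
                return sys.modules[fullname]
            return imp.load_compiled(fullname, self._found[fullname])

    # A plain-text __init__.py could register the finder the first time
    # the package is imported, so a vanilla interpreter bootstraps it:
    #     sys.meta_path.append(BytecodeOnlyFinder(os.path.dirname(__file__)))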